Search Results

Search found 1830 results on 74 pages for 'joomla dbo'.

Page 36/74

  • How to have files created by a CMS share the same ownership as the SSH user

    - by Cam
    I am having difficulty on our Ubuntu server: when I create files using my SSH user, the ownership is web_user:www-data. The problem is when a file is uploaded or created using a content management system like Joomla. When files are uploaded through Joomla - such as components or modules - the ownership is set to www-data:www-data. This means that I then need to chown all new files to web_user:www-data so we can edit them.

    Is there a way to set, for a directory and its sub-directories, that all new files created have the ownership web_user:www-data? Do I need to use something like setuid or setgid? Any help would be greatly appreciated.

    Read the article

  • Choosing a CMS to use with backend modules involving Haskell and Python [on hold]

    - by Butterflycode
    Hi, I am trying to decide on a CMS for a new project. Security is the most important element of the CMS. I am looking at a PHP-based CMS such as Joomla or Drupal; however, PHP's many security flaws worry me. The data that needs to be secure will be inside a database and relates to account information, so I am wondering what the best way to do this is.

    What I want is a frontend made in PHP/JS (Joomla), and then a backend API written in Haskell to handle money transfers, ensuring nothing goes wrong. In between the two I want a controller written in perhaps Python or C. I never want the PHP to touch the database; I want it to relay messages to the controller written in Python or C, which then writes to the database, sanitising data, etc.

    Am I perhaps thinking too deeply about this? Just wondering if anyone has any ideas on what I should do. I can't quite explain what the project is, as I don't want the idea to be stolen, but it involves a lot of money transactions, so security is essential.

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2). We use the table to help us build aggregations, and we also aggregate the query log daily into a data warehouse of operational data, so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure:

        CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging. It doesn't generate any errors to the client at all, which is a good thing. But once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later - so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account used for SSAS and that account didn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give it access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS - but how? Having just used a larger hammer than necessary with the db_owner role membership, I considered restarting SSAS to get it logging again. However, this was a production server in the middle of business hours, with active users connected to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. The query logging started up right away.

    If I need to get this running again in the future, I can just make a small change to the Application Name property, save it, and even change it back again if I want to. The other nice side effect of setting the Application Name property is that I can now see (and filter for, or filter out) the SQL activity related to the query logging process in Profiler.

    To sum up:
    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    - Query logging stops quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to restart query logging without restarting the service.
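
    The post doesn't show the history table itself, only the trigger that fills it. A minimal sketch of what it might look like, assuming the standard OlapQueryLog column types (check the types on your own instance before using this):

        CREATE TABLE dbo.OlapQueryLog_History (
            MSOLAP_Database    nvarchar(255)  NULL,
            MSOLAP_ObjectPath  nvarchar(4000) NULL,
            MSOLAP_User        nvarchar(255)  NULL,
            Dataset            nvarchar(4000) NULL,
            StartTime          datetime       NULL,
            Duration           bigint         NULL
        );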

    Read the article

  • Using recursive SQL and the XML trick to PIVOT (OK, concat) a "Document Folder Structure Relationship" table - works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a data warehouse and encountered this issue along the way.

    In the environment, there is a table that stores all the folders with the individual level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

        ID    ID_Parent    FolderName
        1     NULL         Level 1
        2     1            Level 2
        3     2            Level 3

    To my understanding, the table was built so that:
    - Each proposal can have multiple documents stored at various locations.
    - Different users working on the proposal will have different access levels to the folder; if one user is assigned access to a folder level, she/he can see all the sub-folders and their content.

    Now we understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer who needs to show a document link on a report. I wasn't surprised to find the report query had 5 self outer joins - which is at the mercy of nobody creating a document buried 6 levels deep - not to mention the degradation in performance.

    With the help of 2 posts (at the end of this post), I was able to come up with this solution:
    - Use recursive SQL to build out the folder path.
    - Use the SQL XML trick to concat the strings.

    Code (a reminder: I built this code in a stored procedure. If you copy the syntax into a simple query window and execute it, you'll get an incorrect syntax error):

        -- Get all folders and group them by the original DocumentFolderID in the PTSDocument table
        ;WITH DocFoldersByDocFolderID(PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
        AS (
            -- first member
            SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
                 , PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = sName
                 , 'nLevel' = CONVERT(INT, 1000000)
            FROM (SELECT DISTINCT PTSDocumentFolderID
                  FROM dbo.PTSDocument_DY WITH(READPAST)) AS d1
                 INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                     ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
            UNION ALL
            -- recursive
            SELECT ddf1.PTSDocumentFolderID_Original
                 , df1.PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = df1.sName
                 , 'nLevel' = ddf1.nLevel - 1
            FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                 INNER JOIN DocFoldersByDocFolderID AS ddf1
                     ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent
        )
        -- Flatten out the folder path
        , DocFolderSingleByDocFolderID(PTSDocumentFolderID_Original, sDocumentFolder)
        AS (
            SELECT dfbdf.PTSDocumentFolderID_Original
                 , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                              FROM DocFoldersByDocFolderID
                                              WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)
                                              ORDER BY PTSDocumentFolderID_Original, nLevel
                                              FOR XML PATH ('')), 1, 1, '')
            FROM DocFoldersByDocFolderID AS dfbdf
            GROUP BY dfbdf.PTSDocumentFolderID_Original
        )

    And voila - I use the second CTE to join back to my original query (which is now a CTE for the source, as we can now use MERGE to do the INSERT and UPDATE at the same time).

    Each part of this solution would not solve the problem by itself, because:
    - If I don't use recursion, I cannot build out the path properly. If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.
    - If I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in a report function, but I'd rather not deal with the leading or trailing backslash and how to attach the document name.
    - PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above.

    I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way. The 2 posts that helped me along:
    - Recursive Queries Using Common Table Expressions
    - How to use GROUP BY to concatenate strings in SQL Server?
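
    For anyone who only needs the GROUP_CONCAT half of this, here is a self-contained toy version of the FOR XML PATH trick; the table variable and data are made up purely for illustration:

        DECLARE @t TABLE (GroupID int, Item varchar(20));
        INSERT INTO @t VALUES (1, 'alpha');
        INSERT INTO @t VALUES (1, 'beta');
        INSERT INTO @t VALUES (2, 'gamma');

        -- One row per GroupID, items joined with '\', like MySQL's GROUP_CONCAT.
        -- STUFF(..., 1, 1, '') strips the leading separator that FOR XML PATH produces.
        SELECT g.GroupID,
               STUFF((SELECT '\' + t.Item
                      FROM @t AS t
                      WHERE t.GroupID = g.GroupID
                      FOR XML PATH('')), 1, 1, '') AS ItemPath
        FROM @t AS g
        GROUP BY g.GroupID;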

    Read the article

  • sp_send_dbmail attach files stored as varbinary in database

    - by Mindstorm Interactive
    I have a two-part question about sending query results as attachments using sp_send_dbmail.

    Problem 1: Only basic .txt files will open. Any other format, like .pdf or .jpg, is corrupted.
    Problem 2: When attempting to send multiple attachments, I receive one file with all the file names glued together.

    I'm running SQL Server 2005 and I have a table storing uploaded documents:

        CREATE TABLE [dbo].[EmailAttachment](
            [EmailAttachmentID] [int] IDENTITY(1,1) NOT NULL,
            [MassEmailID] [int] NULL, -- foreign key
            [FileData] [varbinary](max) NOT NULL,
            [FileName] [varchar](100) NOT NULL,
            [MimeType] [varchar](100) NOT NULL

    I also have a MassEmail table with standard email stuff. Here is the SQL send-mail script (for brevity, I've excluded the DECLARE statements):

        while ((select count(MassEmailID) from MassEmail where status = 20) > 0)
        begin
            select @MassEmailID = Min(MassEmailID) from MassEmail where status = 20
            select @Subject = [Subject] from MassEmail where MassEmailID = @MassEmailID
            select @Body = Body from MassEmail where MassEmailID = @MassEmailID
            set @query = 'set nocount on; select cast(FileData as varchar(max)) from Mydatabase.dbo.EmailAttachment where MassEmailID = ' + CAST(@MassEmailID as varchar(100))
            select @filename = ''
            select @filename = COALESCE(@filename + ',', '') + FileName from EmailAttachment where MassEmailID = @MassEmailID

            exec msdb.dbo.sp_send_dbmail
                @profile_name = 'MASS_EMAIL',
                @recipients = '[email protected]',
                @subject = @Subject,
                @body = @Body,
                @body_format = 'HTML',
                @query = @query,
                @query_attachment_filename = @filename,
                @attach_query_result_as_file = 1,
                @query_result_separator = '; ',
                @query_no_truncate = 1,
                @query_result_header = 0;

            update MassEmail set status = 30, SendDate = GetDate() where MassEmailID = @MassEmailID
        end

    I am able to successfully read files from the database, so I know the binary data is not corrupted. .txt files only read when I cast FileData to varchar, but clearly the original headers are lost. It's also worth noting that attachment file sizes are different from the original files; that is most likely due to improper encoding as well. So I'm hoping there's a way to create file headers using the stored mime type, or some way to include file headers in the binary data? I'm also not confident in the values of the last few parameters, and I know the COALESCE is not quite right, because it prepends the first file name with a comma. But good documentation is nearly impossible to find. Please help!
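
    On the file-name list specifically: the leading comma appears because @filename is initialized to an empty string, so COALESCE never sees a NULL. A minimal sketch of the usual pattern, reusing the question's variable names (this only fixes the separator; as far as I know the @query attachment is rendered to text before sending, so binary content generally can't survive that path - writing the varbinary out to disk and using the @file_attachments parameter is the more common route for non-text files):

        DECLARE @filename varchar(8000);   -- leave it NULL; do not SET it to ''

        SELECT @filename = COALESCE(@filename + ';', '') + [FileName]
        FROM EmailAttachment
        WHERE MassEmailID = @MassEmailID;

        -- First row: @filename is NULL, COALESCE yields '', no leading separator.
        -- Later rows: @filename + ';' is non-NULL, so the result is 'a.txt;b.txt;c.txt'.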

    Read the article

  • NHibernate - joining on a subquery using ICriteria

    - by owensymes.mp
    I have a SQL query that I need to represent using NHibernate's ICriteria API:

        SELECT u.Id as Id, u.Login as Login, u.FirstName as FirstName, u.LastName as LastName,
               gm.UserGroupId_FK as UserGroupId,
               inner.Data1, inner.Data2, inner.Data3
        FROM dbo.User u
        inner join dbo.GroupMember gm on u.Id = gm.UserAnchorId_FK
        left join (
            SELECT di.UserAnchorId_FK,
                   sum(di.Data1) as Data1,
                   sum(di.Data2) as Data2,
                   sum(di.Data3) as Data3
            FROM dbo.DailyInfo di
            WHERE di.Date between '2009-04-01' and '2009-06-01'
            GROUP BY di.UserAnchorId_FK
        ) inner ON inner.UserAnchorId_FK = u.Id
        WHERE gm.UserGroupId_FK = 195

    Attempts so far have included mapping 'User' and 'DailyInfo' classes (my entities) and making a DailyInfo object a property of the User object. However, how to map the foreign key relationship between them is still a mystery, i.e.

        <one-to-one></one-to-one>
        <one-to-many></one-to-many>
        <generator class="foreign"><param name="property">Id</param></generator> (!)

    Solutions on the web generally deal with subqueries within a WHERE clause; however, I need to left join on this subquery instead, to ensure NULL values are returned for rows that do not join. I have the feeling that I should be using a Criteria for the outer query, then forming a 'join' with a DetachedCriteria to represent the subquery?

    Read the article

  • sp_addlinkedserver in a trigger

    - by Nanda
    I have the following trigger, which causes an error when it runs:

        CREATE TRIGGER ... ON ... FOR INSERT, UPDATE
        AS
        IF UPDATE(STATUS)
        BEGIN
            DECLARE @newPrice VARCHAR(50)
            DECLARE @FILENAME VARCHAR(50)
            DECLARE @server VARCHAR(50)
            DECLARE @provider VARCHAR(50)
            DECLARE @datasrc VARCHAR(50)
            DECLARE @location VARCHAR(50)
            DECLARE @provstr VARCHAR(50)
            DECLARE @catalog VARCHAR(50)
            DECLARE @DBNAME VARCHAR(50)

            SET @server = xx
            SET @provider = xx
            SET @datasrc = xx
            SET @provstr = 'DRIVER={SQL Server};SERVER=xxxxxxxx;UID=xx;PWD=xx;'
            SET @DBNAME = '[xx]'
            SET @newPrice = (SELECT STATUS FROM Inserted)
            SET @FILENAME = (SELECT INPUT_XML_FILE_NAME FROM Inserted)

            IF @newPrice = 'FAIL'
            BEGIN
                EXEC master.dbo.sp_addlinkedserver @server, '', @provider, @datasrc, @provstr
                EXEC master.dbo.sp_addlinkedsrvlogin @server, 'true'

                INSERT INTO [@server].[@DBNAME].[dbo].[maildetails]
                    ('to', 'cc', 'from', 'subject', 'body', 'status', 'Attachment', 'APPLICATION', 'ID', 'Timestamp', 'AttachmentName')
                VALUES
                    ('P23741', '', '', 'XMLFAILED', @FILENAME, '4', '', '8', '', GETDATE(), '')

                EXEC sp_dropserver @server
            END
        END

    The error is:

        Msg 15002, Level 16, State 1, Procedure sp_MSaddserver_internal, Line 28
        The procedure 'sys.sp_addlinkedserver' cannot be executed within a transaction.
        Msg 15002, Level 16, State 1, Procedure sp_addlinkedsrvlogin, Line 17
        The procedure 'sys.sp_addlinkedsrvlogin' cannot be executed within a transaction.
        Msg 15002, Level 16, State 1, Procedure sp_dropserver, Line 12
        The procedure 'sys.sp_dropserver' cannot be executed within a transaction.

    How can I prevent this error from occurring?
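
    For context: a trigger always runs inside the firing statement's transaction, which is why these linked-server procedures refuse to execute here. A common workaround is to create the linked server once, outside the trigger, and have the trigger only write to a local queue table that a SQL Agent job drains later. A minimal sketch, with hypothetical table and column names:

        -- One-time setup, run manually (not in the trigger):
        -- EXEC master.dbo.sp_addlinkedserver @server, '', @provider, @datasrc, @provstr;

        CREATE TABLE dbo.MailDetailsQueue (
            QueueID   int IDENTITY(1,1) PRIMARY KEY,
            FileName  varchar(50) NOT NULL,
            QueuedAt  datetime    NOT NULL DEFAULT GETDATE(),
            Processed bit         NOT NULL DEFAULT 0
        );
        GO

        -- Inside the trigger, replace the linked-server work with a plain local
        -- insert, which is transaction-safe:
        -- INSERT INTO dbo.MailDetailsQueue (FileName)
        -- SELECT INPUT_XML_FILE_NAME FROM inserted WHERE STATUS = 'FAIL';

        -- A SQL Agent job, running outside the trigger's transaction, then pushes
        -- unprocessed rows across the pre-created linked server and flags them.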

    Read the article

  • [SOLVED] How to create a FBO with stencil buffer in OpenGL ES 2.0?

    - by Alphones
    I need the stencil buffer on the 3GS to render a planar shadow, and polygon offset won't work perfectly - there is still a z-fighting problem. So I use the stencil buffer to make the shadow correct. It works on the win32 GLES2 emulator, but not on the iPhone. After I added a post effect to the whole scene, the stencil buffer won't work even on the win32 GLES2 emulator. And when I tried to attach a stencil buffer to the FBO, the screen turned black. Here's my code:

        glGenRenderbuffers(1, &dbo);    // depth buffer
        glBindRenderbuffer(GL_RENDERBUFFER, dbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, widthGL, heightGL);

        glGenRenderbuffers(1, &sbo);    // stencil buffer
        glBindRenderbuffer(GL_RENDERBUFFER, sbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, widthGL, heightGL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sbo);    // this makes the whole screen black

    The eglContext is created with STENCIL_SIZE=8, and it works without a RTT. I tried changing the RenderbufferStorage for both the depth buffer and the stencil buffer, but none of the combinations works. Is there anything I have missed? Is the stencil buffer packed with the depth buffer? (I cannot find things like GL_DEPTH24_STENCIL8 ...)

    Read the article

  • Full-text search error during full-text index population : Error Code '0x80092003'

    - by user360074
    Dear All,

    I have a problem with the Full-Text Search service in our production environment. Each time I rebuild the full-text catalog, there is no error in the user interface, but there is no data in the full-text catalog:

        Item Count : 0
        Catalog size : 0 MB

    OS: Windows Server 2003 R2 Standard Edition, Service Pack 2
    SQL Server version: Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Oct 14 2005 00:33:37 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

    It works on the dev server (Windows XP Professional Version 2002, Service Pack 3) but fails on the prod server (Windows Server 2003 R2 Standard Edition, Service Pack 2). This is the crawl log:

        2010-06-02 03:51:31.06 spid24s Informational: Full-text Full population initialized for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'). Population sub-tasks: 1.
        2010-06-02 03:51:31.06 spid24s Error '0x80092003' occurred during full-text index population for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'), full-text key value 0x00000006. Attempt will be made to reindex it.
        2010-06-02 03:51:31.06 spid24s The component 'MSFTE.DLL' reported error while indexing. Component path 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'.
        2010-06-02 03:51:31.06 spid24s Error '0x80092003' occurred during full-text index population for table or indexed view '[test1].[dbo].[test]' (table or indexed view ID '37575172', database ID '9'), full-text key value 0x00000005. Attempt will be made to reindex it.

    Read the article

  • Which non-clustered index should I use?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key:

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] IDENTITY(1,1) NOT NULL,
            [CustomerName] [varchar](100) NOT NULL,
            [Deleted] [bit] NOT NULL,
            [Active] [bit] NOT NULL,
            CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED
            (
                [CustomerId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    This is the query I'll be using to see what the execution plan shows:

        SELECT CustomerName FROM Customers

    Executing this command with no additional non-clustered index, the execution plan shows me:

        I/O cost = 3.45646
        Operator cost = 4.57715

    Now I'm trying to see if it's possible to improve performance, so I've created a non-clustered index for this table.

    1) First non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    Executing the select against the Customers table again, the execution plan shows me:

        I/O cost = 2.79942
        Operator cost = 3.92001

    It seems better. Now I've deleted this just-created non-clustered index, in order to create a new one.

    2) Second non-clustered index:

        CREATE NONCLUSTERED INDEX [IX_CustomerIDIncludeCustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC
        )
        INCLUDE ([CustomerName])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this new non-clustered index, I've executed the select statement again, and the execution plan shows me the same result:

        I/O cost = 2.79942
        Operator cost = 3.92001

    So, which non-clustered index should I use? Why are the costs the same in the execution plan for I/O and Operator? Am I doing something wrong, or is this expected? Thank you.
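
    A detail worth keeping in mind when reading these numbers: in SQL Server, the clustered key (CustomerId here) is stored in every nonclustered index as the row locator, so the two indexes above end up carrying exactly the same columns at the leaf level - which is why their costs are identical. If that's the explanation, a single-column index should cover this query just as well; a sketch to test the idea (a hypothetical third variant, not from the original post):

        -- CustomerId rides along automatically as the row locator,
        -- so this index still covers SELECT CustomerName FROM Customers.
        CREATE NONCLUSTERED INDEX IX_Customers_CustomerName
            ON dbo.Customers (CustomerName);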

    Read the article

  • Why does SQL not bring back results unless I set the varchar size?

    - by Tom
    I've got an SQL script that fetches results based on the colour passed to it, but unless I set the size of the variable defined as a varchar to (50), no results are returned. If I use:

        like '' + @Colour + '%'

    then it works, but I don't really want to use it in case it brings back results I don't need or want. The column FieldValue has a type of varchar(max) (which can't be changed, as this field can store different things). It is part of an AspDotNetStorefront package, so I can't really change the tables or field types.

    This doesn't work:

        declare @Colour VarChar
        set @Colour = 'blu'

        select *
        from dbo.MetaData as MD
        where MD.FieldValue = @Colour

    But this does work:

        declare @Colour VarChar (50)
        set @Colour = 'blu'

        select *
        from dbo.MetaData as MD
        where MD.FieldValue = @Colour

    The code is used in the following context, but should work either way:

        <query name="Products" rowElementName="Variant">
            <sql>
                <![CDATA[
                select * from dbo.MetaData as MD where MD.Colour = @Colour
                ]]>
            </sql>
            <queryparam paramname="@ProductID" paramtype="runtime" requestparamname="pID" sqlDataType="int" defvalue="0" validationpattern="^\d{1,10}$" />
            <queryparam paramname="@Colour" paramtype="runtime" requestparamname="pCol" sqlDataType="varchar" defvalue="" validationpattern=""/>
        </query>

    Any ideas?
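
    The behaviour comes from a T-SQL quirk: in a DECLARE, varchar without a length defaults to one character, so the assignment silently truncates the value and the equality never matches. A quick demonstration:

        DECLARE @Colour varchar;          -- no length given: defaults to varchar(1)
        SET @Colour = 'blu';
        SELECT @Colour AS Truncated;      -- returns 'b', so FieldValue = 'b' finds nothing

        DECLARE @Colour50 varchar(50);
        SET @Colour50 = 'blu';
        SELECT @Colour50 AS FullValue;    -- returns 'blu' as expected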

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks. I have a SQL Server 2008 database with a table for Tags. A tag is just an id and a name. The definition of the Tag table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (select ID from Tag where Name=@Name)
            if @ID is null
            begin
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            end
            else
            begin
                return @ID
            end

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that occasionally causes an error about entering duplicate keys (Name) into the DB. The error is:

        Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'.

    This makes sense; I'm just not sure how to fix it. If it were code, I would know how to lock the right areas. SQL Server is quite a different beast. The first question is: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me what the best way to do that is. Any help in the right direction will be greatly appreciated. Thanks in advance.
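
    For reference, the usual way to serialise this check-then-insert is to take the restrictive lock up front, inside a transaction, using locking hints. A sketch of that pattern applied to the proc above (not the only option - a TRY/CATCH around the INSERT that falls back to the SELECT on a duplicate-key error is another common approach):

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @ID int;

            BEGIN TRAN;
                -- UPDLOCK + HOLDLOCK takes (and holds) an update-mode range lock,
                -- so two concurrent callers cannot both see "not found".
                SELECT @ID = ID
                FROM dbo.Tag WITH (UPDLOCK, HOLDLOCK)
                WHERE Name = @Name;

                IF @ID IS NULL
                BEGIN
                    INSERT dbo.Tag(Name) VALUES (@Name);
                    SET @ID = SCOPE_IDENTITY();
                END
            COMMIT;

            RETURN @ID;
        END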

    Read the article

  • NHibernate CreateSqlQuery and object graph

    - by magellings
    Hello, I'm a newbie to NHibernate. I'd like to make one SQL query to the database, using joins across my three tables. I have an Application with many Roles, with many Users. I'm trying to get NHibernate to properly form the object graph starting with the Application object. For example, if I have 10 application records, I want 10 application objects, and those objects have their roles, which have their users.

    What I'm getting, however, resembles a Cartesian product, in which I have as many Application objects as total User records. I've looked into this quite a bit and am not sure if it is possible to form the application hierarchy correctly; I can only get the flattened objects to come back. It seems "maybe" possible, as in my research I've read about "grouped joins" and "hierarchical output" with an upcoming LINQ to NHibernate release. Again, though, I'm a newbie.

    [Update: based on Frans' comment in Ayende's post, I'm guessing what I want to do is not possible: http://ayende.com/Blog/archive/2008/12/01/solving-the-select-n1-problem.aspx ]

    Thanks for your time in advance.

        Session.CreateSQLQuery(@"SELECT a.ID, a.InternalName, r.ID, r.ApplicationID, r.Name, u.UserID, u.RoleID
                                 FROM dbo.[Application] a
                                 JOIN dbo.[Roles] r ON a.ID = r.ApplicationID
                                 JOIN dbo.[UserRoleXRef] u ON u.RoleID = r.ID")
            .AddEntity("app", typeof(RightsBasedSecurityApplication))
            .AddJoin("role", "app.Roles")
            .AddJoin("user", "role.RightsUsers")
            .List<RightsBasedSecurityApplication>().AsQueryable();

    Read the article

  • SQL exception: arithmetic overflow?

    - by MyHeadHurts
    In my program the user imports a date, and it works whenever the year is 2011, but if I try a date in 2010 I get this error, which is weird:

        [SqlException (0x80131904): Arithmetic overflow error converting int to data type numeric.]
        System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1950890
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4846875
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
        System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
        System.Data.SqlClient.SqlDataReader.HasMoreRows() +157
        System.Data.SqlClient.SqlDataReader.ReadInternal(Boolean setTimeout) +197
        System.Data.SqlClient.SqlDataReader.Read() +9
        System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping) +78
        System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) +164
        System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) +282
        System.Data.Common.LoadAdapter.FillFromReader(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) +19
        System.Data.DataTable.Load(IDataReader reader, LoadOption loadOption, FillErrorEventHandler errorHandler) +222
        System.Data.DataTable.Load(IDataReader reader) +14

    Here is the stored procedure:

        (
            @YearToGet int,
            @current datetime,
            @y int,
            @search datetime
        )
        AS
        SET @YearToGet = 2006;

        WITH Years AS (
            SELECT DATEPART(year, GETDATE()) [Year]
            UNION ALL
            SELECT [Year] - 1 FROM Years WHERE [Year] > @YearToGet
        ),
        q_00 as (
            select DIVISION, DYYYY,
                   sum(PARTY) as asofPAX,
                   sum(InsAmount) as asofSales
            from dbo.B101BookingsDetails
                 INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year
            where Booked <= CONVERT(int, DateAdd(year, (Years.Year - @y), @search))
              and DYYYY = Years.Year
            group by DIVISION, DYYYY, years.year
            having DYYYY = years.year
        ),
        q_01 as (
            select DIVISION, DYYYY,
                   sum(PARTY) as YEPAX,
                   sum(InsAmount) as YESales
            from dbo.B101BookingsDetails
                 INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year
            group by DIVISION, DYYYY, years.year
            having DYYYY = years.year
        ),
        q_02 as (
            select DIVISION, DYYYY,
                   sum(PARTY) as CurrentPAX,
                   sum(InsAmount) as CurrentSales
            from dbo.B101BookingsDetails
                 INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year
            where Booked <= CONVERT(int, @current)
              and DYYYY = (year(getdate()))
            group by DIVISION, DYYYY
        )
        select a.DIVISION, a.DYYYY,
               asofPAX, asofSales, YEPAX, YESales, CurrentPAX, CurrentSales,
               asofsales / ISNULL(NULLIF(yesales, 0), 1) as percentsales,
               CAST((asofpax) AS DECIMAL(5,1)) / yepax as percentpax
        from q_00 as a
             join q_01 as b on (b.DIVISION = a.DIVISION and b.DYYYY = a.DYYYY)
             join q_02 as c on (b.DIVISION = c.DIVISION)
             JOIN Years as d on (b.dyyyy = d.year)
        where A.DYYYY <> (year(getdate()))
        order by a.DIVISION, a.DYYYY;
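
    On the error itself: decimal(5,1) can hold at most 9999.9, so the CAST in the percentpax column overflows as soon as the summed PARTY value reaches 10000 - which, presumably, happens for a full year like 2010 but not yet for 2011. A quick illustration, with a wider precision as one possible fix:

        SELECT CAST(9999 AS decimal(5,1));     -- ok: 9999.0
        -- SELECT CAST(12345 AS decimal(5,1)); -- Arithmetic overflow error converting int to data type numeric
        SELECT CAST(12345 AS decimal(18,1));   -- ok: 12345.0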

    Read the article

  • Same query has nested loops when used with INSERT, but Hash Match without.

    - by AaronLS
    I have two tables; one has about 1,500 records and the other has about 300,000 child records - roughly a 1:200 ratio. I stage the parent table to a staging table, SomeParentTable_Staging, and then I stage all of its child records, but I only want the ones related to the records I staged in the parent table. So I use the query below to perform this staging, joining with the parent table's staged data:

        --Stage child records
        INSERT INTO [dbo].[SomeChildTable_Staging]
            ([SomeChildTableId]
            ,[SomeParentTableId]
            ,SomeData1
            ,SomeData2
            ,SomeData3
            ,SomeData4)
        SELECT [SomeChildTableId]
            ,D.[SomeParentTableId]
            ,SomeData1
            ,SomeData2
            ,SomeData3
            ,SomeData4
        FROM [dbo].[SomeChildTable] D
            INNER JOIN dbo.SomeParentTable_Staging I
                ON D.SomeParentTableID = I.SomeParentTableID;

    The execution plan indicates that the tables are being joined with a Nested Loop. When I run just the SELECT portion of the query, without the INSERT, the join is performed with a Hash Match. So the SELECT statement is the same, but in the context of an INSERT it uses the slower Nested Loop join. I have added a non-clustered index on D.SomeParentTableID so that there is an index on both sides of the join; I.SomeParentTableID is a primary key with a clustered index.

    Why does it use a nested loop for inserts that use a join? Is there a way to improve the performance of the join for the insert?
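
    One way to test whether the optimizer's join choice (rather than the insert itself) is the bottleneck is to pin the join algorithm with a query hint and compare timings. A diagnostic sketch, not a recommended permanent fix, since the hint overrides costing for all data sizes:

        INSERT INTO [dbo].[SomeChildTable_Staging]
            ([SomeChildTableId], [SomeParentTableId], SomeData1, SomeData2, SomeData3, SomeData4)
        SELECT D.[SomeChildTableId], D.[SomeParentTableId],
               D.SomeData1, D.SomeData2, D.SomeData3, D.SomeData4
        FROM [dbo].[SomeChildTable] D
            INNER JOIN dbo.SomeParentTable_Staging I
                ON D.SomeParentTableID = I.SomeParentTableID
        OPTION (HASH JOIN);   -- force the algorithm the plain SELECT chose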

    Read the article

  • MS Query returns data inside itself but does not export it to Excel

    - by kappa
    Hi, I'm having a strange problem with Excel and MS Query. I'm using MS Query to run a T-SQL query against Microsoft SQL Server 2000 and return the results to Excel. To do this, I open Excel, go to Data - Import external data - New database query, select my data source, paste the SQL script into MS Query, and click File - Return data to Microsoft Office Excel, leaving all the query options at their defaults.

    This works fine for many other Excel files, but this time, although MS Query shows the correct data when I paste the SQL script, after returning to Excel all I get is the query name in the upper-left cell, with no data returned. I fear the cause could be the SQL script, as it contains some advanced features like UNION ALL, UDFs and variables. Here's the script:

        declare @date smalldatetime
        set @date = dateadd(day, datediff(day, 0, getdate()), 0)

        select [date], sum([hours]) as [hours]
        from (
            select [date], [hours] from [server].[dbo].[udf] (84, '2010-01-01', @date)
            union all
            select [date], [hours] from [server].[dbo].[udf] (89, '2010-01-01', @date)
            union all
            select [date], [hours] from [server].[dbo].[udf] (93, '2010-01-01', @date)
        ) as [a]
        group by [date]
        order by [date] asc

    I can't get rid of the UDFs, as they do advanced grouping involving cursors and temporary tables, nor can I remove the variable, as the UDF won't accept dateadd(day, datediff(day, 0, getdate()), 0) as a parameter. Any ideas? Thanks in advance, Andrea.
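
    One thing worth trying - a hedged guess, not a confirmed fix: when a multi-statement batch works in MS Query but returns nothing to Excel, a frequent culprit is the "rows affected" messages emitted by the DECLARE/SET statements, which the Excel side can mistake for the first result set. SET NOCOUNT ON suppresses them:

        SET NOCOUNT ON;   -- suppress rows-affected messages before the real result set

        declare @date smalldatetime
        set @date = dateadd(day, datediff(day, 0, getdate()), 0)
        -- ...rest of the script unchanged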

    Read the article

  • Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options

    - by Dezigo
    I wrote a trigger:

        USE [TEST]
        GO
        /****** Object: Trigger [dbo].[TR_POSTGRESQL_UPDATE_YC] Script Date: 05/26/2010 08:54:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO

        ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC]
        ON [dbo].[BCT_CNTR_EVENTS]
        FOR INSERT
        AS
        BEGIN
            DECLARE @MOVE_TIME varchar(14);
            DECLARE @MOVE_TIME_FORMATED varchar(20);
            DECLARE @RELEASE_NOTE varchar(32);
            DECLARE @CMR_NUMBER varchar(15);
            DECLARE @MOVE_TYPE varchar(2);

            SELECT @MOVE_TIME = inserted.move_time
                 , @MOVE_TYPE = inserted.move_type
                 , @RELEASE_NOTE = inserted.release_note
                 , @CMR_NUMBER = inserted.cmr_number
            FROM inserted

            IF (@MOVE_TYPE = 'YC')
            BEGIN
                SET @MOVE_TIME_FORMATED = SUBSTRING(@MOVE_TIME,1,4) + '-' + SUBSTRING(@MOVE_TIME,5,2) + '-' + SUBSTRING(@MOVE_TIME,7,2) + ' 00:00:00'

                --UPDATE OpenQuery(POSTGRESQL_SERV,'SELECT visit_cmr,visit_timestamp,visit_pin FROM VISIT')
                --   SET visit_cmr = @RELEASE_NOTE
                -- WHERE visit_timestamp = @MOVE_TIME_FORMATED
                --   AND visit_pin = right(@CMR_NUMBER,5)
                --   AND visit_cmr IS NULL
            END

            SET NOCOUNT ON;
        END

    When I inserted a row, I got this error:

        Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query.

    Then, of course, I set ANSI_WARNINGS ON, but it did not work for me (the trigger targets a PostgreSQL linked server). I restarted the server; it did not work again.
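
    For context: the SET ANSI_NULLS ON / SET QUOTED_IDENTIFIER ON lines around the ALTER TRIGGER only control the settings captured when the trigger is created. ANSI_WARNINGS, by contrast, is taken from whichever session fires the trigger, and some older client libraries connect with it OFF. One commonly suggested workaround - a sketch, and the real fix may instead be the inserting client's connection settings - is to set it inside the trigger body:

        ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC]
        ON [dbo].[BCT_CNTR_EVENTS]
        FOR INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET ANSI_WARNINGS ON;   -- runtime option required by heterogeneous queries

            -- ... rest of the trigger body as before, including the
            -- UPDATE OPENQUERY(POSTGRESQL_SERV, ...) statement ...
        END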

    Read the article

  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation where simple queries seem to never finish. For instance:

        SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114   -- returns immediately
        SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114  -- never returns
        SELECT ArticleID FROM Article WHERE ProductGroupID=379114           -- never returns
        SELECT top 1000 ArticleID FROM Article                              -- returns immediately

    By 'returning' I mean 'in Query Analyzer the green check mark appears and it says "Query executed successfully"'. I sometimes get the rows painted to the grid in QA, but the query still runs on, waiting for my client to time out - 'sometimes', because this query shows the same behaviour:

        SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_,
               Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_,
               BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_,
               InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_,
               TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_,
               Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_,
               DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_,
               IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_,
               Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_,
               DistributorID AS Distrib25_18_0_, type AS Type18_0_
        FROM Article AS articles0_
        WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121)

    I have no idea what is going on. Probably SELECT is broken ;)

    I have a foreign key on ProductGroups:

        ALTER TABLE [dbo].[Article] WITH CHECK
            ADD CONSTRAINT [FK_ProductGroup_Articles] FOREIGN KEY([ProductGroupID])
            REFERENCES [dbo].[ProductGroup] ([ProductGroupID])
        GO
        ALTER TABLE [dbo].[Article] CHECK CONSTRAINT [FK_ProductGroup_Articles]
        GO

    There are some 6000 rows, and IsDiscontinued is a bit, not null - but leaving this condition out does not change the outcome. Can anyone tell me how to handle such a situation? More info, anyone?

    Additional info: this does not seem to be restricted to this foreign key, but affects all/some queries referencing this entity.
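
    Symptoms like this - small result sets return, anything touching more rows hangs - often point to lock blocking from an open transaction rather than to the foreign key itself. A diagnostic sketch, assuming the instance is SQL Server 2005 or later (on SQL Server 2000, EXEC sp_who2 shows the same information in its BlkBy column):

        -- Which sessions are blocked, and by whom?
        SELECT session_id, blocking_session_id, wait_type, wait_time, command
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;

        -- Any transactions left open in the current database?
        DBCC OPENTRAN;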

    Read the article

  • Update/Insert without select

    - by user348731
    I have this very simple class:

        public class ProductAttributeValuePortal
        {
            public virtual int ID { get; set; }
            public virtual Domain.Entity.Portals.ProductPortal Product { get; set; }
            public virtual Attribute Attribute { get; set; }
            public virtual string Value { get; set; }
        }

    with this very simple map:

        public ProductAttributeValueMap()
        {
            Table("DM.dbo.ProductAttributeValues");
            Id(x => x.ID, "ProductAttributeValue_id");
            References(x => x.Product);
            References(x => x.Attribute);
            Map(x => x.Value);
        }

    Each time I make an insert, NHibernate does a SELECT of the attribute, like:

        NHibernate: INSERT INTO MachineData.dbo.ProductAttributeValues (Value, Product_id, Attribute_id)
                    VALUES (@p0, @p1, @p2); select SCOPE_IDENTITY();
                    @p0 = '6745', @p1 = 39, @p2 = 'BSTD'
        NHibernate: SELECT attribute_.Attribute_id, attribute_.Name as Name21_, attribute_.AttributeType as Attribut3_21_,
                    attribute_.TagName as TagName21_, attribute_.MapTo as MapTo21_
                    FROM MachineShared.dbo.Attributes attribute_
                    WHERE attribute_.Attribute_id=@p0; @p0 = 'DLB'

    What am I doing wrong? And where do I find some really up-to-date books about NHibernate / Fluent NHibernate?

    Read the article

  • MS Access: Permission problems with views

    - by Keith Williams
    "I'll use an Access ADP" I said, "it's only a tiny project and I've got better things to do", I said, "I can build an interface really quickly in Access" I said. </sarcasm> Sorry for the rant, but it's Friday, I have a date in just under two hours, and I'm here late because this just isn't working - so, in despair, I turn to SO for help. Access ADP front-end, linked to a SQL Server 2008 database Using a SQL Server account to log into the database (for testing); this account is a member of the role, "Api"; this role has SELECT, EXECUTE, INSERT, UPDATE, DELETE access to the "Api" schema The "Api" schema is owned by "dbo" All tables have a corresponding view in the Api schema: e.g. dbo.Customer -- Api.Customers The rationale is that users don't have direct table access, but can deal with views as if they were tables I can log into SQL using my test login, and it works fine: no access to the tables, but I can select, insert, update and delete from the Api views. In Access, I see the views, I can open them, but whenever I try to insert or update, I get the following error: The SELECT permission was denied on the object '[Table name which the view is using]', database '[database name]', schema 'dbo' Crazy as it sounds, Access seems to be trying to access the underlying table rather than the view. Any ideas?

    Read the article

  • How to work with a CTE? There is some error related to the anchor.

    - by Shantanu Gupta
    I am creating a hierarchy representation of a column, but an error occurs:

        Msg 240, Level 16, State 1, Line 1
        Types don't match between the anchor and the recursive part in column "DISPLAY" of recursive query "CTE".

    I know there is some typecasting error, but I don't know how to remove it. Please don't only sort out my error - I need an explanation of why and when this error occurs. I am trying to sort the table on the basis of a sort column that I am introducing. I want to add '-' at every level and sort accordingly. Please help.

        WITH CTE (PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, DISPLAY, SORT, DEPTH)
        AS (
            SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID,
                   '-' AS DISPLAY,
                   '--' AS SORT,
                   0 AS DEPTH
            FROM dbo.L_CATEGORY_TYPE
            WHERE FK_CATEGORY_ID IS NULL
            UNION ALL
            SELECT T.PK_CATEGORY_ID, T.[DESCRIPTION], T.FK_CATEGORY_ID,
                   CAST(DISPLAY + T.[DESCRIPTION] AS VARCHAR(1000)),
                   '--' AS SORT,
                   C.DEPTH + 1
            FROM dbo.L_CATEGORY_TYPE T
                 JOIN CTE C ON C.PK_CATEGORY_ID = T.FK_CATEGORY_ID
            --SELECT T.PK_CATEGORY_ID, C.SORT+T.[DESCRIPTION], T.FK_CATEGORY_ID
            --, CAST('--' + C.SORT AS VARCHAR(1000)) AS SORT, CAST(DEPTH +1 AS INT) AS DEPTH
            --FROM dbo.L_CATEGORY_TYPE T JOIN CTE C ON C.FK_CATEGORY_ID = T.PK_CATEGORY_ID
        )
        SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, DISPLAY, SORT, DEPTH
        FROM CTE
        ORDER BY SORT
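
    For context on the error itself: in a recursive CTE, every column's type must match exactly between the anchor member and the recursive member. Here the anchor's '-' literal is typed varchar(1), while the recursive member produces varchar(1000), so the engine refuses. The standard fix is to cast the anchor literals to the same type; a sketch of just the anchor member:

        SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID,
               CAST('-'  AS VARCHAR(1000)) AS DISPLAY,  -- now matches the recursive CAST
               CAST('--' AS VARCHAR(1000)) AS SORT,     -- cast for safety if SORT changes later
               0 AS DEPTH
        FROM dbo.L_CATEGORY_TYPE
        WHERE FK_CATEGORY_ID IS NULL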

    Read the article

  • SQL Server 2005 check constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code:

        --Create 2 Sales tables with constraints based on the sale date
        create table Sales1(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010'))
        GO
        create table Sales2(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010'))
        GO

        --Insert some data into Sales1
        insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50)
        insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60)
        GO

        --Insert some data into Sales2
        insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10)
        insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20)
        GO

        --Create a view that combines these 2 tables
        create VIEW [dbo].[Sales]
        AS
        SELECT SaleDate, Amount FROM Sales1
        UNION ALL
        SELECT SaleDate, Amount FROM Sales2
        GO

        --Get the results

        --Query 1
        select * from Sales where SaleDate < '31 Mar 2010'
        -- the execution plan shows this query only looks at Sales2 (which is good)

        --Query 2
        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales where SaleDate < @SaleDate
        -- the execution plan shows this query looks at Sales1 and Sales2 (which is NOT good)

    Looking at the execution plans, you will see that the two queries are different. For Query 1, the only table accessed is Sales2 (which is good). For Query 2, both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried adding indexes on the SaleDate column, and that does not seem to help.
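
    With a literal, the optimizer can prove at compile time that the CK_Sales constraints exclude one table; with a variable, it must compile a plan valid for any value, so both tables appear (usually guarded by startup filters at runtime). One way to get the literal-style plan back - a sketch, assuming a build where OPTION (RECOMPILE) embeds the variable's runtime value (SQL Server 2008 and later behave this way) - is:

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'

        SELECT * FROM Sales
        WHERE SaleDate < @SaleDate
        OPTION (RECOMPILE);   -- the plan is compiled for this specific value,
                              -- so constraint elimination can remove Sales1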

    Read the article

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) and SQL Server. A long time ago I created a typed dataset for my application to save the data in the local XML file. Now I have a bool that switches between the server-based version and the local version; if true, my application gets the data from SQL Server. I'm not sure, but it seems that the SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("Select * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill the data from dbo.Categories into "Categories" (in my local, typed dataset), but it doesn't. It creates a new table with the name "Table". It looks like it can't handle the existing schema, and I can't figure it out. Where is the problem?

    By the way, the database request done this way isn't very useful, of course; it's just a simplified version for testing...

    Read the article

  • List all foreign key constraints that refer to a particular column in a specific table

    - by Sid
    I would like to see a list of all the tables and columns that refer (either directly or indirectly) to a specific column in the 'main' table via a foreign key constraint that is missing the ON DELETE = CASCADE setting. The tricky part is that there could be indirect relationships buried up to 5 levels deep (example: great-grandchild (FK3) -> grandchild (FK2) -> child (FK1) -> main table). We need to dig up the leaf table-columns, not just the very first level. The 'good' part about this is that execution speed isn't a concern; it'll be run on a backup copy of the production DB to fix any relational issues for the future.

    I did SELECT * FROM sys.foreign_keys, but that gives me the name of the constraint - not the names of the child/parent tables and the columns in the relationship (the juicy bits). Plus, the previous designer used short, non-descriptive/random names for the FK constraints, unlike our practice below. The way we're adding constraints into SQL Server:

        ALTER TABLE [dbo].[UserEmailPrefs] WITH CHECK
            ADD CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId] FOREIGN KEY([UserId])
            REFERENCES [dbo].[UserMasterTable] ([UserId])
            ON DELETE CASCADE
        GO
        ALTER TABLE [dbo].[UserEmailPrefs] CHECK CONSTRAINT [FK_UserEmailPrefs_UserMasterTable_UserId]
        GO

    The comments in this SO question inspired this question.
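
    For the first level at least, the column names live in sys.foreign_key_columns, which joins back to sys.foreign_keys, sys.tables and sys.columns; sys.foreign_keys also exposes delete_referential_action, so the missing-CASCADE filter is straightforward. A sketch of the one-level version (wrapping it in a recursive CTE that walks referenced_object_id -> parent_object_id would cover the deeper levels):

        SELECT fk.name                            AS constraint_name,
               tp.name                            AS child_table,
               cp.name                            AS child_column,
               tr.name                            AS parent_table,
               cr.name                            AS parent_column,
               fk.delete_referential_action_desc  AS on_delete
        FROM sys.foreign_keys fk
        JOIN sys.foreign_key_columns fkc ON fkc.constraint_object_id = fk.object_id
        JOIN sys.tables  tp ON tp.object_id = fk.parent_object_id
        JOIN sys.columns cp ON cp.object_id = fkc.parent_object_id
                           AND cp.column_id = fkc.parent_column_id
        JOIN sys.tables  tr ON tr.object_id = fk.referenced_object_id
        JOIN sys.columns cr ON cr.object_id = fkc.referenced_object_id
                           AND cr.column_id = fkc.referenced_column_id
        WHERE fk.delete_referential_action = 0;   -- 0 = NO ACTION, i.e. no cascade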

    Read the article
