Search Results

Search found 5743 results on 230 pages for 'max van heiningen'.

  • Getting an "XQuery [modify()]: Top-level attribute nodes are not supported" error while modifying XML

    - by sam
    DECLARE @mycur CURSOR
    DECLARE @id int
    DECLARE @ParentNodeName varchar(max)
    DECLARE @NodeName varchar(max)
    DECLARE @NodeText varchar(max)

    SET @mycur = CURSOR FOR SELECT * FROM @temp
    OPEN @mycur
    FETCH NEXT FROM @mycur INTO @id, @ParentNodeName, @NodeName, @NodeText
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @id -- sample statements
        PRINT @ParentNodeName
        PRINT @NodeName

        SET @x.modify('
            insert attribute status {sql:variable("@status")} as first
            into (/@ParentNodeName/@NodeName/child::*[position()=sql:variable("@status")])[1]
        ')

        FETCH NEXT FROM @mycur INTO @id, @ParentNodeName, @NodeName, @NodeText
    END
    DEALLOCATE @mycur

    Any idea why I am getting this error? The query works fine if I insert the path manually.
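    For context, a likely cause: inside the XQuery path, @ParentNodeName and @NodeName are parsed as attribute axis steps rather than as the T-SQL variables' values, so /@ParentNodeName literally selects a top-level attribute node. A minimal sketch of the usual workaround - element steps with local-name() predicates driven by sql:variable (same variable names as the fragment above; untested against the original schema):

        SET @x.modify('
            insert attribute status {sql:variable("@status")} as first
            into (/*[local-name() = sql:variable("@ParentNodeName")]
                   /*[local-name() = sql:variable("@NodeName")]
                   /child::*[position() = sql:variable("@status")])[1]
        ')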

    Read the article

  • Help with Strings in C

    - by Anon
    Given the char * variables name1, name2, and name3, write a fragment of code that assigns the largest value to the variable max (assume all three have already been declared and have been assigned values). I've tried and came up with this:

        if ((strcmp(name1, name2) > 0) && (strcmp(name1, name3) > 0)) {
            max = name1;
        } else if ((strcmp(name2, name1) > 0) && (strcmp(name2, name3) > 0)) {
            max = name2;
        } else if ((strcmp(name3, name1) > 0) && (strcmp(name3, name2) > 0)) {
            max = name3;
        }

    However, I get this error: "Your code is incorrect. You are not handling the situation where two or more strings are equal."
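    A sketch of the usual fix, using the same declarations as the exercise: track a running maximum, so equal strings simply leave the current best in place and every case is covered.

        /* Fragment only: assumes name1, name2, name3, and max are declared
           char * as in the exercise, and that <string.h> is included. */
        max = name1;
        if (strcmp(name2, max) > 0) max = name2;  /* promote strictly greater strings */
        if (strcmp(name3, max) > 0) max = name3;  /* equal strings change nothing */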

    Read the article

  • Does a table alias have an effect on execution time?

    - by RedFux227
    So I have this query:

        SELECT A.e_cd "INFLATE"
             , A.ccgpt "1"
             , ( SELECT J.comp
                 FROM JBB J
                 WHERE J.e_cd = A.e_cd
                   AND emplo_RCD = :1
                   AND J.Dte = ( SELECT MAX(J1.Dte)
                                 FROM JBB J1
                                 WHERE J.e_cd = J1.e_cd
                                   AND TO_CHAR(J1.Dte,'MM') <= A.mth_to
                                   AND TO_CHAR(J1.Dte,'YYYY') <= A.year )
                   AND J.seq = ( SELECT MAX(J2.seq)
                                 FROM PS_JOB J2
                                 WHERE J2.e_cd = J.e_cd
                                   AND J2.Dte = J.Dte ) ) "Company"
        FROM PPS A
        WHERE orcd = :2 AND rcd = :3 AND tx_cd = :4 AND year = :5

    and a second, otherwise identical query whose only difference is that the last subquery reuses the alias J1 instead of introducing J2:

        AND J.seq = ( SELECT MAX(J1.seq)
                      FROM PS_JOB J1
                      WHERE J1.e_cd = J.e_cd
                        AND J1.Dte = J.Dte )

    The first query runs in about 3.20 sec with 8,134 buffer gets; the second runs in about 1.73 sec with 7,006 buffer gets. So does the table alias somehow have an impact on execution time / buffer gets? Thanks in advance!

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus]

    I have a stored procedure in SQL Server that gets, and decrypts, a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59,377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole thing, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR(MAX)
        AS
        SET NOCOUNT ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active = 1
            order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor
        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder
            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor
            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END
            CLOSE creditCardData
            DEALLOCATE creditCardData
            insert into @tmpTable (CardId, DecryptedCard) values (@cardId, @decryptedCardData)
            set @decryptedCardData = ''
            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END
        select CardId, DecryptedCard FROM @tmpTable
        CLOSE creditCards
        DEALLOCATE creditCards
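    For what it's worth, the cursors force row-by-row work on a single scheduler, so the engine never sees a plan it could parallelize. A minimal set-based sketch of the same result, assuming the tables and columns used in the proc above (the FOR XML PATH subquery concatenates each card's decrypted chunks in valueOrder; untested):

        SELECT cc.cardId,
               (SELECT CONVERT(nvarchar(max),
                               DecryptByCert(Cert_Id('Oh-Nay-Nay'), d.EncryptedCard, @DecryptionKey))
                FROM CreditCardData d
                WHERE d.cardid = cc.cardId
                ORDER BY d.valueOrder
                FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)') AS DecryptedCard
        FROM CreditCards cc
        WHERE cc.companyId = @companyId
          AND cc.Active = 1
        ORDER BY cc.addedBy DESC;

    A single statement like this at least gives the optimizer the option of a parallel plan, though DecryptByCert itself remains a per-row cost either way.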

    Read the article

  • Split large text string into variable length strings without breaking words and keeping linebreaks a

    - by Frank
    I am trying to break a large string of text into several smaller strings, where each smaller string has a different maximum length. For example, given:

        "The quick brown fox jumped over the red fence. The blue dog dug under the fence."

    I would like code that can split this into smaller lines, where the first line has a max of 5 characters, the second a max of 11, and the rest a max of 20, resulting in this:

        Line 1: The
        Line 2: quick brown
        Line 3: fox jumped over the
        Line 4: red fence.
        Line 5: The blue dog
        Line 6: dug under the fence.

    All this in C# or MSSQL - is it possible?
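    In C#, a minimal greedy sketch (a hypothetical helper, not from the question): it consumes words in order and starts a new line whenever the next word would overflow the current limit, reusing the last limit once the list runs out. The sentence-based break after "red fence." in the example above is not reproduced - that part would need extra rules.

        using System;
        using System.Collections.Generic;

        static class Wrapper
        {
            // Greedy word-wrap with a per-line length limit; the last limit repeats.
            public static List<string> SplitByLimits(string text, IList<int> limits)
            {
                var lines = new List<string>();
                var current = "";
                int i = 0; // index into limits
                foreach (var word in text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
                {
                    int max = limits[Math.Min(i, limits.Count - 1)];
                    if (current.Length == 0)
                        current = word;                        // first word on the line
                    else if (current.Length + 1 + word.Length <= max)
                        current += " " + word;                 // still fits after a space
                    else
                    {
                        lines.Add(current);                    // close the line, move on
                        i++;
                        current = word;
                    }
                }
                if (current.Length > 0) lines.Add(current);
                return lines;
            }
        }

    Called as Wrapper.SplitByLimits(text, new[] { 5, 11, 20 }), this yields "The", "quick brown", "fox jumped over the", and so on.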

    Read the article

  • Converting SQL statement into Linq

    - by DMan
    I'm trying to convert the following to a LINQ to SQL statement in C#. Can anyone give me a hand? Basically my table keeps a record of all history of changes, such that for each seedlot the record with the max created date is the most recent and the correct one to show.

        SELECT reports.*
        FROM [dbo].[Reports] reports
        WHERE reports.createdDate IN
        (
            SELECT MAX(report_max_dates.createdDate)
            FROM [dbo].[Reports] report_max_dates
            GROUP BY report_max_dates.Lot
        )

    So far this is what I have:

        var result = (from report in db.Reports
                      where report.createdDate ==
                            (from report_max in db.Reports
                             group report_max by report_max.Lot into report_max_grouped
                             select report_max_grouped).Max()
                      select report);

    I can't figure out how to get the MAX dates for all reports and how to do an IN statement on report.createdDate. Thanks, Dman
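    One way to express the IN, sketched against the names above (untested against the real model): Contains on a grouped subquery is what LINQ to SQL translates into a SQL IN.

        // Max createdDate per Lot, then an IN-style membership test via Contains.
        var latestDates = from r in db.Reports
                          group r by r.Lot into g
                          select g.Max(x => x.createdDate);

        var result = from report in db.Reports
                     where latestDates.Contains(report.createdDate)
                     select report;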

    Read the article

  • MS Sql Get Top xx Blog Record From Umbraco by parameter...

    - by ccppjava
    The following SQL gets what I need:

        SELECT TOP (50) [nodeId]
        FROM [dbo].[cmsContentXml]
        WHERE [xml] like '%creatorID="29"%'
          AND [xml] like '%nodeType="1086"%'
        ORDER BY [nodeId] DESC

    I need to pass in the numbers as parameters, so I have the following:

        exec sp_executesql N'SELECT TOP (@max) [nodeId]
        FROM [dbo].[cmsContentXml]
        WHERE [xml] like ''%creatorID="@creatorID"%''
          AND [xml] like ''%nodeType="@nodeType"%''
        ORDER BY [nodeId] DESC',
        N'@max int,@creatorID int,@nodeType int',
        @max=50, @creatorID=29, @nodeType=1086

    which, however, returns no records. Any idea?
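    Likely relevant: @creatorID and @nodeType sit inside string literals, so sp_executesql never substitutes them - the LIKE patterns literally contain the text @creatorID. A sketch that builds the patterns by concatenation instead (same parameter names as above; untested):

        exec sp_executesql N'SELECT TOP (@max) [nodeId]
        FROM [dbo].[cmsContentXml]
        WHERE [xml] like ''%creatorID="'' + CAST(@creatorID AS nvarchar(20)) + ''"%''
          AND [xml] like ''%nodeType="'' + CAST(@nodeType AS nvarchar(20)) + ''"%''
        ORDER BY [nodeId] DESC',
        N'@max int, @creatorID int, @nodeType int',
        @max = 50, @creatorID = 29, @nodeType = 1086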

    Read the article

  • Parametrize the WHERE clause?

    - by ControlFlow
    Hi, Stack Overflow! I need to write a stored procedure for SQL Server 2008 that performs a huge select query, and I need to filter its results by specifying the filtering type via the procedure's parameters (parameterize the where clause). I found solutions like this:

        create table Foo(
            id bigint,
            code char,
            name nvarchar(max))
        go
        insert into Foo values (1,'a','aaa'), (2,'b','bbb'), (3,'c','ccc')
        go
        create procedure Bar
            @FilterType nvarchar(max),
            @FilterValue nvarchar(max)
        as
        begin
            select * from Foo as f
            where case @FilterType
                      when 'by_id' then f.id
                      when 'by_code' then f.code
                      when 'by_name' then f.name
                  end
                = case @FilterType
                      when 'by_id' then cast(@FilterValue as bigint)
                      when 'by_code' then cast(@FilterValue as char)
                      when 'by_name' then @FilterValue
                  end
        end
        go
        exec Bar 'by_id', '1';
        exec Bar 'by_code', 'b';
        exec Bar 'by_name', 'ccc';

    But it doesn't work when the columns have different data types... It's possible to cast all the columns to nvarchar(max) and compare them as strings, but I think that would cause performance degradation... Is it possible to parameterize the where clause in a stored procedure without using things like EXEC sp_executesql (dynamic SQL, etc.)?
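    One well-known static-SQL alternative, sketched against the Foo table above: give each filter its own typed, optional parameter, so no cross-type cast is needed at all. OPTION (RECOMPILE), available on SQL Server 2008, lets the optimizer prune the "is null" branches per call; untested here.

        create procedure Bar2
            @id   bigint        = null,
            @code char(1)       = null,
            @name nvarchar(max) = null
        as
        begin
            select *
            from Foo as f
            where (@id   is null or f.id   = @id)
              and (@code is null or f.code = @code)
              and (@name is null or f.name = @name)
            option (recompile); -- plan is built for the actual parameter values
        end
        go
        exec Bar2 @id = 1;
        exec Bar2 @code = 'b';
        exec Bar2 @name = 'ccc';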

    Read the article

  • Automatically truncate to MaxLength during mapping

    - by aceinthehole
    I have a schema that has the max length property set on all of its elements, of various sizes. I am mapping to it and expect that the max length will be exceeded quite often. Is there a way to tell BizTalk to automatically truncate without having to go in and manually configure a functoid for each element? Is there a purpose for the max length property other than validation?

    Read the article

  • Java OutOfMemoryError message changes when trying to create Arrays of different sizes

    - by Gordon
    In the question by DKSRathore, "How to simulate the Out Of memory: Requested array size exceeds VM limit", some odd behavior was noted when creating arrays. When creating an array of size Integer.MAX_VALUE, an exception was thrown with the error java.lang.OutOfMemoryError: Requested array size exceeds VM limit. However, when an array was created with a size less than the max but still above the virtual machine memory limit, the error message read java.lang.OutOfMemoryError: Java heap space. Testing further, I managed to narrow down where the error message changes:

        long[] l = new long[2147483645]; // exception message reads: Requested array size exceeds VM limit
        long[] l = new long[2147483644]; // exception message reads: Java heap space

    I increased my virtual machine memory and still produced the same result. Has anyone any idea why this happens? Some extra info: Integer.MAX_VALUE = 2147483647.

    Edit: Here's the code I used to find the value, might be helpful.

        int max = Integer.MAX_VALUE;
        boolean done = false;
        while (!done) {
            try {
                max--;
                // Throws an error
                long[] l = new long[max];
                // Exit if an error is no longer thrown
                done = true;
            } catch (OutOfMemoryError e) {
                if (!e.getMessage().contains("Requested array size exceeds VM limit")) {
                    System.out.println("Message changes at " + max);
                    done = true;
                }
            }
        }

    Read the article

  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem that I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevant fields are as follows: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today (items off list) as follows:

        Select ItemID, ListID, ItemName, convert(varchar(10),MAX(date),101) as date,
               COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName
        Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1
        Order By ListID, ItemName, COUNT(ItemName)

    Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem is when I try to add the ranking, to see what yesterday's rank was. I tried the following:

        Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date,
               COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName, ranking
        Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1
        Order By ListID, ItemName, ranking, COUNT(ItemName)

    This returns a great deal more records than the previous query, so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery and getting records where the ItemID occurs yesterday but not today - but then I don't know how to get the count any more. Appreciation in advance for any help with this.

    ======== SOLVED ==============

        Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date,
               COUNT(ItemName) as days_on_list
        from Table T
        Where date = DATEADD("d",-1,convert(varchar(10),getdate(),101))
          and ListID = 1
          and T.ItemID Not In (select T.ItemID
                               from Table T
                               join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID
                               where T.date = DATEADD("d",-1,convert(varchar(10),getdate(),101))
                                 and T2.date = convert(varchar(10),getdate(),101)
                                 and T.ListID = 1)
        Group by ItemID, ListID, ItemName, ranking

    Basically, what I did was create a subquery that finds all items that appear on both days, then select items that appeared yesterday but are not in that set. With that in place I was able to do the aggregate function and grouping correctly. I would not be surprised if this is more convoluted than necessary, but I understand it and can modify it as needed, and performance doesn't seem to be an issue. Thanks everyone for the assist.

    Read the article

  • check if a tree is a binary search tree

    - by TimeToCodeTheRoad
    I have written the following code to check if a tree is a binary search tree. Please help me check the code:

        class Pair {
            boolean isTrue;
            int min;
            int max;
        }

        public boolean isBst(BNode v) {
            return isBST1(v).isTrue;
        }

        public Pair isBST1(BNode v) {
            if (v == null)
                return new Pair(true, Integer.MIN_VALUE, Integer.MAX_VALUE);
            if (v.left == null && v.right == null)
                return new Pair(true, v.data, v.data);
            Pair pLeft = isBST1(v.left);
            Pair pRight = isBST1(v.right);
            boolean check = pLeft.max < v.data && v.data <= pRight.min;
            Pair p = new Pair();
            p.isTrue = check && pLeft.isTrue && pRight.isTrue;
            p.min = pLeft.min;
            p.max = pRight.max;
            return p;
        }

    Note: this function checks if a tree is a binary search tree.

    Read the article

  • logrotate isn't rotating a particular log file (and I think it should be)

    - by Max Williams
    Hi all. For a particular app, I have log files in two places. One of the places has just one log file that I want to use with logrotate; for the other location I want to use logrotate on all log files in that folder. I've set up an entry called millionaire-staging in /etc/logrotate.d and have been testing it by calling logrotate -f millionaire-staging. Here's my entry:

        #/etc/logrotate.d/millionaire-staging
        compress
        rotate 1000
        dateext
        missingok
        sharedscripts
        copytruncate
        /var/www/apps/test.millionaire/log/staging.log {
            weekly
        }
        /var/www/apps/test.millionaire/shared/log/*log {
            size 40M
        }

    So, for the first folder, I want to rotate weekly (this seems to have worked fine). For the other, I want to rotate only when the log files get bigger than 40 meg. When I look in that folder (using the same locator as in the logrotate config), I can see a file in there that's 54M and which hasn't been rotated:

        $ ls -lh /var/www/apps/test.millionaire/shared/log/*log
        -rw-r--r-- 1 www-data root  33M 2010-12-29 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.access-log
        -rw-r--r-- 1 www-data root  54M 2010-09-10 16:57 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.debug-log
        -rw-r--r-- 1 www-data root  53K 2010-12-14 15:48 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.error-log
        -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.access-log
        -rw-r--r-- 1 www-data root  16K 2010-12-17 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.error-log
        -rw-r--r-- 1 deploy deploy   0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stderr.log
        -rw-r--r-- 1 deploy deploy   0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stdout.log

    Some of the other log files in that folder have been rotated though:

        $ ls -lh /var/www/apps/test.millionaire/shared/log
        total 91M
        -rw-r--r-- 1 www-data root  33M 2010-12-29 15:05 test.millionaire.charanga.com.access-log
        -rw-r--r-- 1 www-data root  54M 2010-09-10 16:57 test.millionaire.charanga.com.debug-log
        -rw-r--r-- 1 www-data root  53K 2010-12-14 15:48 test.millionaire.charanga.com.error-log
        -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 test.millionaire.charanga.com.ssl.access-log
        -rw-r--r-- 1 www-data root  16K 2010-12-17 15:00 test.millionaire.charanga.com.ssl.error-log
        -rw-r--r-- 1 deploy deploy   0 2010-12-29 14:49 unicorn.stderr.log
        -rw-r--r-- 1 deploy deploy  41K 2010-12-29 11:03 unicorn.stderr.log-20101229.gz
        -rw-r--r-- 1 deploy deploy   0 2010-12-29 14:49 unicorn.stdout.log
        -rw-r--r-- 1 deploy deploy 1.1K 2010-10-15 11:05 unicorn.stdout.log-20101229.gz

    I think what might have happened is that I first ran this config with a pattern matching *.log, and that only rotated the two files that ended in .log (as opposed to -log). Then, when I changed the config and ran it again, it won't do any more, since it thinks it has already had its weekly run, or something. Can anyone see what I'm doing wrong? Is it to do with those top folders being owned by root rather than deploy, do you think? Thanks, max

    Read the article

  • Advanced control of recursive parser in scala

    - by Jeriho
    val uninterestingthings = ".".r
    val parser = "(?ui)(regexvalue)".r | (uninterestingthings ~> parser)

    This recursive parser will try to parse "(?ui)(regexvalue)".r until the end of input. Is there a way in Scala to prohibit parsing once some defined number of characters has been consumed by "uninterestingthings"?

    UPD: I have one poor solution:

        object NonRecursiveParser extends RegexParsers with PackratParsers {
          var max = -1
          val maxInput2Consume = 25

          def uninteresting: Regex = {
            if (max < maxInput2Consume) {
              max += 1
              ("." + "{0," + max.toString + "}").r
            } else {
              throw new Exception("I am tired")
            }
          }

          lazy val value = "itt".r

          def parser: Parser[Any] = (uninteresting ~> value) | parser

          def parseQuery(input: String) = {
            try {
              parse(parser, input)
            } catch {
              case e: Exception =>
            }
          }
        }

    Disadvantages:
    - not all members are lazy vals, so PackratParser will have some time penalty
    - constructing regexes on every "uninteresting" method call - time penalty
    - using exceptions to control the program - code style and time penalty

    Read the article

  • jquery two ajax call asynchrounsly in asp.net not working...

    - by eswaran
    Hi all, I have developed a web application in ASP.NET. In this application I use jQuery AJAX on some pages. When I make two AJAX calls asynchronously, they do not behave as I expect: even when the faster call finishes first, I only see its result once the slower call has also completed. I mean, I see both results at the same time, not one by one.

    For example, I have 3 pages:

    1) main.aspx - makes the two AJAX requests.
    2) totalCount.aspx - finds the total count (takes max 7 seconds to return, as the corresponding table contains 3 lakh records).
    3) rowCount.aspx - finds the row details (takes max 5 seconds to return a result).

    Because of this, I planned to make async calls with jQuery AJAX in ASP.NET. Here is my code:

        function getResult() {
            getTotalCount();
            getRows();
        }

        // Takes max 7 seconds to complete. Since it takes 7 seconds, its result
        // should display second (after the rows are displayed), but both results
        // appear at the same time, once the slower AJAX call has completed.
        function getTotalCount() {
            $.ajax({
                type: "POST",
                async: true,
                url: "totalCount.aspx?data1=" + document.getElementById("data").value,
                success: function(responseText) {
                    $("#totalCount").attr("value", responseText);
                }
            });
        }

        // Takes max 5 seconds to complete. After finishing, its result should
        // display first (before the total count), but both results appear at
        // the same time, once the slower AJAX call has completed.
        function getRows() {
            $.ajax({
                type: "POST",
                url: "getrows.aspx?data1=" + document.getElementById("data").value,
                async: true,
                success: function(responseText) {
                    $("#getRows").attr("value", responseText);
                }
            });
        }

    I would like to know if there is any way to make truly concurrent AJAX calls in ASP.NET. I searched the net and found some posts saying we cannot do this in ASP.NET (ref link: http://www.mail-archive.com/[email protected]/msg55125.html). If we can do this in ASP.NET, how? Thanks, r.eswaran.
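    One likely culprit, offered as an assumption rather than a diagnosis: ASP.NET serializes concurrent requests from the same session when they use read-write session state, so the two AJAX calls queue behind each other on the server. If totalCount.aspx and rowCount.aspx don't write to the session, marking them read-only lets them run in parallel - a sketch:

        <%-- In the @ Page directive of each AJAX endpoint page: --%>
        <%@ Page Language="C#" EnableSessionState="ReadOnly" %>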

    Read the article

  • Access: strange results with queries against MDB file

    - by Craig Johnston
    I am running the following SQL against an MDB file, a copy of which is located here: http://hotfile.com/dl/40641614/2353dfc/test.mdb.html (perfectly clean file, no macros or viruses)

        SELECT datediff("d", MAX(invoice.date), Now) As Date_Diff
             , MAX(invoice.date) AS max_invoice_date
             , customer.number AS customer_number
        FROM invoice
        INNER JOIN customer ON invoice.customer_number = customer.number
        GROUP BY customer.number

    If the following was added:

        HAVING datediff("d", MAX(invoice.date), Now) > 365

    would this simply exclude rows with Date_Diff <= 365? What should be the effect of the HAVING clause here?

    Read the article

  • What's the best way to convert a .eps (CMYK) to a .jpg (RGB) with Image Magick

    - by Slinky
    Hi All, I have a bunch of .eps files (CMYK) that I need to convert to .jpg (RGB) files. The following command sometimes gives me under- or over-saturated .jpg images, when compared to the source EPS file:

        $cmd = "convert -density 300 -quality 100% -colorspace RGB ".$epsURL." -flatten -strip ".$convertedURL;

    Is there a smarter way to do this, such that the converted image will have the same qualities as the source EPS file? Here is an example of the source file info:

        Image: rejm.eps
        Format: PS (PostScript)
        Class: DirectClass
        Geometry: 537x471
        Base geometry: 1074x941
        Type: ColorSeparation
        Endianess: Undefined
        Colorspace: CMYK
        Channel depth:
          Cyan: 8-bit
          Magenta: 8-bit
          Yellow: 8-bit
          Black: 8-bit
        Channel statistics:
          Cyan:    Min: 0 (0)  Max: 255 (1)         Mean: 161.913 (0.634955)  Standard deviation: 72.8257 (0.285591)
          Magenta: Min: 0 (0)  Max: 255 (1)         Mean: 184.261 (0.722591)  Standard deviation: 75.7933 (0.297229)
          Yellow:  Min: 0 (0)  Max: 255 (1)         Mean: 70.6607 (0.277101)  Standard deviation: 39.8677 (0.156344)
          Black:   Min: 0 (0)  Max: 195 (0.764706)  Mean: 34.4382 (0.135052)  Standard deviation: 38.1863 (0.14975)
        Total ink density: 292%
        Colors: 210489
        Rendering intent: Undefined
        Resolution: 28.35x28.35
        Units: PixelsPerCentimeter
        Filesize: 997.727kb
        Interlace: None
        Background color: white
        Border color: #DFDFDFDFDFDF
        Matte color: grey74
        Page geometry: 537x471+0+0
        Dispose: Undefined
        Iterations: 0
        Compression: Undefined
        Orientation: Undefined
        Signature: 8ea00688cb5ae496812125e8a5aea40b0f0e69c9b49b2dc4eb028b22f76f2964
        Profile-iptc: 19738 bytes

    Thanks
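    A sketch of the profile-based route, which generally preserves appearance better than a bare -colorspace flip: assign the source CMYK profile, then convert to sRGB. The profile filenames here are assumptions - substitute whichever CMYK press profile and sRGB profile you have locally.

        // Hypothetical profile paths; the first -profile assigns CMYK, the second converts to sRGB.
        $cmd = "convert -density 300 ".$epsURL.
               " -profile USWebCoatedSWOP.icc -profile sRGB.icc".
               " -flatten -strip -quality 95 ".$convertedURL;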

    Read the article

  • Configuring the expiry time for the messages destined to the "Expired message address" in Hornetq

    - by Rohit
    I have configured a message expiry destination in HornetQ as below:

        <address-setting match="#">
            <dead-letter-address>jms.queue.error</dead-letter-address>
            <expiry-address>jms.queue.error</expiry-address>
            <max-delivery-attempts>3</max-delivery-attempts>
            <redelivery-delay>2000</redelivery-delay>
            <max-size-bytes>10485760</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>BLOCK</address-full-policy>
            <redistribution-delay>60000</redistribution-delay>
        </address-setting>

    The messages do get redirected to the expiry address once the expiration time is exceeded, but they then live indefinitely on the expiry address. Is there a way to provide an expiry time for these messages, so they live only a limited time on the expiry address?
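    One avenue worth checking, stated as an assumption since it depends on the HornetQ version: when a message is moved to an expiry address its original expiration is cleared, and an expiry-delay address-setting (where supported) assigns a default expiration to messages that arrive without one. A narrower match on the expiry queue might then look like this sketch (jms.queue.reallyDead is a hypothetical final destination):

        <address-setting match="jms.queue.error">
            <!-- assumption: gives messages arriving without an expiration a 1-hour TTL -->
            <expiry-delay>3600000</expiry-delay>
            <expiry-address>jms.queue.reallyDead</expiry-address>
        </address-setting>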

    Read the article

  • Insert multiple records from an XML string, differing on one parameter, in SQL Server 2008

    - by Rohit
    Below is a query which inserts records into the SimpleDictationProfileMapping table after reading them from an XML string. Currently it inserts records with a single DictationCaptureProfileID, namely @dictationCaptureProfileId:

        INSERT INTO SimpleDictationProfileMapping
        (
            DictationCaptureProfileID,
            DictationProfileMappingAttributeID,
            DictationProfileMappingAttributeValue
        )
        SELECT @dictationCaptureProfileId,
               row.value('@attrId','varchar(max)'),
               row.value('@value', 'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row );

    Now I want to insert multiple rows in which DictationCaptureProfileID varies while the other 2 values stay the same, so that when the parent changes, all child values change too. Instead of the single @dictationCaptureProfileId, the first column should come from:

        SELECT DictationCaptureProfileID
        FROM DictationCaptureProfile
        WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID

    Please tell me how to achieve this.
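    A sketch of one way to do it, assuming the tables named above: cross join the shredded XML rows with the set of profile IDs, so each attribute row is inserted once per matching profile.

        INSERT INTO SimpleDictationProfileMapping
        (
            DictationCaptureProfileID,
            DictationProfileMappingAttributeID,
            DictationProfileMappingAttributeValue
        )
        SELECT p.DictationCaptureProfileID,
               row.value('@attrId', 'varchar(max)'),
               row.value('@value',  'varchar(max)')
        FROM @simpleDictationCaptureProfileMappings.nodes('/simpleMappingAtribute/attribute') AS d ( row )
        CROSS JOIN
        (
            SELECT DictationCaptureProfileID
            FROM DictationCaptureProfile
            WHERE SystemDictationCaptureProfileID = @systemDictationCaptureProfileID
        ) AS p;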

    Read the article

  • Sql server query using function and view is slower

    - by Lieven Cardoen
    I have a table with an xml column named Data:

        CREATE TABLE [dbo].[Users](
            [UserId] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [nvarchar](max) NOT NULL,
            [LastName] [nvarchar](max) NOT NULL,
            [Email] [nvarchar](250) NOT NULL,
            [Password] [nvarchar](max) NULL,
            [UserName] [nvarchar](250) NOT NULL,
            [LanguageId] [int] NOT NULL,
            [Data] [xml] NULL,
            [IsDeleted] [bit] NOT NULL,...

    In the Data column there's this xml:

        <data>
          <RRN>...</RRN>
          <DateOfBirth>...</DateOfBirth>
          <Gender>...</Gender>
        </data>

    Now, executing this query:

        SELECT UserId FROM Users
        WHERE data.value('(/data/RRN)[1]', 'nvarchar(max)') = @RRN

    after clearing the cache takes (if I execute it a couple of times in a row) 910, 739, 630, 635, ... ms. Now, a DB specialist told me that adding a function and a view and changing the query would make searching for a user with a given RRN much faster. But instead, these are the results when I execute with the changes from the DB specialist: 2584, 2342, 2322, 2383, ...

    This is the added function:

        CREATE FUNCTION dbo.fn_Users_RRN(@data xml)
        RETURNS varchar(100)
        WITH SCHEMABINDING
        AS
        BEGIN
            RETURN @data.value('(/data/RRN)[1]', 'varchar(max)');
        END;

    The added view:

        CREATE VIEW vwi_Users WITH SCHEMABINDING AS
        SELECT UserId, dbo.fn_Users_RRN(Data) AS RRN
        FROM dbo.Users

    Indexes:

        CREATE UNIQUE CLUSTERED INDEX cx_vwi_Users ON vwi_Users(UserId)
        CREATE NONCLUSTERED INDEX cx_vwi_Users__RRN ON vwi_Users(RRN)

    And then the changed query:

        SELECT UserId FROM Users
        WHERE dbo.fn_Users_RRN(Data) = '59021626919-61861855-S_FA1E11'

    Why is the solution with a function and a view going slower?
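    Possibly useful context, hedged as an assumption about the setup above: the rewritten query still targets the Users table, and outside Enterprise Edition the optimizer will not match it to the indexed view on its own, so every row pays the scalar function call. Querying the view directly and forcing the index avoids both costs:

        SELECT UserId
        FROM vwi_Users WITH (NOEXPAND)
        WHERE RRN = '59021626919-61861855-S_FA1E11';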

    Read the article

  • Problem with Clojure function

    - by Bozhidar Batsov
    Hi, everyone. I started working yesterday on the Project Euler problems in Clojure, and I have a problem with one of my solutions I cannot figure out. I have this function:

        (defn find-max-palindrom-in-range [beg end]
          (reduce max
                  (loop [n beg
                         result []]
                    (if (>= n end)
                      result
                      (recur (inc n)
                             (concat result
                                     (filter #(is-palindrom? %)
                                             (map #(* n %) (range beg end)))))))))

    I try to run it like this:

        (find-max-palindrom-in-range 100 1000)

    and I get this exception:

        java.lang.Integer cannot be cast to clojure.lang.IFn
        [Thrown class java.lang.ClassCastException]

    which I presume means that at some place I'm trying to evaluate an Integer as a function. I cannot find this place, however, and what puzzles me more is that everything works if I simply evaluate it like this:

        (reduce max
                (loop [n 100
                       result []]
                  (if (>= n 1000)
                    result
                    (recur (inc n)
                           (concat result
                                   (filter #(is-palindrom? %)
                                           (map #(* n %) (range 100 1000))))))))

    (I've just stripped down the function definition and replaced the parameters with constants.) Thanks in advance for your help, and sorry that I'm probably bothering you with an idiotic mistake on my part. Btw I'm using Clojure 1.1 and the newest SLIME from ELPA.

    Read the article
