Search Results

Search found 8213 results on 329 pages for 'safari bug'.


  • Weird caching bug where an old version of a web page (same filename) is still served (Windows 2008 R2, Tomcat 5.5)

    - by user717236
    This is definitely one of the strangest errors I've seen, and it occurs intermittently. I am running Windows 2008 R2, IIS 7.5, and Apache Tomcat 5.5, by the way. Let's say I have two machines, A and B. Both A and B are running Windows 2008 R2. I have a web page called login.jsp on machine A, and I have a newer, modified version of login.jsp on machine B. Now, I copy the new login.jsp from machine B and paste it to machine A, replacing the older version with the same filename. For whatever reason, when I hit the web page in my browser from a local machine (i.e. my laptop), it still serves the old version of the page, even though it's been replaced! I tried restarting IIS and Apache Tomcat. That didn't work. I tried restarting machine A and that didn't work. I tried a cold reboot of my local machine and that didn't work, either. So, I spoke to someone I can confide in for help. He said to open the login.jsp page in Notepad, put a space in, save the file, and try again. Sure enough, it worked. He said he hasn't seen it in Windows 2003, but it is occurring with Windows 2008. What I don't understand is why that worked, what this error actually is, and how I can diagnose and resolve it for good instead of relying on the hack my colleague proposed. Is this bug related to Windows 2008, Windows 2008 R2, Tomcat, or something else entirely? Anyone else have the same problem? Thank you for any help.
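    A minimal sketch of scripting the colleague's workaround instead of editing the file in Notepad each time. The path is a placeholder, and it assumes Tomcat recompiles a JSP when its source timestamp changes (its usual behaviour); if that is not enough, rewriting the file as the colleague did remains the fallback.

    ```javascript
    // Hypothetical Node.js sketch: bump login.jsp's modification time on machine A
    // so the container treats the copied file as changed and recompiles it.
    const fs = require("fs");

    const jsp = "C:/Tomcat5.5/webapps/myapp/login.jsp"; // placeholder path
    const now = new Date();
    fs.utimesSync(jsp, now, now); // same mtime effect as "open, add a space, save"
    console.log("Updated modification time of", jsp);
    ```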

    Read the article

  • checksecurity / setuid changes, is this a bug or did somebody break in?

    - by Fabian Zeindl
    I received a mail from checksecurity on my Ubuntu 12.04 server with the following content:
    --- setuid.today 2012-06-03 06:48:09.892436281 +0200
    +++ /var/log/setuid/setuid.new.tmp 2012-06-17 06:47:51.376597730 +0200
    @@ -30,2 +30,2 @@
    - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudo
    - 131904 4755 2 root root 71280 Wed May 16 07:23:08.0000000000 2012 ./usr/bin/sudoedit
    + 143967 4755 2 root root 71288 Fri Jun 1 05:53:44.0000000000 2012 ./usr/bin/sudo
    + 143967 4755 2 root root 71288 Fri Jun 1 05:53:44.0000000000 2012 ./usr/bin/sudoedit
    @@ -42 +42 @@
    - 130507 666 1 root root 0 Sat Jun 2 18:04:57.0752979385 2012 ./var/spool/postfix/dev/urandom
    + 130507 666 1 root root 0 Mon Jun 11 08:47:16.0919802556 2012 ./var/spool/postfix/dev/urandom
    First I was worried, then I realized that the change was actually two weeks ago; I think there was a sudo update back then. Since checksecurity runs in /etc/cron.daily, I wondered why I only got that email now. I looked into /var/log/setuid/ and found the following files:
    total 32
    -rw-r----- 1 root adm 816 Jun 17 06:47 setuid.changes
    -rw-r----- 1 root adm 228 Jun 3 06:48 setuid.changes.1.gz
    -rw-r----- 1 root adm 328 May 27 06:47 setuid.changes.2.gz
    -rw-r----- 1 root root 1248 May 20 06:47 setuid.changes.3.gz
    -rw-r----- 1 root adm 4473 Jun 17 06:47 setuid.today
    -rw-r----- 1 root adm 4473 Jun 3 06:48 setuid.yesterday
    The obvious thing that confuses me is that the file setuid.yesterday is not from yesterday (Jun 16). Is this a bug?

    Read the article

  • Duplicate GET request from multiple IPs - can anyone explain this?

    - by dwq
    We've seen a pattern in our webserver access logs which we're having trouble explaining. A GET request appears in the access log which is a legitimate, but private, URL, as part of normal e-commerce website use (by private, we mean there is a unique key in a URL form variable generated specifically for that customer session). Then a few seconds later we get hit with an identical request maybe 10-15 times within the space of a second. The duplicate requests are all from different IP addresses. The UserAgent for the duplicates is the same for all of them (but different from the original request). The reverse DNS lookup on the IPs for all the duplicate requests resolves to the same large hosting company. Can anyone think of a scenario that would explain this?
    EDIT 1: Here's an example that's probably anonymised beyond being any actual use, but it might give an idea of the sort of pattern we're seeing (it's from a search query, as they sometimes get duplicated too):
    xx.xx.xx.xx - - [21/Jun/2013:21:42:57 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "http://www.ourdomain.com/index.html" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    UPDATE 2: Sometimes it is part of a checkout flow that's duplicated too, so I'd think Twitter is unlikely.
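    To characterise the pattern before guessing at a cause, one option is to group the log by request line and user agent and look for bursts coming from many distinct IPs. A rough Node.js sketch, assuming a combined-format access log at access.log (the path and the threshold are placeholders):

    ```javascript
    // Hypothetical sketch: find requests repeated by several different client IPs.
    const fs = require("fs");
    const readline = require("readline");

    // combined log format: ip ident user [time] "request" status bytes "referer" "agent"
    const lineRe = /^(\S+) \S+ \S+ \[[^\]]+\] "([^"]*)" \d+ \S+ "[^"]*" "([^"]*)"/;
    const groups = new Map(); // "request | agent" -> Set of client IPs

    const rl = readline.createInterface({ input: fs.createReadStream("access.log") });
    rl.on("line", (line) => {
      const m = lineRe.exec(line);
      if (!m) return;
      const [, ip, request, agent] = m;
      const key = request + " | " + agent;
      if (!groups.has(key)) groups.set(key, new Set());
      groups.get(key).add(ip);
    });
    rl.on("close", () => {
      for (const [key, ips] of groups) {
        if (ips.size >= 5) console.log(`${ips.size} distinct IPs sent: ${key}`);
      }
    });
    ```

    If every burst resolves to the same hosting provider, that is consistent with some automated service replaying URLs it has seen shortly after a real visitor requested them, though the log alone cannot confirm which service it is.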

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.
    A. Functionality Added or Changed:
    MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low-space condition. mysqlbackup now monitors disk space when running backup commands, and users can now specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
    MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on the progress report options, see here.
    B. Bugs Fixed:
    When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)
    MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of the backup. (Bug #16924499)
    apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during the backup. (Bug #16721824, Bug #16903951)
    apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
    If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
    After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
    The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
    When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
    A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)
    Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

  • Windows 7 BSOD with Service Exception Error and Randomly Reboots

    - by Jason Shultz
    I've got a Windows 7 laptop that BSODs with a SYSTEM_SERVICE_EXCEPTION error when I connect to a wireless network. It also does it when it's just sitting still doing nothing. I ran BlueScreenView and here are the last four BSODs from today:
    ==================================================
    Dump File : 051210-18642-01.dmp
    Crash Time : 5/12/2010 8:36:14 AM
    Bug Check String : SYSTEM_SERVICE_EXCEPTION
    Bug Check Code : 0x0000003b
    Parameter 1 : 00000000`c000001d
    Parameter 2 : fffff880`00000000
    Parameter 3 : fffff880`06fda160
    Parameter 4 : 00000000`00000000
    Caused By Driver : Ntfs.sys
    Caused By Address : Ntfs.sys+7f030
    File Description :
    Product Name :
    Company :
    File Version :
    Processor : x64
    Computer Name :
    Full Path : C:\Windows\Minidump\051210-18642-01.dmp
    Processors Count : 2
    Major Version : 15
    Minor Version : 7600
    ==================================================
    Dump File : 051210-16551-01.dmp
    Crash Time : 5/12/2010 8:41:04 AM
    Bug Check String : SYSTEM_SERVICE_EXCEPTION
    Bug Check Code : 0x0000003b
    Parameter 1 : 00000000`c000001d
    Parameter 2 : fffff880`00000000
    Parameter 3 : fffff880`06f40160
    Parameter 4 : 00000000`00000000
    Caused By Driver : ntoskrnl.exe
    Caused By Address : ntoskrnl.exe+70600
    File Description : NT Kernel & System
    Product Name : Microsoft® Windows® Operating System
    Company : Microsoft Corporation
    File Version : 6.1.7600.16539 (win7_gdr.100226-1909)
    Processor : x64
    Computer Name :
    Full Path : C:\Windows\Minidump\051210-16551-01.dmp
    Processors Count : 2
    Major Version : 15
    Minor Version : 7600
    ==================================================
    Dump File : 051210-17269-01.dmp
    Crash Time : 5/12/2010 8:45:51 AM
    Bug Check String : SYSTEM_SERVICE_EXCEPTION
    Bug Check Code : 0x0000003b
    Parameter 1 : 00000000`c000001d
    Parameter 2 : fffff880`00000000
    Parameter 3 : fffff880`07db1160
    Parameter 4 : 00000000`00000000
    Caused By Driver : Ntfs.sys
    Caused By Address : Ntfs.sys+7f030
    File Description :
    Product Name :
    Company :
    File Version :
    Processor : x64
    Computer Name :
    Full Path : C:\Windows\Minidump\051210-17269-01.dmp
    Processors Count : 2
    Major Version : 15
    Minor Version : 7600
    ==================================================
    Dump File : 051210-19453-01.dmp
    Crash Time : 5/12/2010 5:46:25 PM
    Bug Check String : SYSTEM_SERVICE_EXCEPTION
    Bug Check Code : 0x0000003b
    Parameter 1 : 00000000`c000001d
    Parameter 2 : fffff880`00000000
    Parameter 3 : fffff880`02625160
    Parameter 4 : 00000000`00000000
    Caused By Driver : win32k.sys
    Caused By Address : win32k.sys+2d4201
    File Description :
    Product Name :
    Company :
    File Version :
    Processor : x64
    Computer Name :
    Full Path : C:\Windows\Minidump\051210-19453-01.dmp
    Processors Count : 2
    Major Version : 15
    Minor Version : 7600
    ==================================================

    Read the article

  • Trying to delete a directory stored on a Windows server, mounted on a Mac

    - by AdamG
    I am trying to delete a directory stored on a Windows 2008 R2 server, mounted on a Mac as a network home (10.8.5). The directory was created by Safari and stores temporary internet files. I need to be able to delete this folder on logout from a Mac bash script. The Terminal on the Mac shows the directory as empty:
    36W-FacRm-02:History lwickham$ cd /home/lwickham/Library/Caches/Metadata/Safari/History
    36W-FacRm-02:History lwickham$ ls -al
    total 0
    drwx------ 1 lwickham CGPS\Domain Users 264 Nov 8 09:24 .
    drwx------ 1 lwickham CGPS\Domain Users 264 Nov 8 09:28 ..
    However, on the Windows server it has a single 0 KB file that doesn't start with a "." and yet is invisible to the Mac:
    E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History>dir
    Volume in drive E is FacultyUsers2
    Volume Serial Number is 8C17-4EF3
    Directory of E:\FacultyHome2\lwickham\Library\Caches\Metadata\Safari\History
    11/08/2013 09:24 AM <DIR> .
    11/08/2013 09:24 AM <DIR> ..
    11/07/2013 04:28 PM 0 http?%2F%2Fwww.google.com%2Furl?sa=t&rct=j&q=&esrc=s&source=web&cd=6&ved=0CFsQFjAF&url=http%253A%252F%252Fwww.usbanklocations.com%252Fhsbc-bank-usa-96th-street-branch.html&ei=5vR7UtmXEPjfsATe0YCIBA&usg=AFQjCNF9ypKbpYbXRng00FY3W8Y6cF1Tiw&bvm=bv.56146854,d.
    1 File(s) 0 bytes
    2 Dir(s) 514,231,967,744 bytes free
    All my attempts to delete the dir from the Mac have failed:
    36W-FacRm-02:History lwickham$ rm -fr /home/lwickham/Library/Caches/Metadata/Safari/History/*
    36W-FacRm-02:History lwickham$ rm -frd /home/lwickham/Library/Caches/
    rm: /home/lwickham/Library/Caches//Metadata/Safari/History: Directory not empty
    rm: /home/lwickham/Library/Caches//Metadata/Safari: Directory not empty
    rm: /home/lwickham/Library/Caches//Metadata: Directory not empty
    rm: /home/lwickham/Library/Caches/: Directory not empty
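    One way to see exactly what the SMB server reports, and to remove it, is to enumerate the directory from a script rather than relying on ls or Finder. A rough Node.js sketch (assuming a reasonably recent Node for fs.rmSync), using the path from the post:

    ```javascript
    // Hypothetical sketch: print every name the server actually returns for the
    // directory, delete each entry, then remove the directory itself.
    const fs = require("fs");
    const path = require("path");

    const dir = "/home/lwickham/Library/Caches/Metadata/Safari/History";
    for (const name of fs.readdirSync(dir)) {
      console.log("removing", JSON.stringify(name)); // shows odd characters, if any
      fs.rmSync(path.join(dir, name), { force: true, recursive: true });
    }
    fs.rmdirSync(dir);
    ```

    If readdir on the Mac side also returns nothing while the Windows dir listing still shows the 0 KB file, the mismatch is at the SMB layer (for example a name the Mac client cannot represent), and deleting it server-side may be the only reliable option.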

    Read the article

  • Do you have ideas for a workaround for this known bug in Visual Studio 2010's add-in model?

    - by Jesper
    When developing add-ins for Visual Studio 2010 the following line fails:
    CommandBarEvents handler = (EnvDTE.CommandBarEvents)m_VSStudio.DTE.Events.get_CommandBarEvents(popup);
    (Update: forgot to mention that m_VSStudio is of the type DTE2.) Here popup is of the type CommandBarPopup; for the type CommandBarControl it works, though. The line fails with this exception:
    System.Runtime.InteropServices.COMException (0x80020003): Member not found. (Exception from HRESULT: 0x80020003 (DISP_E_MEMBERNOTFOUND))
    The exact same line worked in Visual Studio 2008. The purpose of the line is to get a handler which handles click events when one clicks the popup. After some searching I found this link: http://connect.microsoft.com/VisualStudio/feedback/details/524335/events-get-commandbarevents-exception-on-submenus-reproducible-bug-addin which basically states that it is a known bug which will not be fixed because there is a workaround. But unfortunately it does not state the workaround :( I would be extremely pleased if anybody has a great idea for a workaround. The reason why I want to listen to the click events is that I want to show or hide the submenu items (CommandBarControl) given some condition when one clicks a menu (CommandBarPopup). So a workaround that uses something else than the click event would also be appreciated.

    Read the article

  • PHP doesn't properly URL-decode POST values, or is the bug somewhere else?

    - by Vilx-
    I have a PHP website that integrates with PayPal. As part of its normal operation it sends a POST request to a user-defined URL about every transaction (and some other events). It's called IPN, for those who know. All was/is working just fine, until we moved to a new hosting provider last week. This naturally resulted in downtime, changing DNS entries, PHP misconfigurations, etc. In short - PayPal was unable to send the notifications for a few days. Now, this is not that bad, because there is an option just for that - you can log into PayPal, go to the appropriate menu item, and have it resend the failed notifications. Which we did, and it resulted in over 400 error emails. The problem is that normally the email of the receiver ($_POST['business']) is received by PHP as my@business.com, but when resending it comes out as my%40business.com. And the request validation goes crazy. Obviously either something somewhere is missing a call to urldecode() or something is doing one too many urlencode(). But who? Is this a bug in PHP? I've never had such a problem before, not with any POST data ever. Perhaps PayPal has a bug? Wouldn't be surprising, but then this should have been caught ages ago. Or perhaps I'm doing something wrong and I should really be calling urldecode() myself?

    Read the article

  • Is this a bug or something I am doing wrong?

    - by Brian Gideon
    I cannot imagine how this is anything other than a bug, but since I do not currently have a login for the MS Connect website I will ask here first. I have Visual Studio 2008 SP1 with all post-SP1 hotfixes I could find relating to the crash installed. Can you reproduce the following crash?
    1) Create a new "WPF Application" project using VB as the language (though I suspect it will happen in C# as well).
    2) Enter the following code in the Window1.xaml.vb file:
    Friend MustInherit Class A
    End Class
    Friend MustInherit Class A(Of T)
    Inherits A
    End Class
    3) Add a namespace declaration to the Window1.xaml file so that it looks like the following:
    <Window x:Class="Window1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="clr-namespace:WpfApplication1"
    Title="Window1" Height="300" Width="300">
    <Grid>
    </Grid>
    </Window>
    4) Now attempt to edit the xaml file by opening a new xml tag via the < character.
    5) CRASH!
    Edit: Microsoft has confirmed this bug. The issue still exists in VS2010 beta 2, but will be fixed in the next release.

    Read the article

  • Any solution to IE8 bug rendering error when hiding elements?

    - by Magnar
    Bug: when hiding an element with JavaScript in IE8, the margins of other, still-visible elements alongside it are ignored. This bug was introduced with IE8, since it works as expected in IE6+7 (and other browsers).
    <html>
    <head>
    <style>
    #a, #b, #c, #d {background: #ccf; padding: 4px; margin-bottom: 8px;}
    </style>
    </head>
    <body>
    <div id="a">a</div>
    <div id="b">b</div>
    <div id="c">c</div>
    <div id="d">d</div>
    <script>
    setTimeout(function () {
    document.getElementById("b").style.display = "none";
    }, 1000);
    </script>
    </body>
    </html>
    When running this code, notice how a and c have a margin of 8 between them in normal browsers, but a margin of 0 in IE8. Remove the padding, and IE8 behaves like normal. Remove the timeout and IE8 behaves like normal. A border behaves the same way. I've been working with IE bugs for the last 10 years, but this has me stumped. The solution for now is to wrap the divs, and apply the margin to the outer element and the other styles to the inner. But that's reminiscent of horrible IE6 workarounds. Any better solutions?

    Read the article

  • How can I generate a FindBugs report that shows me the bugs removed between two revisions in the bug database?

    - by David Deschenes
    I am attempting to execute a combination of the FindBugs commands filterBugs and convertXmlToText, against a bug database that I created, to generate a report that shows me all of the bugs removed between two revisions of the system that I am working on. Unfortunately, the resulting report does not show any bug details. It appears that convertXmlToText throws away all bugs that are dead (aka inactive)... the exact set of bugs that I'd like to see. Below is what I see when I pass the results of the filterBugs command to the mineBugHistory command:
    build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./mineBugHistory
    seq version time classes NCSS added newCode fixed removed retained dead active
    0 r39764 1271169398000 438 74069 0 64 0 0 0 0 64
    1 r39921 1271186932000 441 74333 0 0 22 0 42 0 42
    2 r40149 1271185876000 449 74636 0 0 3 0 39 22 39
    3 r40344 1271180332000 452 74789 0 0 7 0 32 25 32
    4 r40558 1271179612000 452 74806 0 0 1 0 31 32 31
    5 r40793 1271178818000 464 75610 0 0 20 0 11 33 11
    6 r41016 1271176154000 467 75712 0 0 4 0 7 53 7
    7 r41303 1271175616000 481 76931 0 0 7 0 0 57 0
    8 r41558 1271175026000 486 77793 0 0 0 0 0 64 0
    What I'd like to see in the HTML report is the list of the 64 bugs that are shown as active in version r39764 (sequence #0). Below is the command line that I am using to generate the HTML report:
    build/findbugs/bin> ./filterBugs -before r39921 -absent r41558 -active:false ../../../mmfg/bugDB-2.xml | ./convertXmlToText -html:fancy-hist.xsl > ../../../mmfg/bugDB-removed.html

    Read the article

  • PHP: weird bug where an array is not an array!

    - by iko
    I've been going mad trying to figure out why an array would not be an array in PHP. For a reason I can't understand I have a bug in a Smarty class. The code is this:
    $compiled_tags = array();
    for ($i = 0, $for_max = count($template_tags); $i < $for_max; $i++) {
        $this->_current_line_no += substr_count($text_blocks[$i], "\n");
        // I tried array_push instead to see -- the bug is here
        array_push($compiled_tags, $this->_compile_tag($template_tags[$i]));
        //$compiled_tags[] = $this->_compile_tag($template_tags[$i]);
        $this->_current_line_no += substr_count($template_tags[$i], "\n");
    }
    The error message is: Warning: array_push() expects parameter 1 to be array, integer given in .... Or, before, with []: Warning: Cannot use a scalar value as an array in .... I tried a var_dump on $compiled_tags and as soon as I enter the for loop it is not an array anymore but an integer. I tried renaming the variable, but same problem. I'm sure it is something simple that I missed but I can't figure it out. Any help is (as always) welcome!

    Read the article

  • How do I block Safari users from my website?

    - by user360322
    I would like to block all users of Safari from visiting my Flash game web site. I would like them to see a picture of someone being punched in the face instead of the games. My understanding is that you can use JavaScript to do it, but I don't want to use some heavy framework like jQuery. Is there a way to do it in just a single line or two of JavaScript?
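    A small self-contained sketch of the detection (the image path is a placeholder). Chrome and some other browsers also put "Safari" in their user agent string, so those tokens are excluded; note that user-agent sniffing is easy to spoof, so treat this as cosmetic rather than real blocking.

    ```javascript
    // Redirect Safari visitors to a static image page; everyone else is untouched.
    (function () {
      var ua = navigator.userAgent;
      var isSafari = /safari/i.test(ua) && !/chrome|chromium|crios|android/i.test(ua);
      if (isSafari) {
        window.location.replace("/punched-in-the-face.jpg"); // placeholder image
      }
    })();
    ```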

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.
    Acknowledgements:
    Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.
    Scenario: A product is developed, RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to pay to recover costs; the work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.
    As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: “You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch).” - one of my many slaps on the wrist from Bill Heys.
    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100% done, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don’t you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity.
    “Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching.” - Willy-Peter Schaub, VS ALM Ranger, Microsoft
    “A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.”
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well; although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA). - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note   Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. -Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line, this is where your stable code lives and where any build has known entities, always passes and has a happy test that passes as well? Many development projects consist of, a single “Main” line of source and artifacts. This is good; at least there is source control . There are however a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? I just use labels instead of branches? That's a lot of “what if’s”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   Its standard practice for large projects with lots of developers to use Feature branching and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has the permissions changes so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2 as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developer’s work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration) Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (we will talk about release branches later) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet, we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brain washing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch instead that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create a RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and you Team is happy with the result you can create a Release. This would consist of doing the final Build and Packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an Audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not Auditable and if a dispute was raised by the customer you can produce a verifiable version of the source code for an independent party to check. 
Rare I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch the RTM branch should never be deleted, or only deleted according to your companies legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web test and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2 you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging print 2 back into Main.” covers, what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s) Note: This can be a very costly if the current sprint has already had a lot of work completed as it will be lost. The choice will depend on the complexity and severity of the bug(s) and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3 as they are uncomplicated but severe bugs. Figure: Real world issue where a bug needs fixed in the current release. 
    If the bug(s) is urgent enough then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch, are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main and Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2, maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010

    Read the article

  • tail stops displaying output when the log is rotated

    - by Rudy Vissers
    I have to tail the log of a server (servicemix) and log rotation is enabled. As soon as the rotation happens, tail stops displaying output. I did some investigation and it is a known bug in Debian (see the Debian bug report). The bug has been around for a long time. Does anyone know if this bug is going to be fixed in Ubuntu? I'm on Ubuntu 12.04 64-bit. I don't have to mention that this bug is total hell! Every time I have the problem, I have to interrupt the tail command and re-execute it!
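    If plain tail -f is being used, GNU tail's -F option (shorthand for --follow=name --retry) reopens the file by name after rotation and may be enough; if the packaged tail itself is affected by the bug, the same follow-by-name idea can be scripted. A rough Node.js sketch (the log path is a placeholder):

    ```javascript
    // Hypothetical follow-by-name tailer: poll the file, and reopen it when the
    // inode changes (i.e. after logrotate has swapped in a new file).
    const fs = require("fs");

    const logPath = "/var/log/servicemix/servicemix.log"; // placeholder
    let fd = fs.openSync(logPath, "r");
    let pos = fs.fstatSync(fd).size; // start at the end, like tail
    let ino = fs.fstatSync(fd).ino;

    setInterval(() => {
      let st;
      try {
        st = fs.statSync(logPath);
      } catch (e) {
        return; // file briefly missing mid-rotation; try again next tick
      }
      if (st.ino !== ino) { // rotated: reopen by name and read from the top
        fs.closeSync(fd);
        fd = fs.openSync(logPath, "r");
        ino = st.ino;
        pos = 0;
      }
      const size = fs.fstatSync(fd).size;
      if (size > pos) {
        const buf = Buffer.alloc(size - pos);
        fs.readSync(fd, buf, 0, buf.length, pos);
        pos = size;
        process.stdout.write(buf.toString("utf8"));
      } else if (size < pos) {
        pos = size; // file was truncated in place
      }
    }, 1000);
    ```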

    Read the article

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given to write unit tests which mock out all dependencies, and are thus completely isolated, is to ensure that when a bug exists, only the unit tests for that unit will fail. (Obviously, an integration test may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use of ensuring that a test only fails due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.
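    For reference, here is a tiny example of the kind of isolated test being discussed, with the dependency replaced by a hand-rolled fake (all names are invented for the illustration). The claimed benefit is that a failure here can only implicate OrderService, not the real collaborator.

    ```javascript
    // Minimal isolated unit test: the collaborator is faked, so the test exercises
    // only the unit's own logic.
    const assert = require("assert");

    class OrderService {
      constructor(gateway) { this.gateway = gateway; }
      checkout(amountCents) {
        if (amountCents <= 0) throw new Error("invalid amount");
        return this.gateway.charge(amountCents);
      }
    }

    // fake dependency: records calls instead of contacting a real payment gateway
    const charged = [];
    const fakeGateway = { charge(amount) { charged.push(amount); return "ok"; } };

    const service = new OrderService(fakeGateway);
    assert.strictEqual(service.checkout(500), "ok");
    assert.deepStrictEqual(charged, [500]);
    console.log("isolated test passed");
    ```

    Whether that isolation buys much beyond what is described above (you usually already know which unit you just touched) is exactly the question being raised.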

    Read the article
