Search Results

Search found 8785 results on 352 pages for 'bug reporting'.

Page 73/352

  • How can I stop IIS7 (integrated mode) from reporting a 404 before I get a chance to handle it?

    - by Gary McGill
    I have an ASP.NET MVC 2 application running on IIS7 in integrated mode. I'm trying to do my own 404 handling, but IIS7 seems to be intercepting the error and returning its own 404 message to the client before I get a chance to handle it. I'm not having much luck coming at the problem from a programming perspective over on Stack Overflow, so I wondered if maybe it's a configuration problem. Is there something I have to do to tell IIS to let me handle my own errors? (I'm trying to use Application_Error in my global.asax but it's not even getting there). There is a custom error page defined (at the machine level, I think) for 404 but when I tried removing that it didn't really help - it simply showed a bald one-liner message instead. My code still didn't get a look in. Is it perhaps something to do with the routing? Maybe the "mysite.com/nosuchpage" URL isn't being routed to me and that's why I don't get a chance to intercept it? Do I need to do something so that ALL requests get routed through my app?
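    A minimal sketch (my own, not from the question) of the routing piece that usually has to be in place before any custom 404 handling can run: a catch-all route so that URLs like mysite.com/nosuchpage still reach managed code instead of being answered by IIS first. The "Errors" controller and "NotFound" action names are hypothetical.

        // Global.asax.cs - sketch assuming MVC 2 defaults
        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });

                // Catch-all: anything that matched nothing above still reaches our code,
                // so we can return our own 404 view instead of letting IIS answer first.
                routes.MapRoute(
                    "CatchAll",
                    "{*url}",
                    new { controller = "Errors", action = "NotFound" });
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }

    In the hypothetical NotFound action you would set Response.StatusCode = 404 and Response.TrySkipIisCustomErrors = true so that IIS7 in integrated mode does not replace the body with its own error page; setting existingResponse="PassThrough" on the <httpErrors> section in web.config has a similar effect.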

    Read the article

  • Installed 4GB memory but Windows XP 32 bit only reporting 2GB?

    - by AnthonyWJones
    I've just taken an existing XP Pro 32-bit system that had only 0.5GB of memory installed and maxed it out to 4GB. The BIOS reports the 4GB of RAM, however when XP is booted and I look at the computer properties only 2GB of RAM is reported. Can anyone explain this? Before we go up any blind alleys, the /3GB switch is not the answer here; I have no need for a single process to use more than 2GB of memory. I'm wondering if 32-bit XP Pro is deliberately limited to 2GB. I seem to remember seeing an excellent table on a Microsoft site listing all the various SKUs of Windows and what each one was limited to, but I can't seem to find that table now.

    Read the article

  • On RouterOS, how will transparent proxying (with DNAT) affect reporting of netflows?

    - by Tim
    I have a box running Mikrotik RouterOS, which is set up to do transparent web proxying, as described here. In short, this means that I have a firewall rule for destination NAT causing any port 80 traffic to get redirected to port 8080 on the router, which is received by the Mikrotik local web proxy. The local web proxy then makes the web request on the client's behalf, in this case to a parent web proxy server (which in turn does the real web request). My question is, how will this two-part process get reported in the logging of traffic flow information (netflows)? Looking at the logged information, what I seem to be seeing is this:

        • One flow recorded from client machine (private IP) to remote proxy (8080)
        • Another flow recorded from router to remote proxy (8080)

    The original request that the client made to port 80 isn't recorded. I want to write code to analyse traffic usage, so I want to be sure I'm not losing information if I discard the latter of these.

    Read the article

  • My network manager stopped running shortly after I installed VirtualBox 4.0, reporting that the network manager is not running

    - by wangolo joel
    I ran into a snag when I downloaded and installed VirtualBox 4.0 on my Ubuntu 10.04. When I started VirtualBox, I saw that the network icon on my desktop showed a red X indicating I am disconnected, and this happened shortly after installing VirtualBox 4.0. I have searched the web and found some people who have faced the same problem, but the solutions they were given - restarting, starting, and stopping the network manager - didn't work when I tried them. I was even thinking of uninstalling VirtualBox, but I still have no network connection. I truly need your help.

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time: W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day such as calendar year, month, week, quarter, etc., allowing it to be used as the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is based on the same date range.

    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the date range defined, which is a start date plus a rolling interval - generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6, and this same date range is validated and used for the P6 Reporting Database.

    The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 is the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run, the Reporting Database picks up this change and also adjusts the max date in W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage the dates in these tables will progress forward with the rolling interval.

    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data beyond that 3-year range is bucketed into the last day in the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally that amount of granularity years into the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
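    As a quick illustration of the rolling-range arithmetic described above (my own sketch, not part of the product - the Extended Schema services compute this internally), a three-year interval works out like this:

        using System;

        class RollingRangeExample
        {
            static void Main()
            {
                DateTime today = new DateTime(2010, 4, 1);   // "today is April 1, 2010"
                int rollingYears = 3;

                // The min date is always back-filled to the beginning of the current year.
                DateTime minDate = new DateTime(today.Year, 1, 1);

                // The max date is today plus the rolling interval, so it moves forward
                // each time the services run: 4/1/2013 today, 4/2/2013 tomorrow.
                DateTime maxDate = today.AddYears(rollingYears);

                Console.WriteLine("Min: {0:M/d/yyyy}  Max: {1:M/d/yyyy}", minDate, maxDate);
            }
        }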

    Read the article

  • Firefox 11.0 Right-Click Dropdown Menu (and Menu Options at Window Top) Flickering - Potential Bug?

    - by nicorellius
    About a week ago my Windows 7 machine started exhibiting odd Firefox behavior. When I right-click on something the drop-down menu flickers, and when the mouse is hovering over a field, I can't really see it. This same behavior happens in other ways too, like in the top File, Edit, View, etc. menus. I click them and they flicker when I hover over them, and then they get obscured when I hover over the one I want. I did some research and didn't really find anything. Some people found similar behavior in applications THEY were building, but mine is a global problem. It happens on all sites, as far as I can tell, which is why I suspect it's my Firefox installation. I thought about reinstalling, but if there is another option, I'd like to explore that too. The behavior is not really capture-able, so I don't have anything except words to describe it. Sorry ;-( Thanks.

    Read the article

  • What data should I (can I) gather to prove a .net bug in third party software?

    - by gef05
    Our users are using a third-party system, and every so often - a user might go all day without this happening, other days they may see it two or three times in seven hours - a red X will appear on screen rather than a button, field, or control. This is a .NET system. Looking online I can see that this is a .NET error. The problem is, our devs didn't write it, and the vendor wants proof that the problem resides with their system, not our PCs. What can we do to gather information on such an issue that would allow us to make our case to the vendor and get them to create a fix? I'm thinking of data to gather - anything from OS and memory to the .NET version installed, etc. But I'd also like to know about error logging - whether it's possible to log errors that occur in third-party systems.
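    As a sketch of the kind of evidence gathering being asked about (assumption-heavy: the log path and class name are made up, and it only captures anything if you can get your own code into the process, e.g. by launching the vendor's main form from a small host of your own), the standard Windows Forms exception hooks plus a few environment facts look like this. Note that the red X itself comes from an exception swallowed inside a control's paint routine, so these hooks may not see that specific failure - they are for the surrounding evidence the vendor is likely to ask for.

        using System;
        using System.IO;
        using System.Windows.Forms;

        static class EnvironmentLogger
        {
            // Hypothetical log path - adjust for your environment.
            const string LogPath = @"C:\Temp\vendor-app-errors.log";

            public static void Hook()
            {
                Application.ThreadException += (s, e) => Log(e.Exception.ToString());
                AppDomain.CurrentDomain.UnhandledException +=
                    (s, e) => Log(e.ExceptionObject.ToString());
            }

            public static void Log(string message)
            {
                File.AppendAllText(LogPath, string.Format(
                    "{0:u} OS={1} CLR={2} WorkingSet={3} bytes{4}{5}{4}",
                    DateTime.UtcNow,
                    Environment.OSVersion,
                    Environment.Version,
                    Environment.WorkingSet,
                    Environment.NewLine,
                    message));
            }
        }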

    Read the article

  • What type of bug causes a program to slowly use more processor power and then all of a sudden go to 100%?

    - by reinier
    I was hoping to get some good ideas as to what might be causing a really nasty bug. This is a program which is transmitting data over a socket, and also receives messages back. I could explain lots more, but I don't think this will help here. I'm just searching for hypothetical problems which can cause the following behaviour: the program runs; processor time slowly accumulates (till around 60%); all of a sudden (could be after 30 but also after 60 seconds) the processor time shoots to 100%; the program halts completely. In my syslog it always ends on one line with a memory allocation (something similar to: myArray = new byte[16384]) in the same thread. Now here is the weird part: if I set the debugger anywhere... it immediately stops on that line. So just the act of setting a breakpoint made the thread continue (it wasn't running, since I saw no log output anymore). I was thinking 'deadlock', but that would not cause 100% processor power - if anything, the opposite. Also, setting a breakpoint would not cause a deadlock to end. Does anyone have a theoretical suggestion as to what kind of 'construct' might cause this effect (apart from 'bad programming')? ;^) Thanks. EDIT: I just noticed that by setting the send speed slower, the problem shows itself much later than expected. I would have thought it would occur after around the same number of packets sent, but no - the number of packets sent before the same problem appears is much higher this way.
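    Purely as a hypothesis (my own illustrative construct, not taken from the question): one pattern that produces a slow CPU ramp followed by a sudden spike ending on an allocation line is an unbounded outgoing queue. If the producer outruns the socket, the queue and the garbage collector's workload grow steadily (CPU creeps up), until collections dominate and the producing thread appears stuck on a line like new byte[16384]. A C# sketch of the shape of such a construct:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class UnboundedQueueSender
        {
            static readonly Queue<byte[]> outgoing = new Queue<byte[]>();
            static readonly object gate = new object();

            static void Main()
            {
                new Thread(Producer) { IsBackground = true }.Start();
                new Thread(Consumer) { IsBackground = true }.Start();
                Thread.Sleep(60000);
            }

            static void Producer()
            {
                while (true)
                {
                    byte[] packet = new byte[16384];          // the allocation the log ends on
                    lock (gate) { outgoing.Enqueue(packet); } // nothing ever limits the depth
                }
            }

            static void Consumer()
            {
                while (true)
                {
                    byte[] packet = null;
                    lock (gate) { if (outgoing.Count > 0) packet = outgoing.Dequeue(); }
                    Thread.Sleep(1);                          // "send" far slower than we produce
                }
            }
        }

    If something like this is in play, it would also fit the EDIT: changing the send speed changes how quickly the backlog builds, so the failure point moves rather than staying at a fixed packet count.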

    Read the article

  • ASP.NET MVC Ajax.BeginForm eats the params of the clicked submit button. Looks like a bug.

    - by RonnBlack
    If you are using Ajax.BeginForm() with multiple submit buttons similar to this:

        // View.aspx
        <% using (Ajax.BeginForm("Action", "Controller", new AjaxOptions { UpdateTargetId = "MyControl", })) { %>
            <span id="MyControl">
                <% Html.RenderPartial("MyControl"); %>
            </span>
        <% } %>

        // MyControl.ascx
        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <input name="prev" type="submit" value="prev" />
        <input name="next" type="submit" value="next" />
        //...

    ...everything is submitted to the controller fine, but the params for the submit button that was clicked are absent from the Request. In other words, Request["next"] and Request["prev"] are always null. I looked into the JavaScript in Microsoft.MvcAjax.js and it looks like the function Sys_Mvc_MvcHelpers$_serializeForm completely skips over the inputs that are of type 'submit'. This doesn't seem logical at all. How else can you find out which button has been clicked? It looks like a bug to me. Is there any logical reason to skip these form parameters?

    Read the article

  • Does somebody know something about the Flash CS5 "multiple copies" bug?

    - by Slav
    I'm generally complaining, not asking, but if somebody is able to help (Adobe isn't), it will be perfect. I use the Flash CS5 IDE to compose .swc files and then import them into FlashDevelop. Everything works fine until SOMETIMES. About every 10th "Ctrl+S" (save) click, Flash CS5 duplicates ALL images (loaded .bmp, .jpg, .png... files) and names them like "Copy ..." where "..." is an incrementing number. Furthermore, Flash CS5 creates duplicates of some layers; other layers get duplicates of the items placed on them. Restoring the project to its previous (non-duplicated) state is hell. This bug is not reliably reproducible, but it happens every 2 hours or so - both right after the project's opening and after several hours of work. It happens on different machines - I know at least 5 people who told me that they also ran into this issue and do not know how to escape it. Has somebody experienced it? Does somebody know how to handle it?

    Read the article

  • ADO.NET zombie transaction bug? How to ensure that commands will not be executed on an implicit transaction?

    - by TN
    For example, when a deadlock occurs, subsequent SQL commands are successfully executed even if their assigned SQL transaction has already been rolled back. It seems this is caused by a new implicit transaction being created on SQL Server. One could expect ADO.NET to throw an exception that the commands are being executed on a zombie transaction; however, no such exception is thrown. (I think this is a bug in ADO.NET.) Moreover, because of the zombie transaction, the final Dispose() silently ignores the rollback. Any ideas how I can ensure that nobody can execute commands on an implicit transaction? Or how to check that a transaction is a zombie? I found that Commit() and Rollback() check for a zombie transaction, however I can call them for a test :) I also found that reading IsolationLevel will do the check as well, but I am not sure whether a simple call to transaction.IsolationLevel.ToString(); will not be removed by a future optimizer. Or do you know any other safe way to invoke a getter (without using reflection or IL emitting)?
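    One check I believe is safe to rely on (offered as a sketch, not a guaranteed contract): a SqlTransaction that has been rolled back or zombied reports null from its Connection property, so a small guard can refuse to execute commands on it. The helper names below are my own invention.

        using System;
        using System.Data.SqlClient;

        static class TransactionGuard
        {
            // True when the transaction no longer has a live connection, which is
            // the observable symptom of a zombied (already rolled back) transaction.
            public static bool IsZombied(this SqlTransaction transaction)
            {
                return transaction == null || transaction.Connection == null;
            }

            public static void ExecuteChecked(SqlCommand command, SqlTransaction transaction)
            {
                if (transaction.IsZombied())
                    throw new InvalidOperationException(
                        "Refusing to execute: the transaction has been rolled back (zombied).");

                command.Transaction = transaction;
                command.ExecuteNonQuery();
            }
        }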

    Read the article

  • (Not So) Silly Objective-C inheritance problem when using property - GCC Bug?

    - by Ben Packard
    Update 2 - Many people are insisting I need to declare an iVar for the property. Some are saying not so, as I am using the Modern Runtime (64 bit). I can confirm that I have been successfully using @property without iVars for months now. Therefore, I think the 'correct' answer is an explanation as to why on 64 bit I suddenly have to explicitly declare the iVar when (and only when) I'm going to access it from a child class. The only one I've seen so far is a possible GCC bug (thanks Yuji). Not so simple after all...

    Update - I messed up one line of the original copy and paste - corrected. The @property call was missing (nonatomic, retain) but is a red herring - STILL NEED AN ANSWER! Thanks.

    I've been scratching my head with this for a couple of hours - I haven't used inheritance much. Here I have set up a simple TestB class that inherits from TestA, where an ivar is declared. But I get the compilation error that the variable is undeclared. This only happens when I add the property and synthesize declarations - it works fine without them.

    TestA header:

        #import <Cocoa/Cocoa.h>

        @interface TestA : NSObject {
            NSString *testString;
        }
        @end

    TestA implementation is empty:

        #import "TestA.h"

        @implementation TestA
        @end

    TestB header:

        #import <Cocoa/Cocoa.h>
        #import "TestA.h"

        @interface TestB : TestA {
        }
        @property (nonatomic, retain) NSString *testProp;
        @end

    TestB implementation (error - 'testString' is undeclared):

        #import "TestB.h"

        @implementation TestB
        @synthesize testProp;

        - (void)testing{
            NSLog(@"test ivar is %@", testString);
        }
        @end

    Read the article

  • Loading a native library for an external package in Eclipse not working. Is it a bug?

    - by TacB0sS
    I was about to report a bug to Eclipse, but I was thinking to give this a chance here first. If I add an external package, the application cannot find the referenced native library, except in the case specified below. If my workspace consists of a single project, and I import an external package 'EX_package.jar' from a folder outside of the project folder, I can assign a folder as the native library location via: mouse over package - right click - properties - Native Library - enter your folder. This does not work: at runtime the application does not load the library, and System.mapLibraryName(Path) also does not work. Furthermore, if I create a User Library, add the package to it and define a folder for the native library, it still does not work. If it works for you then I have a major bug, since it does not work on my computer. I tested this in every combination I could think of, including adding the path to the Windows PATH variable, and so many other ways I can't even start to remember; nothing worked. I played with this for hours and had a colleague try to assist me, but we both came up empty. Furthermore, if I have a main project that depends on a few other projects in my workspace, and they all need to use the same 'EX_package.jar', I MUST supply a HARD COPY INTO EACH OF THEM; it will ONLY (I can't stress the ONLYNESS enough, I got freaked out by this) work if I have a hard copy of the package in ALL of the project folders that the main project has a dependency on, and ONLY if I configure the native path in each of them!! This also didn't do the trick. Please tell me there is a solution to this; it drives me nuts... Thanks, Adam Zehavi.

    Read the article

  • NSCollectionView: Can the same object not be in the array more than once or is this a bug?

    - by Sean
    I may be doing this all wrong, but I thought I was on the right track until I hit this little snag. Basically I was putting together a toy using NSCollectionView and trying to understand how to hook it all up using IB. I have a button which adds a couple of strings to the NSArrayController. The first time I press this button, my strings appear in the collection view as expected. The second time I press the button, the views scroll down and room is made - but the items don't appear to get added; I just see blank space. The button is implemented as follows (controller is a pointer to the NSArrayController I added in IB):

        - (IBAction)addStuff:(id)control {
            [controller addObjects:[NSArray arrayWithObjects:@"String 1",@"String 2",@"String 3",nil]];
        }

    I'm not sure what I'm doing wrong. Rather than try to explain all the connections/bindings/etc., if you need more info, I'd be grateful if you could just take a quick look at the toy project itself. UPDATE: After more experimentation as suggested by James Williams, it seems the problem stems from having multiple objects with the same memory address in the array. This confuses either NSArrayController or NSCollectionView (not sure which). Changing my addStuff: to this resulted in the behavior I originally expected:

        [controller addObjects:[NSArray arrayWithObjects:[NSMutableString stringWithString:@"String 1"],[NSMutableString stringWithString:@"String 2"],[NSMutableString stringWithString:@"String 3"],nil]];

    So the question now, I guess, is whether this is a bug I should report to Apple, or if this is intended/documented behavior and I just missed it.

    Read the article

  • Have I discovered a bug in the WPF engine?

    - by bitbonk
    We have an MFC 8 application compiled with /CLR that contains a large number of Windows Forms UserControls, which in turn contain WPF user controls using ElementHost. Due to the architecture of our software we cannot use HwndHost directly. We have observed an extremely strange behavior here that we cannot make any sense of: when the CPU load is very high during startup of the application and there are a lot of live ElementHost instances, the whole property engine completely stops working. For example, animations that usually work just fine never update the values of the bound properties; they just stay at some random value after startup. When I set a property that is not bound to anything, the value is correctly stored in the dependency property (calling the getter returns the new value) but the visual representation never reflects that. I set the background to red but the background color does not change. We tested this on a lot of different machines, all running Windows XP SP2, and it is pretty reproducible. The funny thing here is that there is in fact one situation where the bound properties actually pick up a new value from the animation and the visual gets updated based on the property values: when I resize the ElementHost, or when I hide and reshow the parent native control. As soon as I do this, properties that are bound to an animation pick up a new value and the visuals re-render based on the new property values - but just once; if I want to see another update I have to resize the ElementHost again. Do you have any explanation of what could be happening here, or how I could approach this problem to find out? What can I do to debug this? Is there a way I can get more information about what WPF actually does, or where WPF might have crashed? To me it currently seems like a bug in WPF itself, since it only happens at high CPU load at startup.

    Read the article

  • Is this a bug? Or is it a setting in ASP.NET 4 (or MVC 2)?

    - by John Gietzen
    I just recently started trying out T4MVC and I like the idea of eliminating magic strings. However, when trying to use it on my master page for my stylesheets, this:

        <link href="<%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    renders like this:

        <link href="&lt;%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    Whereas these render correctly:

        <link href="<%: Url.Content("~/Content/Site.css") %>" rel="stylesheet" type="text/css" />
        <link href="<%: Links.Content.site_css + "" %>" rel="stylesheet" type="text/css" />

    It appears that, as long as I have double quotes inside of the code segment, it works. But when I put anything else in there, it escapes the leading "less than". Is this something I can turn off? Is this a bug? Edit: This does not happen for <script src="..." ... />, nor does it happen for <a href="...">. Edit 2: Minimal case:

        <link href="<%: string.Empty %>" />

    vs

        <link href="<%: "" %>" />

    Read the article

  • Is there a bug in the Matrix.RotateAt method for certain angles? .NET WinForms

    - by Jules
    Here's the code I'm using to rotate:

        Dim m As New System.Drawing.Drawing2D.Matrix
        Dim size = image.Size
        m.RotateAt(degreeAngle, New PointF(CSng(size.Width / 2), CSng(size.Height / 2)))
        Dim temp As New Bitmap(600, 600, Imaging.PixelFormat.Format32bppPArgb)
        Dim g As Graphics = Graphics.FromImage(temp)
        g.Transform = m
        g.DrawImage(image, 0, 0)

    (1) Disposals removed for brevity. (2) I test the code with a 200 x 200 rectangle. (3) Size 600,600 is just an arbitrary large value that I know will fit the right and bottom sides of the rotated image for testing purposes. (4) I know, with this code, the top and left edges will be clipped because I'm not transforming the origin after the rotate. The problem only occurs at certain angles: (1) at 90, the right-hand edge disappears completely; (2) at 180, the right and bottom edges are there, but very faded; (3) at 270, the bottom edge disappears completely. Is this a known bug? If I manually rotate the corners and draw the image by specifying an output rectangle, I don't get the same problem - though it is slightly slower than using RotateAt.

    Read the article

  • Deleted array value still showing up on foreach loop in AS3 (bug in flash?)

    - by nexus
    It took me many hours to narrow down a problem in some code to this reproducible error, which seems to me like a bug in AVM2. Can anyone shed light on why this is occurring or how to fix it? When the value at index 1 is deleted and a value is subsequently set at index 0, the non-existent (undefined) value at index 1 will now show up in a foreach loop. I have only been able to produce this outcome with index 1 and 0 (not any other n and n-1). Run this code:

        package {
            import flash.display.Sprite;

            public class Main extends Sprite {
                public function Main():void {
                    var bar : Array = new Array(6);
                    out(bar); //proper behavior
                    trace("bar[1] = 1", bar[1] = 1);
                    out(bar); //proper behavior
                    trace("delete bar[1]", delete bar[1]);
                    out(bar); //proper behavior
                    trace("bar[4] = 4", bar[4] = 4);
                    out(bar); //for each loop will now iterate over the undefined position at index 1
                    trace("bar[0] = 0", bar[0] = 0);
                    out(bar);
                    trace("bar[3] = 3", bar[3] = 3);
                    out(bar);
                }

                private function out(bar:Array):void {
                    trace(bar);
                    for each(var i : * in bar) {
                        trace(i);
                    }
                }
            }
        }

    It will give this output:

        ,,,,,
        bar[1] = 1 1
        ,1,,,,
        1
        delete bar[1] true
        ,,,,,
        bar[4] = 4 4
        ,,,,4,
        4
        bar[0] = 0 0
        0,,,,4,
        0
        undefined
        4
        bar[3] = 3 3
        0,,,3,4,
        0
        undefined
        4
        3

    Read the article

  • Django GenericRelation doesn't save related object's id - is this a bug or am I doing it wrong?

    - by pinkeen
    I have a model with a generic relation (call it A). When creating an instance of this object I pass an instance of another model (call it B) as the initializer of the content_object field (via kwargs of the constructor). If I don't save B before creating A, then when saving A the content_object_id is saved to the db as NULL. If I save B before passing it to the constructor of A then everything's all right. It's not logical: I assumed that the ID of the related object (B) is fetched when doing A.save(), and that it would throw some kind of exception if B isn't saved yet, but it just fails silently. I don't like the current solution (saving B beforehand) because we don't know yet if I will always be willing to keep the object, not just scrap it, and there are performance considerations - what if I add some other data and save it once more shortly after?

        class BaseNodeData(models.Model):
            ...
            extnodedata_content_type = models.ForeignKey(ContentType, null=True)
            extnodedata_object_id = models.PositiveIntegerField(null=True)
            extnodedata = generic.GenericForeignKey(ct_field='extnodedata_content_type', fk_field='extnodedata_object_id')

        class MarkupNodeData(models.Model):
            raw_content = models.TextField()

    Suppose we do:

        markup = MarkupNodeData(raw_content='...')
        base = BaseNodeData(..., extnodedata=markup)
        markup.save()
        base.save()
        # both records are inserted to the DB but base is stored with extnodedata_object_id=NULL

        markup = MarkupNodeData(raw_content='...')
        base = BaseNodeData(..., extnodedata=markup)
        base.save()
        markup.save()
        # no exception is thrown and everything is the same as above

        markup = MarkupNodeData(raw_content='...')
        markup.save()
        base = BaseNodeData(..., extnodedata=markup)
        base.save()
        # this works as expected

    Of course I can do it this way, but it doesn't change anything:

        base = BaseNodeData(...)
        base.extnodedata = markup

    My question is: is this a bug in Django which I should report, or am I doing something wrong? The docs on GenericRelations aren't exactly verbose.

    Read the article

  • Perl Moose::Util::TypeConstraints bug? What is this error about the name having invalid chars?

    - by alex8657
    I have been tracking a Moose::Util::TypeConstraints exception for hours, and I don't understand where it gets to check a type and tell me that the name is incorrect. I reduced the error to a small example to try to locate the problem, and it just shows me that I don't get it. Have I run into a Moose::Util::TypeConstraints bug?

        aoffice:new alex$ perl -c ../codesnippets/typeconstrainterror.pl
        ../codesnippets/typeconstrainterror.pl syntax OK
        aoffice:new alex$ perl -d ../codesnippets/typeconstrainterror.pl
        (...)
        DB<1> r
        Something::File::LocalFile=HASH(0x100d1bfa8) contains invalid characters for a type name. Names can contain alphanumeric character, ":", and "." at /opt/local/lib/perl5/vendor_perl/5.10.1/darwin-multi-2level/Moose/Util/TypeConstraints.pm line 508
            Moose::Util::TypeConstraints::_create_type_constraint('Something::File::LocalFile=HASH(0x100d1bfa8)', undef, undef, undef, undef) called at /opt/local/lib/perl5/vendor_perl/5.10.1/darwin-multi-2level/Moose/Util/TypeConstraints.pm line 285
            Moose::Util::TypeConstraints::type('Something::File::LocalFile=HASH(0x100d1bfa8)') called at ../codesnippets/typeconstrainterror.pl line 7
            Something::File::is_slink('Something::File::LocalFile=HASH(0x100d1bfa8)') called at ../codesnippets/typeconstrainterror.pl line 33
        Debugged program terminated. Use q to quit or R to restart, use o inhibit_exit to avoid stopping after program termination, h q, h R or h o to get additional info.

    Below, the code that crashes:

        package Something::File;
        use Moose;

        has 'type' => (is=>'ro', isa=>'Str', writer=>'_set_type');

        sub is_slink {
            my $self = shift;
            return ( $self->type eq 'slink' );
        }

        no Moose;
        __PACKAGE__->meta->make_immutable;
        1;

        package Something::File::LocalFile;
        use Moose;
        use Moose::Util::TypeConstraints;
        extends 'Something::File';

        subtype 'PositiveInt'
            => as 'Int'
            => where { $_ > 0 }
            => message { 'Only positive greater than zero integers accepted' };

        no Moose;
        __PACKAGE__->meta->make_immutable;
        1;

        my $a = Something::File::LocalFile->new;
        # $a->_set_type('slink');
        print $a->is_slink ." end\n";

    Read the article

  • GCC, -O2, and bitfields - is this a bug or a feature?

    - by Rooke
    Today I discovered alarming behavior when experimenting with bit fields. For the sake of discussion and simplicity, here's an example program:

        #include <stdio.h>

        struct Node {
            int a:16 __attribute__ ((packed));
            int b:16 __attribute__ ((packed));
            unsigned int c:27 __attribute__ ((packed));
            unsigned int d:3 __attribute__ ((packed));
            unsigned int e:2 __attribute__ ((packed));
        };

        int main (int argc, char *argv[])
        {
            Node n;
            n.a = 12345;
            n.b = -23456;
            n.c = 0x7ffffff;
            n.d = 0x7;
            n.e = 0x3;

            printf("3-bit field cast to int: %d\n",(int)n.d);
            n.d++;
            printf("3-bit field cast to int: %d\n",(int)n.d);
        }

    The program is purposely causing the 3-bit bit-field to overflow. Here's the (correct) output when compiled using "g++ -O0":

        3-bit field cast to int: 7
        3-bit field cast to int: 0

    Here's the output when compiled using "g++ -O2" (and -O3):

        3-bit field cast to int: 7
        3-bit field cast to int: 8

    Checking the assembly of the latter example, I found this:

        movl $7, %esi
        movl $.LC1, %edi
        xorl %eax, %eax
        call printf
        movl $8, %esi
        movl $.LC1, %edi
        xorl %eax, %eax
        call printf
        xorl %eax, %eax
        addq $8, %rsp

    The optimizations have just inserted "8", assuming 7+1=8 when in fact the number overflows and is zero. Fortunately the code I care about doesn't overflow as far as I know, but this situation scares me - is this a known bug, a feature, or is this expected behavior? When can I expect gcc to be right about this?

    Edit (re: signed/unsigned): It's being treated as unsigned because it's declared as unsigned. Declaring it as int you get the output (with O0):

        3-bit field cast to int: -1
        3-bit field cast to int: 0

    An even funnier thing happens with -O2 in this case:

        3-bit field cast to int: 7
        3-bit field cast to int: 8

    I admit that attribute is a fishy thing to use; in this case it's a difference in optimization settings I'm concerned about.

    Read the article
