Search Results

Search found 10312 results on 413 pages for 'compiler bug'.


  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems, so it's probably a good time to review what I consider to be the best practices.

    Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release; (b) every release we improve the generated code, so you will see things get better; and (c) old releases cannot know about new hardware.

    Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better, as this adds within-file inlining.

    Always generate debug information, using -g. This allows the tools to attribute information to lines of source, which is particularly important when profiling an application.

    The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on those architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion.

    Crossfile optimisation (-xipo) can be very useful, particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as a favourite compiler optimisation, then this is mine!

    Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective when combined with crossfile optimisation. What makes it really useful is that code dominated by branch instructions typically doesn't improve much with "traditional" compiler optimisation, but often responds well to being built with profile feedback.

    The macro flag -fast aims to provide a one-stop "give me a fast application" flag. It usually gives a best-performing binary, but with a few caveats: it assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating.

    The SPARC64 processors, T3, and T4 implement floating point multiply-accumulate instructions, which can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and an architecture that supports the instruction (at least -xarch=sparcfmaf).

    The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it.

    I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.
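
    As an illustration (not part of the original post), the sketch below collects several of the flags discussed above into hypothetical Oracle Solaris Studio build lines; the file names, profile directory, and loop are invented for the example, so check the exact flag set against your compiler version.

        // Hypothetical build lines assembled from the flags mentioned in the post:
        //   CC -xO4 -g -xtarget=generic -xipo -o app main.cpp
        //   CC -xO4 -g -xarch=sparcfmaf -fma=fused -xipo -o app main.cpp
        //   CC -xO4 -g -xprofile=collect:./profdata -o app main.cpp   (then run a training workload)
        //   CC -xO4 -g -xprofile=use:./profdata -o app main.cpp
        #include <cstdio>

        // A multiply-add loop of the kind that -fma=fused can map onto fused FMA instructions.
        double axpy_sum(const double* x, const double* y, double a, int n) {
            double sum = 0.0;
            for (int i = 0; i < n; ++i)
                sum += a * x[i] + y[i];
            return sum;
        }

        int main() {
            double x[4] = {1, 2, 3, 4};
            double y[4] = {4, 3, 2, 1};
            std::printf("%f\n", axpy_sum(x, y, 2.0, 4));
            return 0;
        }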

    Read the article

  • IE Display bug - solved when using the "developer tools"?

    - by nute
    I have some Javascript/CSS issues with IE (no surprises?). So when I see the error in IE8, I go to Tools > Developer Tools. Then I navigate to the correct DIV. I click on it, look around ... then I look again at my page, and the problem is fixed! This makes debugging pretty difficult... To see the display error, go to http://www.makemeheal.com/mmh/home.do in IE, and look at the "Your Recent History" module.

    Read the article

  • Why is the colspan not working properly in this script? JS bug or IE?

    - by Perpetualcoder
    This question is related to this question I asked a little while back. The updated code is posted here. The thing to note is that I am looking to create an HTML table dynamically that looks similar to this: <table> <tbody> <tr> <td colspan="3" align="right">Header</td> </tr> <tr> <td colspan="3" align="right">Header</td> </tr> <tr> <td colspan="3" align="right">Header</td> </tr> <tr> <td>Col1</td> <td>Col3</td> <td>Col4</td> </tr> <tr> <td>Col1</td> <td>Col3</td> <td>Col4</td> </tr> </tbody> </table> I can get this done in markup, but when I do it in js the colspan does not seem to work in IE7. Any help will be greatly appreciated.

    Read the article

  • ValidateInputAttribute bug in VS 2010 RC / ASP.NET MVC 2.0?

    - by Ben
    Am I doing something wrong here? I have a text area on a view and am posting back the html contents. In VS 2008 and MVC 1.0 the following code successfully prevents input validation: [HttpPost] [ValidateInput(false)] public ActionResult Index(int? id) { return View(); } If I execute this code in VS 2010 / MVC 2.0 I always get this error: A potentially dangerous Request.Form value was detected from the client (body=""). Any ideas?

    Read the article

  • code throws std::bad_alloc, not enough memory or can it be a bug?

    - by Andreas
    I am parsing using a pretty large grammar (1.1 GB, it's data-oriented parsing). The parser I use (bitpar) is said to be optimized for highly ambiguous grammars. I'm getting this error: terminate called after throwing an instance of 'std::bad_alloc' what(): St9bad_alloc dotest.sh: line 11: 16686 Aborted bitpar -p -b 1 -s top -u unknownwordsm -w pos.dfsa /tmp/gsyntax.pcfg /tmp/gsyntax.lex arbobanko.test arbobanko.results Is there hope? Does it mean that it has run out of memory? It uses about 15 GB before it crashes. The machine I'm using has 32 GB of RAM, plus swap. It crashes before outputting a single parse tree. The parser is an efficient CYK chart parser using bit vector representations; I presume it is already near the limit of memory efficiency. If it really requires too much memory I could sample from the grammar rules, but this would decrease parse accuracy, of course.
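
    For what it's worth, std::bad_alloc simply means that a memory allocation request could not be satisfied. A minimal C++ sketch (unrelated to bitpar's internals; the allocation size is an arbitrary, deliberately absurd value) that produces the same kind of failure:

        #include <cstddef>
        #include <cstdio>
        #include <new>

        int main() {
            try {
                // Deliberately request an absurd amount of memory (assumes a 64-bit build);
                // operator new[] throws std::bad_alloc when the request cannot be satisfied.
                char* p = new char[static_cast<std::size_t>(1) << 62];
                delete[] p;
            } catch (const std::bad_alloc& e) {
                std::printf("allocation failed: %s\n", e.what());
                return 1;
            }
            return 0;
        }

        // If std::bad_alloc is never caught, the C++ runtime calls std::terminate and the
        // process aborts with "terminate called after throwing an instance of
        // 'std::bad_alloc'", which is the message shown in the bitpar run above.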

    Read the article

  • InvalidProgramException Running Unit Test

    - by Anthony Trudeau
    There is a bug in the unit testing framework in Visual Studio 2010. The bug appears in a very specific circumstance involving an internal generic type, and causes the following exception to be thrown: System.InvalidProgramException: JIT Compiler encountered an internal limitation. This occurs under the following circumstances: the type being tested is internal or private; the method being tested is generic; the method being tested has an out parameter; and the type accessor functionality is used to access the internal type. The exception is not thrown if the InternalsVisibleToAttribute is applied to the source assembly and the accessor type is not used; nor is it thrown if the method is not generic. Bug #635093 has been added through Microsoft Connect.

    Read the article

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit with a "Cascading Bug" as described in this blog post: A "cascading bug" is a bug with an effect that isn't confined to a particular software element, but rather its effect "cascades" into other elements as well. I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even with test-driven development. My questions are... What are some strategies you use, beyond basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment? Does the presence of such problems indicate a breakdown in the software development process, or is it simply a by-product of complex software systems?

    Read the article

  • How to push oauth token to LocalStorage or SessionStorage and listen to the Storage Event? (SoundCloud PHP/JS bug workaround)

    - by afxjzs
    This references this issue: Javascript SDK connect() function not working in Chrome. I asked for more information on how to resolve this with localstorage and was asked to create a new topic. The answer was "A workaround is instead of using window.opener, push the oauth token into LocalStorage or SessionStorage and have the opener window listen to the Storage event." but I have no idea how to do that. It seems really simple, but I don't know where to start. I couldn't find any relevant examples. Thanks for your help!

    Read the article

  • What does the C# compiler mean when it prints "an explicit conversion exists"?

    - by Wim Coenen
    If I make an empty test class: public class Foo { } And I try to compile code with this statement: Foo foo = "test"; Then I get this error as expected: Cannot implicitly convert type 'string' to 'ConsoleApplication1.Foo' However, if I change the declaration of Foo from class to interface, the error changes to this (emphasis mine): Cannot implicitly convert type 'string' to 'ConsoleApplication1.Foo'. An explicit conversion exists (are you missing a cast?) What is this "explicit conversion" which is supposed to exist?

    Read the article

  • Does Microsoft hate Firefox? ASP.NET GridView performance in Firefox bug?

    - by Maxim Gershkovich
    Could someone please explain the significant difference in speed between a firefox updatepanel async postback and one performed in IE? Average Firefox Postback Time For 500 objects: 1.183 Second Average IE Postback Time For 500 objects: 0.295 Seconds Using firebug I can see that the majority of this time in FireFox is spent on the server side. A total of 1.04 seconds. Given this fact the only thing I can assume is causing this problem is the way that ASP.Net renders its controls between the two browsers. Has anyone run into this problem before? VB.Net Code Protected Sub Button1_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button1.Click GridView1.DataBind() End Sub Public Function GetStockList() As StockList Dim res As New StockList For l = 0 To 500 Dim x As New Stock With {.Description = "test", .ID = Guid.NewGuid} res.Add(x) Next Return res End Function Public Class Stock Private m_ID As Guid Private m_Description As String Public Sub New() End Sub Public Property ID() As Guid Get Return Me.m_ID End Get Set(ByVal value As Guid) Me.m_ID = value End Set End Property Public Property Description() As String Get Return Me.m_Description End Get Set(ByVal value As String) Me.m_Description = value End Set End Property End Class Public Class StockList Inherits List(Of Stock) End Class Markup <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <script type="text/javascript" language="Javascript"> function timestamp_class(this_current_time, this_start_time, this_end_time, this_time_difference) { this.this_current_time = this_current_time; this.this_start_time = this_start_time; this.this_end_time = this_end_time; this.this_time_difference = this_time_difference; this.GetCurrentTime = GetCurrentTime; this.StartTiming = StartTiming; this.EndTiming = EndTiming; } //Get current time from date timestamp function GetCurrentTime() { var my_current_timestamp; my_current_timestamp = new Date(); //stamp current date & time return my_current_timestamp.getTime(); } //Stamp current time as start time and reset display textbox function StartTiming() { this.this_start_time = GetCurrentTime(); //stamp current time } //Stamp current time as stop time, compute elapsed time difference and display in textbox function EndTiming() { this.this_end_time = GetCurrentTime(); //stamp current time this.this_time_difference = (this.this_end_time - this.this_start_time) / 1000; //compute elapsed time return this.this_time_difference; } //--> </script> <script type="text/javascript" language="javascript"> var time_object = new timestamp_class(0, 0, 0, 0); //create new time object and initialize it Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler); Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler); function BeginRequestHandler(sender, args) { var elem = args.get_postBackElement(); ActivateAlertDiv('visible', 'divAsyncRequestTimer', elem.value + ''); time_object.StartTiming(); } function EndRequestHandler(sender, args) { ActivateAlertDiv('visible', 'divAsyncRequestTimer', '(' + time_object.EndTiming() + ' Seconds)'); } function ActivateAlertDiv(visstring, elem, msg) { var adiv = $get(elem); adiv.style.visibility = visstring; adiv.innerHTML = msg; } </script> <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <Triggers> <asp:AsyncPostBackTrigger ControlID="Button1" EventName="click" /> </Triggers> <ContentTemplate> <asp:UpdateProgress ID="UpdateProgress1" runat="server" 
AssociatedUpdatePanelID="UpdatePanel1"> </asp:UpdateProgress> <asp:Button ID="Button1" runat="server" Text="Button" /> <div id="divAsyncRequestTimer" style="font-size:small;"> </div> <asp:GridView ID="GridView1" runat="server" DataSourceID="ObjectDataSource1" AutoGenerateColumns="False"> <Columns> <asp:BoundField DataField="ID" HeaderText="ID" SortExpression="ID" /> <asp:BoundField DataField="Description" HeaderText="Description" SortExpression="Description" /> </Columns> </asp:GridView> <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" SelectMethod="GetStockList" TypeName="WebApplication1._Default"> </asp:ObjectDataSource> </ContentTemplate> </asp:UpdatePanel> </form>

    Read the article

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android and Windows 8 Mobile. All three platforms are technically in different programming languages; the only 'reuse' that I can see is that of the boxes-and-lines drawings (UML :) charts and nothing else. So how do companies/programmers manage the variation of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world IMO, given the plethora of languages and cross-platform libraries to make your life easier. Not so in the mobile world. More so, product line management principles don't seem to be all that applicable: what is common and what is variant doesn't really matter, because the application is the same (conceptually) and the implementation is variant.

    Some difficulties that come to mind:

    Bug fixing: Applications may be designed in a similar manner, but bug identification and fixing would be radically different. A bug on iOS may or may not exist on Android, and the approach to fixing a bug on one platform may not be the same on another (unless it's a semantic bug like a!=b instead of a==b, which would require the same 'approach' to fixing in essence).

    Enhancements: Making a change on one platform would be radically different than on another.

    Code-design divergence: The way the code is written/organized, the class structures etc. could be very different given the different implementation environments, limiting reuse to the (above) UML models.

    There are of course many others - just keeping the development in sync and making sure all applications are up to the same version with the same set of features, etc. It seems the effort is 3x that of a single application. So how exactly does one manage this nightmarish situation?

    Some thoughts: split the application into client/server to confine the effect to the client side only (not always doable), or use frameworks like Unity-3D that could take care of the cross-platform problem (mostly applicable to games and probably not to other applications). Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

    Read the article

  • What is a decompiler and how does it work?

    - by thyrgle
    So is a decompiler really a thing that gives the source of a compiled/interpreted piece of code? Because to me that sounds impossible. How would you get the names of the functions, variables, classes, etc. if it is compiled? Or am I misinterpreting the definition? How does it work? And what is the general principle behind making one?

    Read the article

  • Rails syntax for comments in templates: is this bug understood?

    - by brahn
    Using Rails 2.3.2, I have a partial _foo.rhtml that begins with a comment as follows: <% # here is a comment %> <li><%= foo %></li> When I render the partial from a view in the traditional way, e.g. <% some_numbers = [1, 2, 3, 4, 5] %> <ul> <%= render :partial => "foo", :collection => some_numbers %> </ul> I found that the <li> and </li> tags are omitted in the output -- i.e. the resulting HTML is <ul> 1 2 3 4 5 </ul> However, I can solve this problem by fixing _foo.rhtml to eliminate the space between the <% and the # so that the partial now reads: <%# here is a comment %> <li><%= foo %></li> My question: what's going on here? E.g., is <% # comment %> simply incorrect syntax for including comments in a template? Or is the problem more subtle? Thanks!

    Read the article

  • Why can I include a header file containing const int definitions in multiple cpp files without a compiler error?

    - by tree
    Let's assume that I have files a.cpp and b.cpp, and file c.h. Both of the cpp files include the c.h file. The header file contains a bunch of const int definitions, and when I compile them I get no errors, yet I can access those consts as if they were global variables. So the question: why don't I get any compilation errors when I have multiple const definitions, and why do these const ints have global-like scope?
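
    A minimal sketch of the situation described (hypothetical file contents; the constant name is invented): in C++, a const object at namespace scope has internal linkage by default, so each translation unit that includes the header gets its own private copy and the linker never sees a duplicate definition.

        // c.h
        #ifndef C_H
        #define C_H
        // A namespace-scope const int has internal linkage by default in C++,
        // so every .cpp that includes this header gets its own copy.
        const int kMaxItems = 42;
        #endif

        // a.cpp
        #include <cstdio>
        #include "c.h"
        void print_from_a() { std::printf("a.cpp sees %d\n", kMaxItems); }

        // b.cpp
        #include <cstdio>
        #include "c.h"
        void print_from_b() { std::printf("b.cpp sees %d\n", kMaxItems); }

        // Linking a.o and b.o succeeds because neither file exports a symbol for kMaxItems.
        // Writing 'extern const int kMaxItems = 42;' in the header instead would give the
        // constant external linkage and then produce the expected duplicate-symbol error.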

    Read the article

  • unit testing on ARM

    - by NomadAlien
    We are developing application-level code that runs on an ARM processor. The BSP (low level code) is being delivered by a 3rd party, so our code sits just on top of this abstraction layer (the code is written in C++). To do unit testing, I assume we will have to mock/stub out the BSP library (essentially abstracting out the HW), but what I'm not sure of is: if I write/run the unit tests on my PC, do I compile them with, for example, GCC? Normally we use the RealView compiler to compile our code for the ARM. Can I assume that if I compile and run the code with an x86 compiler and the unit tests pass, they will also pass when compiled with the RealView compiler? I'm not sure how much difference the compiler makes, and whether you can trust that if the x86-compiled code passes the unit tests you can also be confident that the RealView-compiled code is OK.
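
    One common way to set this up (a sketch only; the BSP interface, class names, and sensor call are invented for illustration) is to hide the vendor BSP behind a small abstract interface, link the real implementation into the ARM/RealView build, and link a stub or mock into the host (x86/GCC) unit-test build:

        #include <cstdio>

        // Hypothetical abstraction over the 3rd-party BSP calls the application uses.
        class Bsp {
        public:
            virtual ~Bsp() {}
            virtual int readSensor() = 0;
        };

        // Host-side stub linked only into the x86 unit-test build; the target build
        // would instead link a RealBsp that forwards to the vendor library.
        class FakeBsp : public Bsp {
        public:
            virtual int readSensor() { return 42; }   // canned value for the test
        };

        // Application-level code under test depends only on the interface.
        bool sensorInRange(Bsp& bsp) {
            int value = bsp.readSensor();
            return value >= 0 && value <= 100;
        }

        int main() {   // stand-in for a unit-test runner
            FakeBsp fake;
            std::printf("%s\n", sensorInRange(fake) ? "PASS" : "FAIL");
            return 0;
        }

        // This checks the application logic on the host; it does not exercise
        // compiler-specific behaviour, so target testing with RealView is still needed.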

    Read the article

  • Why do all procedures have to be defined before the compiler sees them?

    - by incrediman
    For example, take a look at this code (from tspl4): (define proc1 (lambda (x y) (proc2 y x))) If I run this as my program in scheme... #!r6rs (import (rnrs)) (define proc1 (lambda (x y) (proc2 y x))) I get this error: expand: unbound identifier in module in: proc2 ...This code works fine though: #!r6rs (import (rnrs)) (define proc2 +) (define proc1 (lambda (x y) (proc2 y x))) (display (proc1 2 3)) ;output: 5

    Read the article

  • What language/compiler for a native application that runs on any Windows (XP/Vista/7) platform?

    - by Xinxua
    Hi, I want to develop an application that runs on any Windows platform (XP, Vista, 7) but does not require a dependency like the .NET Framework or a JVM. The other requirements are: it must run on any Windows platform; it must have GUI libraries for creating windows/primitive controls; and the output file size of the application should be minimal (so I cannot include the .NET Framework etc. in the exe file). Any suggestions for this requirement?

    Read the article

  • Is there an IDE/compiler PC benchmark I can use to compare my PC's performance?

    - by RickL
    I'm looking for a benchmark (and results on other PCs) which would give me an idea of the development performance gain I could get by upgrading my PC; the benchmark could also be used to justify the upgrade to my boss. I use Visual Studio 2008 for my development, so I'd like to get an idea of by what factor the build times would be improved, and it would also be good if the benchmark could incorporate IDE performance (i.e. when editing, using IntelliSense, opening code files, etc.) into its result. I currently have an AMD 3800x2, with 2GB RAM on Vista 32. For example, I'd like to know what kind of performance gain I'd see in Visual Studio 2008 with a Q6600 and 4GB RAM on Vista 64. And also with other processors and other RAM sizes... and also whether hard disk performance is a big factor. EDIT: I mentioned Vista 64 because I'm aware that Vista 32 can only use 3GB RAM maximum. So I'd presume that wanting to use more RAM would require Vista 64, but perhaps it could still be slower overall if there is a large overhead in using the 32-bit VS 2008 on a 64-bit OS.

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article
