Search Results

Search found 4893 results on 196 pages for 'expect'.

  • C++ cin questions

    - by Kim
    This seems weird:

        int main(int argc, char* argv[]) {
            cout << "function main() .." << '\n';
            char ch = 0;
            double number_value = 1.1;
            cin >> ch;
            cin.putback(ch);
            cin >> number_value;
            cout << "1 .. " << " " << cin.good() << " " << number_value << '\n';
            cin >> number_value;
            cout << "2 .. " << " " << cin.good() << " " << number_value << '\n';
            return 0;
        }

    If I input the following:

        7a 1

    I get the following:

        function main() ..
        7a 1
        1 ..  1 7
        2 ..  0 0

    I understand the "1 .. 1 7", but why is number_value 0? cin.good() shows
    failure, so nothing should have been read, and number_value should keep the
    value from the previous assignment. I expect the value of 7.
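
    A plausible explanation, sketched below: C++11 changed the behaviour of a
    failed numeric extraction. Under C++03, num_get left the variable
    untouched on failure; C++11 requires it to store 0. A standard library
    that already implements the newer rule produces exactly the output above.
    A minimal reproduction with std::istringstream (assuming a
    C++11-conforming library):

        #include <iostream>
        #include <sstream>

        int main() {
            double value = 7.0;
            std::istringstream in("a");  // 'a' cannot start a double
            in >> value;                 // extraction fails, failbit is set
            // C++03: value keeps 7.0; C++11 and later: value is set to 0.0
            std::cout << in.good() << " " << value << '\n';
            return 0;
        }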

  • Another rsAccessDenied problem with SSRS

    - by Rich.Carpenter
    I've read through a lot of posts regarding this problem, but none of the
    proposed solutions have worked for me. I keep getting an error stating,
    "The permissions granted to user '\Rich' are insufficient for performing
    this operation. (rsAccessDenied)." If I am logged in as the local
    administrator account, entering the Reporting Services URL in IE doesn't
    give me that error, but it takes me to a blank page. I haven't been able
    to get to an SSRS home page at all. Order of operations:

        1. I installed and patched Windows 7 Ultimate 64-bit.
        2. I installed SQL Server Express 2008 with Advanced Services using
           the MS web installer.
        3. I downloaded and installed SP1 for SQL Server Express 2008.

    I've tried running IE as administrator, adding the local machine to
    trusted sites, and just about every other suggestion I've found. I even
    ran the entire installation logged in as the local administrator. Nothing
    seems to work. Could someone please tell me, given the installation steps
    above, what I need to do next to make this work?

  • In Inform 7, is it possible to use a second noun construct with "pull"?

    - by Beska
    I'll eat my hat if I get a good answer to this... I'm a rank beginner in
    Inform 7, and I'm guessing this isn't that hard, but there are probably
    not many people here who are familiar with Inform 7. Still, nothing
    ventured... I'm trying to create a custom response to a "pull" action.
    Unfortunately, I think the "pull" action doesn't normally expect a second
    noun. So I'm trying something like this:

        The nails are some things in the Foyer. The nails are scenery.

        Instead of pulling the nails:
            if the second noun is nothing:
                say "How? Are you going to pull the nails with your teeth?";
            otherwise:
                say "I don't think that's going to do the job."

    While this compiles, and the first part works, the "I don't think..."
    branch is never reached: the interpreter just responds "I only understood
    you as far as wanting to pull the nails." Do I have to create my own
    custom action for this? Overwrite the standard pull action? Am I missing
    something simple that will allow me to get this to work?

  • iPad width in viewport settings overflowing past device width

    - by user1327771
    I am at my wit's end here, and I beseech the fine folks at Stack Overflow
    for help. I am working on an HTML5 design for a friend's blog, and I am
    trying to get the width of the page to span the width of an iPad. So, in
    my document head, I have the following:

        <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0">

    On an iPad the index page looks just fine: the page spans the width of
    the iPad in portrait mode, as I expect it to:
    http://www.alanreuterart.com/femmegamers/index.html

    Individual blog posts look fine as well. However, on the "about" and
    "join" pages, the width does not behave: it actually overflows past the
    right edge of the iPad. View the following page on an iPad to see what I
    mean: http://www.alanreuterart.com/femmegamers/about.html

    I've tried everything, and I cannot for the life of me figure out how to
    get those two pages to correctly span the device width. One important
    note: before anyone tells me to set my viewport width to 960, I cannot do
    that, because the pages use media queries to switch to a mobile layout on
    the iPhone and other phones. (I am not making a separate layout just for
    the iPad.) Can ANYONE help out here? Thanks in advance!
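
    A hedged guess, since the offending markup isn't reproduced here: this
    kind of overflow usually means some element on those two pages (an image,
    a table, or a box whose fixed width plus padding exceeds the layout) is
    wider than the viewport. A CSS stopgap that often confirms the diagnosis,
    and is only a sketch rather than a fix for the root cause:

        /* Clamp everything to the viewport so a too-wide element
           stops stretching the page past device-width. */
        html, body {
            max-width: 100%;
            overflow-x: hidden;
        }
        img {
            max-width: 100%;   /* oversized images are the usual suspect */
            height: auto;
        }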

  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a
    list:

        my @list = ();
        my $regex = qr/ABC/;
        push @list, { 'one' => $regex };
        push @list, { 'two' => $regex };
        use Data::Dumper;
        print Dumper(\@list);

    I'd expect:

        $VAR1 = [
          { 'one' => qr/(?-xism:ABC)/ },
          { 'two' => qr/(?-xism:ABC)/ }
        ];

    But instead we get a circular reference:

        $VAR1 = [
          { 'one' => qr/(?-xism:ABC)/ },
          { 'two' => $VAR1->[0]{'one'} }
        ];

    This will happen with indefinitely nested hash references and shallowly
    copied $regex. I'm assuming the basic reason is that precompiled regexes
    are actually references, and references inside the same list structure
    are compacted as an optimization (\$scalar behaves the same way). I don't
    entirely see the utility of doing this (presumably a reference to a
    reference has the same memory footprint), but maybe there's a reason
    based on the internal representation.

    Is this the correct behavior? Can I stop it from happening? Aside from
    probably making GC more difficult, these circular structures create
    pretty serious headaches. For example, iterating over a list of queries
    that may sometimes contain the same regular expression will crash the
    MongoDB driver with a nasty segfault (see
    https://rt.cpan.org/Public/Bug/Display.html?id=58500).
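
    One point worth separating out: the data structure itself isn't circular;
    both hashes simply hold the same reference, and Data::Dumper's default
    output expresses that sharing as a back-reference. If the goal is just a
    dump without back-references, Data::Dumper has a documented switch for
    exactly this. A small sketch:

        use strict;
        use warnings;
        use Data::Dumper;

        my $regex = qr/ABC/;
        my @list  = ({ one => $regex }, { two => $regex });

        # Deepcopy makes Dumper emit each shared reference as an
        # independent copy instead of a $VAR1->[0]{...} back-reference.
        # @list itself is unchanged; only the dump format differs.
        $Data::Dumper::Deepcopy = 1;
        print Dumper(\@list);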

  • How does one create and use a pointer to an array of an unknown number of structures inside a class?

    - by user1658731
    Sorry for the confusing title... I've been playing around with C++,
    working on a project to parse a game's (Kerbal Space Program) save file
    so I can modify it and eventually send it over a network. I'm stuck on
    storing an unknown number of vessels and crew members, so I need an array
    of unknown size. Is this possible? I figured having a pointer to an array
    would be the way to go. I have:

        class SaveFileSystem {
            string version;
            string UT;
            int activeVessel;
            int numCrew;
            ??? Crews;    // !!
            int numVessels;
            ??? Vessels;  // !!
        };

    where Crews and Vessels should be arrays of structures:

        struct Crew {
            string name;
            // Other stuff
        };

        struct Vessel {
            string name;
            // Stuff
        };

    I'm guessing I should have something like:

        this->Crews = new ???;
        this->Vessels = new ???;

    in my constructor to initialize the arrays, and would attempt to access
    them with:

        this->Crews[0].name = "Ship Number One";

    Does this make any sense? I'd expect the "???"s to involve a mess of
    asterisks, like "*struct (*)Crews", but I have no real idea. I've got
    normal pointers down and such, but this is a tad over my head... I'd like
    to access the structures as in the last snippet, but if C++ doesn't like
    that I could do pointer arithmetic. I've looked into vectors, but I have
    an unhealthy obsession with efficiency, and it really pains me that you
    don't know what's going on behind them.
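
    For what it's worth, a sketch of the std::vector version. A vector is a
    thin wrapper around exactly the pointer-plus-count bookkeeping described
    above: it stores its elements in one contiguous heap block, indexing
    compiles down to the same pointer arithmetic as a raw array, and it
    carries its own size, so numCrew and numVessels disappear:

        #include <string>
        #include <vector>

        struct Crew   { std::string name; /* other stuff */ };
        struct Vessel { std::string name; /* stuff */ };

        class SaveFileSystem {
        public:
            std::string version;
            std::string UT;
            int activeVessel = 0;
            std::vector<Crew>   crews;    // crews.size() replaces numCrew
            std::vector<Vessel> vessels;  // vessels.size() replaces numVessels
        };

        int main() {
            SaveFileSystem save;
            save.crews.push_back(Crew{"Ship Number One"});  // grows as entries are parsed
            save.crews[0].name = "Ship Number Two";         // plain array-style access
            return 0;
        }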

  • One big executable or many small DLLs?

    - by Patrick
    Over the years my application has grown from 1MB to 25MB, and I expect it
    to grow further to 40 or 50 MB. I don't use DLLs, but put everything in
    this one big executable. Having one big executable has certain
    advantages:

        - Installing my application at the customer is really just copy and
          run.
        - Upgrades can be easily zipped and sent to the customer.
        - There is no risk of conflicting DLLs (where the customer has
          version X of the EXE but version Y of the DLL).

    The big disadvantage of the big EXE is that linking times seem to grow
    exponentially. An additional problem is that a part of the code (say
    about 40%) is shared with another application. Again, the advantages are:

        - There is no risk of having a mix of incorrect DLL versions.
        - Every developer can make changes to the common code, which speeds
          up development.

    But again, this has a serious impact on compilation times (everyone
    compiles the common code again on his PC) and on linking times. The
    question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable
    mentions the possibility of mixing DLLs into one executable, but it looks
    like this still requires you to link all functions manually in your
    application (using LoadLibrary, GetProcAddress, ...). What is your
    opinion on executable sizes, the use of DLLs, and the best balance
    between easy deployment and easy/fast development?

  • NullPointerException with static variables

    - by tomekK
    I just hit some very strange (to me) Java behaviour. I have the following
    classes:

        public abstract class Unit {
            public static final Unit KM = KMUnit.INSTANCE;
            public static final Unit METERS = MeterUnit.INSTANCE;

            protected Unit() {
            }

            public abstract double getValueInUnit(double value, Unit unit);
            protected abstract double getValueInMeters(double value);
        }

    and:

        public class KMUnit extends Unit {
            public static final Unit INSTANCE = new KMUnit();

            private KMUnit() {
            }

            // abstract methods overridden here
        }

        public class MeterUnit extends Unit {
            public static final Unit INSTANCE = new MeterUnit();

            private MeterUnit() {
            }

            // abstract methods overridden here
        }

    and my test case:

        public class TestMetricUnits extends TestCase {
            @Test
            public void testConversion() {
                System.out.println("Unit.METERS: " + Unit.METERS);
                System.out.println("Unit.KM: " + Unit.KM);
                double meters = Unit.KM.getValueInUnit(102.11, Unit.METERS);
                assertEquals(0.10211, meters, 0.00001);
            }
        }

    1) KMUnit and MeterUnit are both singletons initialized statically, so
    during class loading. The constructors are private, so they can't be
    called anywhere else.
    2) The Unit class contains static final references to KMUnit.INSTANCE
    and MeterUnit.INSTANCE.

    I would expect that: the KMUnit class is loaded and its instance created;
    the MeterUnit class is loaded and its instance created; the Unit class is
    loaded and both KM and METERS are initialized; they are final, so they
    can't be changed. But when I run my test case from the console with
    Maven, the result is:

        T E S T S
        Running de.audi.echargingstations.tests.TestMetricUnits
        Unit.METERS: m
        Unit.KM: null
        Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.089 sec <<< FAILURE!
        testConversion(de.audi.echargingstations.tests.TestMetricUnits)  Time elapsed: 0.011 sec <<< ERROR!
        java.lang.NullPointerException: null
            at de.audi.echargingstations.tests.TestMetricUnits.testConversion(TestMetricUnits.java:29)

        Results:
        Tests in error: TestMetricUnits.testConversion:29 NullPointer

    And the funny part is that when I run this test from Eclipse via the
    JUnit runner, everything is fine: no NullPointerException, and the
    console shows:

        Unit.METERS: m
        Unit.KM: km

    So the question is: what can cause the KM variable in Unit to be null
    while, at the same time, METERS is not null?
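
    A sketch of what is probably happening: the superclass and subclass
    static initializers form a cycle. If KMUnit happens to be the first of
    the three classes to be initialized (class initialization order can
    differ between Maven's Surefire and Eclipse's JUnit runner), the JVM must
    initialize its superclass Unit first; Unit's initializer then reads
    KMUnit.INSTANCE while KMUnit is still mid-initialization, sees null, and
    freezes Unit.KM as null. METERS survives because MeterUnit is not the
    class that started the cycle. One conventional way out, sketched here
    with KMUnit and MeterUnit unchanged from above, is to defer the subclass
    lookup so Unit carries no eager references to its subclasses:

        public abstract class Unit {
            protected Unit() {
            }

            // Deferring the lookup to a method call means KMUnit and
            // MeterUnit are only initialized on first use, after Unit
            // itself is fully initialized, so the superclass initializer
            // can no longer observe a half-built subclass.
            public static Unit km()     { return KMUnit.INSTANCE; }
            public static Unit meters() { return MeterUnit.INSTANCE; }

            public abstract double getValueInUnit(double value, Unit unit);
            protected abstract double getValueInMeters(double value);
        }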

  • ajax html vs xml/json responses - performance or other reasons

    - by pedalpete
    I've got a fairly Ajax-heavy site, and some 3k HTML-formatted pages are
    inserted into the DOM from Ajax requests. What I have been doing is
    taking the HTML responses and just inserting the whole thing using
    jQuery. My other option is to output XML (or possibly JSON) and then
    parse the document and insert it into the page. I've noticed that most
    larger sites do things the XML/JSON way: Google Mail, for example,
    returns XML rather than formatted HTML. Is this due to performance, or is
    there another reason to use XML/JSON versus just retrieving HTML? From a
    JavaScript standpoint, injecting HTML directly seems simplest. In jQuery
    I just do this:

        jQuery.ajax({
            type: "POST",
            url: "getpage.php",
            data: requestData,
            success: function(response) {
                jQuery('div#putItHear').html(response);
            }
        });

    With an XML/JSON response I would have to do:

        jQuery.ajax({
            type: "POST",
            url: "getpage.php",
            data: requestData,
            success: function(xml) {
                $("message", xml).each(function(id) {
                    message = $("message", xml).get(id);
                    $("#messagewindow").prepend("" + $("author", message).text() +
                        ": " + $("text", message).text() + "");
                });
            }
        });

    This is clearly not as efficient from a code standpoint, and I can't
    imagine it's better for browser performance, so why do things the second
    way?
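
    For comparison, a hedged sketch of the JSON variant (getpage.php and the
    messages/author/text field names are assumptions carried over from the
    XML snippet above). The usual argument is bandwidth and reuse rather than
    client code size: the server ships data once, and any client, HTML-based
    or not, can render it:

        jQuery.ajax({
            type: "POST",
            url: "getpage.php",
            data: requestData,
            dataType: "json",   // jQuery parses the response body for us
            success: function(data) {
                jQuery.each(data.messages, function(i, m) {
                    // Build the fragment client-side; .text() also
                    // escapes any markup hiding in the payload.
                    jQuery("#messagewindow").prepend(
                        jQuery("<p></p>").text(m.author + ": " + m.text));
                });
            }
        });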

  • Commercial software, but open and free for personal/edu use. How to license?

    - by Ivan
    I am developing a piece of software to sell for business use, but am
    willing to make it free and open-source for personal and educational use.
    These are the requirements I would like the license to set:

        1. Personal and educational use of the program and its source code is
           free.
        2. If derivative works are published, the original work and author
           (me) must be mentioned (including a textual link to my website in
           a not-very-hidden place), and the derivative work must have a
           different name. A derivative work can be closed-source.
        3. In every case of commercial use (where the end user is a
           commercial body: a company (except non-profit organizations), an
           individual entrepreneur, or a government office) of my work or any
           derivative work made by anyone, the end user, service provider, or
           derivative author must buy a commercial license from me.
        4. No guarantees or responsibilities, whether expressed or implied
           (except the case where one explicitly purchases a support contract
           from me and that particular contract specifies a responsibility).

    Is there a known common license for this case? As far as I can see, it
    cannot be OSI-approved, as it does not comply with §6 of the OSI
    definition of open source. But there could still be a commonly known,
    reusable license for this case, as it looks quite natural to me.

  • LGPL library with plugins of varied licenses

    - by Chris
    Note: "Plugins" here refers to shared objects that are accessed via dlopen() and friends. I'm writing a library that I'm planning on releasing under the LGPL. Its functionality can be extended (supporting new audio file formats, specifically) through plugins. I'm planning on creating an exception to the LGPL for this library so that plugins can be released under any license. So far so good. I've written a number of plugins already, some of which use LGPL and some of which use GPL libraries. I'm wary of releasing them with the main library, however, due to licensing issues. The LGPL-based ones would generally be fine, but for my "any license" clause. Would distributing these LGPL-based plugins with the library require the consent of the other license holders to create this exception? Along the same lines, would the inclusion of GPL-based plugins with my library force the whole thing to go GPL? I could also release the plugins separately. The advantage, I presume, is that the plugins an d library will now not be distributed together, creating more separation. But this seems to be no different, really, in the end. Boiled down: Can I include, with my LGPL library, plugins of varied licenses? If not, is it really any different releasing them separately? And if so, there's no real need to create an exception for non-LGPL plugins, is there? It's LGPL or nothing. I'd prefer asking a lawyer, of course, but this is just a hobby and I can't afford to hire a lawyer when I don't expect or want monetary compensation. I'm just hoping others have been in similar situations and have insight.

  • Looking for combinations of server and embedded database engines

    - by codeelegance
    I'm redesigning an application that will run as both a single-user and a
    multi-user application. It is a .NET 2.0 application. I'm looking for
    server and embedded databases that work well together: I want to deploy
    the embedded database in the single-user setup and, of course, the server
    in the multi-user setup. Past releases have been based on MSDE, but in
    the past year we've had a lot of install issues: new installs hanging and
    leaving the system in an unknown state, upgrades disconnecting the
    database, etc. I migrated the application to SQL Server 2005 and the
    install is more reliable (as long as a user doesn't try to install over a
    broken MSDE installation). Since next year's release will be a complete
    redesign, I figured now is the best time to address the database question
    as well. The database has been abstracted from the rest of the
    application, so I just need to choose which database(s) to use and write
    an implementation for each one. So far I've considered:

        - SQL Server / SQL Server Compact Edition
        - Firebird (the same DB engine is available in two different server
          modes and as an embedded DLL)

    Each has its own merits, but I'm also interested in any other
    suggestions. This is a fairly simple program and its data requirements
    are simple as well. I don't expect it to strain whatever database I
    eventually choose, so easy configuration and deployment carry more weight
    than performance.

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to turn URLs into PDFs. Everything we do is
    through memory, so we build up the request and then read the stream into
    ExpertPDF, then write the bits to file. All the files we have been
    requesting so far are plain HTML documents. Our designers update CSS
    files or change the HTML and re-request the documents as PDFs, but often
    things get cached. Take, for example, what happens if I rename the only
    CSS file: viewed through a web browser, the page looks broken because the
    CSS doesn't exist, but requested through the PDF generator it still looks
    OK, which means the CSS is cached somewhere. Here's the relevant PDF
    creation code:

        // Create a request
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
        request.UserAgent = "IE 8.0";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Method = "GET";

        // Send the request
        HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
        if (resp.IsFromCache) {
            System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!");
        } else {
            System.Web.HttpContext.Current.Trace.Write("not from cache");
        }

        // Read the response
        pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf");

    When I check the trace file, nothing is being loaded from cache. I
    checked the IIS log file and found a 200 response coming from the
    request, even after a file had been updated (I would expect a 302). We've
    tried putting the No-Cache attribute on all HTML pages, but still no
    luck. I even turned off all caching at the IIS level. Is there anything
    in ExpertPDF that might be caching somewhere, or something I can do to
    the request object to force a hard refresh of all resources?

    UPDATE: I put ?foo at the end of my style href links and this updates the
    CSS every time. Is there a setting someplace that can prevent stylesheets
    from being cached, so I don't have to use this inelegant solution?
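
    Since the trace shows the HTML itself is not served from cache, the stale
    CSS is presumably being cached by whatever rendering engine ExpertPDF
    drives when it fetches the stylesheets itself (likely the server's
    WinINET/IE cache, given the IE user agent). Pending a proper setting, the
    "?foo" trick can at least be automated. A sketch that rewrites every
    stylesheet reference with a unique token before handing the HTML to
    ExpertPDF; the regex assumes .css paths end just before a closing quote:

        using System;
        using System.IO;
        using System.Text;
        using System.Text.RegularExpressions;

        // Read the fetched page, then cache-bust the CSS per request.
        string html;
        using (var reader = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
        {
            html = reader.ReadToEnd();
        }

        // Appending a fresh query string makes each stylesheet URL
        // unique, so any cache between us and the renderer misses.
        string busted = Regex.Replace(html, @"\.css(?=[""'])",
                                      ".css?v=" + DateTime.UtcNow.Ticks);

        using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(busted)))
        {
            pdf.SavePdfFromHtmlStream(ms, Encoding.UTF8, "Output.pdf");
        }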

  • How to handle alpha in a manual "Overlay" blend operation?

    - by quixoto
    I'm playing with some manual (walk-the-pixels) image processing, and I'm
    recreating the standard "overlay" blend. I'm looking at the "Photoshop
    math" macros here: http://www.nathanm.com/photoshop-blending-math/ (see
    also there for a more readable version of Overlay). Both source images
    are in fairly standard RGBA (8 bits each) format, as is the destination.
    When both images are fully opaque (alpha is 1.0), the result is blended
    correctly as expected. But if my "blend" layer (the top image) has
    transparency in it, I'm a little flummoxed as to how to factor that alpha
    into the blending equation correctly. I expect it to work such that
    transparent pixels in the blend layer have no effect on the result,
    opaque pixels in the blend layer do the overlay blend as normal, and
    semi-transparent blend-layer pixels have some scaled effect on the
    result. Can someone explain the blend equations or the concept behind
    doing this? Bonus points if you can help me do it such that the resulting
    image has correctly premultiplied alpha (which only comes into play for
    pixels that are not opaque in both layers, I think). Thanks!

        // factor in blendLayerA, (1-blendLayerA) somehow?
        resultR = ChannelBlend_Overlay(baseLayerR, blendLayerR);
        resultG = ChannelBlend_Overlay(baseLayerG, blendLayerG);
        resultB = ChannelBlend_Overlay(baseLayerB, blendLayerB);
        resultA = 1.0; // also, what should this be??
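
    The usual recipe, sketched below with channels as floats in [0, 1]:
    compute the fully opaque overlay result first, then linearly interpolate
    between the base pixel and that result by the blend layer's alpha. Zero
    alpha leaves the base untouched, full alpha gives the plain overlay, and
    anything in between scales the effect:

        /* Standard overlay on one channel, both inputs in [0, 1]. */
        static float overlay(float base, float blend)
        {
            return (base < 0.5f)
                ? 2.0f * base * blend
                : 1.0f - 2.0f * (1.0f - base) * (1.0f - blend);
        }

        /* Overlay with blend-layer alpha: lerp base -> overlaid. */
        static float overlay_with_alpha(float base, float blend, float blendA)
        {
            float overlaid = overlay(base, blend);
            return base * (1.0f - blendA) + overlaid * blendA;
        }

        /* resultA is the usual Porter-Duff "over" alpha. For
           premultiplied output, multiply each color channel by
           resultA after blending. */
        static float result_alpha(float baseA, float blendA)
        {
            return blendA + baseA * (1.0f - blendA);
        }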

  • Flex fixed and variable height - can it be set in markup?

    - by Prutswonder
    I've got the following Flex application markup:

        <app:MyApplicationClass xmlns:app="*" width="100%" height="100%"
                layout="vertical" horizontalScrollPolicy="off" verticalScrollPolicy="off">
            <mx:VBox id="idPageContainer" width="100%" height="100%" verticalGap="0"
                    horizontalScrollPolicy="off" verticalScrollPolicy="off">
                <mx:HBox id="idTopContainer" width="100%" height="28" horizontalGap="2">
                    (top menu stuff goes here)
                </mx:HBox>
                <mx:HBox id="idBottomContainer" width="100%" height="100%"
                        verticalScrollPolicy="off" clipContent="false">
                    (page stuff goes here)
                </mx:HBox>
            </mx:VBox>
        </app:MyApplicationClass>

    When I run it, it displays a top panel with a fixed height and a bottom
    panel with a variable height. I expect the bottom panel's height to take
    up the remaining height, but it somehow overflows off-page. The only way
    I have found to fix this height issue (so far) is to set the height
    programmatically instead of leaving it variable:

        <mx:HBox id="idBottomContainer" width="100%" height="700"
                verticalScrollPolicy="off" clipContent="false">
            (page stuff goes here)
        </mx:HBox>

    And code-behind:

        package {
            import mx.containers.HBox;
            import mx.core.Application;
            import mx.events.ResizeEvent;
            // (...)

            public class MyApplicationClass extends Application {
                public var idBottomContainer:HBox;
                // (...)

                private function ON_CreationComplete(event:FlexEvent):void {
                    // (...)
                    addEventListener(ResizeEvent.RESIZE, ON_Resize);
                }

                private function ON_Resize(event:Event):void {
                    idBottomContainer.height = this.height - idTopContainer.height;
                }
            }
        }

    But this solution is too "dirty", and I'm looking for a more elegant way.
    Does anyone know an alternative?

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use
    in this case, but I'm not the table author and have only a small hurdle
    to climb... Someone has decided to use floats as you'd expect GUIDs to be
    used. I need to retrieve all the records with a specific float value.

        sp_help MyTable
        -- Column_name     Type   Computed  Length  Prec
        -- RandomGrouping  float  no        8       53

    Here's my naive attempt:

        -- yields no results
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping = 0.867153569942739

    And here's an approximately working attempt:

        -- yields 2 records
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001
                                 AND 0.867153569942739 + 0.00000001
        -- 0.867153569942739
        -- 0.867153569942739

    In my naive attempt, is that literal a floating-point literal, or is it
    really a decimal literal that gets converted later? If my literal is not
    a floating-point literal, what is the syntax for making one?

    EDIT: Another possibility has occurred to me... it may be that a more
    precise number than is displayed is stored in this column. It may be
    impossible to create a literal that represents this number. I will accept
    answers that demonstrate that this is the case.

    EDIT: In response to DVK: TSQL is MS SQL Server's dialect of SQL. This
    script works, so equality can be performed deterministically between
    float types:

        DECLARE @X float
        SELECT top 1 @X = RandomGrouping
        FROM MyTable
        WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001
                                 AND 0.839110948199148 + 0.000000000001

        -- yields two records
        SELECT * FROM MyTable WHERE RandomGrouping = @X

    I said "approximately" because that method tests for a range; with it I
    could get values that are not equal to my intended value. The linked
    article doesn't apply because I'm not (intentionally) trying to straddle
    the boundaries between decimal and float; I'm trying to work with floats
    only. This isn't about the non-convertibility of decimals to floats.
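
    On the literal question: T-SQL types a constant with a decimal point as
    decimal/numeric, and scientific notation is what produces a float
    literal. A quick sketch to verify, plus the query rewritten with a true
    float literal. Note the first EDIT above still applies: if the stored
    double carries more precision than the digits displayed, even a float
    literal built from those digits won't compare equal:

        -- Literal typing: decimal point => numeric, exponent => float
        SELECT SQL_VARIANT_PROPERTY(0.867153569942739,   'BaseType');  -- numeric
        SELECT SQL_VARIANT_PROPERTY(0.867153569942739E0, 'BaseType');  -- float

        -- Equality against a float literal
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping = 0.867153569942739E0;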

  • ProtoInclude for fields?

    - by Big
    I have a simple object:

        [ProtoContract]
        public class DataChangedEventArgs<T> : EventArgs
        {
            private readonly object key;
            private readonly T data;
            private readonly DataChangeType changeType;

            /// <summary>
            /// Key to identify the data item
            /// </summary>
            public object Key { get { return key; } }

            [ProtoMember(2, IsRequired = true)]
            public T Data { get { return data; } }

            [ProtoMember(3, IsRequired = true)]
            public DataChangeType ChangeType { get { return changeType; } }
        }

    and I have a problem with the key. Its type is object, but it can be an
    int, a long, or a string. I would intuitively use a ProtoInclude
    attribute to say "expect these types", but unfortunately that is a
    class-only attribute. Does anybody have any idea how I could work around
    this? For background, the public object Key is here for historical
    reasons (and used all over the place), so I would very much like to avoid
    the mother of all refactorings ;-) Any chance I could get this to
    serialize, even force it to serialize as a string?
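
    Taking up the "force it to serialize as a string" idea, one workaround is
    a private pass-through property that protobuf-net serializes instead of
    the object field. This is only a sketch: the single-character type tags
    are invented for the example, the other members are elided, and the
    readonly modifier on key has to go, since deserialization must be able to
    write the field back:

        [ProtoContract]
        public class DataChangedEventArgs<T> : EventArgs
        {
            private object key;   // was readonly; the shim setter restores it

            public object Key { get { return key; } }

            // Serialized stand-in for Key: an invented "i:", "l:" or "s:"
            // prefix records which of the three types the value had.
            [ProtoMember(1, IsRequired = false)]
            private string KeyShim
            {
                get
                {
                    if (key is int)  return "i:" + key;
                    if (key is long) return "l:" + key;
                    return key == null ? null : "s:" + key;
                }
                set
                {
                    if (value == null) { key = null; return; }
                    string payload = value.Substring(2);
                    switch (value[0])
                    {
                        case 'i': key = int.Parse(payload);  break;
                        case 'l': key = long.Parse(payload); break;
                        default:  key = payload;             break;
                    }
                }
            }

            // Data and ChangeType members unchanged from above...
        }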

  • Creating nested arrays on the fly

    - by adardesign
    What I am trying to do is loop over this HTML and build a nested array of
    the values I want to grab. It might look complex at first, but it is a
    simple question... This script is just part of an object containing
    methods.

    HTML:

        <div class="configureData">
            <div title="Large">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Medium">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
            <div title="Small">
                <a href="yellow" title="true" rel="$55.00" name="sku22828"></a>
                <a href="green" title="true" rel="$55.00" name="sku224438"></a>
                <a href="Blue" title="true" rel="$55.00" name="sku22222"></a>
            </div>
        </div>

    JavaScript (part of a larger script):

        parseData: function(dH) {
            dH.find(".configureData div").each(function(indA, eleA) {
                colorNSize.tempSizeArray[indA] = [eleA.title, [], [], [], []];
                $(eleZ).find("a").each(function(indB, eleB) {
                    colorNSize.tempSizeArray[indA][indB + 1] = eleC.title;
                });
            });
        },

    I expect the end array to look like this:

        [
            ["large",  ["yellow", "green", "blue"],
                       ["true", "true", "true"],
                       ["$55", "$55", "$55"]],
            ["Medium", ["yellow", "green", "blue"],
                       ["true", "true", "true"],
                       ["$55", "$55", "$55"]]
        ]
        // and so on....
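
    A hedged sketch of the inner loop rewritten so it actually fills the
    nested arrays (the original references eleZ and eleC, which are never
    defined; presumably eleA and eleB were meant). Index 0 holds the size
    title, and indices 1 to 3 collect each anchor's href, title, and rel:

        parseData: function(dH) {
            dH.find(".configureData div").each(function(indA, eleA) {
                var row = [eleA.title, [], [], []];
                $(eleA).find("a").each(function(indB, eleB) {
                    row[1][indB] = $(eleB).attr("href");  // color
                    row[2][indB] = eleB.title;            // availability flag
                    row[3][indB] = $(eleB).attr("rel");   // price
                });
                colorNSize.tempSizeArray[indA] = row;
            });
        },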

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there
    any techniques to tune SQL Server tables so they're more likely to return
    rows in clustered index order, without having to specify ORDER BY every
    single time I want to run a quick ad hoc query? For example, would
    rebuilding my clustered index or updating statistics help? I'm aware that
    I can't count on a query like:

        select * from AuditLog where UserId = 992

    to return records in the order of the clustered index, so I would never
    build application code on this assumption. But for simple ad hoc queries,
    on almost all of my tables, the data consistently comes out in clustered
    index order, and I've gotten used to seeing the most recent results at
    the bottom. Out of all the many tables we use, I've only noticed two that
    ever give me results in an unpredicted order. This is really just an
    annoyance, but it would be nice to minimize it. In case this is relevant
    because of page-boundary issues or something like that: one of the tables
    with inconsistent ordering, the AuditLog table, is the longest table we
    have with a clustered index on an identity column. Also, this database
    has recently been moved from SQL 2005 to SQL 2008, and we've seen no
    noticeable change in this behavior.
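
    For what it's worth, when the clustered key is the identity column, the
    explicit ORDER BY is typically close to free: the optimizer can satisfy
    it with an ordered scan of the clustered index rather than adding a Sort
    operator (worth confirming in the query plan). A sketch, where AuditLogId
    is an assumed name for the identity/clustering key:

        SELECT *
        FROM AuditLog
        WHERE UserId = 992
        ORDER BY AuditLogId;   -- ordered clustered-index scan, usually no sort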

  • New projects not built when target platform is set explicitly

    - by stiank81
    I create a new solution with one project and then change the target
    platform from "Any CPU" to "x86". After this, newly added projects don't
    get built by default, and their target platform doesn't follow the global
    setting. Why?! Looking at the Configuration Manager, newly added projects
    are not checked to "Build", and they get the target platform "Any CPU"
    instead of the globally set x86. Why is this happening? I expect new
    projects to get the globally defined x86 target platform. Some things
    I've tried:

        - Toggling the global platform back to Any CPU and then to x86 again.
          No change.
        - Choosing the platform explicitly for the new project. x86 is not
          available in the list, and when I choose <New..> and try adding it,
          I'm not allowed to, as "a solution platform with the same name
          already exists".
        - In the build properties for the new project, I can't change the
          platform in the Configuration section, but I can set "Platform
          target" to x86 in the General section. It is not clear whether this
          actually makes a difference, though, and it doesn't respond if I
          change the target platform globally later.

    Initially I thought this was a problem with converting my solution from
    VS2008 to VS2010, but the problem applies to both: when I create a
    solution in VS2008 and just stay in VS2008, I still get the problem.

  • TFS: How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk
    (C10) that I now want to merge onto RB. The file has a change at C3
    (common to both branches), one at C7 on RB, and changes at C9 and C10 on
    trunk. So the history for my changed file looks like this:

        RB:    C5 -> C7
        Trunk: C3 -> C9 -> C10

    When I merge C10 from trunk to RB, I'd expect to see a merge window
    showing me C10 | C3 | C7, since C3 is the common ancestor revision and
    C10 and C7 are the tips of my two branches respectively. However, my
    merge tool shows me C10 | C9 | C7. My merge tool is configured to show
    %1 (OriginalFile) | %3 (BaseFile) | %2 (ModifiedFile), so this tells me
    TFS chose C9 as the base revision. This is totally unexpected and
    completely contrary to the way I'm used to merges working in Mercurial or
    Git. Did I get something wrong, or is TFS trying to drive me nuts with
    merging? Is this the default TFS merge behavior? If so, can you provide
    insight into why they chose to implement it this way? I'm using TFS 2008
    with VS2010 as a client.

  • javascript innerHTML without childNodes?

    - by John Doe
    Hi all. I'm having a Firefox issue where I don't see the wood for the
    trees. Using Ajax, I get HTML source from a PHP script. This HTML code
    contains a <tbody> tag, and within the tbody some more tr/td's. Now I
    want to append this tbody markup to an existing table. But there is one
    more condition: the table is part of a form and thus contains checkboxes
    and drop-downs. If I used

        table.innerHTML += content;

    Firefox would reload the table and reset all elements within it, which
    isn't very user-friendly, as I'd like to keep what I have. What I have is
    this:

        // content equals transport.responseText from the Ajax request
        function appendToTable(content){
            var wrapper = document.createElement('table');
            wrapper.innerHTML = content;
            wrapper.setAttribute('id', 'wrappid');
            wrapper.style.display = 'none';
            document.body.appendChild(wrapper);
            // get the parsed element - well, it should be
            wrapper = document.getElementById('wrappid');
            // the destination table
            table = document.getElementById('tableid');
            // firebug prints a table element - seems right
            console.log(wrapper);
            // firebug prints the content I've inserted - seems right
            console.log(wrapper.innerHTML);
            var i = 0;
            // childNodes is iterated 2 times, both are text nodes;
            // the second one seems to be a simple '\n'
            for(i = 0; i < wrapper.childNodes.length; i++){
                // firebug prints 'undefined' - wth!?
                console.log(wrapper.childNodes[i].innerHTML);
                // firebug prints a text node - <TextNode textContent=" ">
                console.log(wrapper.childNodes[i]);
                table.appendChild(wrapper.childNodes[i]);
            }
            // WEIRD: firebug has no problem showing the 'wrappid' table and
            // its contents in the HTML view - which suggests the elements I
            // want are there, and not text elements
        }

    Either this is so trivial that I don't see the problem, or it's a corner
    case, and I hope someone here has enough experience to give advice on it.
    Can anyone imagine why I get text nodes and not the parsed DOM elements I
    expect? BTW: I can't give a full example, because I can't reproduce this
    in a smaller piece of code; it's one of those bugs that occur in the wild
    and not in my test set. Thanks all.
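
    A likely explanation, sketched below: when the markup is parsed into the
    wrapper table, the element nodes (an implicit tbody holding the rows) sit
    alongside whitespace text nodes, and childNodes hands back those text
    nodes too; their innerHTML is undefined. Walking only the parsed rows
    sidesteps that:

        // Sketch: move just the parsed <tr> elements, skipping
        // whitespace text nodes. getElementsByTagName returns a live
        // list, so each appendChild (which *moves* the node out of
        // wrapper) shrinks it by one.
        var rows = wrapper.getElementsByTagName('tr');
        var target = table.tBodies.length ? table.tBodies[0] : table;
        while (rows.length > 0) {
            target.appendChild(rows[0]);
        }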

  • Question about Client IDs

    - by George
    I have a user control that is emitting JavaScript using the ClientID
    property. For example:

        Out &= "ValidatorHookupControlID(" & Quote & ddlMonth.ClientID & Quote & _
               "), document.all(" & Quote & CustomValidator1.ClientID & Quote & "));" & vbCrLf

    It appears to me that ClientID does NOT return the ultimate ID that is
    sent to the browser. Instead, ClientID appears to be aware only of its
    current parent control, which in this case is the user control, so the ID
    that is returned is "dtmPassportExpirationDate_ddlMonth", when in fact
    the user control is included in a master page and the ultimate ID that is
    used is "ctl00_phPageContent_dtmPassportExpirationDate_ddlMonth". I may
    be nuts, but that's what it appears to be doing. I expected ClientID to
    return the ultimate ID used in the HTML. Am I missing something?

  • g++ Linking Error on Mac while compiling FFMPEG

    - by Saptarshi Biswas
    g++ on Snow Leopard is throwing linking errors on the following piece of
    code.

    test.cpp:

        #include <iostream>
        using namespace std;

        #include <libavcodec/avcodec.h>   // required headers
        #include <libavformat/avformat.h>

        int main(int argc, char** argv) {
            av_register_all();   // offending library call
            return 0;
        }

    When I try to compile this using the following command:

        g++ test.cpp -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I get the error:

        Undefined symbols:
          "av_register_all()", referenced from:
              _main in ccUD1ueX.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Interestingly, if I have the equivalent C code, test.c:

        #include <stdio.h>
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>

        int main(int argc, char** argv) {
            av_register_all();
            return 0;
        }

    gcc compiles it just fine:

        gcc test.c -I/usr/local/include -L/usr/local/lib \
            -lavcodec -lavformat -lavutil -lz -lm -o test

    I am using Mac OS X 10.6.5:

        $ g++ --version
        i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
        $ gcc --version
        i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)

    FFMPEG's libavcodec, libavformat, etc. are C libraries, and I have built
    them on my machine like this:

        ./configure --enable-gpl --enable-pthreads --enable-shared \
            --disable-doc --enable-libx264
        make && sudo make install

    As one would expect, libavformat indeed contains the symbol
    av_register_all:

        $ nm /usr/local/lib/libavformat.a | grep av_register_all
        0000000000000000 T _av_register_all
        00000000000089b0 S _av_register_all.eh

    I am inclined to believe g++ and gcc have different views of the
    libraries on my machine, and that g++ is not able to pick up the right
    libraries. Any clue?
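
    The error message itself carries the clue: the linker is looking for
    av_register_all() with a C++ signature, meaning g++ compiled the
    declaration with C++ name mangling, while the library exports the plain C
    symbol _av_register_all (exactly what the nm output shows). FFmpeg
    headers of that era didn't wrap themselves in C-linkage guards, so the
    usual fix is to add them at the include site. A sketch:

        #include <iostream>

        // Tell g++ these are C declarations: no name mangling, so the
        // linker looks for _av_register_all instead of a mangled symbol.
        extern "C" {
        #include <libavcodec/avcodec.h>
        #include <libavformat/avformat.h>
        }

        int main(int argc, char** argv) {
            av_register_all();
            return 0;
        }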

  • OpenGL pixels drawn with each horizontal pair swapped

    - by Tim Kane
    I'm somewhat new to OpenGL, though I'm fairly sure my problem lies in the
    pixel format being used or in how my texture is being generated... I'm
    drawing a texture onto a flat 2D quad using a 16-bit RGB5_A1 pixel
    format, though I don't make use of any alpha at this stage. The problem
    I'm having is that each pair of horizontal pixel values has been swapped.
    That is, if the pixel positions should be in this order (assume an 8x2
    image):

        0 1 2 3 4 5 6 7

    they are instead drawn as:

        1 0 3 2 5 4 7 6

    (An image showed this more clearly: left was what I got, right was what I
    should get.) The question is... how have I ended up with this? Is there
    something wrong with the pixel format? Unlikely, since the colours all
    appear correct, and I would expect all kinds of nastiness if it were down
    to endianness. Suggestions greatly appreciated.

    Update: Turns out the problem was in my source renderer. Interestingly,
    I've avoided the problem entirely by using 32-bit textures (I haven't
    tried 24-bit at this point).
