Search Results

Search found 5658 results on 227 pages for 'eric fail'.

Page 21/227 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Why does the following JavaScript fail to load XML?

    - by Pavitar
    I have taken an example taught to us in class, wherein JavaScript is used to retrieve data from the XML, but it doesn't work. Please help; I have also added the XML file below. <html> <head> <title>Customer Info</title> <script language="javascript"> var xmlDoc = 0; var xmlObj = 0; function loadCustomers(){ xmlDoc = new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async = "false"; xmlDoc.onreadystatechange = displayCustomers; xmlDoc.load("customers.xml"); } function displayCustomers(){ if(xmlDoc.readyState == 4){ xmlObj = xmlDoc.documentElement; var len = xmlObj.childNodes.length; for(i = 0; i < len; i++){ var nodeElement = xmlObj.childNodes[i]; document.write(nodeElement.attributes[0].value); for(j = 0; j < nodeElement.childNodes.length; j++){ document.write(" " + nodeElement.childNodes[j].firstChild.nodeValue); } document.write("<br/>"); } } } </script> </head> <body> <form> <input type="button" value="Load XML" onClick="loadCustomers()"> </form> </body> </html> XML (customers.xml) <?xml version="1.0" encoding="UTF-8"?> <customers> <customer custid="CU101"> <pwd>PW101</pwd> <email>[email protected]</email> </customer> <customer custid="CU102"> <pwd>PW102</pwd> <email>[email protected]</email> </customer> <customer custid="CU103"> <pwd>PW103</pwd> <email>[email protected]</email> </customer> <customer custid="CU104"> <pwd>PW104</pwd> <email>[email protected]</email> </customer> </customers>
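    A hedged sketch of one way to get this working outside of IE: ActiveXObject("Microsoft.XMLDOM") only exists in Internet Explorer, and document.write() called after the page has finished loading replaces the whole document. The version below assumes customers.xml is served from the same origin over HTTP and that the page contains a <div id="output"> to write into.

        function loadCustomers() {
            var xhr = new XMLHttpRequest();                 // works in non-IE browsers (and IE7+)
            xhr.open("GET", "customers.xml", true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    displayCustomers(xhr.responseXML.documentElement);
                }
            };
            xhr.send(null);
        }

        function displayCustomers(root) {
            var out = "";
            var customers = root.getElementsByTagName("customer");
            for (var i = 0; i < customers.length; i++) {
                out += customers[i].getAttribute("custid");
                var children = customers[i].childNodes;
                for (var j = 0; j < children.length; j++) {
                    if (children[j].nodeType === 1) {       // skip whitespace text nodes
                        out += " " + children[j].firstChild.nodeValue;
                    }
                }
                out += "<br/>";
            }
            // Writing into an element avoids document.write(), which wipes the page
            // when called after the document has loaded.
            document.getElementById("output").innerHTML = out;
        }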

    Read the article

  • Why does my submit button fail to trigger Javascript MVC?

    - by user54197
    I have a simple code from a book and the code should display data from my controller in the "results" span. What am I missing? <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <script type="text/javascript"> $("form[action$='GetQuote']").submit(function() { $.post($(this).attr("action"), $(this).serialize(), function(response) { $("#results").html(response); }); return false; }); </script> <h2>Index</h2> <%using (Html.BeginForm("GetQuote","Stocks")) { %> Symbol: <%= Html.TextBox("symbol") %> <input type="submit" /> <span id="results"></span> <% } %> <p><i><%=DateTime.Now.ToLongTimeString() %></i></p> </asp:Content>
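    One thing worth checking, as a hedged guess rather than a confirmed diagnosis: the script block runs before the form exists in the page, so the submit handler never gets attached. Wrapping the binding in jQuery's ready handler is the usual first fix:

        $(document).ready(function () {
            $("form[action$='GetQuote']").submit(function () {
                $.post($(this).attr("action"), $(this).serialize(), function (response) {
                    $("#results").html(response);
                });
                return false;   // suppress the normal full-page post
            });
        });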

    Read the article

  • Have you had DLLs fail after upgrading to a 64-bit server?

    - by quakkels
    Hey all, I'm wondering if anyone else has experienced failed DLLs after upgrading their servers. My company is in the process of upgrading our code and servers after ten years of using classic ASP. We've set up our new server running Windows 2008 and IIS 7. Our classic ASP code and our new ASP.NET MVC code work pretty well. Our problems started happening when we began moving our old websites to the new server. When trying to load the page in the server machine's own browser, we initially got a 500 error. If we refreshed the page, then some of the page would load but then display an error: Server object error 'ASP 0177 : 800401f3' Server.CreateObject Failed /folder/scriptname.asp, line 24 800401f3 (btw: on remote machines we would just get 500 errors). Line 24 is the first executable code in the script: '23 lines of comments set A0SQL_DATA = server.createobject("olddllname.Data") 'the rest of the script That specific line is trying to use a ten-year-old DLL to create a server object. I don't think the server configuration is a problem, because I'm able to create "adodb.recordset" server objects without any problems. Is there an issue when running correctly registered old DLLs on 64-bit systems? Is there a way to get old DLLs working on 64-bit systems?
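    A hedged suggestion: a classic ASP site that depends on an old 32-bit COM DLL usually cannot create that object from a 64-bit worker process, so the usual workaround on IIS 7 is to run the site's application pool in 32-bit mode. In the sketch below the pool name and DLL name are placeholders:

        %windir%\system32\inetsrv\appcmd set apppool "ClassicAspPool" /enable32BitAppOnWin64:true
        rem the old component must also be registered with the 32-bit regsvr32:
        rem %windir%\SysWOW64\regsvr32.exe olddllname.dll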

    Read the article

  • Why does adding a reference to project targeting .NET Framework 4.0 fail?

    - by Malcolm Post
    We have two projects that are both class libraries. Project 1 is a VS 2008 project and targets the .NET Framework 3.5. Project 2 is a VS 2010 (release candidate) project that targets the .NET Framework 4.0. When I try to add a reference to Project 2 in Project 1, it fails with a less-than-informative error message. I know that if I change the target framework for Project 2 to 3.5, then adding the reference will work. My question is, if I don't change the target frameworks, but convert Project 1 to VS 2010, will the referencing work? Stated another way, is there some inherent incompatibility between class libraries targeting different framework versions, or is it failing for me because VS 2008 doesn't know about the 4.0 framework?

    Read the article

  • Where/when do C# and the .NET Framework fail to be the right tool?

    - by Nate Bross
    In my non-programming life, I always attempt to use the appropriate tool for the job, and I feel that I do the same in my programming life, but I find that I am choosing C# and .NET for almost everything. I'm finding it hard to come up with (realistic business) needs that cannot be met by .NET and C#. Obviously embedded systems might require something less bloated than the .NET Micro Framework, but I'm really looking for line-of-business situations where .NET is not the best tool. I'm primarily a C# and .NET guy since it's what I'm most comfortable in, but I know a fair amount of C++, PHP, VB, PowerShell, batch files, and Java, as well as being versed in the web technologies (JavaScript, HTML/CSS). I'm open-minded about my skill set, and I'm looking for cases where C# and .NET are not the right tool for the job. The bottom line is that I feel I'm choosing C# and .NET simply because I am very comfortable with them, so I'm looking for cases where you have chosen something other than .NET, even though you are primarily a .NET developer.

    Read the article

  • Git noob: why does "git push origin master" to GitHub fail?

    - by anjanb
    Hi there, here are the steps I took: (1) I created a repository on GitHub and (2) generated a Rails project on my Windows Vista Home Premium machine (which has msysGit 1.7.0.2). (3) I then committed the generated files, (4) ran git remote add origin [email protected]:anjanb/Jobs2Go.git, and (5) ran git push origin master. On the fifth step, I get the following error: "Permission denied (publickey). fatal: The remote end hung up unexpectedly" I vaguely remember following some ssh-keygen steps when I created my first GitHub repository, but I have forgotten what they were. Can someone point out what I did wrong and what I need to do right? Thank you.
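    "Permission denied (publickey)" generally means GitHub never received a public key from this machine. A hedged sketch of the usual msysGit fix, run from Git Bash (the e-mail address is a placeholder for the one registered with GitHub):

        ssh-keygen -t rsa -C "[email protected]"   # accept the default file, pick a passphrase
        cat ~/.ssh/id_rsa.pub                     # paste this into GitHub: Account Settings -> SSH Public Keys
        ssh -T [email protected]                      # should greet you by username once the key is registered
        git push origin master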

    Read the article

  • What are the most likely reasons an application would fail on only one of my servers?

    - by Rising Star
    I have several servers to test new code on. I primarily push out ASP.NET web applications. Last week, I had an issue where I installed a newly developed web application on three servers. The three servers all run in separate environments. The application worked fine on two of them, but consistently crashed on the third server with each web request. The problem was eventually traced to an in-house developed .dll file being out of date on the third server. I'm certain that this kind of thing happens all the time. However, there are numerous things that could go wrong to cause this kind of behavior, and I spent quite a bit of time tracing this problem. I would like to make a list of things to be suspicious of the next time this happens. What are the most likely reasons that a web application would crash on one of my servers while identical code runs fine on another server?
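    Out-of-date or mismatched assemblies are a common culprit, so one hedged aid for next time is a small dump of exactly which assembly versions each server actually loads, run (or logged) from the application itself:

        using System;
        using System.Reflection;

        class LoadedAssemblyReport
        {
            static void Main()
            {
                // Compare this output between the working and the failing server.
                foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
                {
                    AssemblyName name = asm.GetName();
                    Console.WriteLine("{0}, {1}", name.FullName,
                        asm.GlobalAssemblyCache ? "(GAC)" : asm.Location);
                }
            }
        }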

    Read the article

  • Security Alert for CVE-2011-5035 Updated

    - by Eric P. Maurice
    Hi, this is Eric Maurice again.  Oracle has just updated the Security Alert for CVE-2011-5035 to announce the availability of additional fixes for products that were affected by this vulnerability through their use of the WebLogic Server and Oracle Container for J2EE components.  As explained in a previous blog entry, a number of programming language implementations and web servers were found vulnerable to hash table collision attacks.  This vulnerability is typically remotely exploitable without authentication, i.e., it may be exploited over a network without the need for a username and password.  If successfully exploited, malicious attackers can use this vulnerability to create denial of service conditions against the targeted system. A complete list of affected products and their versions, as well as instructions on how to obtain the fixes, are listed on the Security Alert Advisory.  Oracle highly recommends that customers apply these fixes as soon as possible.

    Read the article

  • Why does this ASP.NET MVC unit test fail?

    - by Brian McCord
    I have this unit test: [TestMethod] public void Delete_Post_Passes_With_State_4() { //Arrange ViewResult result = stateController.Delete( 4 ) as ViewResult; var model = (State)result.ViewData.Model; //Act RedirectToRouteResult redirectResult = stateController.Delete( model ) as RedirectToRouteResult; var newresult = stateController.Delete( 4 ) as ViewResult; var newmodel = (State)newresult.ViewData.Model; //Assert Assert.AreEqual( redirectResult.RouteValues["action"], "Index" ); Assert.IsNull( newmodel ); } Here are the two controller actions that handle deleting: // // GET: /State/Delete/5 public ActionResult Delete(int id) { var x = _stateService.GetById( id ); return View(x); } // // POST: /State/Delete/5 [HttpPost] public ActionResult Delete(State model) { try { if( model == null ) { return View( model ); } _stateService.Delete( model ); return RedirectToAction("Index"); } catch { return View( model ); } } What I can't figure out is why this test fails. I have verified that the record actually gets deleted from the list. If I set a break point in the Delete method on the line: var x = _stateService.GetById( id ); The GetById does indeed return a null just as it should, but when it gets back to the newresult variable in the test, the ViewData.Model is the deleted model. What am I doing wrong?
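    A hedged guess at the cause: in ASP.NET MVC, View(model) only overwrites ViewData.Model when the argument is non-null, so when the same controller instance handles the second Delete(4) call, the model from the first call is still sitting in ViewData. Using a fresh controller for the final assertion (the constructor below is assumed) sidesteps that:

        // Arrange a new controller so no ViewData carries over from earlier calls.
        var freshController = new StateController(_stateService);   // hypothetical constructor

        // Act & Assert
        var newresult = freshController.Delete(4) as ViewResult;
        Assert.IsNull(newresult.ViewData.Model);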

    Read the article

  • Why does my Perl CGI program fail with "Software error: ..."?

    - by kiran
    When I try to run my Perl CGI program, the returned web page tells me: Software error: For help, please send mail to the webmaster (root@localhost), giving this error message and the time and date of the error. Here is my code in one of the files: #!/usr/bin/perl use lib "/home/ecoopr/ecoopr.com/CPAN"; use CGI; use CGI::FormBuilder; use CGI::Session; use CGI::Carp (fatalsToBrowser); use CGI::Session; use HTML::Template; use MIME::Base64 (); use strict; require "./db_lib.pl"; require "./config.pl"; my $query = CGI->new; my $url = $query->url(); my $hostname = $query->url(-base => 1); my $login_url = $hostname . '/login.pl'; my $redir_url = $login_url . '?d=' . $url; my $domain_name = get_domain_name(); my $helpful_msg = $query->param('m'); my $new_trusted_user_fname = $query->param('u'); my $action = $query->param('a'); $new_trusted_user_fname = MIME::Base64::decode($new_trusted_user_fname); ####### Colin: Added July 12, 2009 ####### my $view = $query->param('view'); my $offset = $query->param('offset'); ####### Colin: Added July , 2009 ####### #print $session->header; #print $new_trusted_user; my $helpful_msg_txt = qq[]; my $helpful_msg_div = qq[]; if ($helpful_msg)
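    A hedged note: that "Software error" banner is CGI::Carp's fatalsToBrowser wrapper, and the actual message (stripped from the excerpt above) is also written to the web server's error log. Two quick checks from the shell, where the script name and log path are placeholders:

        perl -c scriptname.pl                    # compile check; prints "syntax OK" or the real compile error
        tail -n 50 /var/log/apache2/error_log    # the full "Software error" text is logged here as well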

    Read the article

  • jQuery fails to retrieve accurate data from a sibling field

    - by i need help
    wonder what's wrong <table id=tblDomainVersion> <tr> <td>Version</td> <td>No of sites</td> </tr> <tr> <td class=clsversion>1.25</td> <td><a id=expanddomain>3 sites</a><span id=spanshowall></span></td> </tr> <tr> <td class=clsversion>1.37</td> <td><a id=expanddomain>7 sites</a><span id=spanshowall></span></td> </tr> </table> $('#expanddomain').click(function() { //the siblings result incorrect //select first row will work //select second row will no response var versionforselected= $('#expanddomain').parent().siblings("td.clsversion").text(); alert(versionforselected); $.ajax({ url: "ajaxquery.php", type: "POST", data: 'version='+versionforselected, timeout: 900000, success: function(output) { output= jQuery.trim(output); $('#spanshowall').html(output); }, }); });
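    A hedged observation: id attributes must be unique, so $('#expanddomain') only ever matches (and binds) the first anchor; the second row never responds. The sketch below assumes the markup is changed to class="expanddomain" and class="spanshowall", and scopes each click to its own row via $(this):

        $('.expanddomain').click(function () {
            var row = $(this).closest('tr');                       // the row that was clicked
            var versionforselected = row.find('td.clsversion').text();
            $.ajax({
                url: 'ajaxquery.php',
                type: 'POST',
                data: { version: versionforselected },
                timeout: 900000,
                success: function (output) {
                    row.find('span.spanshowall').html($.trim(output));
                }
            });
        });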

    Read the article

  • Why does document.evaluate succeed in Firebug but fail in Selenium?

    - by anil
    The browser.getEval function in Selenium makes iterateNext return null, whereas the same script run in Firebug returns a value. In Firebug, document.evaluate("//button[text()='Save']", document, null, XPathResult.ANY_TYPE, null) .iterateNext() .disabled; returns true. But browser.getEval("document.evaluate(\"//button[text()='Save']\", document, null, XPathResult.ANY_TYPE, null) .iterateNext() .disabled;"); returns this error: "com.thoughtworks.selenium.SeleniumException: ERROR: Threw an exception: res.iterateNext() is null"
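    A hedged guess: getEval evaluates the script in the Selenium test-runner window, where document is not the application's page, which would explain why the same expression works in Firebug. Selenium RC exposes the application window through the browserbot (often written as this.browserbot.getUserWindow() or selenium.browserbot.getCurrentWindow(), depending on the version), so something along these lines may be worth trying as an unverified sketch:

        String script =
            "var doc = this.browserbot.getUserWindow().document;" +
            "doc.evaluate(\"//button[text()='Save']\", doc, null, XPathResult.ANY_TYPE, null)" +
            "   .iterateNext().disabled;";
        String disabled = browser.getEval(script);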

    Read the article

  • Why does Internet Explorer 8 fail to locate JPG files that other browsers can find?

    - by user278457
    The following URL doesn't display for me in Internet Explorer 8. I even tried compatibility mode and it didn't fix the issue. http://beat.com.au/sites/default/files/images/_DSC5596.jpg It appears just fine in Chrome/Safari/Firefox. I suspect it has something to do with the filename starting with _ but that seems like a fairly big stretch to me. Is this error repeatable on other people's computers? And why on earth would such a strange thing happen anyway?

    Read the article

  • How to design authentication in a thick client to be fail-safe?

    - by Jay
    Here's a use case: I have a desktop application (built using Eclipse RCP) which on start, pops open a dialog box with 'UserName' and 'Password' fields in it. Once the end user, inputs his UserName and Password, a server is contacted (a spring remote-servlet, with the client side being a spring httpclient: similar to the approaches here.), and authentication is performed on the server side. A few questions related to the above mentioned scenario: If said this authentication service were to go down, what would be the best way to handle further proceedings? Authentication is something that I cannot do away with. Would running the desktop client in a "limited" mode be a good idea? For instance, important features/menus/views will be disabled, rest of the application will be accessible? Should I have a back up authentication service running on a different machine, working as a backup? What are the general best-practices in this scenario? I remember reading about google gears and how it would let you edit and do stuff offline - should something like this be designed? Please let me know your design/architectural comments/suggestions. Appreciate your help.

    Read the article

  • How to write a Python 2.6+ script that fails gracefully with older Python?

    - by Sorin Sbarnea
    I'm using the new print from Python 3.x, and I observed that the following code does not compile under an older interpreter due to the end=' '. from __future__ import print_function import sys if sys.hexversion < 0x02060000: raise Exception("py too old") ... print("x",end=" ") # fails to compile with py24 How can I continue using the new syntax but make the script fail nicely? Is it mandatory to call another script and use only safe syntax in this one?
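    The root issue is that the whole file is byte-compiled before any of it executes, so a runtime version check cannot protect a line that is a syntax error on Python 2.4. A hedged sketch of the usual workaround keeps the entry script syntactically valid everywhere and imports the real code only after the check (real_work is a hypothetical module name):

        import sys

        if sys.hexversion < 0x02060000:
            sys.exit("This script requires Python 2.6 or newer")

        # real_work.py is free to use 'from __future__ import print_function'
        # and print("x", end=" "), because it is only compiled after the check.
        import real_work
        real_work.main()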

    Read the article

  • Best way to use Google's hosted jQuery, but fall back to my hosted library if Google fails

    - by Nosredna
    What would be a good way to attempt to load the hosted jQuery at Google (or other Google hosted libs), but load my copy of jQuery if the Google attempt fails? I'm not saying Google is flaky. There are cases where the Google copy is blocked (apparently in Iran, for instance). Would I set up a timer and check for the jQuery object? What would be the danger of both copies coming through? Not really looking for answers like "just use the Google one" or "just use your own." I understand those arguments. I also understand that the user is likely to have the Google version cached. I'm thinking about fallbacks for the cloud in general. Edit: This part added... Since Google suggests using google.load to load the ajax libraries, and it performs a callback when done, I'm wondering if that's the key to serializing this problem. I know it sounds a bit crazy. I'm just trying to figure out if it can be done in a reliable way or not. Update: jQuery now hosted on Microsoft's CDN. http://www.asp.net/ajax/cdn/
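    A hedged sketch of the simplest fallback, without google.load: test for window.jQuery immediately after the CDN script tag and document.write a local copy if it is missing (the local path is a placeholder). Because script tags load in order and block, the check runs only after the CDN request has succeeded or failed, so both copies never run together.

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
            // If the CDN copy did not load, window.jQuery is still undefined here.
            window.jQuery || document.write('<script src="/js/jquery-1.4.2.min.js"><\/script>');
        </script>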

    Read the article

  • Is there a way to cause a new C++ class instance to fail if certain conditions in the constructor are not met?

    - by Jim Fell
    As I understand it, when a new class is instantiated in C++, a pointer to the new class is returned, or NULL, if there is insufficient memory. I am writing a class that initializes a linked list in the constructor. If there is an error while initializing the list, I would like the class instantiator to return NULL. For example: MyClass * pRags = new MyClass; If the linked list in the MyClass constructor fails to initialize properly, I would like pRags to equal NULL. I know that I can use flags and additional checks to do this, but I would like to avoid that, if possible. Does anyone know of a way to do this? Thanks.
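    A hedged sketch: operator new does not return NULL when a constructor fails (a failed allocation throws std::bad_alloc by default), but the constructor itself can throw, which prevents a half-built object and lets the caller treat the pointer as null. The initList call below is a hypothetical stand-in for the real linked-list setup.

        #include <cstddef>
        #include <iostream>
        #include <stdexcept>

        class MyClass {
        public:
            MyClass() {
                if (!initList())                          // hypothetical list setup
                    throw std::runtime_error("linked list initialization failed");
            }
        private:
            bool initList() { return false; }             // stand-in for the real work
        };

        int main() {
            MyClass *pRags = NULL;
            try {
                pRags = new MyClass;
            } catch (const std::exception &e) {
                std::cerr << "construction failed: " << e.what() << '\n';
            }
            // pRags is still NULL here if the constructor threw.
            delete pRags;                                 // deleting NULL is a no-op
            return 0;
        }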

    Read the article

  • Why does mmap() fail with ENOMEM on a 1TB sparse file?

    - by metadaddy
    I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64 bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing. Here's some sample code that shows the problem - any clues? #include <errno.h> #include <fcntl.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/mman.h> #include <sys/types.h> #include <unistd.h> int main(int argc, char *argv[]) { char * filename = argv[1]; int fd; off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666); ftruncate(fd, size); printf("Created %ld byte sparse file\n", size); char * buffer = (char *)mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0); if ( buffer == MAP_FAILED ) { perror("mmap"); exit(1); } printf("Done mmap - returned 0x0%lx\n", (unsigned long)buffer); strcpy( buffer, "cafebabe" ); printf("Wrote to start\n"); strcpy( buffer + (size - 9), "deadbeef" ); printf("Wrote to end\n"); if ( munmap(buffer, (size_t)size) < 0 ) { perror("munmap"); exit(1); } close(fd); return 0; }

    Read the article

  • Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

    - by metaleap
    I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically / implicitly obtained) 4.2 but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL core context 3.2. (Shaders are always set to #version 150 in both cases but that's beside the point here I suspect.) According to specs, there are only two instances when glDrawArrays() fails with GL_INVALID_OPERATION: "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point "if a geometry shader is active and mode? is incompatible with [...]" -- nope, no geometry shaders as of now. Furthermore: I have verified & double-checked that it's only the glDrawArrays() calls failing. Also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too. This happens across 3 different nvidia GPUs and 2 different OSes (Win7 and OSX, both 64-bit -- of course, in OSX we have only the 3.2 context, no 4.2 anyway). It does not happen with an integrated "Intel HD" GPU but for that one, I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW here fails the window creation but that's an entirely different issue...) For what it's worth, here's the relevant routine excerpted from the render loop, in Golang: func (me *TMesh) render () { curMesh = me curTechnique.OnRenderMesh() gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf) if me.glElemBuf > 0 { gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf) gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil)) gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil)) gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0) } else { gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil)) /* BOOM! */ gl.DrawArrays(me.glMode, 0, me.glNumVerts) } gl.BindBuffer(gl.ARRAY_BUFFER, 0) } So of course this is part of a bigger render-loop, though the whole "*TMesh" construction for now is just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly with no errors reported when GL is queried for errors under both 3.3 and 4.2, yet on 3 nvidia GPUs with an explicit 3.2 core profile fails with an error code that according to spec is only invoked in two specific situations, none of which as far as I can tell apply here. What could be wrong here? Have you ever run into this? Any ideas what I have been missing?
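    One difference that fits these symptoms, offered as a hedged guess: a strict core profile has no default vertex array object, and issuing glVertexAttribPointer/glDrawArrays with no VAO bound is one of the less obvious ways to get GL_INVALID_OPERATION, while more permissive 3.3/4.2 contexts tend to let it slide. A sketch of the one-time setup, with the exact names depending on the Go GL binding in use:

        // Hypothetical one-time setup, before the render loop starts.
        var vao gl.Uint
        gl.GenVertexArrays(1, &vao)
        gl.BindVertexArray(vao)
        // The existing BindBuffer / VertexAttribPointer / DrawArrays calls can then
        // stay as they are; also make sure gl.EnableVertexAttribArray is called for
        // curProg.AttrLocs["aPos"] somewhere before drawing.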

    Read the article

  • Failing to save a managed object to Core Data after its properties were updated

    - by Tzur Gazit
    I have to trouble to create the object, but updating it fails. Here is the creation code: // Save data from pList to core data fro the first time - (void) saveToCoreData:(NSDictionary *)plistDictionary { // Create system parameter entity SystemParameters *systemParametersEntity = (SystemParameters *)[NSEntityDescription insertNewObjectForEntityForName:@"SystemParameters" inManagedObjectContext:mManagedObjectContext]; //// // GPS SIMULATOR //// NSDictionary *GpsSimulator = [plistDictionary valueForKey:@"GpsSimulator"]; [systemParametersEntity setMGpsSimulatorEnabled:[[GpsSimulator objectForKey:@"Enabled"] boolValue]]; [systemParametersEntity setMGpsSimulatorFileName:[GpsSimulator valueForKey:@"FileName"]]; [systemParametersEntity setMGpsSimulatorPlaybackSpeed:[[GpsSimulator objectForKey:@"PlaybackSpeed"] intValue]]; [self saveAction]; } During execution the cached copy is changed and then it is saved (or trying) to the database. Here is the code to save the changed copy: // Save data from pList to core data fro the first time - (void) saveSystemParametersToCoreData:(SystemParameters *)theSystemParameters { // Step 1: Select Data NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"SystemParameters" inManagedObjectContext:mManagedObjectContext]; [fetchRequest setEntity:entity]; NSError *error = nil; NSArray *items = [self.managedObjectContext executeFetchRequest:fetchRequest error:&error]; [fetchRequest release]; if (error) { NSLog(@"CoreData: saveSystemParametersToCoreData: Unresolved error %@, %@", error, [error userInfo]); abort(); } // Step 2: Update Object SystemParameters *systemParameters = [items objectAtIndex:0]; //// // GPS SIMULATOR //// [systemParameters setMGpsSimulatorEnabled:[theSystemParameters mGpsSimulatorEnabled]]; [systemParameters setMGpsSimulatorFileName:[theSystemParameters mGpsSimulatorFileName]]; [systemParameters setMGpsSimulatorPlaybackSpeed:[theSystemParameters mGpsSimulatorPlaybackSpeed]]; // Step 3: Save Updates [self saveAction]; } As to can see, I fetch the object that I want to update, change its values and save. Here is the saving code: - (void)saveAction { NSError *error; if (![[self mManagedObjectContext] save:&error]) { NSLog(@"ERROR:saveAction. Unresolved Core Data Save error %@, %@", error, [error userInfo]); exit(-1); } } The Persistent store method: - (NSPersistentStoreCoordinator *)persistentStoreCoordinator { if (mPersistentStoreCoordinator != nil) { return mPersistentStoreCoordinator; } NSString *path = [self databasePath]; NSURL *storeUrl = [NSURL fileURLWithPath:path]; NSError *error = nil; mPersistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]]; if (![mPersistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } return mPersistentStoreCoordinator; } There is no error but the sqLite file is not updated, hence the data is not persistent. Thanks in advance.

    Read the article

  • How can I configure Apigee to serve stale content if the backend is unavailable?

    - by Anurag Kapur
    We have an API proxy configured with cache ttl of 2mins. Is it possible to configure apigee to serve stale cached content if the backend goes down so that our end users don't see errors? We would rather have our end users get copies of stale cached content (even after the configured ttl of 2mins expires) instead of errors when the backend goes down. Would appreciate if someone could point me to the relevant documentation if this is possible.

    Read the article

  • Why might one app connect to a SQL back end fine and a second app fail if they share the same connection string?

    - by hawbsl
    Trying to figure out a SQL connection error 26 in our app. We've got two closely related apps Foo and FooAddIn. Foo is a Winforms app built in VS2010 and runs fine and connects fine to our SQLExpress back end. FooAddIn is an Outlook AddIn which references Foo.exe and connects to the same SQL Express instance. Or rather, it doesn't connect, instead reporting: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) Now, both apps share the same connectionstring and we've verified they really do share the same connectionstring. At this stage we're just testing from within the same developer machine, so the apps are on the same machine, going via the same VS2010 IDE. So a lot of the advice online for this error doesn't apply because the fact that Foo connects through to SQL Express tells us the database is there and available and can be reached. What else is there to check? One thing is that Foo and FooAddIn are running different runtime versions of System.Data (v2.0.50727 and v4.0.30319). Could that be a factor?
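    One thing worth ruling out, as a hedged guess: an Outlook add-in does not necessarily read the same .config file as the standalone executable (ConfigurationManager resolves against whatever configuration the host AppDomain was given), so two apps can share a connection string in source yet resolve different values at runtime. A small diagnostic, with the connection-string name assumed:

        // Log what each process actually resolves; run this from both Foo and FooAddIn.
        var settings = System.Configuration.ConfigurationManager.ConnectionStrings["FooDb"];
        System.Diagnostics.Trace.WriteLine(
            settings == null ? "FooDb: not found" : settings.ConnectionString);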

    Read the article

  • Why does a Linq Cast<T> operation fail when I have an implicit cast defined?

    - by Ryan Versaw
    I've created two classes, with one of them having an implicit cast between them: public class Class1 { public int Test1; } public class Class2 { public int Test2; public static implicit operator Class1(Class2 item) { return new Class1{Test1 = item.Test2}; } } When I create a new list of one type and try to Cast<T> to the other, it fails with an InvalidCastException: List<Class2> items = new List<Class2>{new Class2{Test2 = 9}}; foreach (Class1 item in items.Cast<Class1>()) { Console.WriteLine(item.Test1); } This, however, works fine: foreach (Class1 item in items) { Console.WriteLine(item.Test1); } Why is the implicit cast not called when using Cast<T>?
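    The usual explanation, for what it's worth: Cast<Class1>() performs a runtime cast from object, and user-defined implicit operators are resolved at compile time, so they are never consulted. Select with an explicit per-element conversion is the standard substitute:

        // Select compiles the (Class1)x conversion, so the implicit operator is used.
        foreach (Class1 item in items.Select(x => (Class1)x))
        {
            Console.WriteLine(item.Test1);
        }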

    Read the article

  • Why does only one of these @-webkit-keyframes declarations fail?

    - by Alex Ford
    I have two -webkit-keyframes declarations (see below). blink2 works fine. blink does nothing. What's the deal? Is there a limit to the number of keyframes that can be declared? @-webkit-keyframes blink { 0% { opacity:1; } 40% { opacity:1; } 50% { opacity:.5; } 90% { opacity:1; } 99% { opacity:1; } } @-webkit-keyframes blink2 { 0% { opacity:1; } 50% { opacity:.25; } 100% { opacity:1; } }

    Read the article
