Search Results

Search found 10106 results on 405 pages for 'fail fast'.


  • Why does this code fail in D2010, but not D7?

    - by Tom1952
    Why does this code get an access error on the Result := Buffer line in D2010, but not D7? Something, I'd guess, involving Unicode, but the compiler doesn't generate any warnings. Any suggestions on an elegant workaround?

        function GetTempPathAndFileName(const aExtension: string): string;
        var
          Buffer: array[0..MAX_PATH] of Char;
        begin
          repeat
            GetTempPath(SizeOf(Buffer) - 1, Buffer);
            GetTempFileName(Buffer, '~', 0, Buffer);
            Result := Buffer; // <--- crashes on this line
            Result := ChangeFileExt(Result, aExtension);
          until not FileExists(Result);
        end; { GetTempPathAndFileName }

  • What would cause native gem extensions on OS X to build but fail to load?

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have XCode 3.2.1 installed, with gcc 4.2.1. Ruby 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet. Ruby is running in 32-bit mode. I built this ruby from scratch when my MBP ran OSX 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of ‘foo’ with different width due to prototype." I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my XCode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OSX to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

  • Fast sign in C++ float...are there any platform dependencies in this code?

    - by Patrick Niedzielski
    Searching online, I have found the following routine for calculating the sign of a float in IEEE format. This could easily be extended to a double, too.

        // returns 1.0f for positive floats, -1.0f for negative floats, 0.0f for zero
        inline float fast_sign(float f) {
            if (((int&)f & 0x7FFFFFFF) == 0) return 0.f; // test exponent & mantissa bits: is input zero?
            else {
                float r = 1.0f;
                (int&)r |= ((int&)f & 0x80000000); // mask sign bit in f, set it in r if necessary
                return r;
            }
        }

    (Source: "Fast sign for 32 bit floats", Peter Schoffhauzer)

    I am wary of using this routine, though, because of the binary bit operations. I need my code to work on machines with different byte orders, but I am not sure how much of this the IEEE standard specifies, as I couldn't find the most recent version, published this year. Can someone tell me if this will work, regardless of the byte order of the machine? Thanks, Patrick
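
    Separately from the byte-order question itself, it can help to see exactly which bit the routine reads. A minimal Python sketch of the same test (Python only because it makes the bit layout easy to print; the explicit little-endian format strings are a serialization choice for the illustration, not a statement about the host):

        import struct

        def sign_bit(f):
            # '<f' packs the value as a little-endian IEEE 754 single-precision float;
            # '<I' reads those same 4 bytes back as an unsigned 32-bit integer,
            # so bit 31 is the sign bit regardless of the host's native endianness.
            bits = struct.unpack('<I', struct.pack('<f', f))[0]
            return (bits >> 31) & 1

        print(sign_bit(3.5), sign_bit(-3.5), sign_bit(0.0))  # 0 1 0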

  • Infiniband: a high-performance network fabric - Part I

    - by Karoly Vegh
    Introduction: At OpenWorld this year I managed to chat with interesting people again - one of them, answering InfiniBand deep-dive questions with ease over coffee, turned out to be one of Oracle's IB engineers, Ted Kim, who actively participates in the InfiniBand Trade Association and integrates Oracle solutions with this high-speed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.

    Start of the actual post: Traditionally, data transfer between servers and storage elements happens in networks with up to 10 gigabits per second, or in SANs with up to 8 gbps Fibre Channel connections. Happens. Well, data rather trickles through. But nowadays data amounts grow well over the terabyte order of magnitude, and multisocket/multicore/multithread servers hunger for data that these transfer technologies just can't deliver fast enough, causing all the CPUs of this world to do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs and dozens of TB of RAM to store the data to be modified in, but we can't transfer it. Can't back up fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, like back in the '80s everyone was used to starting compile jobs and going for a coffee. Or on vacation. The good news is, there's an alternative. Not so-called "bleeding-edge" 8 gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of InfiniBand.

    Short history: InfiniBand was born as a result of the joint efforts of HPAQ, IBM, Intel, Sun and Microsoft. They planned to implement a next-generation I/O fabric, in the 90s. In the 2000s InfiniBand (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as the interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming; Oracle utilizes and supports it in a large set of its HW products, and it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk high-speed IP to each other, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped on the wagon and leverages IB in its PureFlex systems, their first InfiniBand Machines.

    IB Structural Overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs. Just like NICs for Ethernet, or HBAs for FC. Into these you plug an IB cable, going to an IB switch that provides connections to other IB HCAs. Of course you're going to need drivers for those in your OS. Yes, these are long available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB isn't accepting packet loss like Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the data transfer can run over more efficient protocols - like native IB.

    About PCI connectivity: IB cards, as you see, are fast. They bring low latency, which is just as important as their bandwidth. Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s of PCI slot capacity to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around ~50 gbps. This is why most IB cards are configured in an active-standby way if both ports are used. Once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbps gross, which translates to 32 gbps of net traffic due to the 10:8 signal-to-data information ratio.

    Consolidation techniques: Technology never stops evolving. Mellanox is already working on the 100 gbps (EDR) version, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100 gbps? I will never use/need that much". Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform to a high-density virtualized solution providing many virtual 10 gbps interfaces through that 100 gbps? Technology never stops evolving. I still remember when a 10 mbps network was impressively fast. Back in those days, 16 MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :) You can utilize SRIOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SRIOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs as native, physical HW devices to your guests. See Raghuram's excellent post explaining SRIOV. SRIOV for IB is supported since LDoms 3.1.

    This post is getting lengthier, so I will rename it to Part I, and continue it in a second post.
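
    The gross-to-net arithmetic above is easy to check. A quick sketch (the numbers are the ones quoted in the paragraph above; the function simply applies the stated 10:8 signalling ratio):

        def effective_gbps(raw_gbps, data_bits=8, signal_bits=10):
            # net data rate after encoding overhead: 8 data bits carried per 10 signal bits
            return raw_gbps * data_bits / float(signal_bits)

        print(effective_gbps(40))   # QDR: 40 gbps gross -> 32.0 gbps net
        print(2 * 56)               # dual-port FDR: 112 gbps, more than a single PCI slot carries per the text above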

  • Why do socket.makefile objects fail after the first read for UDP sockets?

    - by Eli Courtwright
    I'm using the socket.makefile method to create a file-like object on a UDP socket for the purposes of reading. When I receive a UDP packet, I can read the entire contents of the packet all at once by using the read method, but if I try to split it up into multiple reads, my program hangs. Here's a program which demonstrates this problem:

        import socket
        from sys import argv

        SERVER_ADDR = ("localhost", 12345)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(SERVER_ADDR)
        f = sock.makefile("rb")

        sock.sendto("HelloWorld", SERVER_ADDR)

        if "--all" in argv:
            print f.read(10)
        else:
            print f.read(5)
            print f.read(5)

    If I run the above program with the --all option, then it works perfectly and prints HelloWorld. If I run it without that option, it prints Hello and then hangs on the second read. I do not have this problem with socket.makefile objects when using TCP sockets. Why is this happening and what can I do to stop it?
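
    For comparison, here is a minimal sketch of the same exchange using recvfrom directly, which hands back one whole datagram per call; splitting the bytes afterwards never has to wait for a second packet. This is only an illustration of the usual UDP reading pattern, not an explanation of the makefile behaviour:

        import socket

        SERVER_ADDR = ("localhost", 12345)

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(SERVER_ADDR)
        sock.sendto(b"HelloWorld", SERVER_ADDR)

        # recvfrom returns one complete datagram; slicing the returned bytes
        # does not issue another blocking read on the socket.
        data, addr = sock.recvfrom(4096)
        print(data[:5])
        print(data[5:10])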

  • .odx Files in BizTalk 2009 (VS2008 IDE) Fail to open. "Unspecified Error"

    - by AllenG
    Okay, I'm not sure if this is an SO question or a ServerFault question, but I figured I'd post here first. I have a BizTalk project which works like a champ for its original design. It's been deployed and it's working fine. Today, I went in to add some new functionality by modifying one of my orchestrations. When I attempted to open it, I got a message which simply stated: "The operation could not be completed. Unspecified error." I've closed the IDE and re-opened; I've restarted the machine; I've even allowed Microsoft Updates to run and restart the machine. Everything else opens just fine (.xsd files, .btms, etc.), so it appears only to be the orchestrations which are failing. Has anyone ever encountered a similar issue and resolved it (short of reinstalling BizTalk/VS, or blowing the orchs away and rebuilding)? Any help would be appreciated.

  • Is there a reason why a submit button would fail in ie6 using jquery?

    - by kgrad
    I have a form being submitted to a servlet, and there is a jQuery Datepicker on the page. On some computers using IE6, for some reason the page is crashing on submit. I am getting a 404 error, but only sometimes. I have no idea why this would occur. My theory is that somehow it's bypassing the servlet, or it's not loading jQuery properly, or... I am at a loss. What are some reasons why this could occur on some computers? The exact version of IE6 is 6.0.2900.2180, on Windows XP SP2. How can I test with this specific version?

  • Why does git branch -t fail with "Not tracking: ambiguous information"?

    - by che
    When I try to create a new branch tracking a remote branch, I get this:

        che@nok ~/prj/git-ipc $ git branch -t test main/some_remote_branch
        error: Not tracking: ambiguous information for ref refs/remotes/main/some_remote_branch

    The source seems to somehow search for branches to track and throws me out because it finds more than one, but I don't exactly get what it's looking for, since I already told it what to track on the command line. Can anybody tell me what's going on and how to fix it?

  • Why does sorl.thumbnail ImageField fail in the admin?

    - by Mark0978
    I have code that looks like this:

        from sorl.thumbnail import ImageField

        class Gallery(models.Model):
            pass

        class GalleryImage(models.Model):
            image = ImageField(upload_to='galleries')

    In the admin:

        class GalleryImageInline(admin.TabularInline):
            model = GalleryImage

        class GalleryAdmin(admin.ModelAdmin):
            inlines = (GalleryImageInline,)

    If I use the sorl.thumbnail ImageField as above, it is impossible to add images in the admin; I get the validation error "Enter a list of values." If I replace the sorl.thumbnail.ImageField with a plain Django ImageField, everything works. If I want sorl.thumbnail to clean up the cached thumbnails, I need to use its field in the model, but if I use it in the model, I can't seem to add any images that need thumbnails. Has anyone else found and fixed this problem yet?

  • What's the easiest/fastest way to get my website up and running on the web?

    - by ggfan
    This is probably a really, really beginner's question, but I would like to know the fastest way to get my site on the web so that people can start using it. I'm learning everything about programming out of books and at home, so I don't have much experience.

    --Before I go to a site like godaddy.com to get a domain name, are there any free sites that would allow me to upload my site so users can use it? I have HTML, CSS, PHP, MySQL and JavaScript in my scripts, so I don't think many sites allow free uploads with such languages.

    --If I can't find a free site, are there any good places to get a domain name and web hosting that supports most languages at a low price? (It doesn't have to be professional hosting, because I am still a beginner.)

    --If I go to, say, godaddy.com and get their web hosting and domain name, would I be allowed to run PHP, MySQL, Python and Java on it? (I looked at some hosting sites and most only allow PHP/MySQL.)

  • Reliable and fast way to convert a zillion ODT files to PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually, I would stay low level in a case like this, and compose everything with a library like ReportLab, but I joined the project late. Currently, I have a template.odt and use markers in the content.xml files to fill with data from a DB. I can smoothly create the ODT files; they always look right. For the ODT to PDF conversion, I'm using openoffice in server mode (and PyODConverter with a named pipe), but it's not very reliable: in a batch of documents, there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data), and happens in OOo 2.3 and 3.2, on Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried to reduce the size of batches and restart OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile, I have a delivery and have lost too much time already. Where do I go?

    - Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow. OOo takes around a second, and it sums to 15 days of processing time. I had to write a program for clustering the jobs over several clients.

    - Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open source tools tend not to reinvent the wheel and often depend on openoffice. Converting to an intermediate .DOC format could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy.

    - Try to produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content.

    - Restart OOo after processing each document. It would take a lot more time to produce them; it would lower the percentage of the wrong files, and make it very hard to identify them.

    - Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes (see the sketch below).

    - Learn to properly format bulleted lists.

    Thanks a lot.
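
    Since the last real option above is ReportLab, here is a minimal sketch of producing a simple one-page PDF with it. The layout, field names and file name are placeholders for illustration, not the poster's actual template:

        from reportlab.lib.pagesizes import A4
        from reportlab.pdfgen import canvas

        def make_pdf(path, rows):
            # Render a trivial one-page PDF: a heading plus a few label/value rows.
            c = canvas.Canvas(path, pagesize=A4)
            width, height = A4
            c.setFont("Helvetica-Bold", 14)
            c.drawString(50, height - 50, "Document title")
            c.setFont("Helvetica", 10)
            y = height - 80
            for label, value in rows:
                c.drawString(50, y, "%s: %s" % (label, value))
                y -= 14
            c.showPage()
            c.save()

        make_pdf("example.pdf", [("Name", "Alice"), ("Amount", "42.00")])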

  • Axis webservice calls fail sometimes, access.log shows content!

    - by epischel
    Hi, our app is a webservice client (Axis 1) to a third-party webservice (also Axis 1). We have used it for some years now. Since a few weeks ago, we (as a client) sometimes get HTTP status 400 (Bad Request) or read timeouts when calling the webservice. Strangely, the access.log of the service shows part of the request or the response instead of the URL. It looks like this (looks like the end of the request string):

        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "POST /webservice HTTP/1.0" 200 16127 0
        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "POST /webservice HTTP/1.0" 200 22511 1
        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "il=\"true\"/><nsl:text xsi:type=\"xsd:string\" xsi:nil=\"true\"/></SOAPSomeOperation></soapenv:Body></soapenv:Envelope> Axis/1.4" 400 299 0

    or (some string out of what looks like the request):

        x.x.x.x -> y.y.y.y:8080 - - [timestamp] ":string\">some text</sometag><othertag>moretext" 400 299 0

    or in some other cases it looks like two requests thrown together (... means xml string left out):

        x.x.x.x -> y.y.y.y:8080 - - [timestamp] "...</someop></soapenv:Body></soapenv:Envelope>:xsd=\"http://www.w3.org/2001/XMLSchema\"...</soapenv:Body></soapenv:Envelope>" 400 299 0

    The application log does not give any hints. The frequency of such calls is 1% of all calls to the service. The only discriminator I know of so far is that it has happened since operations informed us that the service URL changed because of "server migration". Has anyone experienced such a phenomenon yet? Does somebody have an idea what's wrong and how to fix it? Thanks.

  • jQuery never reaches success or fail on an aspx page returning JSON.

    - by David
    Basically, I have my aspx page doing:

        <% Response.Clear(); Response.Write("{\"Success\": \"true\" }"); Response.End(); %>

    My jQuery code is:

        function DoSubmit(r) {
            if (r == null || r.length == 0 || formdata == null || formdata.length == 0)
                return;
            for (i = 0; i < formdata.length; i++) {
                var fd = formdata[i];
                r[fd.Name] = fd.Value;
            }
            r["ModSeq"] = tblDef.ModSeq;
            jQuery.ajax({
                url: "NashcoUpdate.aspx"
                , succsess: doRow
                , error: DoSubmitError
                , complete: DoSubmitComplete
                , dataType: "json"
                , cache: false
                , data: r
                , type: "post"
            })
        }

    When I call the DoSubmit() function everything works, but the doRow or DoSubmitError functions never get called, only the DoSubmitComplete function. When I look at the response text in the DoSubmitComplete function it is {"Success": "true" }. Every JSON tester I have tried says that this is valid JSON. What am I doing wrong here?

  • Why would fopen fail to open a file that exists?

    - by void
    I'm on Windows XP using Visual Studio 6 (yes, I know it's old) building/maintaining a C++ DLL. I've encountered a problem with fopen failing to open an existing file; it always returns NULL. I've tried:

    - Checking errno and _doserrno by setting both to zero and then checking them again; both remain zero, and thus GetLastError() reports no errors. I know fopen isn't required to set errno when it encounters an error, according to a C standard.
    - Hardcoding the file paths, which are not relative.
    - Trying on another developer's machine, with the same result.

    The really strange thing is that CreateFile works and the file can be read with ReadFile. We believe this works in a release build; however, we are also seeing some very odd behaviour in other areas of the application and we're not sure if this is related. The code is below. I don't see anything odd; it looks quite standard to me. The source file hasn't changed for just under half a year.

        HRESULT CDataHandler::LoadFile( CStdString szFilePath )
        {
            //Code

            FILE* pFile;
            if ( NULL == ( pFile = fopen( szFilePath.c_str(), "rb") ) )
            {
                return S_FALSE;
            }

            //More code
        }

  • Is there a fast alternative to creating a Texture2D from a Bitmap object in XNA?

    - by Matthew Bowen
    I've looked around a lot and the only methods I've found for creating a Texture2D from a Bitmap are:

        using (MemoryStream s = new MemoryStream())
        {
            bmp.Save(s, System.Drawing.Imaging.ImageFormat.Png);
            s.Seek(0, SeekOrigin.Begin);
            Texture2D tx = Texture2D.FromFile(device, s);
        }

    and

        Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height, 0,
                                     TextureUsage.None, SurfaceFormat.Color);
        tx.SetData<byte>(rgbValues, 0, rgbValues.Length, SetDataOptions.NoOverwrite);

    where rgbValues is a byte array containing the bitmap's pixel data in 32-bit ARGB format. My question is, are there any faster approaches that I can try? I am writing a map editor which has to read in custom-format images (map tiles) and convert them into Texture2D textures to display. The previous version of the editor, which was a C++ implementation, converted the images first into bitmaps and then into textures to be drawn using DirectX. I have attempted the same approach here; however, both of the above approaches are significantly too slow. Loading into memory all of the textures required for a map takes ~250 seconds with the first approach and ~110 seconds with the second on a reasonable spec computer. If there is a method to edit the data of a texture directly (such as with the Bitmap class's LockBits method) then I would be able to convert the custom-format images straight into a Texture2D and hopefully save processing time. Any help would be very much appreciated. Thanks

  • What is causing my HTTP server to fail with "exit status -1073741819"?

    - by Keeblebrox
    As an exercise I created a small HTTP server that generates random game mechanics, similar to this one. I wrote it on a Windows 7 (32-bit) system and it works flawlessly. However, when I run it on my home machine, Windows 7 (64-bit), it always fails with the same message: exit status -1073741819. I haven't managed to find anything on the web which references that status code, so I don't know how important it is. Here's code for the server, with redundancy abridged:

        package main

        import (
            "fmt"
            "math/rand"
            "time"
            "net/http"
            "html/template"
        )

        // Info about a game mechanic
        type MechanicInfo struct { Name, Desc string }

        // Print a mechanic as a string
        func (m MechanicInfo) String() string {
            return fmt.Sprintf("%s: %s", m.Name, m.Desc)
        }

        // A possible game mechanic
        var (
            UnkillableObjects = &MechanicInfo{"Avoiding Unkillable Objects", "There are objects that the player cannot touch. These are different from normal enemies because they cannot be destroyed or moved."}
            //...
            Race = &MechanicInfo{"Race", "The player must reach a place before the opponent does. Like \"Timed\" except the enemy as a \"timer\" can be slowed down by the player's actions, or there may be multiple enemies being raced against."}
        )

        // Slice containing all game mechanics
        var GameMechanics []*MechanicInfo

        // Pseudorandom number generator
        var prng *rand.Rand

        // Get a random mechanic
        func RandMechanic() *MechanicInfo {
            i := prng.Intn(len(GameMechanics))
            return GameMechanics[i]
        }

        // Initialize the package
        func init() {
            prng = rand.New(rand.NewSource(time.Now().Unix()))
            GameMechanics = make([]*MechanicInfo, 34)
            GameMechanics[0] = UnkillableObjects
            //...
            GameMechanics[33] = Race
        }

        // serving
        var index = template.Must(template.ParseFiles(
            "templates/_base.html",
            "templates/index.html",
        ))

        func randMechHandler(w http.ResponseWriter, req *http.Request) {
            mechanics := [3]*MechanicInfo{RandMechanic(), RandMechanic(), RandMechanic()}
            if err := index.Execute(w, mechanics); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
            }
        }

        func main() {
            http.HandleFunc("/", randMechHandler)
            if err := http.ListenAndServe(":80", nil); err != nil {
                panic(err)
            }
        }

    In addition, the unabridged code, the _base.html template, and the index.html template. What could be causing this issue? Is there a process for debugging a cryptic exit status like this?
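
    A side note on reading the number: negative Windows exit statuses are often NTSTATUS values printed as signed 32-bit integers, so the value is easier to recognize in hex. A quick sketch of the arithmetic (shown in Python purely for convenience; this only interprets the number, it does not diagnose the crash):

        status = -1073741819
        print(hex(status & 0xFFFFFFFF))  # 0xc0000005, the NTSTATUS code for an access violation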

  • Oracle: why does creating a trigger fail when there is a field called timestamp?

    - by Omar Kooheji
    I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on this tutorial. The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if I have a column of type TIMESTAMP and a column called timestamp in the same table... Why doesn't Oracle flag this as being an issue when I create the table? Here is the sequence of commands I enter.

    Creating the table:

        CREATE TABLE myTable (
          id NUMBER PRIMARY KEY,
          field1 TIMESTAMP(6),
          timeStamp NUMBER,
        );

    Creating the sequence:

        CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1;

    Creating the trigger:

        CREATE OR REPLACE TRIGGER test_trigger
        BEFORE INSERT ON myTable
        REFERENCING NEW AS NEW
        FOR EACH ROW
        BEGIN
          SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
        END;
        /

    Here is the error message I get:

        ORA-06552: PL/SQL: Compilation unit analysis terminated
        ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed

    Any combination that does not have the two lines with the word "timestamp" in them works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name. As I've said, I don't understand why the table is created fine but Oracle falls over when I try to create the trigger...

    CLARIFICATION: I know that the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create a trigger and not when I created the table; I would have at least expected a warning. That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting. Maybe that's just because I'm using the Express edition, though. If this is a bug in Oracle, how would someone who doesn't have a support contract go about reporting it? I'm just playing around with the Express edition because I have to migrate some code from MySQL to Oracle.

  • Reporting services genius only: a fast way to get Reporting Services local site working?

    - by Junior Mayhé
    Hello, I was trying to figure out why my Report Manager is empty; there are no tabs at all. I installed SQL Server 2008 complete, but did not configure Reporting Services. When I installed SQL Server 2008, this Windows 7 machine didn't yet have IIS installed; I installed it later. I don't see where this localhost/Reports site lives physically on my hard drive - where is the physical folder? I don't see a Reports folder in IIS either; should one exist? The site settings people talk about, I can't find. The "Reporting Services" service is running on Automatic in SQL Server Configuration Manager. How can I get Reporting Services working without struggling? (I can't see why it's necessary to customize all these user permissions.)

  • Why does an onclick property set with setAttribute fail to work in IE?

    - by Aeon
    Ran into this problem today; posting in case someone else has the same issue.

        var execBtn = document.createElement('input');
        execBtn.setAttribute("type", "button");
        execBtn.setAttribute("id", "execBtn");
        execBtn.setAttribute("value", "Execute");
        execBtn.setAttribute("onclick", "runCommand();");

    Turns out that to get IE to run an onclick on a dynamically generated element, we can't use setAttribute. Instead, we need to set the onclick property on the object with an anonymous function wrapping the code we want to run:

        execBtn.onclick = function() { runCommand() };

    Bad ideas: you can do

        execBtn.setAttribute("onclick", function() { runCommand() });

    but it will break in IE in non-standards mode, according to @scunliffe. You can't do this at all:

        execBtn.setAttribute("onclick", runCommand() );

    because it executes immediately and sets the result of runCommand() to be the onclick attribute value, nor can you do

        execBtn.setAttribute("onclick", runCommand);

  • Why would Ruby fail equality on 2 floats that appear the same?

    - by btelles
    Hi there, I have a calculation that generates what appears to be the Float 22.23, and a literal 22.23, like so:

        some_object.total
        => 22.23
        some_object.total.class
        => Float
        22.23
        => 22.23
        22.23.class
        => Float

    But for some reason, the following is false:

        some_object.total == 22.23 ? true : false

    Wacky, right? Is there some kind of precision mechanism being used that maybe isn't completely transparent through the some_object.total call?
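
    The usual explanation for two floats that display the same but compare unequal is that the printed form is rounded while the underlying IEEE 754 doubles differ in their last binary digits. A quick illustration of the general effect, using Python for brevity (the same applies to Ruby Floats, which are also IEEE 754 doubles); this is not a claim about what some_object.total actually computes:

        a = 0.1 + 0.2
        b = 0.3

        print(a == b)                      # False: the two bit patterns differ slightly
        print(abs(a - b) < 1e-9)           # True: compare with a tolerance instead
        print(round(a, 2) == round(b, 2))  # True: or compare after rounding to a fixed precision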

  • Django + Virtualenv: manage.py commands fail with ImportError of project name.

    - by Bartek
    This is blowing my mind, because it's probably an easy solution, but I can't figure out what could be causing this. I have a new dev box and am setting everything up. I installed virtualenv and created a new environment for my project under ~/.virtualenvs/projectname. Then, I cloned my project from GitHub into my projects directory. Nothing fancy here. There are no .pyc files sitting around, so it's a clean slate of code. Then, I activated my virtualenv and installed Django via pip. All looks good so far. Then, I run python manage.py syncdb within my project dir. This is where I get confused:

        ImportError: No module named projectname

    So I figured I may have had some references to projectname within my code. So I grep (ack, actually) through my code base and I find nothing of the sort. So now I'm at a loss: given this environment, why am I getting an ImportError on a module named projectname that isn't referenced anywhere in my code? I look forward to a solution... thanks guys!
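
    One low-tech way to narrow something like this down is to check what the interpreter inside the activated virtualenv actually sees. A generic debugging sketch (nothing Django-specific is assumed beyond the DJANGO_SETTINGS_MODULE environment variable; "projectname" stands in for the real package name):

        import os
        import sys

        # Run from the project directory with the virtualenv activated.
        print(sys.executable)                             # should point inside ~/.virtualenvs/projectname
        print(os.environ.get("DJANGO_SETTINGS_MODULE"))   # anything stale exported in the shell?
        for p in sys.path:
            print(p)                                      # is the directory containing the package listed?

        try:
            __import__("projectname")                     # reproduce the failing import by hand
        except ImportError as e:
            print("still failing:", e)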

  • SQL placeholder in WHERE IN issue, inserted strings fail.

    - by Alastair Pitts
    As part of my job, I need to write SQL queries to connect to our PI database. To generate a query, I need to pass an array of tags (essentially primary keys), but these have to be inserted as strings. As this will be a modular query used for multiple tags, a placeholder is being used. The query relies on the WHERE IN clause, which is where the placeholder is, like below:

        SELECT SUM(value * 5/1000) as "Hourly Flow [kL]"
        from piarchive..pitotal
        WHERE tag IN (?)
          AND time between ? and ?
          and timestep = '1d' and calcbasis = 'Eventweighted' and value <> ''

    The issue is the format in which the tags are to be passed in. If I add them directly into the query (for testing), they go in the format (these are example numbers):

        '000000012','00000032','005050236','4560236'

    and the query looks like:

        SELECT SUM(value * 5/1000) as "Hourly Flow [kL]"
        from piarchive..pitotal
        WHERE tag IN ('000000012','00000032','005050236','4560236')
          AND time between ? and ?
          and timestep = '1d' and calcbasis = 'Eventweighted' and value <> ''

    which works. If I try to add the same tags into the placeholder, the query fails. If I only add one tag, with no quotes (using the placeholder), the query works. Why is this happening? Is there any way around it?
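
    A single ? placeholder binds exactly one value, so a comma-separated string ends up being compared as one tag; the usual workaround is to generate one placeholder per tag when building the query. A minimal sketch in Python (the ?-style parameter marker and the cursor call are assumptions about the driver in use, shown only to illustrate the pattern):

        tags = ["000000012", "00000032", "005050236", "4560236"]
        start_time, end_time = "2010-05-01", "2010-05-02"   # example values only

        # one '?' per tag, e.g. "?,?,?,?" for four tags
        placeholders = ",".join("?" for _ in tags)

        sql = (
            'SELECT SUM(value * 5/1000) as "Hourly Flow [kL]" '
            "from piarchive..pitotal "
            "WHERE tag IN (" + placeholders + ") "
            "AND time between ? and ? "
            "and timestep = '1d' and calcbasis = 'Eventweighted' and value <> ''"
        )
        params = tags + [start_time, end_time]

        print(sql)                        # the IN clause now has as many markers as there are tags
        # cursor.execute(sql, params)     # hypothetical DB-API/pyodbc-style cursor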
