Search Results

Search found 1604 results on 65 pages for 'standards'.

Page 56/65 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • Inconsistent height of text input elements between Firefox and WebKit

    - by Trevor Burnham
    OK, I realize that this is something of an eternal question, but here goes: I've got a single text input,

        <input type="text" name="whatever" />

    and I've specified its font-family, font-size and padding. Yet, even on the same machine (my Mac, let's say), the input has a different height in Firefox (3.6) than it does in Chrome or Safari. Specifically, Firefox adds a little bit more padding below the text. And no, specifying height in pixels doesn't achieve consistency either. Is there any way to achieve text input height consistency across Gecko- and WebKit-based browsers (let alone IE and Opera) without resorting to JavaScript? And if I must use JavaScript, has someone already devised a jQuery plugin or something to easily do this?

    Update: Here's what not to do. The jqTransform plugin lets you skin form elements and promises that they'll look the same across browsers. Here's how the demo input looks in Chrome 5 on my Mac (screenshot omitted), and here's how the same input looks in Firefox 3.6.4 (screenshot omitted). I haven't altered these screenshots in any way, just cropped them. Now, my first reaction is, "Ugh, I don't want to support Firefox." But there are currently more Firefox users than Safari and Chrome users combined, so that's not an option.

    Someone, please help! I just want my forms to look the same across modern, standards-compliant browsers! And by "look the same," I'm not talking about the outline on selection or anything like that; I'm just talking about having the same width, height, and text placement!
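
    A CSS-only starting point worth trying (values here are illustrative, not from the original post): pin down every property that feeds into the rendered box rather than relying on browser defaults. This usually equalizes the overall box height; older Firefox builds force line-height on text inputs via their UA stylesheet, so the explicit height and padding are the more reliable levers, and small baseline differences can still remain.

        input[type="text"] {
            -moz-box-sizing: border-box;      /* FF 3.x */
            -webkit-box-sizing: border-box;   /* older WebKit */
            box-sizing: border-box;           /* same box model everywhere */
            height: 24px;                     /* explicit overall height, padding and border included */
            padding: 3px 4px;
            border: 1px solid #999;
            margin: 0;
            font-family: Arial, Helvetica, sans-serif;
            font-size: 13px;
        }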

    Read the article

  • Performance Difference between HttpContext user and Thread user

    - by atrueresistance
    I am wondering what the difference is between HttpContext.Current.User.Identity.Name.ToString.ToLower and Thread.CurrentPrincipal.Identity.Name.ToString.ToLower. Both methods grab the username in my asp.net 3.5 web service. I decided to figure out if there was any difference in performance using a little program, running from full Stop to Start Debugging on every run.

        Dim st As DateTime = DateAndTime.Now
        Try
            'user = HttpContext.Current.User.Identity.Name.ToString.ToLower
            user = Thread.CurrentPrincipal.Identity.Name.ToString.ToLower
            Dim dif As TimeSpan = Now.Subtract(st)
            Dim break As String = "nothing"
        Catch ex As Exception
            user = "Undefined"
        End Try

    I set a breakpoint on break to read the value of dif. The results were the same for both methods:

        dif.Milliseconds    0    Integer
        dif.Ticks           0    Long

    Using a longer duration (looping 5,000 times) results in these figures:

        Thread method
        run 1: dif.Milliseconds 125    dif.Ticks 1250000
        run 2: dif.Milliseconds 0      dif.Ticks 0
        run 3: dif.Milliseconds 0      dif.Ticks 0

        HttpContext method
        run 1: dif.Milliseconds 15     dif.Ticks 156250
        run 2: dif.Milliseconds 156    dif.Ticks 1562500
        run 3: dif.Milliseconds 0      dif.Ticks 0

    So I guess: which is preferred, or more compliant with web service standards? If there is some type of performance advantage, I can't really tell. Which one scales to larger environments more easily?
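
    For what it's worth, DateTime subtraction typically resolves to only about 10-15 ms on Windows, which is why most runs read 0. A rough sketch of the same comparison using System.Diagnostics.Stopwatch instead (loop count and variable names are illustrative):

        Dim sw As New System.Diagnostics.Stopwatch()
        sw.Start()
        For i As Integer = 1 To 5000
            user = Thread.CurrentPrincipal.Identity.Name.ToString().ToLower()
        Next
        sw.Stop()
        ' Elapsed is backed by the high-resolution performance counter.
        System.Diagnostics.Debug.WriteLine(sw.Elapsed.TotalMilliseconds)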

    Read the article

  • Who architected / designed C++'s IOStreams, and would it still be considered well-designed by today's standards?

    - by stakx
    First off, it may seem that I'm asking for subjective opinions, but that's not what I'm after. I'd love to hear some well-grounded arguments on this topic.

    In the hope of getting some insight into how a modern streams / serialization framework ought to be designed, I recently got myself a copy of the book Standard C++ IOStreams and Locales by Angelika Langer and Klaus Kreft. I figured that if IOStreams wasn't well-designed, it wouldn't have made it into the C++ standard library in the first place. After having read various parts of this book, I am starting to have doubts whether IOStreams can compare to e.g. the STL from an overall architectural point of view. Read e.g. this interview with Alexander Stepanov (the STL's "inventor") to learn about some design decisions that went into the STL.

    What surprises me in particular:

    - It seems to be unknown who was responsible for IOStreams' overall design (I'd love to read some background information about this — does anyone know good resources?).
    - Once you delve beneath the immediate surface of IOStreams, e.g. if you want to extend IOStreams with your own classes, you get to an interface with fairly cryptic and confusing member function names, e.g. getloc/imbue, uflow/underflow, snextc/sbumpc/sgetc/sgetn, pbase/pptr/epptr (and there are probably even worse examples). This makes it so much harder to understand the overall design and how the single parts co-operate. Even the book I mentioned above doesn't help that much (IMHO).

    Thus my question: If you had to judge by today's software engineering standards (if there actually is any general agreement on these), would C++'s IOStreams still be considered well-designed? (I wouldn't want to improve my software design skills from something that's generally considered outdated.)

    Read the article

  • Python alignment of assignments (style)

    - by ikaros45
    I really like following style standards, such as those specified in PEP 8. I have a linter that checks it automatically, and my code is definitely much better because of that. There is just one point in PEP 8 that doesn't feel right: the E251 and E221 checks. Coming from a JavaScript background, I used to align variable assignments like this:

        var var1    = 1234;
        var2        = 54;
        longer_name = 'hi';

        var lol = { 'that'        : 65,
                    'those'       : 87,
                    'other_thing' : true };

    and in my humble opinion, this improves readability dramatically. Problem is, PEP 8 recommends against it. With dictionaries it's not that bad, because spaces are allowed after the colon:

        dictionary = {
            'something': 98,
            'some_other_thing': False
        }

    I can "live" with variable assignments without alignment, but what I don't like at all is not being able to pad named arguments in a function call, like this:

        some_func(length=      40,
                  weight=      900,
                  lol=         'troll',
                  useless_var= True,
                  intelligence=None)

    So what I end up doing is using a dictionary, as follows:

        specs = {
            'length': 40,
            'weight': 900,
            'lol': 'troll',
            'useless_var': True,
            'intelligence': None
        }
        some_func(**specs)

    or just simply

        some_func(**{'length': 40, 'weight': 900, 'lol': 'troll',
                     'useless_var': True, 'intelligence': None})

    But I have the feeling this workaround is worse than just ignoring PEP 8's E251 / E221. What is the best practice?
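
    For reference, a layout that keeps the linter quiet on E221/E251 without resorting to the **dict trick is simply one keyword argument per line with no padding (a sketch, not from the original post):

        some_func(
            length=40,
            weight=900,
            lol='troll',
            useless_var=True,
            intelligence=None,
        )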

    Read the article

  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" over instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods:

    - Interfaces allow a class to formally specify its public contract, the methods and properties that it exposes to the world. Any other class can choose to implement the same interface and fulfill that same contract.
    - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me, with the .NET library itself providing a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states: "In general, we recommend that you implement extension methods sparingly and only when you have to."

    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?
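
    A minimal illustration of the interface-plus-extension-method pattern being described (the shape type and names are invented for the example):

        public interface IShape
        {
            double Width { get; }
            double Height { get; }
        }

        public static class ShapeExtensions
        {
            // Relies only on the public contract, yet reads like an instance method:
            // someShape.Area()
            public static double Area(this IShape shape)
            {
                return shape.Width * shape.Height;
            }
        }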

    Read the article

  • Parse usable Street Address, City, State, Zip from a string

    - by Rob Allen
    Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the individual sections of the address into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable.

    Here are the rules for this exercise:

    1 - No whining about how this should have been separate fields in the first place; we are often confronted with less than ideal situations and have to make the best of them.
    2 - For this post, use any language you want.
    3 - Feel free to play code golf.
    4 - Assume an address in the US (for now).
    5 - Assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (i.e. Suite B).
    6 - States may be abbreviated.
    7 - Zip code could be standard 5 digit or zip+4.
    8 - There are typos in some instances.

    UPDATE: In response to the questions posed: standards were not universally followed; I need to store the individual values, not just geocode; and "errors" means typos (corrected above).

    Sample data:

        A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
        11522 Shawnee Road, Greenwood DE 19950
        144 Kings Highway, S.W. Dover, DE 19901
        Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
        Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
        Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
        2284 Bryn Zion Road, Smyrna, DE 19904
        VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
        580 North Dupont Highway Dover, DE 19901
        P.O. Box 778 Dover, DE 19903
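
    Since the rules allow any language, here is a rough Python sketch of the "peel the state and ZIP off the end" approach as a starting point only; it deliberately leaves addressee, street and city lumped together, and rows with missing or typo'd ZIPs will need manual or service-based cleanup:

        import re

        # Matches "<everything else> <2-letter state> <5-digit or ZIP+4>" at the end of the string.
        ADDRESS_RE = re.compile(r"^(?P<rest>.+?)[,\s]+(?P<state>[A-Z]{2})[,\s]+(?P<zip>\d{5}(?:-\d{4})?)\s*$")

        def split_address(raw):
            m = ADDRESS_RE.match(raw)
            if not m:
                return None  # no parsable state/ZIP tail; route to manual review
            return m.group("rest").strip(" ,"), m.group("state"), m.group("zip")

        print(split_address("Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958"))
        # ('Humes Realty 33 Bridle Ridge Court, Lewes', 'DE', '19958')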

    Read the article

  • Potential g++ template bug?

    - by Evan Teran
    I've encountered some code which I think should compile, but doesn't. So I'm hoping some of the local standards experts here at SO can help :-). I basically have some code which resembles this:

        #include <iostream>

        template <class T = int>
        class A {
        public:
            class U {
            };
        public:
            U f() const { return U(); }
        };

        // test either the work around or the code I want...
        #ifndef USE_FIX
        template <class T>
        bool operator==(const typename A<T>::U &x, int y) {
            return true;
        }
        #else
        typedef A<int> AI;
        bool operator==(const AI::U &x, int y) {
            return true;
        }
        #endif

        int main() {
            A<int> a;
            std::cout << (a.f() == 1) << std::endl;
        }

    So, to describe what is going on here: I have a class template (A) which has an internal class (U) and at least one member function which can return an instance of that internal class (f()). Then I am attempting to create an operator== function which compares this internal type to some other type (in this case an int, but it doesn't seem to matter). When USE_FIX is not defined I get the following error:

        test.cc: In function 'int main()':
        test.cc:27:25: error: no match for 'operator==' in 'a.A<T>::f [with T = int]() == 1'

    Which seems odd, because I am clearly (I think) defining a templated operator== which should cover this; in fact, if I just do a little of the work for the compiler (enable USE_FIX), then I no longer get an error. Unfortunately, the "fix" doesn't work generically, only for a specific instantiation of the template. Is this supposed to work as I expected? Or is this simply not allowed? BTW: if it matters, I am using gcc 4.5.2.
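
    For what it's worth, this behaviour is standard: in const typename A<T>::U &, T appears only inside a nested-name-specifier, which is a non-deduced context, so the template operator== is never even considered. One common way to keep it generic is to define the operator as an inline friend of the nested class, where argument-dependent lookup finds it without any deduction (a sketch, not the original author's code):

        template <class T = int>
        class A {
        public:
            class U {
                // Found via ADL on A<T>::U; no template argument deduction needed.
                friend bool operator==(const U &, int) { return true; }
            };
            U f() const { return U(); }   // now (A<int>().f() == 1) compiles
        };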

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse:

    - certain small regions of relatively high density;
    - relatively few isolated nonempty cells;
    - the amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that);
    - this needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood).
    - Set individual cells or blit small regions (the player is making a move).
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview).
    - Find some regions with approximately a given density (player spawning location).
    - Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching).
    - Approximate convex hull for a region.

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process).

    Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
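
    One standard shape for this kind of storage is to bucket cells into fixed-size chunks and persist one row per non-empty chunk, which covers the neighborhood query and persistence requirements cheaply. Here is an in-memory sketch of that idea (names and chunk size are arbitrary, and the dict stands in for a keyed table):

        from collections import defaultdict

        CHUNK = 16  # cells per chunk side; one chunk maps naturally onto one DB row

        class SparseGrid:
            """In-memory sketch; swap the dict for a table keyed by (chunk_x, chunk_y) to persist."""

            def __init__(self):
                self._chunks = defaultdict(dict)  # (cx, cy) -> {(x, y): value}

            def set(self, x, y, value):
                self._chunks[(x // CHUNK, y // CHUNK)][(x, y)] = value

            def get(self, x, y, default=None):
                return self._chunks.get((x // CHUNK, y // CHUNK), {}).get((x, y), default)

            def region(self, x0, y0, x1, y1):
                """Yield populated cells in the half-open rectangle [x0, x1) x [y0, y1)."""
                for cx in range(x0 // CHUNK, (x1 - 1) // CHUNK + 1):
                    for cy in range(y0 // CHUNK, (y1 - 1) // CHUNK + 1):
                        for (x, y), v in self._chunks.get((cx, cy), {}).items():
                            if x0 <= x < x1 and y0 <= y < y1:
                                yield (x, y), v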

    Read the article

  • ASP MVC - Routing Required?

    - by evo_9
    I've been reading up on MVC2, which came with VS2010, and it sounds pretty interesting. I'm actually in the middle of a large multi-tenant application project and have just started coding the UI, so I'm considering changing to MVC as I'm not that far along at this point.

    I have some questions about the routing capabilities, namely: are they required to use MVC, or can I more or less ignore routing? Or do I have to set up a default routing record that will make things work like standard ASPX (as far as routing alone is concerned)?

    The reason why I don't want to use routing is that I've already defined a custom URL "rewrite" mechanism of my own (which fires on session_start). In addition, I'm using jQuery and open standards for the entire UI, and MVC's aspx overhead-free approach seems like a better fit based on how I've already started to build the application (I am not using viewstate at all, for example).

    I guess my big concern is whether the routing can be ignored, or if I will have to re-implement my custom URL rewriting to work with MVC, and if that's the case, how would I do that? As a new routing routine, or stick with the session_start (if that's even possible)?

    Lastly, I don't want to use anything even remotely "intelligent/readable" for the URL - for a site like StackOverflow, the readability of the URL is a positive, but the opposite is true if it's not a public website like this one. In fact, it would seem to me that the friendlier MVC routing URLs (which indirectly show method names) could pose a security risk on a private, non-public website app like the one I'm developing. For all these reasons I would love to use the lightweight aspects of MVC but skip the routing entirely - is this possible?
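
    For reference, the amount of routing MVC 2 needs in order to dispatch requests is roughly the single default entry from the project template, registered once in Global.asax; the sketch below is that stock pattern, not something tailored to the custom rewrite described above:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // One catch-all convention; existing rewrite logic can still run before MVC sees the URL.
            routes.MapRoute(
                "Default",                                   // route name
                "{controller}/{action}/{id}",                // URL pattern
                new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }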

    Read the article

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience to which I expect my developers to adhere. This is especially a problem with the crowd that believes that three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has generated limited success. So, I was thinking that it would be great to be able to add custom compile-time errors to the build process to more strictly enforce this and other guidelines.

    For instance, we use stored procedures for ALL database access, which provides procedure-level security, db encapsulation (table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parametrized queries, and that's fine - on their own time and own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

        string sql = "insert into some_table (col1,col2) values (@col1, @col2);"

    and generates an error or, in certain circumstances, a warning, with a message like

        Inline SQL and parametrized queries are not permitted.

    Or, if they use the var keyword

        var x = new MyClass();

    the message would be

        Variable definitions must be explicitly typed.

    Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking that I could use a regular expression to find unacceptable code and generate the correct error, but I'm not sure what, from a performance standpoint, is the best way to integrate this into the build process. We could add a pre- or post-build step to run a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to perform this type of pattern matching, or should I go crazy and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience.
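
    On the "line- and file-specific errors" part: output written by a build step in the canonical "path(line,col): error CODE: message" shape shows up in the Visual Studio Error List, and a nonzero exit code fails the build. A rough sketch of a regex-based checker run as a pre-build EXE follows; the error code and the single rule are made up for illustration:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class StandardsCheck
        {
            static int Main(string[] args)
            {
                if (args.Length == 0)
                {
                    Console.Error.WriteLine("usage: StandardsCheck <sourceDir>");
                    return 2;
                }

                // Made-up rule: flag string literals that open with an INSERT/UPDATE/DELETE/SELECT.
                var inlineSql = new Regex("\"\\s*(insert|update|delete|select)\\s", RegexOptions.IgnoreCase);
                int errorCount = 0;

                foreach (var file in Directory.GetFiles(args[0], "*.cs", SearchOption.AllDirectories))
                {
                    var lines = File.ReadAllLines(file);
                    for (int i = 0; i < lines.Length; i++)
                    {
                        if (inlineSql.IsMatch(lines[i]))
                        {
                            // file(line,col): error CODE: message -- the shape VS and MSBuild understand
                            Console.WriteLine("{0}({1},1): error STD001: Inline SQL and parametrized queries are not permitted.", file, i + 1);
                            errorCount++;
                        }
                    }
                }
                return errorCount == 0 ? 0 : 1;
            }
        }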

    Read the article

  • Nested Class member function can't access function of enclosing class. Why?

    - by Rahul
    Please see the example code below:

        class A
        {
        private:
            class B
            {
            public:
                foobar();
            };

        public:
            foo();
            bar();
        };

    Within the class A and B implementations:

        A::foo()
        {
            //do something
        }

        A::bar()
        {
            //some code
            foo();
            //more code
        }

        A::B::foobar()
        {
            //some code
            foo();   // << compiler doesn't like this
        }

    The compiler flags the call to foo() within the method foobar(). Earlier, I had foo() as a private member function of class A but changed it to public, assuming that B's function couldn't see it while private. Of course, it didn't help. I am trying to re-use the functionality provided by A's method. Why doesn't the compiler allow this function call? As I see it, they are part of the same enclosing class (A). I thought the accessibility issue for nested class members of the enclosing class in the C++ standards was resolved. How can I achieve what I am trying to do without re-writing the same method (foo()) for B, while keeping B nested within A? I am using the VC++ compiler ver. 9 (Visual Studio 2008). Thank you for your help.
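
    For what it's worth, the usual stumbling block here is not access rights but the lack of an object: foo() is a non-static member of A, and a nested class is not tied to any particular A instance (C++ nested classes are not Java-style inner classes). A common fix is to hand B a reference to its owning A, sketched below with invented names:

        class A
        {
        public:
            void foo();
            void bar();

        private:
            class B
            {
            public:
                explicit B(A &owner) : owner_(owner) {}
                void foobar()
                {
                    owner_.foo();   // call through an actual A instance
                }
            private:
                A &owner_;          // B must be told which A it belongs to
            };
        };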

    Read the article

  • How do I use IPTC/EXIF metadata to categorise photos?

    - by kbro
    Many photo viewing and editing applications allow you to examine and change EXIF and IPTC data in JPEG and other image files. For example, I can see things like shutter speed, aperture and orientation in the picture files that come off my Canon A430. There are many, many name/value pairs in all this metadata. But...

    What do I do if I want to store some data that doesn't have a built-in field name? Let's say I'm photographing an athletics competition and I want to tag every photo with the competitor's bib number. Can I create a "bib_number" field and assign it values of "0001", "5478", "8124" etc., and then search for all photos with bib_number="5478"?

    I've spent a few hours searching and the best I can come up with is to put this custom information in the "keywords" field, but this isn't quite what I'm after. With this solution I'd have to craft a query like "keywords contains bib_number_5478", whereas what I want is "bib_number is 5478". So do the EXIF and/or IPTC standards allow additional user-defined field names?

    Thanks, Kev

    Read the article

  • How do you parse the XDG/gnome/kde menu/desktop item structure in c++??

    - by Joe Soul-bringer
    I would like to parse the menu structure for Gnome Panels (the standard Gnome Desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine, obtained with fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format, and that this menu structure is stored in XML files such as:

        /home/user/.config/menus/applications.menu

    I've looked here:

        http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec

    but all they offer is the standard and some shell files to insert item entries (I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification myself. I want to find the existing menu structure). For more notes on these structures I've also looked here:

        http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en

    None of this gives me a good idea of how to determine the menu structure using a C/C++ program. The actual gnome menu files seem to be horrifically hairy things - they don't seem to show the menu structure itself but rather give an XML-coded description of all the changes that the menus have gone through since installation. I assume Gnome Panels parses these files, so there's a function buried somewhere to do this, but I've yet to find where that function is after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to live elsewhere or is buried well. Thanks in advance.

    Read the article

  • Why won't the vertical margins between <p> and <hr> collapse in IE7

    - by Nicolas
    Hello all,

    Perhaps I am missing something, but I can't explain this from any IE bug I know of. Why in this example do the margins of the <p> and <hr> elements collapse as expected in standards-compliant browsers (i.e. FF3, IE8, etc.) but not in IE7 (including IE8 compatibility mode)?

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" >
        <head>
            <title>IE7 Box Model</title>
            <style type="text/css">
                p {
                    border: 1px solid #00f;
                    background-color: #fefecb;
                    margin: 20px 0 20px 0;
                }
                hr {
                    margin: 20px 0 20px 0;
                }
            </style>
        </head>
        <body>
            <p> box 1 </p>
            <hr />
            <p> box 2 </p>
            <hr />
            <p> box 3 </p>
        </body>
        </html>

    Read the article

  • typename resolution in cases of ambiguity

    - by parapura rajkumar
    I was playing with Visual Studio and templates. Consider this code:

        struct Foo
        {
            struct Bar
            {
            };
            static const int Bar = 42;
        };

        template<typename T>
        void MyFunction()
        {
            typename T::Bar f;
        }

        int main()
        {
            MyFunction<Foo>();
            return 0;
        }

    When I compile this in either Visual Studio 2008 or 11, I get the following error:

        error C2146: syntax error : missing ';' before identifier 'f'

    Is Visual Studio correct in this regard? Is the code violating any standards? If I change the code to

        struct Foo
        {
            struct Bar
            {
            };
            static const int Bar = 42;
        };

        void SecondFunction( const int& )
        {
        }

        template<typename T>
        void MyFunction()
        {
            SecondFunction( T::Bar );
        }

        int main()
        {
            MyFunction<Foo>();
            return 0;
        }

    it compiles without any warnings. Is the data member Foo::Bar preferred over the type in case of a conflict?

    Read the article

  • Is there a supplementary guide/answer key for ruby koans?

    - by corroded
    I have recently tried sharpening my Rails skills with this tool: http://github.com/edgecase/ruby_koans but I am having trouble passing some tests. Also, I am not sure if I'm doing some things correctly: since the objective is just to pass the test, there are a lot of ways of passing it, and I may be doing something that isn't up to standards. Is there a way to confirm if I'm doing things right?

    A specific example, in about_nil:

        def test_nil_is_an_object
          assert_equal __, nil.is_a?(Object), "Unlike NULL in other languages"
        end

    So is it telling me to check if that second clause is equal to an object (so I can say nil is an object), or should I just put

        assert_equal true, nil.is_a?(Object)

    because the statement is true? And the next test:

        def test_you_dont_get_null_pointer_errors_when_calling_methods_on_nil
          # What happens when you call a method that doesn't exist. The
          # following begin/rescue/end code block captures the exception and
          # make some assertions about it.
          begin
            nil.some_method_nil_doesnt_know_about
          rescue Exception => ex
            # What exception has been caught?
            assert_equal __, ex.class

            # What message was attached to the exception?
            # (HINT: replace __ with part of the error message.)
            assert_match(/__/, ex.message)
          end
        end

    I'm guessing I should put a "No method error" string in the assert_match, but what about the assert_equal?
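
    For the record, this is easy to confirm in irb rather than guessing; the behaviour below is plain Ruby, not anything specific to the koans:

        nil.is_a?(Object)   # => true, so: assert_equal true, nil.is_a?(Object)

        begin
          nil.some_method_nil_doesnt_know_about
        rescue Exception => ex
          ex.class    # => NoMethodError
          ex.message  # => "undefined method `some_method_nil_doesnt_know_about' for nil:NilClass"
        end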

    Read the article

  • TinyMCE is glitchy/unusable in IE8

    - by Force Flow
    I'm using the jQuery version of TinyMCE 3.3.9.3.

    In Firefox, it works fine (10 sec video depicting it in use): http://www.youtube.com/watch?v=TrAE0igfT3I

    In IE8 (in IE8 standards mode), I can't type or click any buttons. However, if I use ctrl+v to paste, then I can start typing, but the buttons still don't work (a 45 sec video depicting it in use): http://www.youtube.com/watch?v=iBSRlE8D8F4

    The jQuery TinyMCE demo on TinyMCE's site works for me in IE8. Here's the init code:

        $().ready(function(){
            function tinymce_focus(){
                $('.defaultSkin table.mceLayout').css({'border-color' : '#6478D7'});
                $('.defaultSkin table.mceLayout tr.mceFirst td').css({'border-top-color' : '#6478D7'});
                $('.defaultSkin table.mceLayout tr.mceLast td').css({'border-bottom-color' : '#6478D7'});
            }

            function tinymce_blur(){
                $('.defaultSkin table.mceLayout').css({'border-color' : '#93a6e1'});
                $('.defaultSkin table.mceLayout tr.mceFirst td').css({'border-top-color' : '#93a6e1'});
                $('.defaultSkin table.mceLayout tr.mceLast td').css({'border-bottom-color' : '#93a6e1'});
            }

            $('textarea.tinymce').tinymce({
                script_url : 'JS/tinymce/tiny_mce.js',
                theme : "advanced",
                mode : "exact",
                invalid_elements : "b,i,iframe,font,input,textarea,select,button,form,fieldset,legend,script,noscript,object,embed,table,img,a,h1,h2,h3,h4,h5,h6",

                //theme options
                theme_advanced_buttons1 : "cut,copy,paste,pastetext,pasteword,selectall,|,undo,redo,|,cleanup,removeformat,|",
                theme_advanced_buttons2 : "bold,italic,underline,|,bullist,numlist,|,forecolor,backcolor,|",
                theme_advanced_buttons3 : "",
                theme_advanced_buttons4 : "",
                theme_advanced_toolbar_location : "top",
                theme_advanced_toolbar_align : "left",
                theme_advanced_statusbar_location : "none",
                theme_advanced_resizing : false,

                //plugins
                plugins : "inlinepopups,paste",
                dialog_type : "modal",
                paste_auto_cleanup_on_paste : true,

                setup: function(ed){
                    ed.onInit.add(function(ed){
                        //check for addEventListener -- primarily supported by firefox only
                        var edDoc = ed.getDoc();
                        if ("addEventListener" in edDoc){
                            edDoc.addEventListener("focus", function(){ tinymce_focus(); }, false);
                            edDoc.addEventListener("blur", function(){ tinymce_blur(); }, false);
                        }
                    });
                }
            });
        });

    Any ideas as to why it's not working in IE8?

    [edit]: stripping everything out of the init (leaving just script_url and theme) results in the same symptoms.

    Read the article

  • XAttribute Generating strange namespaces

    - by Adam Driscoll
    I'm constructing an XElement with a couple of attributes that have different namespaces. The code looks like this:

        var element = new XElement("SynchronousCommand",
            new XAttribute("{wcm}action", "add"),
            new XAttribute("{ns}id", Guid.NewGuid()),
            new XElement...
        );

    The XML that is generated looks like this:

        <unattend xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns="urn:schemas-microsoft-com:unattend">
          <SynchronousCommand d5p1:action="add" d5p2:id="c0f5fc6d-d407-4d3d-8a05-d84236cca2fb"
                              xmlns:d5p2="ns" xmlns:d5p1="wcm">
            ...
          </SynchronousCommand>
        </unattend>

    I'm just wondering if the auto-generated d5p2 prefix is valid and why it is doing this. According to the XML standards here it seems like it would be valid. But why is it not:

        <unattend xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns="urn:schemas-microsoft-com:unattend">
          <SynchronousCommand wcm:action="add" ns:id="c0f5fc6d-d407-4d3d-8a05-d84236cca2fb" >

    To generate the XML I'm doing this:

        public class unattend
        {
            public List<XElement> Any { get; }
        }

        var unattend = new unattend();
        unattend.Add(element);
        serializer.Serialize(xmlWriter, unattend);
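
    For comparison, a sketch of the LINQ to XML pattern that avoids auto-generated prefixes like d5p1/d5p2: use full namespace URIs and declare the prefixes explicitly on the element (the URIs below are placeholders, not taken from the original post):

        XNamespace wcm = "http://example.com/wcm";   // placeholder URI
        XNamespace ns  = "http://example.com/ns";    // placeholder URI

        var element = new XElement("SynchronousCommand",
            new XAttribute(XNamespace.Xmlns + "wcm", wcm),  // emits xmlns:wcm="..."
            new XAttribute(XNamespace.Xmlns + "ns", ns),    // emits xmlns:ns="..."
            new XAttribute(wcm + "action", "add"),          // emits wcm:action="add"
            new XAttribute(ns + "id", Guid.NewGuid()));     // emits ns:id="..."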

    Read the article

  • Expanding DIV slides behind DIV beneath it...

    - by Paddy
    I'm not sure that I'm going to get an answer here, as I'd need to post a lot of CSS and HTML to get a working recreation, however... I have structure something like this:

        <fieldset>
            <legend>Test A</legend>
            <h3>Test A</h3>
            <p>
                Something here.
            </p>
            <div style="display:hidden;">I'm dynamically displayed</div>
        </fieldset>

        <fieldset>
            <legend>Test B</legend>
            <h3>Test B</h3>
            <p>
                Something B here.
            </p>
        </fieldset>

    I have code that toggles the display of my hidden div using jQuery and .show(). This works fine in IE8, Firefox and Safari, but when I stick IE8 into compatibility mode, the first fieldset (Test A) will expand, but the expansion happens behind the second fieldset, which doesn't move (i.e. it slides down behind it).

    I have quite a bit of CSS in use here, and I'm going to have to go back and unpick the whole lot, which isn't a fun idea. If anybody has any idea of one of the IE7 rendering issues that might be affecting this, then I'd very much appreciate it. (Note that there is more to the content in these fieldsets than shown, including floated divs.)

    Quick note - if I stick IE7 into quirks mode, it works (but wrecks the rest of my layout); in standards mode, I get the above behaviour.

    Read the article

  • Is there a safe / standard way to manage unstructured memory in C++?

    - by andand
    I'm building a toy VM that requires a block of memory for storing and accessing data elements of different types and of different sizes. I've done this by writing a wrapper class around a uint8_t[] data block of the needed size. That class has some template methods to write / read typed data elements to / from arbitrary locations in the memory block, both of which check to make certain the bounds aren't violated. These methods use memmove in what I hope is a more or less safe manner. That said, while I am willing to press on in this direction, I've got to believe that others with more expertise have been here before and might be willing to share their wisdom. In particular:

    1) Is there a class in one of the C++ standards (past, present, future) that has been defined to perform a function similar to what I have outlined above?
    2) If not, is there a (preferably free as in beer) library out there that does?
    3) Short of that, besides bounds checking and the inevitable issue of writing one type to a memory location and reading a different one from that location, are there other issues I should be aware of?

    Thanks. -&&
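
    For concreteness, here is a rough sketch of the kind of wrapper being described (bounds-checked, typed access over a byte block); it is illustrative only, assumes trivially copyable types, and is not a substitute for the original author's class:

        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <stdexcept>
        #include <vector>

        class MemoryBlock {
        public:
            explicit MemoryBlock(std::size_t size) : data_(size) {}

            template <typename T>
            void write(std::size_t offset, const T &value) {
                check(offset, sizeof(T));
                // memcpy is fine here: the value and the block never overlap.
                std::memcpy(data_.data() + offset, &value, sizeof(T));
            }

            template <typename T>
            T read(std::size_t offset) const {
                check(offset, sizeof(T));
                T value;
                std::memcpy(&value, data_.data() + offset, sizeof(T));
                return value;
            }

        private:
            void check(std::size_t offset, std::size_t count) const {
                // Overflow-safe bounds check.
                if (count > data_.size() || offset > data_.size() - count)
                    throw std::out_of_range("MemoryBlock: access outside the block");
            }

            std::vector<std::uint8_t> data_;
        };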

    Read the article

  • javascript toolkit for offline webapps

    - by anjanb
    Hi all, we're building a survey web app which will let the user add new records to the survey when offline and will upload them when the browser reconnects with the server. We've identified that this will need offline storage, and hence Google Gears seems an obvious choice (we understand that Adobe Flash has offline storage, but we're not sure that is the best way). I am aware of the Dojo Offline JavaScript toolkit, which uses Google Gears for the underlying functionality. However, Dojo Offline is not part of the Dojo toolkit after version 1.3 (currently Dojo is at 1.4.2). The Google Gears toolkit is currently frozen except for critical vulnerability fixes (it has not been updated for almost the last year) because they think that HTML5 is the way to go. Hence, we're looking for a higher abstraction on top of the Google Gears engine TODAY, which will (in the future) switch the underlying engine to HTML5 if the browser supports the HTML5 standards. We'd love to use Dojo, but they have discontinued Dojo Offline - we'd prefer something that will be maintained for some time. What are possible good strategies and JS toolkits/libraries to use for building this web app? Pls. advise.

    Read the article

  • Varchar columns: Nullable or not.

    - by NYSystemsAnalyst
    The database development standards in our organization state that varchar fields should not allow null values; they should have a default value of an empty string (""). I know this makes querying and concatenation easier, but today one of my coworkers asked me why that standard exists only for varchar types and not for other data types (int, datetime, etc.). I would like to know if others consider this to be a valid, defensible standard, or if varchar should be treated the same as fields of other data types.

    I believe this standard is valid for the following reason: an empty string and a null value, though technically different, are conceptually the same. An empty, zero-length string is a string that does not exist; it has no value. However, a numeric value of 0 is not the same as NULL. For example, if a field called OutstandingBalance has a value of 0, it means there is $0.00 remaining; if the same field is NULL, the value is unknown. On the other hand, a field called CustomerName with a value of "" is basically the same as NULL, because both represent the non-existence of the name.

    I read somewhere that an analogy for an empty string vs. NULL is that of a blank CD vs. no CD. However, I believe this to be a false analogy, because a blank CD still physically exists and still has physical data space that simply has no meaningful data written to it. Basically, I believe a blank CD is the equivalent of a string of blank spaces (" "), not an empty string. Therefore, I believe a string of blank spaces to be an actual value separate from NULL, but an empty string to be the absence of value, conceptually equivalent to NULL.

    Please let me know if my beliefs regarding variable-length strings are valid, or please enlighten me if they are not. I have read several blogs / arguments regarding this subject, but still do not see a true conceptual difference between NULLs and empty strings.

    Read the article

  • C89, Mixing Variable Declarations and Code

    - by rutski
    I'm very curious to know why exactly C89 compilers will dump on you when you try to mix variable declarations and code, like this for example:

        rutski@imac:~$ cat test.c
        #include <stdio.h>

        int main(void)
        {
            printf("Hello World!\n");

            int x = 7;
            printf("%d!\n", x);
            return 0;
        }
        rutski@imac:~$ gcc -std=c89 -pedantic test.c
        test.c: In function ‘main’:
        test.c:7: warning: ISO C90 forbids mixed declarations and code
        rutski@imac:~$

    Yes, you can avoid this sort of thing by staying away from -pedantic. But then your code is no longer standards compliant. And as anybody capable of answering this post probably already knows, this is not just a theoretical concern. Platforms like Microsoft's C compiler enforce this quirk in the standard under any and all circumstances.

    Given how ancient C is, I would imagine that this feature is due to some historical issue dating back to the extraordinary hardware limitations of the 70's, but I don't know the details. Or am I totally wrong there?
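
    For completeness, the two usual ways to keep C89/C90 happy are to hoist the declaration to the top of the block or to open a new block; a small sketch:

        #include <stdio.h>

        int main(void)
        {
            int x = 7;                 /* C89: declarations come before statements in a block */

            printf("Hello World!\n");
            printf("%d!\n", x);

            {
                int y = 8;             /* or open a new block; declarations may start any block */
                printf("%d!\n", y);
            }
            return 0;
        }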

    Read the article

  • Appropriate Footwear for An Interview

    - by EoRaptor013
    There's a raging debate going on at my house about appropriate footwear for an IT interview. I have an interview on Thursday for a SQL/C# developer position with the fraud department at a large accounting firm. I was planning on wearing what I have pretty much always worn for an interview: a nice suit, white shirt, subdued tie, and a pair of dress cowboy boots. My spouse and daughter both know that my dress code for nearly every professional job I've ever gotten has been pretty much what I just described - including the boots. Now, however, because I've been out of work for an unfortunately long time (my last contract ended 03/09 - pretty much coincidental with the bottom falling out of the economy), my wife insists that style standards are fundamentally different on the left side of the Mississippi vs. the right side of the river. My view is that I've always worn "cowboy" boots, since I was old enough to fit into a real pair. I moved East as an adult over 30 years ago, and in all that time my dress patterns have never changed. Now I both really want, and really need, this job. But is that sufficient reason to change a habit 40 years in the making? I would really appreciate the thoughts ya'all (little West of Ms. colloquialism, there) might have on this matter. Thanks.

    P.S. If this sort of question is inappropriate for this forum, I apologize.

    Read the article

  • C/C++ Control Structure Limitations?

    - by STingRaySC
    I have heard of a limitation in VC++ (not sure which version) on the number of nested if statements (somewhere in the ballpark of 300). The code was of the form:

        if (a) ...
        else if (b) ...
        else if (c) ...
        ...

    I was surprised to find out there is a limit to this sort of thing, and that the limit is so small. I'm not looking for comments about coding practice and why to avoid this sort of thing altogether. Here's a list of things that I'd imagine could have some limitation:

    - Number of functions in a scope (global, class, or namespace).
    - Number of expressions in a single statement (e.g., compound conditionals).
    - Number of cases in a switch.
    - Number of parameters to a function.
    - Number of classes in a single hierarchy (either inheritance or containment).

    What other control structures/language features have limits such as this? Do the language standards say anything about these limits (perhaps minimum requirements for an implementation)? Has anyone run into a particular language limitation like this with a particular compiler/implementation?

    EDIT: Please note that the above form of if statements is indeed "nested." It is equivalent to:

        if (a) {
            //...
        } else {
            if (b) {
                //...
            } else {
                if (c) {
                    //...
                } else {
                    //...
                }
            }
        }

    Read the article
