Search Results

Search found 27766 results on 1111 pages for 'bad idea jeans'.

Page 60/1111 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • When is it a good idea to use the CSS display property?

    - by allyourcode
    I think I first learned of this property when I thought "I should put this list of items in a ul, but I want it to be laid out horizontally. I wonder if I can do that with CSS?" When I googled this, I found a couple of sites suggesting that I create a CSS rule that would change the value of the display property of the li elements to inline. I've also seen the suggestion that a div (or other block element) be given display: table-cell in order to force the vertical-align property to work. These techniques seem kind of hacky. Does that make sense? This might not be a good analogy, but it seems like trying to ride a car as if it were a motorcycle. Yeah, I could replace the steering wheel with handlebars, wear a helmet, and remove all the passenger seating, but how the heck is a car going to drive on two wheels??
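
    For reference, a minimal sketch of the two techniques mentioned above (the selectors and sizes are hypothetical, not from the question):

        /* lay a ul out horizontally by overriding the default block display of li */
        ul.horizontal-menu li {
            display: inline;        /* or inline-block to keep width/height control */
            margin-right: 1em;
        }

        /* make vertical-align apply to a div by letting it behave like a table cell */
        div.vertically-centered {
            display: table-cell;
            height: 200px;
            vertical-align: middle;
        }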

    Read the article

  • Multiple FK columns all pointing to the same parent table - a good idea?

    - by Randy Minder
    For those of you who live and breathe database design, have you ever found compelling reasons to have multiple FKs in a table that all point to the same parent table? We recently had to deal with a situation where we had a table containing six columns that were all FK columns to the same parent table. We're debating whether this indicates a poor design on our part or whether this is more common than we think. Thanks very much.
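
    A minimal sketch of the pattern being described, where each FK plays a distinct role against the same parent (the table and column names are hypothetical, not from the question):

        -- several roles that all reference the same parent table
        CREATE TABLE Flight (
            FlightID           INT PRIMARY KEY,
            DepartureAirportID INT NOT NULL REFERENCES Airport (AirportID),
            ArrivalAirportID   INT NOT NULL REFERENCES Airport (AirportID),
            DiversionAirportID INT NULL     REFERENCES Airport (AirportID)
        );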

    Read the article

  • (iPhone) Is maintaining a CGContextRef or CGLayerRef a bad idea?

    - by Eugene
    Hi, I need to work with many images, and I can't hold them all as UIImage objects in memory because they are too big. I also need to change the colors of images and merge them on the fly. Creating a UIImage from the underlying NSData, changing its color, and combining images when you can't keep many of them in memory is fairly slow (as far as I can tell). I thought maybe I could store the underlying CGLayerRef (for images that will be combined) and CGContextRef (for the resulting combined image). I am new to the drawing world, and I'm not sure whether a CGLayerRef or CGContextRef is smaller in memory than a UIImage. I recently heard that a w*h image takes up w*h*4 bytes in memory. Does a CGLayerRef or CGContextRef also take up that much memory? Thank you
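
    As a rough sketch of where the w*h*4 figure comes from, assuming a 32-bit RGBA bitmap context (the width and height values below are placeholders): a fully decoded w*h bitmap occupies roughly w*h*4 bytes no matter which wrapper (UIImage, CGLayerRef, or a bitmap CGContextRef) ends up holding it.

        // a 32-bit RGBA bitmap context allocates width * height * 4 bytes of backing store
        size_t width = 1024, height = 768;                   // placeholder dimensions
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL,       // let CG allocate the buffer
                                                 width, height,
                                                 8,           // bits per component
                                                 width * 4,   // bytes per row
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);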

    Read the article

  • IIS headers of aspx page appear on page sometimes, any idea why?

    - by Chris
    At random, this output is occurring at the top of the page. The site is installed on a lot of servers; the issue only happens on one server.

        HTTP/1.1 200 OK
        Date: Mon, 24 May 2010 04:18:30 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 39611

    Read the article

  • Is it always a bad idea to use inline CSS for a used-once property?

    - by user93422
    I have a table with 10 columns, and I want to control the width of each column. Each column is unique; right now I create an external CSS rule for each column:

        div#my-page table#members th.name-col { width: 40px; }

    I know the best practice is to avoid inline style, and I do approve of using external CSS for anything look-and-feel related: fonts, colors, images. But is it really better to use external CSS in this case? Inline style does not incur an extra maintenance cost here, and it is easier to produce. Cons I can think of: if you have separate designers and a development team, using inline styles will force designers to modify the content file (aspx in my case), and it might use more bandwidth. Anything else I've missed?
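
    A sketch of the two alternatives being weighed here (the markup is hypothetical):

        <!-- inline, used-once width -->
        <th style="width: 40px;">Name</th>

        <!-- external rule targeting the same cell -->
        <!-- in the stylesheet: div#my-page table#members th.name-col { width: 40px; } -->
        <th class="name-col">Name</th>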

    Read the article

  • C++: what is a good idea for a list of strings?

    - by John
    I simply want to build an RPG and make it as neat as possible. I wish to define a pile of strings which I may want to edit later, so I tried something like this:

        enum {MSG_INIT = "Welcome to ...", MSG_FOO = "bar"};

    But I just get errors, such as that MSG_INIT is not an integer! Why can't it be a string? Are enums only for integers? What do you think is the best way to define a pile of strings? In a struct called msg or something? I'm kinda new to all this, so I'd really appreciate small examples.
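
    Since enum values must be integral constants, the strings usually end up in named constants or in an array indexed by an enum. A small sketch (the names are only illustrative):

        #include <string>

        // option 1: plain named constants
        namespace msg {
            const std::string INIT = "Welcome to ...";
            const std::string FOO  = "bar";
        }

        // option 2: an enum of ids plus a parallel array, if indexed access is wanted
        enum MsgId { MSG_INIT, MSG_FOO, MSG_COUNT };
        const char* const MESSAGES[MSG_COUNT] = { "Welcome to ...", "bar" };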

    Read the article

  • Debate: is adding third-party libraries to a WAR a good idea?

    - by Master Chief
    We have a debate going on. (a) The "standard" way of assembling a web app: create a WAR with all our app artifacts, while all other components like Hibernate and memcached are deployed in the tomcat/shared/lib area. (b) Create a humongous WAR with everything included and nothing in tomcat/shared/lib. Pros for (a): it keeps things modular and the WAR is small. Cons for (a): the dependency on shared/lib has to be managed, especially by the deployment process. Pros for (b): all dependencies are controlled by the build process, removing any room for error. Cons for (b): the WAR is really, really big, and if you are deploying over a network to a huge farm, that might have an impact. I want to see what thoughts others might have about this.
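
    If the build happens to be Maven-based (an assumption; the question does not say), the difference between (a) and (b) mostly comes down to dependency scope. A sketch:

        <!-- option (a): compile against the jar but keep it out of the WAR,
             relying on a copy in tomcat/shared/lib instead -->
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-core</artifactId>
            <version>3.6.10.Final</version>
            <scope>provided</scope>
        </dependency>
        <!-- option (b): leave the default compile scope,
             and the jar is packaged into WEB-INF/lib -->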

    Read the article

  • Is method reference caching a good idea in Java 8?

    - by gexicide
    Consider I have code like the following:

        class Foo {
            Y func(X x) {...}
            void doSomethingWithAFunc(Function<X,Y> f){...}
            void hotFunction(){
                doSomethingWithAFunc(this::func);
            }
        }

    Consider that hotFunction is called very often. Would it then be advisable to cache this::func, maybe like this:

        class Foo {
            Function<X,Y> f = this::func;
            ...
            void hotFunction(){
                doSomethingWithAFunc(f);
            }
        }

    As far as my understanding of Java method references goes, the Virtual Machine creates an object of an anonymous class when a method reference is used. Thus, caching the reference would create that object only once, while the first approach creates it on each function call. Is this correct? Should method references that appear at hot positions in the code be cached, or is the VM able to optimize this and make the caching superfluous? Is there a general best practice about this, or is it highly VM-implementation specific whether such caching is of any use?
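
    For what it's worth, a sketch of the distinction that usually matters here (the exact reuse behaviour is VM-implementation specific, so this is only a hedged illustration): a method reference that captures nothing can be handed out as the same instance, while this::func captures this and may allocate a fresh Function on each evaluation.

        import java.util.function.Function;

        class X {}
        class Y {}

        class Foo {
            Y func(X x) { return new Y(); }

            // evaluated once per Foo instance; nothing is allocated in the hot path
            private final Function<X, Y> cachedFunc = this::func;

            void doSomethingWithAFunc(Function<X, Y> f) { f.apply(new X()); }

            void hotFunction() {
                doSomethingWithAFunc(cachedFunc);     // reuses the cached reference
                // doSomethingWithAFunc(this::func);  // captures `this`; may create a
                //                                    // new Function object per call
            }
        }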

    Read the article

  • Singleton pattern with a web application - not a good idea!!

    - by Tony
    Hi, I found something funny; I noticed it by luck while I was debugging something else. I was applying the MCP pattern and I made a singleton controller to be shared among all presentations. Suddenly I figured out that some event is called once at the first postback, twice if there are two postbacks, 100 times if there are 100 postbacks. That is because a singleton is based on a static variable which holds the instance, the static variable lives across postbacks, and I wired the event assuming it would be wired once, but it gets rewired on each postback. I think we should think twice before applying a singleton in a web application, or am I missing something? Thanks
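
    A sketch of the trap being described (all names here are hypothetical): the static instance outlives every request, but a new page subscribes on every postback, so the handler list keeps growing.

        using System;

        public class Controller
        {
            public static readonly Controller Instance = new Controller();
            public event EventHandler SomethingHappened;

            public void RaiseSomethingHappened()
            {
                // after N postbacks there are N handlers attached,
                // so one raise fans out into N invocations
                if (SomethingHappened != null) SomethingHappened(this, EventArgs.Empty);
            }
        }

        // in the page or presenter, executed again on every postback:
        // Controller.Instance.SomethingHappened += OnSomethingHappened;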

    Read the article

  • What do you do when a client asks for a feature which is a really bad idea?

    - by TAG
    Recently there was an SO question asking how to implement a feature that blocked users from copying text from a page in their browser. There were many negative comments on this feature, both because it's not practically possible to implement effectively and because it will interfere with the users' experience. What's a programmer to do in these sorts of situations when dealing with their clients or employers?

    Read the article

  • Any idea why this query always returns duplicate items?

    - by Kardo
    I want to get all Images not used by the current ItemID. I tried this subquery, but it also always returns duplicate Images:

        EDITED
        select Images.ImageID, Images.ItemStatus, Images.UserName, Images.Url, Image_Item.ItemID, Image_Item.ItemID
        from Images
        left join (select ImageID, ItemID, MAX(DateCreated) x
                   from Image_Item
                   where ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e'
                   group by ImageID, ItemID
                   having count(*) = 1) image_item on Images.imageid = image_item.imageid
        where ItemID is not null

    I guess the problem is with the subquery, in which I can't avoid duplicate rows:

        select ImageID, ItemID, MAX(DateCreated) x
        from Image_Item
        where ItemID != '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e'
        group by ImageID, ItemID
        having count(*) = 1

    Result:

        F2EECBDC-963D-42A7-90B1-4F82F89A64C7  0578AC61-3C32-4A1D-812C-60A09A661E71
        F2EECBDC-963D-42A7-90B1-4F82F89A64C7  9A4EC913-5AD6-4F9E-AF6D-CF4455D81C10
        42BC8B1A-7430-4915-9CDA-C907CBC76D6A  CB298EB9-A105-4797-985E-A370013B684F
        16371C34-B861-477C-9A7C-DEB27C8F333D  44E6349B-7EBF-4C7E-B3B0-1C6E2F19992C

    Table: Images

        ImageID      uniqueidentifier
        UserName     nvarchar(100)
        DateCreated  smalldatetime
        Url          nvarchar(250)
        ItemStatus   char(1)

    Table: Image_Item

        ImageID      uniqueidentifier
        ItemID       uniqueidentifier
        UserName     nvarchar(100)
        ItemStatus   char(1)
        DateCreated  smalldatetime

    Any kind of help is highly appreciated.
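
    For what it's worth, a sketch of one way to phrase "images not linked to the current item" without the grouping, assuming the same tables as above (whether it matches the intended semantics is for the asker to confirm):

        select i.ImageID, i.ItemStatus, i.UserName, i.Url
        from Images i
        where not exists (select 1
                          from Image_Item ii
                          where ii.ImageID = i.ImageID
                            and ii.ItemID = '5a0077fe-cf86-434d-9f3b-7ff3030a1b6e');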

    Read the article

  • The remote server returned an error: (400) Bad Request - when uploading a file less than 2MB in size?

    - by fiberOptics
    The file succeed to upload when it is 2KB or lower in size. The main reason why I use streaming is to be able to upload file up to at least 1 GB. But when I try to upload file with less 1MB size, I get bad request. It is my first time to deal with downloading and uploading process, so I can't easily find the cause of error. Testing part: private void button24_Click(object sender, EventArgs e) { try { OpenFileDialog openfile = new OpenFileDialog(); if (openfile.ShowDialog() == System.Windows.Forms.DialogResult.OK) { string port = "3445"; byte[] fileStream; using (FileStream fs = new FileStream(openfile.FileName, FileMode.Open, FileAccess.Read, FileShare.Read)) { fileStream = new byte[fs.Length]; fs.Read(fileStream, 0, (int)fs.Length); fs.Close(); fs.Dispose(); } string baseAddress = "http://localhost:" + port + "/File/AddStream?fileID=9"; HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(baseAddress); request.Method = "POST"; request.ContentType = "text/plain"; //request.ContentType = "application/octet-stream"; Stream serverStream = request.GetRequestStream(); serverStream.Write(fileStream, 0, fileStream.Length); serverStream.Close(); using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { int statusCode = (int)response.StatusCode; StreamReader reader = new StreamReader(response.GetResponseStream()); } } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Service: [WebInvoke(UriTemplate = "AddStream?fileID={fileID}", Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)] public bool AddStream(long fileID, System.IO.Stream fileStream) { ClasslLogic.FileComponent svc = new ClasslLogic.FileComponent(); return svc.AddStream(fileID, fileStream); } Server code for streaming: namespace ClasslLogic { public class StreamObject : IStreamObject { public bool UploadFile(string filename, Stream fileStream) { try { FileStream fileToupload = new FileStream(filename, FileMode.Create); byte[] bytearray = new byte[10000]; int bytesRead, totalBytesRead = 0; do { bytesRead = fileStream.Read(bytearray, 0, bytearray.Length); totalBytesRead += bytesRead; } while (bytesRead > 0); fileToupload.Write(bytearray, 0, bytearray.Length); fileToupload.Close(); fileToupload.Dispose(); } catch (Exception ex) { throw new Exception(ex.Message); } return true; } } } Web config: <system.serviceModel> <bindings> <basicHttpBinding> <binding> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2097152" maxBytesPerRead="4096" maxNameTableCharCount="2097152" /> <security mode="None" /> </binding> <binding name="ClassLogicBasicTransfer" closeTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:15:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="67108864" maxReceivedMessageSize="67108864" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="67108864" maxBytesPerRead="4096" maxNameTableCharCount="67108864" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> <binding name="BaseLogicWSHTTP"> <security mode="None" /> </binding> <binding name="BaseLogicWSHTTPSec" /> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint 
above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true" /> </system.serviceModel> I'm not sure if this affects the streaming function, because I'm using WCF4.0 rest template which config is dependent in Global.asax. One more thing is this, whether I run the service and passing a stream or not, the created file always contain this thing. How could I remove the "NUL" data? Thanks in advance. Edit public bool UploadFile(string filename, Stream fileStream) { try { FileStream fileToupload = new FileStream(filename, FileMode.Create); byte[] bytearray = new byte[10000]; int bytesRead, totalBytesRead = 0; do { bytesRead = fileStream.Read(bytearray, totalBytesRead, bytearray.Length - totalBytesRead); totalBytesRead += bytesRead; } while (bytesRead > 0); fileToupload.Write(bytearray, 0, totalBytesRead); fileToupload.Close(); fileToupload.Dispose(); } catch (Exception ex) { throw new Exception(ex.Message); } return true; }
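
    One detail worth noting about the server code quoted above: the copy loop reads into a single fixed 10000-byte buffer but only ever writes that one buffer, so anything larger is lost (and the edited version can overrun the array). A sketch of a loop that copies streams of any size (variable names are illustrative):

        using (var fileToUpload = new FileStream(filename, FileMode.Create))
        {
            var buffer = new byte[64 * 1024];
            int bytesRead;
            while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                fileToUpload.Write(buffer, 0, bytesRead);   // write exactly what was read
            }
        }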

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Dont Use VSS", and so on. BUT, then in the workplace I've worked at two companies, one, a very well known public facing high traffic website, and another high end Financial Services "Web-Based" hosted solution catering to some very large, very well known companies, which is where I currently Reside and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY Old technology with some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works "LEAVE IT", and that maybe there's some value in old technology? at least enough value to overrule a rewrite!? right?? Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if i said "someone said VSS Sucks" they would beg to differ, most likely give me this look like i dont know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol), BUT, give me an argument that I can take to someone whose been coding for 30 years, that builds Platforms that leverage current technology (.NET 3.5 / SQL 2008 R2 ), write's their own ORM with scaffolding and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, and does not agree with any benefits from having Source Control Integrated, and yet uses the Infamous Visual Source Safe. I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone whose a die hard SVN'r and from a purist standpoint, I see the beauty in it (I need a bit more, out of my SS, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? surely if it was so bad, it would've have been realized by now, and I would not be sitting here with this simple old, Check In, Check Out, Version Resistant, Label Intensive system. But here I am... I would love to drop an argument that would be the end all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS. UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, i wont be engaging in a discussion to migrate off of VSS.. UPDATE 11-2012: So i was able to convince everyone at my work place that since MS is sun downing Visual Source Safe it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless, had to run analyze.exe which found a bunch of errors (not sure they'll ever affect the project) and then manually run the VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years worth of everything.. and now we're on TFS.. much more integrated.. much more cooler.. so all in all, VSS served it's purpose for years without hick-up. There were no horror stories and Visual Source Save as source control worked just fine. so to all the nay sayers (me included). there's nothing wrong with using VSS. i wouldnt start a new project with it, and i would definitely consider migrating to TFS. (it's really not super difficult and a new "wizard" type converter is due out any day now so migrating should be painless). 
But from my experience, it worked just fine and got the job done.

    Read the article

  • How can I disable DNSSEC for Google Apps (GMail) MX records on my authoritative domains?

    - by meinemitternacht
    I'm running a BIND Master / Slave setup with DNSSEC, but some of my domains use Google Apps for e-mail services. Google doesn't support DNSSEC and BIND doesn't like it at all. Log output:

        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ALT2.ASPMX.L.GOOGLE.COM AAAA: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/AAAA/IN': 69.147.224.178#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ALT2.ASPMX.L.GOOGLE.COM A: bad cache hit (ALT2.ASPMX.L.GOOGLE.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ALT2.ASPMX.L.GOOGLE.COM/A/IN': 69.147.224.178#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755ca52c30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1b0bd0: ASPMX2.GOOGLEMAIL.COM A: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/A/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f754c1a6a30: ASPMX2.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX2.GOOGLEMAIL.COM.dlv.isc.org/DLV)
        Sep 6 17:12:51 srv549 named[5376]: error (broken trust chain) resolving 'ASPMX2.GOOGLEMAIL.COM/AAAA/IN': 70.32.45.42#53
        Sep 6 17:12:51 srv549 named[5376]: validating @0x7f755cb83950: ASPMX3.GOOGLEMAIL.COM AAAA: bad cache hit (ASPMX3.GOOGLEMAIL.COM.dlv.isc.org/DLV)

    I'm not absolutely sure this is stopping Google Apps from working, because I just enabled all of the DNSSEC features. Does anyone here have experience with this?

    Read the article

  • My boss decided to add a "person to blame" field to every bug report. How can I convince him that it's a bad idea?

    - by MK_Dev
    In one of the latest "WTF" moves, my boss decided that adding a "Person To Blame" field to our bug tracking template will increase accountability (although we already have a way of tying bugs to features/stories). My arguments that this will decrease morale, increase finger-pointing, and would not account for missing/misunderstood features reported as bugs have gone unheard. What are some other strong arguments against this practice that I can use? Is there any writing on this topic that I can share with the team and the boss? I find this sort of culture unacceptable to work in but want to try and change it before jumping ship. Any input is appreciated.

    Read the article

  • Game 30% done in HTML5. Maybe it was a bad idea. Should I change to Unity3d? [on hold]

    - by Dokkat
    I'm creating a 3D game in HTML5. It's 30% complete and the hard part is already coded. The server is on Node.js. Now I'm realizing that maybe it was not a wise choice, because I noticed: Three.js still has many bugs, and I don't see the same thing on every machine; each browser and OS can give different results. I'm afraid my clients will have a hard time getting my game to run properly. I have tons of sprites and models in my game, and I wonder if my clients will have to load all of them again every time they want to play. I also wonder if a Node.js server will be fast enough to handle it, and I'm afraid it won't be scalable. What would you advise me? Should I continue and finish the game in HTML5, or is it better to remake it in something else, like Unity3d for the client and (what?) for the server?

    Read the article

  • Is it a good idea to create separate root, home, and swap partitions prior to installing Ubuntu, or is installing Ubuntu on a single partition a good choice?

    - by Curious Apprentice
    I wish to set up a dual-boot installation alongside an already installed Windows 7. Now, should I choose "Install alongside Windows 7", or go to the advanced option and make separate partitions for home, swap, root, etc.? What are the advantages of doing that? There are similar topics on askubuntu.com, but here I want a complete answer. Edit: What are / and /root? How can I allocate maximum space for software installation? (70% for software and 30% for home)
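
    For reference, one commonly seen layout for this kind of setup, as a rough sketch (the sizes are only illustrative, not from the question):

        /       15-20 GB              ext4    system files and installed software
        /home   the remaining space   ext4    your documents and settings
        swap    about the size of your RAM (at least that much if you want hibernation)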

    Read the article

  • What is the most elegant way to access current_user from the models, or why is it a bad idea?

    - by TheLindyHop
    So, I've implemented some permissions between my users and the objects the users modify, and I would like to lessen the coupling between the views/controllers and the models (which enforce said permissions). To do that, I had an idea: implementing some of the permission functionality in the before_save / before_create / before_destroy callbacks. But since the permissions are tied to users (current_user.can_do_whatever?), I didn't know what to do. This idea may even increase coupling, as current_user is specifically controller-level. The reason why I initially wanted to do this is that all over my controllers I'm having to check whether a user has the ability to save / create / destroy. So, why not just have save / create / destroy return false and add an error to the model object, just like Rails' validations? Idk, is this good or bad? Is there a better way to do this?
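
    One commonly seen workaround, as a sketch (the model name Post is illustrative, and can_do_whatever? is the asker's own hypothetical method): stash the user in a thread-local from a controller filter so that model callbacks can reach it, at the cost of keeping the coupling, just better hidden.

        class User < ActiveRecord::Base
          def self.current
            Thread.current[:current_user]
          end

          def self.current=(user)
            Thread.current[:current_user] = user
          end
        end

        # in ApplicationController
        before_filter { User.current = current_user }

        # in the model
        class Post < ActiveRecord::Base
          before_save :check_permissions

          def check_permissions
            unless User.current && User.current.can_do_whatever?
              errors.add(:base, "not allowed")
              false    # aborts the save, like a failed validation
            end
          end
        end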

    Read the article

  • java.lang.UnsupportedClassVersionError: Bad version number in .class file?

    - by grmn.bob
    I am getting this error when I include an open-source library that I had to compile from source. Now, all the suggestions on the web indicate that the code was compiled in one version and executed in another version (new on old). However, I only have one version of the JRE on my system. If I run the commands:

        $ javac -version
        javac 1.5.0_18
        $ java -version
        java version "1.5.0_18"
        Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_18-b02)
        Java HotSpot(TM) Server VM (build 1.5.0_18-b02, mixed mode)

    and check in Eclipse for the properties of the Java library, I get 1.5.0_18. Therefore, I have to conclude that something else, internal to a class itself, is throwing the exception?? Is that even possible?
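
    One quick way to see which class-file version the library was actually built with (the class name below is a placeholder) is to ask javap, since the major version maps directly to a JDK release (48 = 1.4, 49 = 5, 50 = 6):

        $ javap -verbose SomeClassFromTheLibrary | grep "major version"
          major version: 50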

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all, This is inspired by this question and the comments on one particular answer in that I learnt that strncpy is not a very safe string handling function in C and that it pads zeros, until it reaches n, something I was unaware of. Specifically, to quote R.. strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies? In the interests of objectivity, I have a number of criteria for an answer: Please, if you can, cite design reasons behind the function in question i.e. its intended purpose. Please highlight the misuse to which the code is currently put. Please state why that misuse may lead towards a problem. I know that should be obvious but it prevents soft answers. Please avoid: Debates over naming conventions of functions (except where this unequivocably causes confusion). "I prefer x over y" - preference is ok, we all have them but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer I'm flagging for community wiki straight away. I am also working as per C99.
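
    To make the strncpy / snprintf point above concrete, a small self-contained sketch (the buffer size and strings are arbitrary):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char dest[8];

            /* strncpy: leaves dest without a terminator when src fills it,
               and zero-pads the remainder otherwise */
            strncpy(dest, "toolongstring", sizeof dest);   /* NOT null-terminated */

            /* the "only correct safe strcpy in standard C" quoted above:
               truncates but always terminates */
            snprintf(dest, sizeof dest, "%s", "toolongstring");

            printf("%s\n", dest);   /* prints "toolong" */
            return 0;
        }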

    Read the article
