Search Results

Search found 458 results on 19 pages for 'destroyer of evil'.


  • Is GOTO really as evil as we are led to believe?

    - by RoboShop
    I'm a young programmer, so all my working life I've been told GOTO is evil, don't use it, if you do, your first born son will die. Recently, I've realized that GOTO actually still exists in .NET and I was wondering: is GOTO really as bad as they say, or is it just that everyone says you shouldn't use it, so that's why you don't? I know GOTO can be used badly, but are there any legit situations where you might use it? The only thing I can think of is using GOTO to break out of a bunch of nested loops. I reckon that might be better than having to "break" out of each of them, but because GOTO is supposedly always bad, I would never use it and it would probably never pass a peer review. What are your views? Is GOTO always bad? Can it sometimes be good? Has anyone here actually been gutsy enough to use GOTO for a real life system?
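
    For concreteness, here is a minimal C# sketch of the nested-loop case mentioned above; it is not taken from the question, and the array and values are invented for illustration. It shows goto leaving two loops in a single jump, which is one of the more commonly cited legitimate uses.

        using System;

        class GotoExample
        {
            static void Main()
            {
                // Invented 2D array to search; the goal is to stop both loops
                // as soon as the target value is found.
                int[,] grid = { { 1, 2, 3 }, { 4, 42, 6 }, { 7, 8, 9 } };
                int target = 42;

                for (int i = 0; i < grid.GetLength(0); i++)
                {
                    for (int j = 0; j < grid.GetLength(1); j++)
                    {
                        if (grid[i, j] == target)
                        {
                            goto Found; // leaves both loops in one jump
                        }
                    }
                }
                Console.WriteLine("Not found");
                return;

            Found:
                Console.WriteLine("Found " + target);
            }
        }

    The usual alternatives are a boolean flag checked in both loop conditions, or extracting the loops into a method and using return; whether the goto version reads better is exactly the judgment call the poster is asking about.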

    Read the article

  • What techniques can be used to detect so called "black holes" (a spider trap) when creating a web crawler?

    - by Tom
    When creating a web crawler, you have to design some kind of system that gathers links and adds them to a queue. Some, if not most, of these links will be dynamic, which appear to be different but do not add any value, as they are specifically created to fool crawlers. An example: We tell our crawler to crawl the domain evil.com by entering an initial lookup URL. Let's assume we let it crawl the front page initially, evil.com/index. The returned HTML will contain several "unique" links: evil.com/somePageOne, evil.com/somePageTwo, evil.com/somePageThree. The crawler will add these to the buffer of uncrawled URLs. When somePageOne is being crawled, the crawler receives more URLs: evil.com/someSubPageOne, evil.com/someSubPageTwo. These appear to be unique, and so they are. They are unique in the sense that the returned content is different from previous pages and that the URL is new to the crawler; however, it appears that this is only because the developer has made a "loop trap" or "black hole". The crawler will add this new sub page, and the sub page will have another sub page, which will also be added. This process can go on infinitely. The content of each page is unique, but totally useless (it is randomly generated text, or text pulled from a random source). Our crawler will keep finding new pages, which we are actually not interested in. These loop traps are very difficult to find, and if your crawler does not have anything in place to prevent them, it will get stuck on a certain domain indefinitely. My question is, what techniques can be used to detect so-called black holes? One of the most common answers I have heard is the introduction of a limit on the number of pages to be crawled. However, I cannot see how this can be a reliable technique when you do not know what kind of site is to be crawled. A legit site, like Wikipedia, can have hundreds of thousands of pages. Such a limit could return a false positive for these kinds of sites. Any feedback is appreciated. Thanks.
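
    As a concrete illustration of the page-limit idea mentioned above (not a complete answer to the detection problem), here is a rough C# sketch of a per-domain crawl budget; all names are invented, and in practice it would be combined with duplicate-content detection so that large legitimate sites are not cut off prematurely.

        using System;
        using System.Collections.Generic;

        // Hypothetical helper: the crawler asks ShouldCrawl before queueing a URL.
        class CrawlBudget
        {
            private readonly Dictionary<string, int> pagesPerDomain =
                new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
            private readonly int maxPagesPerDomain;
            private readonly int maxDepth;

            public CrawlBudget(int maxPagesPerDomain, int maxDepth)
            {
                this.maxPagesPerDomain = maxPagesPerDomain;
                this.maxDepth = maxDepth;
            }

            public bool ShouldCrawl(Uri url, int depth)
            {
                // A very deep chain of "unique" sub-pages is a strong trap signal.
                if (depth > maxDepth)
                    return false;

                // Cap the total number of pages fetched from any single host.
                int count;
                pagesPerDomain.TryGetValue(url.Host, out count);
                if (count >= maxPagesPerDomain)
                    return false;

                pagesPerDomain[url.Host] = count + 1;
                return true;
            }
        }

    One refinement that addresses the Wikipedia objection is to make the budget adaptive, for example raising the per-domain limit while newly crawled pages keep yielding genuinely new outbound links, and lowering it when they start looking near-identical to pages already seen.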

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them. Cross-Site Scripting (XSS) Attacks According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding. Protecting Coming In (Request Validation) In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action.
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
    On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This JavaScript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script> So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes. Cross-Site Request Forgery (CSRF) Attacks Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and log in to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match. Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks.
    For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker tricks you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ).
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate to implement the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer. The Data Layer would access the database; the Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer; the GUI Layer would be a means of communication between the Logic Layer and the user. Now, thinking of this design, I can see that most of the business rules may be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution: Pros: The business logic runs close to the data, meaning less network traffic. Process all data at once, possibly utilizing parallelism and an optimal execution plan. Cons: Scattering of the business logic: some part is here, some part is there. Questionable design solution, may encounter unknown problems. Difficult to implement a progress indicator for the processing task. I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
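
    For readers who have not seen SQL CLR code, here is a minimal sketch of what one of the business rules might look like as a CLR stored procedure; the table, column, and rule are invented for illustration. The point is simply that the C# runs inside SQL Server and reaches the data over the in-process context connection rather than the network.

        using Microsoft.SqlServer.Server;
        using System.Data.SqlClient;

        public class BusinessRules
        {
            // Hypothetical rule: flag imported lines above a threshold.
            [SqlProcedure]
            public static void FlagSuspiciousLines()
            {
                // "context connection=true" reuses the session that called the
                // procedure, so there is no network round trip to the server.
                using (var connection = new SqlConnection("context connection=true"))
                {
                    connection.Open();
                    var command = new SqlCommand(
                        "UPDATE ImportedLines SET Suspicious = 1 WHERE Amount > 10000",
                        connection);
                    int affected = command.ExecuteNonQuery();
                    SqlContext.Pipe.Send(affected + " rows flagged.");
                }
            }
        }

    The assembly is then registered with CREATE ASSEMBLY and the method exposed with CREATE PROCEDURE ... EXTERNAL NAME, which is part of where the "scattering of the business logic" concern in the question comes from: some of the rule lives in a .NET assembly, some in T-SQL wrappers.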

    Read the article

  • Usernames are evil. How can I make Restful Authentication only require an email address and password

    - by Koning WWWWWWWWWWWWWWWWWWWWWWW
    As the title says: how can I make the Restful Authentication plugin for Ruby on Rails require only an email address and password? When I want to create a new user, it requires me to set the (wrong-named, confusing field) login (= username), email address and password. However, I want, like Facebook does, to require the user to enter only an email address and password, not a username. People will also log in with this email address. Can anyone help me?

    Read the article

  • If tables are so evil, why does joomla use tables for its layout?

    - by DMin
    When you create a template you don't use tables, but when Joomla renders the page, there are at least 4 places where tables are used to structure the content. Does this tell you something about tables, or about the fact that the Joomla community should invest more time and learn how to code CSS, or just the fact that tables are needed when you don't know what CSS is going to be thrown in later (which probably means they are of some use)?

    Read the article

  • Assembly wide multicast attributes. Are they evil?

    - by HeavyWave
    I am working on a project where we have several attributes in AssemblyInfo.cs that are being multicast to the methods of a particular class. [assembly: Repeatable( AspectPriority = 2, AttributeTargetAssemblies = "MyNamespace", AttributeTargetTypes = "MyNamespace.MyClass", AttributeTargetMemberAttributes = MulticastAttributes.Public, AttributeTargetMembers = "*Impl", Prefix = "Cls")] What I don't like about this is that it puts a piece of logic into AssemblyInfo (Info, mind you!), which for starters should not contain any logic at all. The worst part of it is that the actual MyClass.cs does not have the attribute anywhere in the file, and it is completely unclear that methods of this class might have them. From my perspective it greatly hurts readability of the code (not to mention that overuse of PostSharp can make debugging a nightmare), especially when you have multiple multicast attributes. What is the best practice here? Is anyone out there using PostSharp attributes like this?

    Read the article

  • Usernames are evil. How can I make Restful Authentication only require a username and password, inst

    - by Koning WWWWWWWWWWWWWWWWWWWWWWW
    As the title says: how can I use the Restful Authentication plugin with Ruby on Rails so that it only requires an email address and password? When I want to create a new user, it requires me to set the (wrong-named, confusing field) login (= username), email address and password. However, I want, like Facebook does, to require the user to enter only an email address and password, not a username. People will also log in with this email address. Can anyone help me?

    Read the article

  • A good architecture is evil? Hardcode forever?

    - by igor
    I have worked in many companies. Most of them reached big success in their field. Sometimes I found that the code was written by the owner, the co-owner, or the first developer of the company. It was strange code from an architectural point of view, or awfully styled, or hardcoded, and so on. I know a couple of startups that grew up and started from "one night" code. Is writing code this way the only route to success? Why is code written "on the knee" but delivered on time better than a delayed, well-thought-out one? What about the future? Which way is best: to write a good architecture and code and spend some more time at the startup, or to write "fast", hardcoded stuff that would be completely (or partially) thrown out (or maybe wouldn't) after some period of time (or never)?

    Read the article

  • Evil merges in git - where do they come from?

    - by Benjol
    I've read this question and the answers, but what isn't clear to me is WHO creates the "changes that do not appear in any parent". Is it the git merge algorithm screwing up? Or is it because the user has to manually adjust the conflicts to get the thing to build, introducing new code which wasn't in either parent?

    Read the article

  • matrix = *((fxMatrix*)&d3dMatrix); //Evil?

    - by Xilliah
    I've been using matrix = *((fxMatrix*)&d3dMatrix); for quite a while. It worked fine until my screen turned black and a bucket of frustration arrived on my desk. fxMatrix contains 4 fxVectors. fxVector used to be 16 bytes, but now it was suddenly 20. This was because it inherited fxStreamable, which added the vtable. So one solution is of course just to not inherit fxStreamable, and leave a comment saying that it must always be 16 bytes and never more. Another solution would be to make conversion functions and copy the matrix completely. This makes it safer, but has an impact on performance. I suppose this is the best idea. Another solution is to not convert at all and stick to D3DXMATRIX, but this makes the engine inconsistent and I personally really dislike this idea. What is your opinion?

    Read the article

  • C# .NET Why does my inherited listview keep drawing in LargeIcon View ?? Because Microsoft is Evil!!

    - by Bugz R us
    I have an inherited ListView which by default has to be in Tile mode. When using this control, DrawItem gives e.Bounds which are clearly the bounds of LargeIcon view?? When debugging to check the view it is actually set to, it says it's in Tile view?? Yet e.DrawText draws LargeIcon view?? Edit: This seems to happen only when the control is placed upon another UserControl. Edit 2: It gets stranger... When I add buttons next to the list to change the view at runtime, "Tile" is the same as "LargeIcon", and "List" view is the same as "SmallIcons"??? I've also completely removed the ownerdraw... Edit 3: MSDN documentation: "Tile view: Each item appears as a full-sized icon with the item label and subitem information to the right of it. The subitem information that appears is specified by the application. This view is available only on Windows XP and the Windows Server 2003 family. On earlier operating systems, this value is ignored and the ListView control displays in the LargeIcon view." Well I am on XP, ya damn liars?!? Apparently if the control is within a UserControl, this value is ignored too... pff, I'm getting enough of this Microsoft crap... you just keep on hitting bugs... another day down the drain... public class InheritedListView : ListView { //Hiding members ... mwuahahahahaha //yeah i was still laughing then [BrowsableAttribute(false)] public new View View { get { return base.View; } } public InheritedListView() { base.View = View.Tile; this.OwnerDraw = true; base.DrawItem += new DrawListViewItemEventHandler(DualLineGrid_DrawItem); } void DualLineGrid_DrawItem(object sender, DrawListViewItemEventArgs e) { View v = this.View; // when debugging, v is Tile, however e.DrawText() draws in LargeIcon mode, // and e.Bounds also reflects LargeIcon mode ???? } }

    Read the article

  • GPL the Dark Side

    - by EmbeddedInsider
    This blog is about the GPL issues nobody talks about. It's about the evil inherent in the GPL license. Evil? But did not someone tell us that "open" is good? Well, yes, and I might agree. It just depends on what we mean by 'open'. There are many kinds of 'open' license, and many of these I like. But I maintain that the GPL, the principal license of the Open Source Software Foundation, is most certainly NOT open for business. And to the extent that software is conceived, developed, and maintained by business, not hobbyists, the GPL is very, very evil. Controversial? You bet. Flame away please. Lawrence Ricci www.EmbeddedInsider.com

    Read the article

  • I'm a premature optimizer

    - by Matthew Day
    I work in a small-sized software/web development company. I have gotten into the habit of optimizing prematurely. I know it is evil and promotes bad code... but I have been working at this firm for a long while and I have deemed this a necessary evil. It has never caused me an issue so far, but it might if I get partners or a successor. The point of this long-winded speech is: should I change my evil practices to 'save face' and to help out in the future?

    Read the article

  • Round-twice error in .NET's Double.ToString method

    - by Jeppe Stig Nielsen
    Mathematically, consider for this question the rational number 8725724278030350 / 2**48 where ** in the denominator denotes exponentiation, i.e. the denominator is 2 to the 48th power. (The fraction is not in lowest terms, reducible by 2.) This number is exactly representable as a System.Double. Its decimal expansion is 31.0000000000000'49'73799150320701301097869873046875 (exact) where the apostrophes do not represent missing digits but merely mark the boundaries where rounding to 15 resp. 17 digits is to be performed. Note the following: If this number is rounded to 15 digits, the result will be 31 (followed by thirteen 0s) because the next digits (49...) begin with a 4 (meaning round down). But if the number is first rounded to 17 digits and then rounded to 15 digits, the result could be 31.0000000000001. This is because the first rounding rounds up by increasing the 49... digits to 50 (terminates) (next digits were 73...), and the second rounding might then round up again (when the midpoint-rounding rule says "round away from zero"). (There are many more numbers with the above characteristics, of course.) Now, it turns out that .NET's standard string representation of this number is "31.0000000000001". The question: Isn't this a bug? By standard string representation we mean the String produced by the parameterless Double.ToString() instance method which is of course identical to what is produced by ToString("G"). An interesting thing to note is that if you cast the above number to System.Decimal then you get a decimal that is 31 exactly! See this Stack Overflow question for a discussion of the surprising fact that casting a Double to Decimal involves first rounding to 15 digits. This means that casting to Decimal makes a correct round to 15 digits, whereas calling ToString() makes an incorrect one. To sum up, we have a floating-point number that, when output to the user, is 31.0000000000001, but when converted to Decimal (where 29 digits are available), becomes 31 exactly. This is unfortunate. Here's some C# code for you to verify the problem: static void Main() { const double evil = 31.0000000000000497; string exactString = DoubleConverter.ToExactString(evil); // Jon Skeet, http://csharpindepth.com/Articles/General/FloatingPoint.aspx Console.WriteLine("Exact value (Jon Skeet): {0}", exactString); // writes 31.00000000000004973799150320701301097869873046875 Console.WriteLine("General format (G): {0}", evil); // writes 31.0000000000001 Console.WriteLine("Round-trip format (R): {0:R}", evil); // writes 31.00000000000005 Console.WriteLine(); Console.WriteLine("Binary repr.: {0}", String.Join(", ", BitConverter.GetBytes(evil).Select(b => "0x" + b.ToString("X2")))); Console.WriteLine(); decimal converted = (decimal)evil; Console.WriteLine("Decimal version: {0}", converted); // writes 31 decimal preciseDecimal = decimal.Parse(exactString, CultureInfo.InvariantCulture); Console.WriteLine("Better decimal: {0}", preciseDecimal); // writes 31.000000000000049737991503207 } The above code uses Skeet's ToExactString method. If you don't want to use his stuff (can be found through the URL), just delete the code lines above dependent on exactString. You can still see how the Double in question (evil) is rounded and cast.

    Read the article

  • INSERT..ON DUPLICATE KEY UPDATE - but NOT using the duplicate key to compare.

    - by calumbrodie
    I am trying to solve a problem I have inherited with poor treatment of different data sources. I have a user table that contains BOTH good and evil users. create table `users`( `user_id` int(13) NOT NULL AUTO_INCREMENT , `email` varchar(255) , `name` varchar(255) , PRIMARY KEY (`user_id`) ); In this table the primary key is currently set to be user_id. I have another table ('users_evil') which contains ONLY the evil users (all the users from this table are included in the first table) - the user_id's on this table do NOT correspond to those in the first table. I want to have all my users in one table, and simply flag which are good and which are evil. What I want to do is alter the user table and add a column ('evil') which defaults to 0. I then want to dump the data from my 'users_evil' table and then run an INSERT..ON DUPLICATE KEY UPDATE with this data into the first table (setting 'evil'=1 where the emails match). The problem is that the PK is set to `user_id` and not `email`. Any suggestions, or even another strategy to successfully achieve this? Can I run this statement but treat another column as the PK only for the duration of the statement?
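
    One strategy worth sketching, assuming MySQL and (for the sketch only) the MySql.Data client: ON DUPLICATE KEY UPDATE fires on any UNIQUE index violation, not just the primary key, so adding a unique key on email and re-inserting the evil users by email makes the email collision trigger the update. This relies on the question's statement that every evil user already exists in the first table, and on the emails in `users` already being unique; the connection string and index name below are placeholders.

        using MySql.Data.MySqlClient;

        class FlagEvilUsers
        {
            static void Main()
            {
                using (var conn = new MySqlConnection("server=localhost;database=app;uid=root;pwd=secret"))
                {
                    conn.Open();

                    // 1) Add the flag column and make email a unique key so it can
                    //    trigger ON DUPLICATE KEY UPDATE (fails if emails are not unique).
                    new MySqlCommand(
                        "ALTER TABLE users " +
                        "ADD COLUMN evil TINYINT(1) NOT NULL DEFAULT 0, " +
                        "ADD UNIQUE KEY uq_users_email (email)",
                        conn).ExecuteNonQuery();

                    // 2) Re-insert the evil users by email only; every row collides on
                    //    the unique email key, so the existing row is updated instead.
                    new MySqlCommand(
                        "INSERT INTO users (email, name) " +
                        "SELECT email, name FROM users_evil " +
                        "ON DUPLICATE KEY UPDATE evil = 1",
                        conn).ExecuteNonQuery();
                }
            }
        }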

    Read the article

  • Friday Fun: Turkey Slice

    - by Asian Angel
    In this week’s game you engage in a holiday war with a group of evil turkeys that are determined to ruin your Thanksgiving Day celebrations. Can you put these evil turkeys on the menu where they belong or will they get in the last gobble at your expense?

    Read the article

  • Friday Fun: Museum of Thieves

    - by Asian Angel
    In this week’s game you are lured to the mysterious Museum of Dunt where adventure and an evil force awaits. Can you find the differences in the museum’s strange, shifting rooms as you work your way through it or will the restless evil that dwells within escape?

    Read the article

  • OS X is based on the codex Gigas the devils bible: Android OS n 4.4 Dio Ra Egyptian deity why is that?

    - by user215250
    GUESS WHO? The Internet and all computers are based on a mathematical number system that seems to be 3, 4, 6, 8 and 10. The HTTP is 888P and UTF-8 and Windows 8 and 8 gigas of RAM (88) on a 64-bit OS X-10 or Aten (satan) - why is it allowed to be so evil and who all knows about it? Is this activity illegal and should I sue these companies for being involved in satanic practices? iC3 iC3 iC3 I do see XP (X) Chi and (P) Rho, a monogram and symbol for Christ, consisting of the superimposed Greek letters. The X is ten, The X code, OS X (O Satan) and codex The Gigas the Devil bible, and the P is Payne, The House of Payne in which God dwells. Windows 8, Google Android and Apple's OS X are the foundation on which we operate on the Internet and our mobile devices. What is it that these 3 companies have chosen to base their OS's on such evil? Windows 8 is windows hate H8, HH and H8. Said to be the Devil. Google's (UGLE) M, the Masonic M behind Android OS, in 4.4 is Dio (R) DNA O Sin, and 44 is the Devil's name in Twain's The Mysterious Stranger. Apple's evil (i) OS X (ou-es-ten) O Satan them all beat (B8), considering Apple put their first product on the market for $666.66. The Holy Grail of computers they say. Your Excellency, Lord and King OS2 Eisus Uni Peg Unix: The Unicorn Pegasus Jesus Christ

    Read the article

  • Router(s) Issue: DNS quries sporadically fail with multiple computers hooked in

    - by bob-the-destroyer
    Basically, after anywhere from 5-60 minutes, DNS queries fail for a few minutes, then slowly begin to resolve correctly. Then the cycle repeats. This occurs only when more than one computer is on the network. All computers on the network experience the same sporadic DNS outage at the same time. Wireless or wired, Linux or Windows, fresh OS install or old, browser or ping, same symptoms. Duplicated on 3 routers (not chained together, mind you), 3 ISPs, and 3 separate locations over the past several months. The only common theme is a single 5-yo WIN XP laptop which has been in use on the network throughout all this. There may also be anywhere between 1 and 10 devices hooked up, wired or wirelessly, at a time. The only reprieve I have from this torture is using any VPN to an outside source - always smooth sailing. I typically set up any router to a) use WPA2/etc security; b) MAC whitelist; c) UPNP OFF (if available); d) always update firmware when available; e) obtain DNS from ISP automatically; f) set the router to act as DHCP server for the internal network. Adjusting channels has no effect. Any ideas?

    Read the article

  • Linux, PHP, and mysqlnd?

    - by bob-the-destroyer
    Looks like most common distros build their PHP packages without mysqlnd, instead relying on the old libmysql library. Are there any distros that do include the new mysql native driver even as an option in any of their officially supported repositories? If not, is it possible (and if so, how) to rebuild these three (mysql, mysqli, & pdo_mysql) extensions without having to rebuild the entire PHP install? In my case, there are several PHP functions I need that are available only if these Mysql extenstions are built against mysqlnd. So I'd be fine installing a new OS to get these if the OS offers and supports it.

    Read the article

  • Images with unknown content: Dangerous for a browser?

    - by chris_l
    Let's say I allow users to link to any images they like. The link would be checked for syntactical correctness, escaping etc., and then inserted in an <img src="..."/> tag. Are there any known security vulnerabilities, e.g. by someone linking to "evil.example.com/evil.jpg", and evil.jpg contains some code that will be executed due to a browser bug or something like that? (Let's ignore CSRF attacks - it must suffice that I will only allow URLs with typical image file suffixes.)
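
    A rough sketch of the kind of check described above (the class and method names are invented, and C# is used only for illustration): restrict the scheme, whitelist typical image suffixes, and HTML-encode the URL before placing it in the src attribute so the link itself cannot break out of the tag. None of this protects against a decoder bug in the browser, which is exactly the residual risk the question asks about.

        using System;
        using System.Linq;
        using System.Net;

        static class ImageLinkFilter
        {
            private static readonly string[] AllowedSuffixes = { ".jpg", ".jpeg", ".png", ".gif" };

            // Returns an <img> tag for an acceptable URL, or null if the URL is rejected.
            public static string ToImgTagOrNull(string input)
            {
                Uri uri;
                if (!Uri.TryCreate(input, UriKind.Absolute, out uri))
                    return null;
                if (uri.Scheme != Uri.UriSchemeHttp && uri.Scheme != Uri.UriSchemeHttps)
                    return null;
                if (!AllowedSuffixes.Any(s => uri.AbsolutePath.EndsWith(s, StringComparison.OrdinalIgnoreCase)))
                    return null;

                // Encoding keeps quotes and angle brackets in the URL from closing the attribute.
                return "<img src=\"" + WebUtility.HtmlEncode(uri.AbsoluteUri) + "\" />";
            }
        }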

    Read the article

  • Any valid use of goto in PHP?

    - by nikic
    I know there are other questions about the goto statement introduced in PHP 5.3. But I couldn't find any decent answer in them; all were of the type "last resort", "xkcd", "evil", "bad", "EVIL!!!". But no valid example. Only statements that there aren't any uses. Or statements that there are some rare use cases (again, without examples). So, the question is: what are the valid uses of goto in PHP? Answers to "Is goto evil?" are not welcome and will get downvoted. Thanks :) Or does somebody have a link to an RFC where the decision is explained? I couldn't find one.

    Read the article

  • How to get select's value in jqGrid when using <select> editoptions on a column

    - by destroyer of evil
    I have a couple of columns in jqGrid with edittype="select". How can I read the option value of the value currently selected in a particular row? E.g., when I provide the following option, how do I get "FE" for FedEx, etc.? editoptions: { value: “FE:FedEx; IN:InTime; TN:TNT” } getRowData() for the rowId/cellname returns only the text/displayed component of the select. If I set a "change" data event on the column, the underlying select fires change events only on mouse clicks, and not keyboard selects (there are numerous references to generic selects and mouse/keyboard issues). Bottom line: when a new value is selected, I need to know the option value at the time of the change, and also prior to posting to the server.

    Read the article
