Search Results

Search found 648 results on 26 pages for 'austin danger powers'.

Page 18/26 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Should I convert overly-long UTF-8 strings to their shortest normal form?

    - by Grant McLean
    I've just been reworking my Encoding::FixLatin Perl module to handle overly-long UTF-8 byte sequences and convert them to the shortest normal form. My question is quite simply "is this a bad idea"? A number of sources (including this RFC) suggest that any over-long UTF-8 should be treated as an error and rejected. They caution against "naive implementations" and leave me with the impression that these things are inherently unsafe. Since the whole purpose of my module is to clean up messy data files with mixed encodings and convert them to nice clean utf8, this seems like just one more thing I can clean up so the application layer doesn't have to deal with it. My code does not concern itself with any semantic meaning the resulting characters might have; it simply converts them into a normalised form. Am I missing something? Is there a hidden danger I haven't considered?
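
    The normalisation itself is mechanical. A minimal sketch in Python (the module under discussion is Perl; this just illustrates the byte-level idea on the classic example, 0xC0 0xAF, an overly-long two-byte encoding of "/"):

        def normalise_overlong(b):
            # Decode one two-byte UTF-8 sequence by hand, then re-encode the
            # code point; re-encoding always yields the shortest normal form.
            assert len(b) == 2 and b[0] & 0xE0 == 0xC0 and b[1] & 0xC0 == 0x80
            code_point = ((b[0] & 0x1F) << 6) | (b[1] & 0x3F)
            return chr(code_point).encode("utf-8")

        print(normalise_overlong(b"\xc0\xaf"))  # b'/'
        # b"\xc0\xaf".decode("utf-8") raises UnicodeDecodeError: strict
        # decoders reject overlong forms outright

    The caution in the sources exists because overlong forms like this have historically been used to smuggle characters such as "/" past validation filters, which is why the RFC prefers rejection over normalisation.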

    Read the article

  • PHP Variable Passing in Foreach on same page

    - by tooly228
    I've been struggling with this for a while and I simply can't figure it out. Here is my code: <?php foreach($list as $id =>$name) { echo("<td style=\"vertical-align:middle;\"> <a href=\"item=$id#confirm\" role=\"button\" data-toggle=\"modal\"> Buy</a></td></tr>"); }?> <html> <div class="modal small hide fade" id="confirm" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true"> <a href="redeem.php?item=<?php echo $id; ?>"><button class="btn btn-danger"> Buy</button></a></div> The main issue here is that the $id from the foreach is not the same as the $id in the div class link. Instead the link gets the end value of the foreach list.
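
    The cause is ordinary loop scoping: the modal markup sits outside the foreach, so by the time <?php echo $id; ?> runs the loop has finished and $id still holds the last key it was assigned. PHP's behaviour matches this Python sketch (an illustration only, with made-up data):

        items = {"a1": "Sword", "b2": "Shield"}
        for item_id, name in items.items():
            print(item_id, name)  # each id in turn, inside the loop
        print(item_id)            # "b2": the loop variable keeps its final
                                  # value once the loop has ended

    One common fix (an assumed design, not code from the question) is to echo one modal per row inside the loop, giving each a unique id such as confirm-$id and pointing each Buy link at its own modal.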

    Read the article

  • dependent: :destroy is not deleting dependencies from views

    - by jxdx
    Projects have many rooms. When I delete a project from the view, the associated rooms are not deleted. Rooms also have many products which should also be deleted when a project is deleted. Project class class Project < ActiveRecord::Base belongs_to :user has_many :rooms, dependent: :destroy has_many :products, through: :rooms end Projects Controller class ProjectsController < ApplicationController def destroy @project = current_user.projects.find(params[:id]) if @project.delete redirect_to user_projects_path(@project.user) end end end Rooms Controller class RoomsController < ApplicationController def destroy @room = Room.find(params[:id]) if @room.delete redirect_to root_path end end The delete link in the projects show view. = link_to "Delete", project_room_path(room.project, room), method: :delete, data: { confirm: "Are you sure?" }, title: room.title, class: "btn btn-danger"

    Read the article

  • How does ruby allow a method and a Class with the same name?

    - by Daniel Beardsley
    I happened to be working on a Singleton class in Ruby and just remembered the way it works in factory_girl. They worked it out so you can use both the long way, Factory.create(...), and the short way, Factory(...). I thought about it and was curious to see how they made the class Factory also behave like a method. They simply used Factory twice, like so: def Factory(args) ... end class Factory ... end My question is: how does Ruby accomplish this, and is there any danger in using this seemingly quirky pattern?

    Read the article

  • PHP sorting issue with simpleXML

    - by tugbucket
    test.xml: <?xml version="1.0"?> <props> <prop> <state statename="Mississippi"> <info> <code>a1</code> <location>Jackson</location> </info> <info> <code>d2</code> <location>Gulfport</location> </info> <info> <code>g6</code> <location>Hattiesburg</location> </info> </state> <state statename="Texas"> <info> <code>i9</code> <location>Dallas</location> </info> <info> <code>a7</code> <location>Austin</location> </info> </state> <state statename="Maryland"> <info> <code>s5</code> <location>Mount Laurel</location> </info> <info> <code>f0</code> <location>Baltimore</location> </info> <info> <code>h4</code> <location>Annapolis</location> </info> </state> </prop> </props> test.php: // start the sortCities function sortCities($a, $b){ return strcmp($a->location, $b->location); } // start the sortStates function sortStates($t1, $t2) { return strcmp($t1['statename'], $t2['statename']); } $props = simplexml_load_file('test.xml'); foreach ($props->prop as $prop) { $sortedStates = array(); foreach($prop->state as $states) { $sortedStates[] = $states; } usort($sortedStates, "sortStates"); // finish the sortStates /* --- */ echo '<pre>'."\n"; print_r($sortedStates); echo '</pre>'."\n"; /* --- */ foreach ($prop->children() as $stateattr) { // this doesn't do it //foreach($sortedStates as $hotel => @attributes){ // blargh! if(isset($stateattr->info)) { $statearr = $stateattr->attributes(); echo '<optgroup label="'.$statearr['statename'].'">'."\n"; $options = array(); foreach($stateattr->info as $info) { $options[] = $info; } usort($options, "sortCities"); // finish the sortCities foreach($options as $stateattr => $info){ echo '<option value="'.$info->code.'">'.$info->location.'</option>'."\n"; } echo '</optgroup>'."\n"; } else { //empty nodes don't do squat } } } ?> This is the array that print_r($sortedStates) prints out: Array ( [0] => SimpleXMLElement Object ( [@attributes] => Array ( [statename] => Maryland ) [info] => Array ( [0] => SimpleXMLElement Object ( [code] => s5 [location] => Mount Laurel ) [1] => SimpleXMLElement Object ( [code] => f0 [location] => Baltimore ) [2] => SimpleXMLElement Object ( [code] => h4 [location] => Annapolis ) ) ) [1] => SimpleXMLElement Object ( [@attributes] => Array ( [statename] => Mississippi ) [info] => Array ( [0] => SimpleXMLElement Object ( [code] => a1 [location] => Jackson ) [1] => SimpleXMLElement Object ( [code] => d2 [location] => Gulfport ) [2] => SimpleXMLElement Object ( [code] => g6 [location] => Hattiesburg ) ) ) [2] => SimpleXMLElement Object ( [@attributes] => Array ( [statename] => Texas ) [info] => Array ( [0] => SimpleXMLElement Object ( [code] => i9 [location] => Dallas ) [1] => SimpleXMLElement Object ( [code] => a7 [location] => Austin ) ) ) ) This: // start the sortCities function sortCities($a, $b){ return strcmp($a->location, $b->location); } plus this part of the code: $options = array(); foreach($stateattr->info as $info) { $options[] = $info; } usort($options, "sortCities"); // finish the sortCities foreach($options as $stateattr => $info){ echo '<option value="'.$info->code.'">'.$info->location.'</option>'."\n"; } is doing a fine job of sorting by the 'location' node within each optgroup, and you can see in the array above that I can sort by the 'statename' attribute. What I am having trouble with is combining the two functions when echoing the output, so that both the states and the cities within them are auto-sorted while forming the needed optgroups. I tried copying the lines for the cities and renaming things several ways, to no avail.
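
    The combined two-level sort being asked about - order the states first, then the cities within each state - is easier to see stripped of the simpleXML details. A Python sketch of the shape of the solution (illustration only; data abbreviated):

        states = [
            {"statename": "Texas",
             "info": [{"code": "i9", "location": "Dallas"},
                      {"code": "a7", "location": "Austin"}]},
            {"statename": "Maryland",
             "info": [{"code": "h4", "location": "Annapolis"}]},
        ]
        states.sort(key=lambda s: s["statename"])  # the sortStates step
        for state in states:
            # the sortCities step, applied per state
            infos = sorted(state["info"], key=lambda i: i["location"])
            # emit <optgroup label=statename> plus one <option> per info

    Translated back to the PHP: build the optgroups by iterating over $sortedStates instead of $prop->children(); the inner usort over each state's info nodes can stay as written.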

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you only have so many cycles in the day, and most of us would rather spend them slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragon slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective.
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if: • It is cost effective • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)* • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven) • It solves the right problem With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them: • Specific details of security vulnerabilities • Including exploit information or technical details of the vulnerability • Whether or not there is any mitigation available (as in a patch) PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of the vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that the VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers.
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!” It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell-all to PCI and put their customers at risk, to do one of the following: • Contact PCI with your concerns • Contact Oracle (we are looking for vendors to sign our statement of concern) • And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
By Way of Explanation… Not related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month so I’m a bit late posting this review here. Why I chose this book: I actually know a few of the authors of this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title - to learn how to solve a problem using products from Microsoft. What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternate solutions to solve it, and then document the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I've read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well - it would make for a great library for a Systems Architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way. And that brings up an interesting point - this book is aimed at folks who create solutions within an organization. It's not aimed at Administrators, DBAs, Developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way. The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text - the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style. I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG) titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals. Key findings include: Corporate budgets increase, but trailing. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year’s spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise. Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise. Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools. Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise. IOUG Recommendations: The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following. Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls. Get business buy-in and support. Data security only works if it is backed through executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases. Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality. Read the IOUG Data Security Survey Now.

    Read the article

  • Looking for an ultra portable laptop for Ubuntu

    - by prule
    Hi, I'm in the market for a new laptop, and portability is important since I really only use it when I'm travelling to and from work - primarily for programming. I've been searching high and low for something like this:
    - less than 2kg
    - hopefully Intel i5 (but negotiable)
    - NO dvd drive - just don't need it
    - 4G ram
    - either 7200rpm disk or SSD (ssd preferable)
    - 13 inch screen
    - not too pricey (MacBook Air is about $1700 AUD)
    - available in Australia
    The Dell Inspiron 13z and Lenovo Edge 13 look close, but I've not found anything that says I'm not going to have a fight with compatibility. The MacBook Air 13 looks like the PERFECT hardware, but I'm afraid it will just be easier to run MacOS than Ubuntu. I want to stay with Ubuntu, but the MacBook Air is only $1700 so I'm in danger of becoming another apple fanboi if I can't find anything competitive. Going through all the sites looking for stuff has been a huge waste of time:
    - System 76 doesn't deliver to Australia
    - http://www.linux-laptop.net/ and http://www.linlap.com/ are hard work and not confidence inspiring
    - http://www.vgcomputing.com.au/nsintro.html is hard work again, searching for every laptop they say has excellent compatibility on the web to find out what spec it is
    - http://zareason.com/shop/Strata-Pro-13.html (at $1345 USD) looks interesting, but I've got no idea how much I'll get stung by customs importing
    - Dell Inspiron 13z with i5, 4G, 320 7200rpm disk, ATI Mobility Radeon HD5430 - 1GB, Dell Wireless 1501 802.11b/g/n @ $1200 AUD seems like the only competitor but is it compatible? (Dell support offer no opinion - as far as they are concerned they only have 2 models that are certified for ubuntu)
    Am I worrying too much about the compatibility? Should I just go with Dell? Or switch to MacOS? (It would be good to have a searchable database that had the full machine specs, and compatibility - I'm thinking about building something... but I don't have much time right now...) Thanks.
    UPDATE: I went with a MacBook Air. The price/weight/power was just right. Everything else was either too pricey (i5) or too heavy, or underpowered (SU7300 1.3GHz). It's a pity, because I didn't really want to leave Ubuntu. I'll still run it on my media center and spare (heavy) laptop.

    Read the article

  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with 2 other PHP developers aside from me, and 1 junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other guy who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase. Since I've joined this company I've tried to push for better coding standards, project documentation, etc and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is working on trying to at least comment all his code before he leaves but I think the codebase is simply too vast to properly document in the remaining time. Besides, when he does comment it still doesn't make things as clear as it could. If the system was better organized and documented I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without having unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could possibly be unit tested anyway given how it's implemented. There also never seems to be enough time to get things done even with 3 developers and 1 junior developer. With one developer and one junior, neither of which had significant input into the early design of the system, I don't see how we could possibly get anything done while keeping the current system working, implementing new features as needed and developing a replacement for the current codebase that is better organized. Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it was just me and the junior developer who would be left I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried what would become of them if I left and put them in a position where there would be no developers at all. The department might just be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!

    Read the article

  • What You Said: Staying Productive While Working from home

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your telecommuting/work-from-home productivity tips. Now we’re back with a tips and tricks roundup; read on to see how your fellow readers stay focused at home. By far and away the most common technique deployed was carefully isolating work from home life. Carol writes: I love working from home and have done so for 6 years now. I have a routine just as if I was going to an office, except my commute is 12 steps. I get ready for work, grab my purse and smart phone and go up the steps to my office. I maintain a separate phone line and voice mail for work that I cannot answer from anywhere but my work desk. I use call forwarding when I travel so only my office phone number is published. I have VOIP phone service so I can forward calls from the internet if I forget, or need to change where the phone is forwarded. I do have a wired and wireless head set so I can go get a cold drink if on one of those long boring conference calls. I plan my ‘get off from work’ time and try to stick to it; as with any job some days I am late getting off, but it all works out. I make sure my office is for work only; any other computer play time is in a different part of the house on a different computer. My office has laptop, dock, couple of monitors, multipurpose printer, fax, scanner, file cabinets – just like the office at a company. I just also happen to have a couple of golden retrievers that come to work with me and usually lay quietly until 5, and yes they know it is 5pm sometimes before I do. For me, one of the biggest concerns when working from home is not being unproductive, but the danger of never stopping work. You could keep going and going because let’s face it – the company will let you do it, so I set myself up to prevent that and maintain a separateness.

    Read the article

  • Fans running very fast on MacBook Pro 8.1 ubuntu 12.04

    - by Tomasz Kacprzak
    I installed Ubuntu 12.04 on a MacBook Pro 8.1 and one of the first things I noticed was that the fans were starting to spin very fast every few minutes for 10-30 sec and then going back to normal. That was happening even without any processor load, when completely idle. The fans were usually spinning at 4000 RPM and made much noise. The computer was not getting hotter than usual. When running OSX Lion there was no noise at all, fans almost all the time at 2000 RPM. I spent some time on it and found out that Precise uses a daemon to control the temperature, called macfanctld. You can use /etc/macfanctld.conf to set the configuration. I found out that the high fan speed is not due to the fact that the temperature is getting hot, but because there are two sensors which indicate wrong numbers (you can check that using the 'sensors' command): TW0P: +129.0°C TCTD: +256.0°C TCFC: +0.0°C TMBS: +0.0°C or setting the macfanctld log level to 2: Speed: 4992, *AVG: 56.9C, TC0P: 50.2C, TG0P: 51.5C, Sensors: TB0T:34 TB1T:34 TB2T:33 TC0C:58 TC0D:56 TC0E:59 TC0F:60 TC0P:50 TC1C:58 TC2C:58 TC3C:58 TC4C:57 TCFC:0 TCGC:57 TCSA:53 TCTD:256 TG0D:52 TG0P:52 THSP:42 TM0S:64 TMBS:0 TP0P:54 TPCD:60 TW0P:129 Th1H:51 Th2H:48 Tm0P:40 Ts0P:32 Ts0S:43 Moreover, TCTD was randomly jumping from temperatures of 0 to 256, so this may be the reason for the unjustified random fan speeds. macfanctld is taking an average of the sensors including the values above, so the actual AVG temp used to control the fans is wrong, usually biased up, hence the high RPM and noise. The workaround solution is to use an option in macfanctld.conf which allows you to ignore the malfunctioning sensors: exclude: 13 16 21 24 After reboot the reported temperatures are usually normal and the fans are working at reasonable speeds. I tested the response of the fans to heavy processor load by asking MATLAB to invert a 10000x10000 matrix; the AVG temperature jumped to 63deg and the fans to a max of 6200 RPM, then everything went back to normal. So I think it is safe so far. There is an expired bug about the failing sensor readings: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/955538 which may be good to open again. My question would be: does anyone know what the failing sensors do, and is there any danger in excluding them? Maybe there is some better solution to this problem?

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add in functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something not originally intended for an RDBMS to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do. But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can cause a complete collapse if a situation exploits a weakness shared by that population. The same is true with not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not. So take some time today to learn something new. The way I do this is to select a given problem, and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I’m not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • WiFi on Ubuntu 12.04 custom: downloading unbearably slow

    - by Mark
    iwconfig reports 11 Mbps, yet I've seen as low as <1 KBps. This is the latest in my laundry list of Ubuntu problems on a dual-boot machine (cyberpowerpc custom, intel i7-3820, nvidia gtx 570). I received it two days ago, Windows 7 running fine, still having problems with Ubuntu. The browsing is intermittent but unacceptable. e.g. I could get to this site last night but I couldn't post this question. The downloading is unbearably slow, I can't download anything or install any packages because the speed is so slow. e.g. I am trying to install vim which is inexplicably missing from my 12.04 install (add another one to the problems list) and my download speed reported in the terminal was 241 B/s. Yes, bytes. iwconfig reports 11 Mbps, which further adds to the confusion.
    User@ubuntu:~$ iwconfig
    lo        no wireless extensions.
    wlan0     IEEE 802.11bgn  ESSID:"linksys"
              Mode:Managed  Frequency:2.437 GHz  Access Point: 00:18:39:76:2C:A1
              Bit Rate=11 Mb/s   Tx-Power=20 dBm
              Retry long limit:7   RTS thr:off   Fragment thr:off
              Power Management:off
              Link Quality=36/70  Signal level=-74 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:54  Invalid misc:18   Missed beacon:0
    eth0      no wireless extensions.
    Any ideas? I see this is a problem for a lot of people, but none of the online solutions have worked for me so far. e.g. one site recommends editing the ath9k.conf file in /etc/modprobe.d, yet this file isn't even in the folder:
    User@ubuntu:/$ cd etc/modprobe.d
    User@ubuntu:/etc/modprobe.d$ ls
    alsa-base.conf               blacklist-oss.conf
    blacklist-ath_pci.conf       blacklist-rare-network.conf
    blacklist.conf               blacklist-watchdog.conf
    blacklist-firewire.conf      dkms.conf
    blacklist-framebuffer.conf   nvidia-current_hybrid.conf
    blacklist-modem.conf         nvidia-graphics-drivers.conf
    I think the nvidia gpu might be mucking things up. I had the "blinking cursor" problem when installing in the first place, and then I had the monitor out of range problem as well. I have my faithful Asus laptop, which is running Ubuntu 12.04 just fine. The only difference is executing host -t SOA local in the terminal gives
    User@ubuntu:~$ host -t SOA local
    local has SOA record local. nobody.localhost. 42 86400 43200 604800 10800
    on my new machine, and the command reports
    Host local. not found
    on the laptop. Help would be most welcome, as I am in danger of reverting back to Windows. I'm seriously considering it. Sorry for the length, trying to show my effort in resolving the issue and including terminal snippets that might be helpful.

    Read the article

  • Why allow concatenation of string literals?

    - by Caspin
    I recently got bit by a subtle bug.
    const char * int2str[] = {
        "zero",  // 0
        "one",   // 1
        "two"    // 2
        "three", // 3
        nullptr
    };
    assert( int2str[1] == "one"_s ); // passes
    assert( int2str[2] == "two"_s ); // fails
    If you have godlike code review powers you'll notice I forgot the , after "two". After the considerable effort it took to find that bug, I've got to ask: why would anyone ever want this behavior? I can see how this might be useful for macro magic, but then why is this a "feature" in a modern language like Python? Have you ever used string literal concatenation in production code?
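
    Since the question mentions Python: the same implicit concatenation exists there, and the same missing comma produces the same silent bug.

        int2str = [
            "zero",   # 0
            "one",    # 1
            "two"     # 2   <- missing comma
            "three",  # 3
        ]
        print(len(int2str))  # 3, not 4: the adjacent literals were merged
        print(int2str[2])    # 'twothree'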

    Read the article

  • Can I get the "value" of an arbitrary statement in JavaScript (like eval does, but without eval)

    - by tlrobinson
    In JavaScript is there a way to get the "value" of a statement in the same way that function() { return eval("if (true) { 1 }"); } returns "1"? function() { return if (true) { 1 } } and all similar permutations I've tried are not valid syntax. Is eval just blessed with special powers to determine the "last" value of a statement in an expression? The use case is a REPL that evaluates arbitrary expressions and returns the result. eval works, but I want to wrap it in a function.

    Read the article

  • Decoding bitmaps in Android with the right size

    - by hgpc
    I decode bitmaps from the SD card using BitmapFactory.decodeFile. Sometimes the bitmaps are bigger than what the application needs or that the heap allows, so I use BitmapFactory.Options.inSampleSize to request a subsampled (smaller) bitmap. The problem is that the platform does not enforce the exact value of inSampleSize, and I sometimes end up with a bitmap either too small, or still too big for the available memory. From http://developer.android.com/reference/android/graphics/BitmapFactory.Options.html#inSampleSize: Note: the decoder will try to fulfill this request, but the resulting bitmap may have different dimensions that precisely what has been requested. Also, powers of 2 are often faster/easier for the decoder to honor. How should I decode bitmaps from the SD card to get a bitmap of the exact size I need while consuming as little memory as possible to decode it?
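
    The usual two-pass approach (standard Android practice, sketched here in plain Python arithmetic rather than the BitmapFactory calls): first decode only the dimensions with inJustDecodeBounds, then pick the largest power-of-two inSampleSize that still leaves the decoded bitmap at least as big as required, and finally do one exact downscale with Bitmap.createScaledBitmap.

        def pick_sample_size(src_w, src_h, req_w, req_h):
            # largest power of two that keeps both decoded dimensions
            # at or above the requested size
            s = 1
            while src_w // (s * 2) >= req_w and src_h // (s * 2) >= req_h:
                s *= 2
            return s

        print(pick_sample_size(2048, 1536, 200, 200))  # 4: decode at 512x384,
                                                       # then scale down exactly

    Because the subsampled decode is at most one power of two larger than needed, the final exact scale only ever shrinks the bitmap, keeping peak memory low.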

    Read the article

  • Problem with numbers

    - by StolePopov
    I am given a number N, and I must add some numbers from the array V so that they will be equal. V consists of numbers that are all powers of 3: N = 17 S = 0 V = 1 3 9 27 81 .. I should add numbers from V to N and S in order to make them equal. The solution to the example above is: 17 + 1 + 9 = 27, where 27, 1 and 9 are taken from V. A number from V can be taken only once, and when taken it's removed from V. I tried sorting V and then adding the biggest numbers from V to S until S reaches N, but it fails on some tests like: N = 7 S = 0 V = 1 3 9 27 where the solution is: 7 + 3 = 9 + 1. In examples like this I need to add numbers both to N and S, and also select them so they become equal. Any idea how to solve this? Thanks.
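
    One way to look at it (an assumption about the intended problem, but it reproduces both examples): this is balanced ternary. Write N in base 3 with digits -1, 0 and +1; a +1 at power p sends 3**p to S's side, a -1 sends it to N's side, and each power of 3 is used at most once. A Python sketch:

        def balance(n):
            add_to_n, add_to_s = [], []
            power = 1
            while n:
                n, digit = divmod(n, 3)
                if digit == 2:             # digit 2 is really -1 plus a carry
                    add_to_n.append(power)
                    n += 1
                elif digit == 1:
                    add_to_s.append(power)
                power *= 3
            return add_to_n, add_to_s

        print(balance(17))  # ([1, 9], [27]) -> 17 + 1 + 9 == 27
        print(balance(7))   # ([3], [1, 9]) -> 7 + 3 == 1 + 9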

    Read the article

  • Reinforcement learning with neural networks

    - by Betamoo
    I am working on a project with RL & NN. I need to determine the action vector structure which will be fed to a neural network. I have 3 different actions (A & B & Nothing), each with different powers (e.g. A100, A50, B100, B50). I wonder what is the best way to feed these actions to an NN in order to yield the best results?
    1- feed A/B to input 1, while action power 100/50/Nothing goes to input 2
    2- feed A100/A50/Nothing to input 1, while B100/B50/Nothing goes to input 2
    3- feed A100/A50 to input 1, while B100/B50 goes to input 2, with a Nothing flag on input 3
    4- also, should I feed 100 & 50 as they are, or normalize them to 2 & 1?
    I need reasons for choosing one method over another. Any suggestions are welcome. Thanks.
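
    For what it's worth, a common encoding (a hedged suggestion, not a known-best answer) is close to option 3 plus normalization: one-hot the action type so the network never treats "A vs B" as a magnitude, and put the scaled power on its own input.

        def encode(action, power):
            # one-hot action type over (A, B, Nothing)
            one_hot = {"A": [1.0, 0.0, 0.0],
                       "B": [0.0, 1.0, 0.0],
                       "Nothing": [0.0, 0.0, 1.0]}[action]
            scaled = (power or 0) / 100.0  # 100 -> 1.0, 50 -> 0.5, None -> 0.0
            return one_hot + [scaled]

        print(encode("A", 100))         # [1.0, 0.0, 0.0, 1.0]
        print(encode("Nothing", None))  # [0.0, 0.0, 1.0, 0.0]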

    Read the article

  • How to solve rake tasks deprecation on the rails plugin?

    - by Dida
    Because of the concept introduced here, Rails::Plugin is nothing more than a Rails::Engine, but since it's loaded too late in the boot process, it does not have the same configuration powers as a bare Rails::Engine. Opposite to Rails::Railtie and Rails::Engine, you are not supposed to inherit from Rails::Plugin. Rails::Plugin is automatically configured to be an engine by simply being placed inside vendor/plugins. Since this is done automatically, you actually cannot declare a Rails::Engine inside your Plugin, otherwise it would cause the same files to be loaded twice. This means that if you want to ship an Engine as a gem it cannot be used as a plugin, and vice-versa. Besides this conceptual difference, the only difference between Rails::Engine and Rails::Plugin is that plugins automatically load the file "init.rb" at the plugin root during the boot process. Rake tasks in Rails plugins are deprecated and it is advised to use lib/tasks instead. How do I solve this? Can I simply move the plugin's tasks to lib/tasks?

    Read the article

  • Inaccurate Logarithm in Python

    - by Avihu Turzion
    I work daily with Python 2.4 at my company. I used the versatile logarithm function 'log' from the standard math library, and when I entered log(2**31, 2) it returned 31.000000000000004, which struck me as a bit odd. I did the same thing with other powers of 2, and it worked perfectly. I ran 'log10(2**31) / log10(2)' and I got a round 31.0 I tried running the same original function in Python 3.0.1, assuming that it was fixed in a more advanced version. Why does this happen? Is it possible that there are some inaccuracies in mathematical functions in Python?
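
    What is happening: math.log(x, b) is evaluated as log(x)/log(b) in floating point, so even exact powers of two can come out a hair off. A few ways to see and avoid it (the first result is as the asker reports; exact output can vary by platform):

        import math

        print(math.log(2**31, 2))                 # 31.000000000000004: computed
                                                  # as log(x)/log(2) in floats
        print(math.log10(2**31) / math.log10(2))  # happened to round to 31.0
        print(math.log2(2**31))                   # 31.0: log2 (Python 3.3+) is
                                                  # the accurate base-2 spelling
        print((2**31).bit_length() - 1)           # 31: pure integer, no floats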

    Read the article

  • UI not redrawing after display powered off on Win 7

    - by oltman
    Some portions of my application's interface are not getting refreshed after Windows 7 powers down the display. More specifically, I'm swapping out images, User Controls, and a button's content while the display is powered off and after it has been restarted, and this isn't being reflected in the UI until I minimize and restore the window or move it to one of the screen's edges. I've tried calling the Window's InvalidateVisual() method when the app was in a state where it needed to redraw, and that didn't solve the problem. I have only been able to reproduce this issue on Windows 7. Any ideas?

    Read the article

  • Ruby Thread with "watchdog"

    - by Sergio Campamá
    I'm implementing a Ruby server for handling sockets created by GPRS modules. The thing is that when the module powers down, there's no indication that the socket closed. I'm using threads to handle multiple sockets with the same server. What I'm asking is this: is there a way to use a timer inside a thread, reset it after every socket input, and have it close the thread if it hits the timeout? Where can I find more information about this? EDIT: Code example that doesn't detect the socket closing:
    require 'socket'
    server = TCPServer.open(41000)
    loop do
      Thread.start(server.accept) do |client|
        puts "Client connected"
        begin
          loop do
            line = client.readline
            open('log.txt', 'a') { |f| f.puts line.strip }
          end
        rescue
          puts "Client disconnected"
        end
      end
    end
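
    The usual trick is a per-connection read timeout rather than an explicit timer: every successful read effectively re-arms it, and a GPRS module that powers down without ever sending a FIN shows up as a timeout. In Ruby the analogous tool is IO.select with a timeout around each read; the pattern itself, illustrated in Python (an illustration of the idea, not the asker's server):

        import socket, threading

        def handle(client):
            client.settimeout(60)  # watchdog: 60 s of silence = dead client
            print("Client connected")
            try:
                with client:
                    while True:
                        data = client.recv(1024)  # socket.timeout after 60 s
                        if not data:              # orderly close
                            break
                        with open("log.txt", "a") as f:
                            f.write(data.decode(errors="replace").strip() + "\n")
            except socket.timeout:
                pass
            print("Client disconnected")

        server = socket.create_server(("", 41000))
        while True:
            conn, _ = server.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()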

    Read the article
