Search Results

Search found 17013 results on 681 pages for 'hard coding'.

Page 434/681

  • Talking JavaOne with Rock Star Charles Nutter

    - by Janice J. Heiss
    JavaOne Rock Stars, conceived in 2005, are the top-rated speakers from the JavaOne Conference. They are awarded by their peers, who through conference surveys recognize them for their outstanding sessions and speaking ability. Over the years many of the world’s leading Java developers have been so recognized. We spoke with distinguished Rock Star Charles Nutter. A JRuby Update from Charles Nutter: Charles Nutter of Red Hat is well known as a lead developer of JRuby, a Java implementation of Ruby that is tightly integrated with Java to allow for the embedding of the interpreter into any Java application with full two-way access between the Java and the Ruby code. Nutter is giving the following sessions at this year’s JavaOne: CON7257 – “JVM Bytecode for Dummies (and the Rest of Us Too)” CON7284 – “Implementing Ruby: The Long, Hard Road” CON7263 – “JVM JIT for Dummies” BOF6682 – “I’ve Got 99 Languages, but Java Ain’t One” CON6575 – “Polyglot for Dummies” (both with Thomas Enebo) I asked Nutter to give us the latest on JRuby. “JRuby seems to have hit a tipping point this past year,” he explained, “moving from ‘just another Ruby implementation’ to ‘the best Ruby implementation for X,’ where X may be performance, scaling, big data, stability, reliability, security, and a number of other features important for today's applications. We're currently wrapping up JRuby 1.7, which improves support for Ruby 1.9 APIs, solves a number of user issues and concurrency challenges, and utilizes invokedynamic to outperform all other Ruby implementations by a wide margin. JRuby just gets better and better.” When asked what he thought about the rapid growth of alternative languages for the JVM, he replied, “I'm very intrigued by efforts to bring a high-performance JavaScript runtime to the JVM. There's really no reason the JVM couldn't be the fastest platform for running JavaScript with the right implementation, and I'm excited to see that happen.” And what is Nutter working on currently? “Aside from JRuby 1.7 wrap-up,” he explained, “I'm helping the Hotspot developers investigate invokedynamic performance issues and test-driving their new invokedynamic code in Java 8. I'm also starting to explore ways to improve the general state of dynamic languages on the JVM using JRuby as a guide, and to help the JVM become a better platform for all kinds of languages.” Originally published on blogs.oracle.com/javaone.
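
    As a rough illustration of the two-way embedding described above, here is a minimal sketch using JRuby's embed API (org.jruby.embed.ScriptingContainer). It is not taken from the article itself and assumes the JRuby jar is on the classpath:

        import org.jruby.embed.ScriptingContainer;

        public class JRubyEmbedSketch {
            public static void main(String[] args) {
                // Host an embedded JRuby interpreter inside a plain Java application.
                ScriptingContainer container = new ScriptingContainer();

                // Java -> Ruby: expose a Java value to the Ruby runtime.
                container.put("greeting", "Hello from Java");

                // Ruby -> Java: evaluate Ruby code and get the result back as a Java object.
                Object result = container.runScriptlet("\"#{greeting}, answered by Ruby #{RUBY_VERSION}\"");
                System.out.println(result);
            }
        }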

    Read the article

  • Are there any actual case studies on rewrites of software success/failure rates?

    - by James Drinkard
    I've seen multiple posts about rewrites of applications being bad, people's experiences with them here on Programmers, and an article I've read by Joel Spolsky on the subject, but no hard evidence from case studies. Other than the two examples Joel gave and some other posts here, what do you do with a bad codebase, and how do you decide what to do with it based on real studies? For the case in point, there are two clients I know of that both have old legacy code. They keep limping along with it because, as one of them found out, a rewrite was a disaster: it was expensive and didn't really improve the code much. That customer has some very complicated business logic, as the rewriters quickly found out. In both cases, these are mission-critical applications that bring in a lot of revenue for the company. The one that attempted the rewrite felt that they would hit a brick wall if the legacy software didn't get upgraded at some point in the future. To me, that kind of risk warrants research and analysis to ensure a successful path. My question is: have there been actual case studies that have investigated this? I wouldn't want to attempt a major rewrite without knowing some best practices, pitfalls, and successes based on actual studies. Aftermath: okay, I was wrong, I did find one article: Rewrite or Reuse. They did a study on a COBOL app that was converted to Java.

    Read the article

  • Developing professionally for both iOS, Android, web - an insight

    - by Scott Roberts
    This is not really a question on how to develop for both; I know various cross-platform ways and so on. But I want to know, from a developer's standpoint, how hard it is to develop iOS, Android, and web apps? I am currently in my first job as a mobile/web developer. I have already developed my first iPhone/iPad app, and now I have to develop the app for Android because the web version I tried just didn't perform as well as needed and web databases just did not seem to make the cut. But I am not sure it's possible to be good at developing all 3 in terms of remembering all the APIs etc. I wouldn't say I have an issue with the programming languages, just how to use the APIs for the various platforms. Also, all the other languages I look at in my spare time just feel like I am spreading myself too thin. Is it feasible for one person to be developing iOS, Android, and web apps? Should I think about reducing it to iOS and web-based apps? I develop everything by myself, so I have no one to discuss what the best solutions are for everything, and I am just trying to work it out as I go along. So any cross-platform developers out there? Do companies have different teams for different platforms? Any insight would just help me get my head together. Hopefully this question makes sense.

    Read the article

  • Two Cloudy Observations from Oracle OpenWorld

    - by Gene Eun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. It was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.

    Read the article

  • Play Updated Retro Arcade Games for Free Courtesy of Microsoft

    - by Jason Fitzpatrick
    As part of a promotion for Internet Explorer 10, Microsoft commissioned Atari to update several classic retro games from the arcade and the Atari consoles. Fortunately for those of us looking for a retro gaming fix, you can play the games in any HTML5-enabled browser. While game play is really smooth on most games, there are a few little quirks that using Internet Explorer 10 does take care of. First, if you’re not on IE10, you’ll see a little advertisement before you begin playing each game. Second, some games call on some of the new touch/motion functionality built into IE10 for the Windows 8 tablet experience, and they’ll stall out when they reach the point in the game where they need to call that functionality. That said, we played quite a few games without any hiccups at all. Hit up the link below to play classics like Asteroids, Lunar Lander, Pong, Super Breakout, and more. Atari IE10 Promo Gallery [Atari]

    Read the article

  • Why do some bad websites rank well?

    - by BradB
    Consider the following scenario: you are pitching SEO/website optimisation to a prospective client, and you explain to them the importance of great copy and content, how acquiring links (ethically) can increase page rank, why the quality of the HTML build matters (H1 and H2 tags, W3C validation, etc.), and why keyword research is beneficial; you may drop in a few Google Webmaster Guidelines or Matt Cutts references to back up your claims, and rubbish the "black hat" approach as no longer effective for good measure. Your advice is ethical and, in the eyes of best practices, spot on. Then the client points out some of their long-established competitors on Google, and you see these competitor websites ranking in the top spots (1 to 3) for medium to highly competitive search phrases that your client wants to compete for. These websites totally contradict your ethical approach and pretty much violate every best practice previously noted. They even outperform other "white hat" competitors who are in accordance with the above guidelines. I experienced this today. One of these well-ranking websites had:
    - About six microsites with more or less the same copy and a slightly varied layout
    - Little or no textual content - I would almost say duplicate content across the sites, but there was so little of it that it could barely qualify as duplicate
    - All the content in Flash (with a music track that kicked in on each page load - not so much an SEO issue, but it helps paint the picture)
    - Keyword stuffing behind the Flash file, with a bunch of black text on a black background in the style of keyword 1 keyword 2,keyword1,keyword 2,keyword 2 keyword 3 and so on... the exact keyword-stuffed combination present on every page of the website
    - A bunch of clearly self-made links from poor-quality forums and directories with little or no PageRank
    - Links exchanged across the microsites
    How do you explain your way out of this when this hard evidence is sitting in front of you, undermining your great pitch?

    Read the article

  • We Need More Migration!

    - by rickramsey
    source Eva Mendez says, "Oye chico, do you really want to keep your data in that tired legacy file system when it could be enjoying encryption, compression, deduplication, snapshots, remote replication and other benefits provided by ZFS in Oracle Solaris 11? It's really not that hard to cross over. If you know how." "I don't know how, me dices? Esta bien, papacito. Go to OTN. Take my word for it. They know how." <blushing> Aw shucks, Eva. Anything for you! </blushing> The Best Way to Migrate Data From Legacy File Systems to ZFS To migrate data from a legacy filesystem to ZFS in Oracle Solaris 11, you need to install the shadow-migration package and enable the shadowd service. Then follow the simple procedure described by Dominic Kay. How to Update to Oracle Solaris 11 Using the Image Packaging System Oracle Solaris 11.1 has been released. You can upgrade using either Oracle's official Solaris release repository or, if you have a support contract, the Support repository. Peter Dennis explains how. How to Migrate Oracle Database from Oracle Solaris 8 to Oracle Solaris 11 How to use the Oracle Solaris 8 P2V (physical to virtual) Archiver tool, which comes with Oracle Solaris Legacy Containers, to migrate a physical Oracle Solaris 8 system with Oracle Database and an Oracle Automatic Storage Management file system into an Oracle Solaris 8 branded zone inside an Oracle Solaris 10 guest domain on top of an Oracle Solaris 11 control domain. - Ricardo Website Newsletter Facebook Twitter

    Read the article

  • With which class to start Test Driven Development of card game application? And what would be the next 5 to 7 tests?

    - by Maxis
    I have started to write a card game application. Some model classes: CardSuit, CardValue, Card, Deck, IDeckCreator, RegularDeckCreator, DoubleDeckCreator, Board, Hand; and some game classes: Turn, TurnHandler, IPlayer, ComputerPlayer, HumanPlayer, IAttackStrategy, SimpleAttachStrategy, IDefenceStrategy, SimpleDefenceStrategy, GameData, Game are already written. My idea is to create an engine where two computer players could play a game, and then later I could add the UI part. For some time now I have been reading about Test Driven Development (TDD), and I have the idea to start writing the application from scratch, as currently I have a tendency to write unneeded code that only seems usable in the future. Also, the code doesn't have any tests, and it is hard to add them now. It seems that TDD could improve all of these issues - a minimum of needed code, good test coverage - and it could also help lead to the right application design. But I have one issue - I can't decide where to start with TDD. Should I start from the bottom - the Card-related classes - or somewhere at the top - Game, TurnHandler, ...? With which class would you start? And what would be the next 5 to 7 tests? (Use the card game you know best.) I would like to start TDD with your help and then continue on my own!
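
    To make the bottom-up option concrete, a first test might look roughly like the sketch below (JUnit, written in Java even though the naming above suggests C#). The methods create(), size(), and contains() are assumptions for illustration only, not part of the code base described above, so this only compiles against a model that exposes them:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import org.junit.Test;

        public class RegularDeckCreatorTest {

            // First behaviour to pin down: a regular deck holds each suit/value combination once.
            @Test
            public void regularDeckContainsEverySuitAndValueExactlyOnce() {
                Deck deck = new RegularDeckCreator().create();   // assumed factory method

                assertEquals(52, deck.size());                   // assumed accessor
                for (CardSuit suit : CardSuit.values()) {
                    for (CardValue value : CardValue.values()) {
                        assertTrue(deck.contains(new Card(suit, value)));   // assumed lookup
                    }
                }
            }
        }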

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery” which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that they are stored in a standard Hyper-V format – with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this then leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system, and when you re-start that system it asks for a few details on what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, either for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.

    Read the article

  • Apple Co-founder Wozniak: Windows Phone is more beautiful and intuitive than Android

    - by Gopinath
    It’s a great day for Microsoft and Nokia, who are working hard to resurrect Windows Phone. Steve Wozniak, the co-founder of Apple and a gadget lover, has all praise for Windows Phone after using a Nokia Lumia 900 for a while. He shared his experience of using the Lumia, powered by Windows Phone 7.5, in an interview, and he is very impressed. Here are a few good quotes from the interview: Just for looks and beauty, I definitely favor the Windows 7 Phone over Android. I’m kinda shocked how every screen is much more beautiful than the same apps on Android and iPhone. I’m just shocked, I haven’t seen anything yet that isn’t more beautiful than the other platforms. It’s more intuitive and beautiful. This is going to change the perception of Windows Phone, which is seen as a “not-so-good” operating system for mobiles. Along with these high-profile endorsements, if Microsoft can manage to attract developers to build good apps, then Windows Phone is going to be a fitting competitor to iOS and Android. via aNewDomain

    Read the article

  • LWJGL Determining whether or not a polygon is on-screen.

    - by Brandon oubiub
    Not sure whether this is an LWJGL or math question. I want to check whether a shape is on-screen, so that I don't have to render it if it isn't. First of all, is there any simple way to do this that I am overlooking? Like some method or something that I haven't found? I'm going to assume there isn't. I tried using my trigonometry skills, but it is hard to do this because of how glRotate also distorts the image a little for perspective and realism. Or, is there any way to easily determine if a ray starting from the camera and going outward in a straight line intersects a shape? (I can probably do it with my math skillz, but is there an easier way?) By the way, I can easily determine the angle at which the camera is facing around the x and y axes. EDIT: Or, possibly, I could get the angles of a vector from the camera to the object, and compare those angles to my camera angles. But I have a feeling that the distortions from glRotate and glTranslate would be an issue. I'll try it though.
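
    For what it's worth, here is a very rough sketch of the "compare the camera-to-object angle with the camera's yaw" idea from the EDIT above (plain Java; the method names, axis convention, and field-of-view value are assumptions, and it deliberately ignores the perspective distortion the question worries about):

        public final class VisibilityCheck {

            /** Rough test of whether a point lies inside a horizontal field of view. */
            public static boolean roughlyOnScreen(float camX, float camZ, float camYawDeg,
                                                  float objX, float objZ, float halfFovDeg) {
                // Heading from the camera position to the object, in degrees.
                double angleToObj = Math.toDegrees(Math.atan2(objX - camX, objZ - camZ));

                // Smallest signed difference between the two headings, folded into [-180, 180].
                double diff = ((angleToObj - camYawDeg) % 360 + 540) % 360 - 180;

                return Math.abs(diff) <= halfFovDeg;
            }

            public static void main(String[] args) {
                // Camera at the origin facing yaw 0 with a 45-degree half FOV: a point straight
                // ahead is considered visible, a point behind the camera is not.
                System.out.println(roughlyOnScreen(0, 0, 0, 0, 10, 45));   // true
                System.out.println(roughlyOnScreen(0, 0, 0, 0, -10, 45));  // false
            }
        }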

    Read the article

  • 2D XNA C#: Texture2D Wrapping Issue

    - by Kieran
    Working in C#/XNA for a Windows game: I'm using Texture2D to draw sprites. All of my sprites are 16 x 32. The sprites move around the screen as you would expect, by changing the top X/Y position of them when they're being drawn by the spritebatch. Most of the time when I run the game, the sprites appear like this: and when moved, they move as I expect, as one element. Infrequently they appear like this: and when moved it's like there are two sprites with a gap in between them - it's hard to describe. It only seems to happen sometimes - is there something I'm missing? I'd really like to know why this is happening. [Edit:] Adding Draw code as requested: This is the main draw routine - it first draws the sprite to a RenderTarget then blows it up by a scale of 4:

        protected override void Draw(GameTime gameTime)
        {
            // Draw to render target
            GraphicsDevice.SetRenderTarget(renderTarget);
            GraphicsDevice.Clear(Color.CornflowerBlue);
            Texture2D imSprite = null;
            spriteBatch.Begin(SpriteSortMode.FrontToBack, null, SamplerState.PointWrap, null, null);
            ManSprite.Draw(spriteBatch);
            base.Draw(gameTime);
            spriteBatch.End();

            // Draw render target to screen
            GraphicsDevice.SetRenderTarget(null);
            imageFrame = (Texture2D)renderTarget;
            GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.DarkSlateBlue, 1.0f, 0);
            spriteBatch.Begin(SpriteSortMode.FrontToBack, null, SamplerState.PointClamp, null, null);
            spriteBatch.Draw(imageFrame, new Vector2(0, 0), null, Color.White, 0, new Vector2(0, 0), IM_SCALE, SpriteEffects.None, 0);
            spriteBatch.End();
        }

    This is the draw routine for the Sprite class:

        public virtual void Draw(SpriteBatch spriteBatch)
        {
            spriteBatch.Draw(Texture, new Vector2(PositionX, PositionY), null, Color.White, 0.0f, Vector2.Zero, Scale, SpriteEffects.None, 0.3f);
        }

    Read the article

  • Where could Distributed Version Control Systems currently be in Gartner's hype cycle?

    - by dukeofgaming
    Edit: Given the recent downvoting (+8/-6 at this point), it was made clear to me that Gartner's hype cycle is a biased metric from a programmer's perspective. This is something that is part of a paper I'm going to present to management, and management types are part of Gartner's audience. Given DVCS exposure and enthusiasm (which "could" be deemed hype, or at least attacked as such), think about the following question when reading this one: "how could I use Gartner's hype cycle to convince management that DVCSs are ready (or ready enough) for us, and that it is not just hype?" Just asking whether DVCSs are hype wouldn't be constructive; Gartner's hype cycle is a more objective instrument than just asking that (even if this instrument is regarded as biased). If you know any other instrument please, by all means, mention it. Edit #2: I agree that Gartner's hype cycle is not for every technology, but I consider that it may have generated enough buzz to be considered hype by some, so it maybe deserves to be at least evaluated/pondered as such by using this instrument in order to prove/disprove it to whatever degree. I'm an advocate of DVCS, BTW. I'm doing research for a whitepaper I'm writing in favor of DVCS adoption at my company, and I stumbled upon the concept of social proof. I want to prove that the social proof of DVCS adoption is not necessarily cargo cult, and doing further research I stumbled upon Gartner's hype cycle, which describes technology maturity in 5 phases. My question is: what could be an indicator of the current location of Distributed Version Control Systems (I mean git, mercurial, bazaar, etc. in general) at a particular phase in the hype cycle? ...in other (less convoluted) words, would you say that currently expectations of DVCSs are a) starting, b) inflated, c) decreasing (disillusionment), d) increasing (enlightenment) or e) stabilizing (mature), and (more importantly) why? I know it is a hard question and there is subjectivity involved, but I'll grant the answer (and the traditional cookie) to the clearest argument/evidence for a particular phase.

    Read the article

  • Booting sequence. Ubuntu 12.04 installation and cohabitation with former OSes

    - by Stephane Rolland
    I am on the brink of installing Ubuntu 12.04 Precise Pangolin on the first primary partition of my hard drive. (A day in history for me, since I had always kept MS Windows in that first place.) But I have some fears: This is my last computer available (in the past I used to have 2 or even 3 machines, so I could always un/plug hard drives for recovery operations and rescue). The current booting sequence is not straightforward. So as to explain the boot sequence, let me briefly sum up the history of this laptop computer. It was a dedicated Windows Vista computer: 1st and only primary partition. Then I added Windows 7 (on the 2nd primary partition), letting the Windows Vista boot loader manage the boot sequence. Then I added Ubuntu 10.04 Lucid Lynx on the 1st sub-partition of the extended partition, asking GRUB to be the boot loader. But when I ask GRUB to launch Windows, it launches the Vista boot loader, which manages the choice between Vista and 7. So in theory GRUB is on the Master Boot Record - though I understand where the Vista boot loader remains. Now, I will no longer use the Ubuntu 10.04 (on the extended partition) and also the Windows Vista (on the first primary partition). I will install Ubuntu 12.04 on the first primary partition, asking it to install a new boot loader. I want to keep the Windows 7 that is already on the second primary partition. And I want it to be loaded by the Ubuntu boot loader (I don't know which is included in this version)... And I am afraid the last point will not work.

    Read the article

  • Plastic Clamshell Packaging Voted Worse Design Ever

    - by Jason Fitzpatrick
    We’ve all been there: frustrated and trying to free a new purchase from its plastic clamshell jail. You’re not alone; the packaging design has been voted the worst in history. In a poll at Quora, users voted on the absolute worst piece of design work they’d encountered. Overwhelmingly, they voted the annoying-to-open clamshell design to the top. The author of the top comment/entry, Anita Shillhorn, writes: “Design should help solve problems” — clamshells are supposed to make it harder to steal small products and easier for employees to arrange on display — but this packaging, she says, makes new ones, such as time wasted, frustration, and the little nicks and scrapes people incur as they just try to get their damn lightbulb out. This is a product designed for the manufacturers and the retailers, not the end users. There is even a Wikipedia page devoted to “wrap rage,” “the common name for heightened levels of anger and frustration resulting from the inability to open hard-to-remove packaging.” Hit up the link below for more entries in their worst-design poll. Before you go, if you’ve got a great tip for getting goods out of the plastic shell they ship in, make sure to share it in the comments. What Is The Worst Piece of Design Ever Done? [via The Atlantic]

    Read the article

  • SEO Keyword Research

    - by James
    Hi Everyone, I'm new at SEO and keyword research. I am using Market Samurai as my research tool, and I was wondering if I could ask for your help to identify the best keyword to target for my niche. I do plan on incorporating all of them into my site, but I wanted to start with one. If you could give me your input on these keywords, I would appreciate it. This is all new to me :) I'm too new to post pictures, but here are my keywords (Searches, SEO Traffic, and SEO Value / Day):

    Keyword | Searches | SEO Traffic | PBR | SEO Value | Avg PR / Backlinks of Current Top 10
    1       | 730      | 307         | 20% | 2311.33   | 1.9 / 7k-60k
    2       | 325      | 137         | 24% | 822.94    | 2.3 / 7k-60k
    3       | 398      | 167         | 82% | 589.79    | 1.6 / 7k-60k

    I'm wondering if the PBR (phrase-to-broad) value of #1 is too low. It seems like the best value because the SEOV is crazy high. That is like $70k a month. #3 has the highest PBR, but also the lowest SEOV. #2 doesn't seem worth it because of the PR competition. Might be a little too hard to get onto the top page of Google. I'm wondering which keywords to target, and if I should be looking at any other metric to see if this is a profitable niche to jump into. Thanks.

    Read the article

  • Our winners- and some BBQ for everyone

    - by Steve Tunstall
    Congrats to our two winners for the first two comments on my last entry. Steve from Australia and John Lemon. Steve won since he was the first person over the International Date Line to see the post I made so late after a workday on Friday. So not only does he get to live in a country with the 2nd most beautiful women in the world, but now he gets some cool Oracle Swag, too. (Yes, I live on the beach in southern California, so you can guess where 1st place is for that other contest… Now if Steve happens to live in Manly, we may actually have a tie going…) OK, ok, for everyone else, you can be winners, too. How, you ask? I will make you the envy of every guy and gal in your neighborhood or campsite. What follows is the way to smoke the best ribs you or anyone you know have ever tasted. Follow my instructions and give it a try. People at your party/cookout/campsite will tell you that they’re the best ribs they’ve ever had, and I will let you take all the credit. Yes, I fully realize this post is going to be longer than any post I’ve done yet. But let’s get serious here. Smoking meat is much more important, agreed? In all honesty, this is a repeat of another blog I did, so I’m just copying and pasting. Step 1. Get some ribs. I actually really like Costco’s pack. They have both St. Louis and Baby Back. (They are the same ribs, but cut in half down the sides. St. Louis style is the ‘front’ of the ribs closest to the stomach, and ‘Baby back’ is the part of the ribs where it connects to the backbone). I like them both, so here you see I got one pack of each. About 4 racks to a pack. So these two packs for $25 each will feed about 16-20 of my guests. So around 3 bucks a person is a pretty good deal for the best ribs you’ll ever have. Step 2. Prep the ribs the night before you’re going to smoke. You need to trim them to fit your smoker racks, and also take off the membrane and add your rub. Then cover and set in fridge overnight. Here’s how to take off the membrane, which will not break down with heat and smoke like the rest of the meat, so must be removed. Use a butter knife to work in a ways between the membrane and the white bone. Just enough to make room for your finger. Try really hard not to poke through the membrane, you want to keep it whole. See how my gloved fingers can now start to lift up and pull off the membrane? This is what you are trying to do. It’s awesome when the whole thing can come off at once. This one is going great, maybe the best one I’ve ever done. Sometimes it falls apart and doesn't come off in one nice piece. I hate when that happens. Now, add your rub and pat it down once into the meat with your other hand. My rub is not secret. I got it from my mentor, a BBQ competitive chef who is currently ranked #1 in California and #3 in the nation on the BBQ circuit. He does full-day classes in southern California if anyone is interested in taking his class. Go to www.slapyodaddybbq.com to check him out. I tweaked his rub recipe a tad and made my own. It’s one part Lawry’s, one part sugar, one part Montreal Steak Seasoning, one part garlic powder, one-half part red chili powder, one-half part paprika, and then 1/20th part cayenne. You can adjust that last ingredient, or leave it out. Real cheap stuff you can get at Costco. This lets you make enough rub to last about a year or two. Don’t make it all at once, make a shaker’s worth and use it up before you make more. Place it all in a bowl, mix well, and then add to a shaker like you see here. 
You can get a shaker with medium-sized holes on it at any restaurant supply store or Smart & Final. The kind you see at pizza places for their red pepper flakes works best. Now cover and place in fridge overnight. Step 3. The next day. Ok, I’m ready to go. Get your stuff together. You will need your smoker, some good foil, a can of peach nectar, a bottle of Agave syrup, and a package of brown sugar. You will need this stuff later. I also use a clean spray bottle, and apple juice. Step 4. Make your fire, or turn on your electric smoker. In this example I’m using my portable charcoal smoker. I got this for only $40. I then modified it to be useful. Once modified, these guys actually work very well. Trust me, your food DOES NOT KNOW how expensive your smoker is. Someone who tells you that you need to spend a bunch of money on a smoker is an idiot. I also have an electric smoker that stays in my backyard. It’s cleaner and larger so I can smoke more food. But this little $40 one works great for going camping. Here is what my fire-bowl looks like. I leave a space in the middle open, and place cold charcoal and wood chunks in a circle going outwards. This makes it so when I dump the hot coals down the middle, they will slowly burn outwards, hitting different wood chunks at different times, allowing me to go 4-5 hours without having to even touch my fire. For ribs, I use apple and pecan wood. Pecan works for anything. Apple or any fruit wood is excellent for pork. So now I make my hot charcoal with a chimney only about half-full. I found a great use for that side-burner on my grill that I never use. It makes a fantastic chimney starter. You never use fluids of any kind, nor ever use that stupid charcoal that has lighter fluid built into it. Never, ever, ever. Step 5. Smoke. Add your ribs in the racks and stack them up in your smoker. I have a digital thermometer on a probe that I use to keep track of the temp in the smoker. I just lay the probe on the top rack and shut the lid. This cheap guy is a little harder to keep at the right temperature of around 225 F, so I do have to keep my eye on it more than my electric one or a more expensive charcoal one with the cool gadgets that regulate your temp for you. Every hour, spray apple juice all over your ribs using that spray bottle. After about 3 hours, you should have a very good crust (called the Bark) on your ribs. Once you have the Bark where you want it, carefully remove your ribs and place them in a tray. We are now ready for a very important part to make the flavor. Get a large piece of foil and place one rib section on it. Splash some of the peach nectar on it, and then a drizzle of the Agave syrup. Then, use your gloved hand to pack on some brown sugar. Do this on BOTH sides, and then completely wrap it up TIGHT in the foil. Do this for each rib section, and then place all the wrapped sections back into the smoker for another 4 to 6 hours. This is where the meat will get tender and flavorful. The first three hours is only to make the smoke bark. You don’t need smoke anymore; since the ribs are wrapped, you only need to keep the heat around 225 for the next 4-6 hours. Obviously you don’t spray anymore. Just time and slow heat. Be patient. It’s actually really hard to overdo it. You can let them go longer, and all that will happen is they will get even MORE tender!!! If you take them out too soon, they will be tough. How do you know? Take out one package (use long tongs) and open it up. 
If you grab a bone with your tongs and it just falls apart and breaks away from the rest of the meat, you are done!!! Enjoy!!! Step 6. Eat. It pulls apart like this when it’s done. By the way, smoking tri-tip is way easier. Just rub it with the same rub, and put in your smoker for about 2.5 hours at 250 F. That’s it. Low-maintenance. It comes out like this, with a fantastic smoke ring and amazing flavor. Thanks, and I will put up another good tip, about the ZFSSA, around the end of November. Steve 

    Read the article

  • CPU Wars Is a Trump-Style Card Game Driven by Chip Stats

    - by Jason Fitzpatrick
    If you’re looking for the geekiest card game around, you’d be hard pressed to beat CPU Wars–a top-trumps card game built around CPU specs. From the game’s designers: CPU Wars is a trump card game built by geeks for geeks. For Volume 1.0 we chose 30 CPUs that we believe had the greatest impact on the desktop history. The game is ideally played by 2 or 3 people. The deck is split between the players and then each player takes a turn and picks a category that they think has the best value. We have chosen the most important specs that could be numerically represented, such as maximum speed achieved and maximum number of transistors. It’s lots of fun, it has a bit of strategy and can be played during a break or over a coffee. If you’re interested, you can pick up a copy for £7.99 (roughly $12.50 USD). Hit up the link below for more information.

    Read the article

  • Hybrid Graphics on Windows 7/Ubuntu 12.04 Dual Boot

    - by Noob.
    Alright, so here's the situation: I am using an ASUS UL80VT with two graphics cards: integrated Intel graphics and an NVIDIA G210M. I was running an Ubuntu 12.04 - Windows 7 dual boot (on separate partitions). The machine worked perfectly (including the display drivers) without me needing to install anything special or change any settings. However, my hard drive was corrupted and I lost all my data yesterday, so after it was replaced, I installed Ubuntu 12.04 64-bit again after installing Windows 7. I booted up Ubuntu after installation, and noticed it was by default using Unity 2D... Gnome 3.4 wasn't working properly either, so I guessed that the NVIDIA G210M driver wasn't installed/working and the OS was instead using the integrated graphics. I checked the "Additional Drivers" tool, but there were no proprietary drivers listed there, so I went to the NVIDIA website, downloaded the driver directly and installed it. I restarted, but there was no change. After this, I read somewhere that I should change my SATA mode in the BIOS to "Compatible" rather than "Enhanced". This worked fine and fixed the problem (both Unity and Gnome were working perfectly), but then when I tried booting up Windows 7, I received the BSOD. So I changed it back to Enhanced, and once again, the NVIDIA G210M graphics isn't working on Ubuntu, but on Windows 7 it is. I do not want to keep changing from Enhanced to Compatible every time I reboot to Ubuntu, and neither do I want to simply just use one OS. Note that the NVIDIA G210M and integrated graphics work perfectly on Windows 7. Also, I don't care about switching between them, I just want to be able to use the NVIDIA one. What can I do so that both Windows 7 and Ubuntu work and the NVIDIA G210M works on Ubuntu?

    Read the article

  • How do I capture a 10053 trace for a SQL statement called in a PL/SQL package?

    - by Maria Colgan
    Traditionally, if you wanted to capture an Optimizer trace (10053) for a SQL statement, you would issue an alter session command to switch on a 10053 trace for that entire session, and then issue the SQL statement you wanted to capture the trace for. Once the statement completed you would exit the session to disable the trace. You would then look in the USER_DUMP_DEST directory for the trace file. But what if the SQL statement you were interested in was actually called as part of a PL/SQL package? Oracle Database 11g introduced a new diagnostic events infrastructure, which greatly simplifies the task of generating a 10053 trace for a specific SQL statement in a PL/SQL package. All you will need to know is the SQL_ID for the statement you are interested in. Instead of turning on the trace event for the entire session you can now switch it on for a specific SQL_ID. Oracle will then capture a 10053 trace for the corresponding SQL statement when it is issued in that session. Remember the SQL statement still has to be hard parsed for the 10053 trace to be generated. Let's begin our example by creating a PL/SQL package called 'cal_total_sales'. The SQL statement we are interested in is the same as the one in our original example, SELECT SUM(AMOUNT_SOLD) FROM SALES WHERE CUST_ID = :B1. We need to know the SQL_ID of this SQL statement to set up the trace, and we can find it in V$SQL. We now have everything we need to generate the trace. Finally, you would look in the USER_DUMP_DEST directory for the trace file with the name you specified. Maria Colgan
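
    A hedged sketch of what this looks like when driven from a client session follows (JDBC is used purely for illustration; the connect string, credentials, and SQL_ID are placeholders, and the diagnostic-event string should be checked against your own release):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class OptimizerTraceSketch {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "password");
                     Statement stmt = conn.createStatement()) {

                    // Switch the optimizer (10053) trace on for one SQL_ID only,
                    // rather than for the whole session.
                    stmt.execute("alter session set events "
                               + "'trace[rdbms.SQL_Optimizer.*][sql:<sql_id>]'");

                    // ... invoke the cal_total_sales package here; when the statement with
                    // that SQL_ID is hard parsed in this session, the optimizer trace file
                    // is written to USER_DUMP_DEST ...
                }
            }
        }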

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern web frameworks that pretend to be MVC ones also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view). With some transformers in between (a.k.a. controllers). I understand why declarativeness is kind of appealing, but also understand that, as usual, it comes with a price. In particular: 1. Live binding is quite heavy, either with dirty watching (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios). 2. There is a "frame" part in the word framework, which means there are some boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. Quite a usual time split: 90% of features are made in 10% of project time, but the remaining 10% take 90% of the project time. I suspect (a.k.a. educated guess) that those MVC things are not helping to implement more functionality in less time... If so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. The implementation of the same thing in AngularJS is about 420 LOC. Most of the code there seems like a fight with the framework itself... So here is my question: what benefits does that MVC stuff or data binding give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, then what exactly?
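
    To make the "dirty watch" cost concrete, here is a toy sketch (in Java rather than JavaScript, and every name in it is invented): each digest pass re-evaluates every watched expression whether or not anything changed, which is exactly the CPU overhead mentioned above.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Objects;
        import java.util.function.Consumer;
        import java.util.function.Supplier;

        public final class DirtyCheckingScope {

            private static final class Watch {
                final Supplier<Object> expression;
                final Consumer<Object> listener;
                Object lastValue;

                Watch(Supplier<Object> expression, Consumer<Object> listener) {
                    this.expression = expression;
                    this.listener = listener;
                }
            }

            private final List<Watch> watches = new ArrayList<>();

            public void watch(Supplier<Object> expression, Consumer<Object> listener) {
                watches.add(new Watch(expression, listener));
            }

            // One digest pass: every watched expression is re-evaluated, changed or not.
            public void digest() {
                for (Watch w : watches) {
                    Object current = w.expression.get();
                    if (!Objects.equals(current, w.lastValue)) {
                        w.lastValue = current;
                        w.listener.accept(current);
                    }
                }
            }

            public static void main(String[] args) {
                String[] model = {"initial"};
                DirtyCheckingScope scope = new DirtyCheckingScope();
                scope.watch(() -> model[0], value -> System.out.println("view updated: " + value));

                scope.digest();         // prints "view updated: initial"
                model[0] = "changed";
                scope.digest();         // prints "view updated: changed"
                scope.digest();         // prints nothing, but the expression was still re-checked
            }
        }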

    Read the article

  • Troubleshooting Your Network with Oracle Linux

    - by rickramsey
    Are you afraid of network problems? I was. Whenever somebody said "it's probably the network," I went to lunch. And hoped that it was fixed by the time I got back. Turns out it wasn't that hard to do a little basic troubleshooting Tech Article: Troubleshooting Your Network with Oracle Linux by Robert Chase You're no doubt already familiar with ping. Even I knew how to use ping. Turns out there's another command that can show you not just whether a system can respond over the network, but the path the packets to that system take. Our blogging platform won't allow me to write the name down, but I can tell you that if you replace the x in this word with an e, you'll have the right command: tracxroute Once you get used to those, you can venture into the realms of mtr, nmap, and netcap. Robert Chase explains how each one can help you troubleshoot the network, and provides examples for how to use them. Robert is not only a solid writer, he is also a brilliant motorcyclist and rides an MV Augusta F4 750. About the Photograph Photo of flowers in San Simeon, California, taken by Rick Ramsey on a ride home from the Sun Reunion in May 2014. - Rick Follow me on: Personal Blog | Personal Twitter   Follow OTN Garage on: Web | Facebook | Twitter | YouTube

    Read the article

  • Looking for a simple web interface with subversion support and ticket /issue tracker [closed]

    - by Stefan Andre Brannfjell
    I am working on a small project and we have a few programmers on the job. We are using Subversion to commit updates and keep all developers up to date on their workstations. However, we have yet to find a suitable web interface to use for it. I have tried Redmine, but the installation process was extremely bothersome and advanced. Once I got it to work I found out that it was slow and did not meet my expectations, and it also seems a bit complex for our needs. I would prefer to find a solution that supports the lighttpd web server; however, that seems to be very hard to come by - those I have found seem to only have Apache support. Functionality I wish for the website: - login to an svn account - view svn logs - view & create issues, todo lists etc - view svn diffs Do you have any open source recommendations that I can try out? I will appreciate any kind of reply. :) Edit: I wish to host the website on our own servers.

    Read the article

  • 12.04 grub unable to boot on /sde, upgrade-grub and boot-repair failed, please help

    - by VGR
    My problem is I've 4 disks in a RAID array listed as sda, sdb... sdd, and grub 2 refuses to boot on /sde (the 5th disk, standalone and containing a clean install of 12.04 64-bit). I tried all solutions but all fail (live CD/USB with grub-setup, also tried repair-grub, and also tried, in the "grub rescue" shell, set prefix= etc.). I also tried to deactivate the RAID array in the BIOS, but I'd rather not destroy it, and I didn't find a way to make the standalone disk appear as '/sda1' (this would satisfy grub). In the BIOS, the would-be /sda is the only bootable hard disk; it ends up as /sde and grub complains. I've made repair-grub issue a pastebin. I always end up in grub rescue and I'm stuck. I need Ubuntu to boot so that I can add the device array handler for my disks. I can't switch the disks and I can't disconnect the SATA RAID controller. I need: (a) a workaround so that grub starts on /sde; or (b) a way to change the order in which Ubuntu sees the disks at boot time. I could then provide grub with a /sda1. Thanks a lot. (Bumping this.) It's not the same problem as booting Ubuntu from RAID; my RAID array serves only as a data repository, and Windows had no problem with this configuration.

    Read the article
