Search Results


  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about half way through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~ mid 2008). Overall, my memory of the interview was very positive, but after close to a three month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait) given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc and I'm starting to get pretty discouraged. Even though I could technically get 'back-on-track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take for them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well paying work. IE: I'm confident I could switch fully back into C++ mode with a couple weeks work (I mostly use C,Python,C# daily) but I don't list myself as being 'proficient' in C++ on my CV, or applying for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV and I would like feedback on: I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took a undergrad and grad level CS course at UCSD and completely killed them, but I don't know how to translate that to my CV effectively. I have a PhD, but it's not in CS... 
I've been debating if I should remove it from my CV, and whether or not it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was). I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directed for ~8 years I can still take marching orders when needed? Do I just say so outright? Should I just become a lot less scrupulous about the whole process? An anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine, though.

    Read the article

  • Getting WAP embedded video in android AND iphone?

    - by Jon
    Recently a client asked me to make their site "work on smart phones", which normally wouldn't be too much of an issue... However, it's a video site, and I have absolutely no idea where to even begin. Right off the bat I'm not even going to consider allowing the site to function in anything other than Android (maybe even 2.0+) and iPhone, maybe BlackBerry and WinMo. But beyond that... What do I do? I'm looking at using the <video> tag, but I'm unsure which codecs, if any, each phone supports. Is HTML5 even adopted in their browsers yet? Could someone please point me in the right direction? Am I going about this the right way, using the <video> tag? Or is there some magical HTML element supported by both iPhone and Android (and BB and WinMo) that lets them run video in their native video players (like on YouTube)?
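
    For what it's worth, a minimal sketch of the markup that approach usually boils down to: a single H.264/AAC MP4 file referenced from the HTML5 video element (both iPhone and Android of that era hand it off to their native players), with a plain link as the fallback. The file name and dimensions are placeholders:

        <video src="clip.mp4" width="480" height="320" controls>
          <!-- Shown by browsers without HTML5 video support -->
          <a href="clip.mp4">Download the video</a>
        </video>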

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers who have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous through the use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempted fixes are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following:

    Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in the expected order. Here's a very simple example:

        void Foo::OnHttpRequestComplete(statuscode status) {
            m_pBar->DoSomethingImportant(status);
        }

        void Foo::Shutdown() {
            m_pBar->Cleanup();
            delete m_pBar;
            m_pBar = nullptr;
        }

    So now we have a bug in which Shutdown can get called while OnHttpRequestComplete is still executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer, who in turn fixes it like this:

        void Foo::OnHttpRequestComplete(statuscode status) {
            AutoLock lock(m_cs);
            m_pBar->DoSomethingImportant(status);
        }

        void Foo::Shutdown() {
            AutoLock lock(m_cs);
            m_pBar->Cleanup();
            delete m_pBar;
            m_pBar = nullptr;
        }

    The above fix looks good until you realize there's an even more subtle edge case: what happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process.

    Common mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case where the object just got updated by the other thread.

    Common mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer but forgets to wait for the thread that is still running to release its instance. As a result, components are shut down cleanly, and then spurious or late callbacks are invoked on an object in a state that isn't expecting any more calls.

    There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer and working out a more appropriate fix. But I suspect they are often confused about how to solve each issue because of the enormous amount of legacy code that the "right" fix would involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just rewrite everything, and the majority of the code isn't all that bad. But I'm looking to refactor the code so that threading issues can be avoided altogether. One approach I am considering is this:
For each significant platform feature, have a dedicated single thread that all events and network callbacks get marshalled onto - similar to COM apartment threading in Windows, with a message loop. Long blocking operations could still get dispatched to a worker pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following: literature or papers on design patterns around threads (something beyond an introduction to mutexes and semaphores - we don't need massive parallelism, just ways to design an object model so it handles asynchronous events from other threads correctly); ways to diagram the threading of various components, so that it is easy to study and evolve solutions (that is, a UML equivalent for discussing threads across objects and classes); and educating your development team on the issues with multithreaded code. What would you do?
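
    A minimal sketch of that dedicated-component-thread idea, assuming C++11; ComponentThread, Post, and Run are hypothetical names, not anything from the poster's code base. Every callback is marshalled onto one thread per component, so the classes behind it can be written as if the world were single-threaded:

        #include <condition_variable>
        #include <functional>
        #include <mutex>
        #include <queue>
        #include <thread>

        // One dispatcher per component: every callback runs on this single thread.
        class ComponentThread {
        public:
            ComponentThread() : worker_([this] { Run(); }) {}
            ~ComponentThread() {                    // assumes no further Post() calls once shutdown starts
                Post([this] { done_ = true; });     // stop marker runs after all pending work, in order
                worker_.join();
            }
            // Marshal a callback onto the component's thread (safe to call from any thread).
            void Post(std::function<void()> task) {
                {
                    std::lock_guard<std::mutex> lock(m_);
                    q_.push(std::move(task));
                }
                cv_.notify_one();
            }
        private:
            void Run() {
                while (!done_) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return !q_.empty(); });
                        task = std::move(q_.front());
                        q_.pop();
                    }
                    task();                         // executes on the component thread only
                }
            }
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<std::function<void()>> q_;
            bool done_ = false;
            std::thread worker_;
        };

    Long operations would still run on the existing thread pool; only their completion lambdas get posted back, e.g. fooThread.Post([=] { foo->OnHttpRequestComplete(status); });, so Foo itself never has to reason about locks or call ordering across threads.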

    Read the article

  • Toggling row colors

    - by cf_PhillipSenn
    I have a table cell in the footer that allows the user to turn on row coloring: $('#highlight').click(function() { $(this).parents('table').RowColors(); }) // From Chapter 7 of Learning jQuery $.fn.RowColors = function() { $('tbody tr:odd', this).removeClass('even').addClass('odd'); $('tbody tr:even', this).removeClass('odd').addClass('even'); return this; }; Q: How do I write a selector that says: IF there is at least 1 row with class="even", then remove both "even" and "odd" ELSE execute the RowColors function.
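
    A sketch of one way to express that toggle, reusing the RowColors plugin and the #highlight cell from the excerpt above (class names as in the question):

        $('#highlight').click(function() {
          var $table = $(this).parents('table');
          // If any body row already carries a striping class, clear both classes...
          if ($table.find('tbody tr.even').length > 0) {
            $table.find('tbody tr').removeClass('even odd');
          } else {
            // ...otherwise apply the striping.
            $table.RowColors();
          }
        });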

    Read the article

  • Tech Ed/BI Conference 2010: A Recovering Industry in a Recovering City

    - by andrewbrust
    I tried writing a post for this blog last night, while at the this year’s Microsoft Tech Ed and Business Intelligence conferences, in New Orleans. But I literally fell asleep while writing it.  That’s probably a sign that my readers might have done the same while reading it. Why the writer’s block? This was a very good show for me, but I think I was having trouble figuring out exactly why.  Now that I’m on the flight home, I’m starting to piece it together. One reason, for sure, was that I’ve spent years in both the developer and the BI worlds, and a show that combined the two was really enjoyable for me.  Typically, the subject matter, the attendees, the Microsoft execs and managers, and even the social circles have been separate.  This year’s Tech Ed facilitated a fusion of each of these previously segregated groups.  That was good for me as a speaker; for example, I facilitated a Birds of a Feather session on PowerPivot (Microsoft’s new self-service BI offering) which was well-attended, and by a large number of non-BI pros.  The fusion was good for me as an attendee too, as Microsoft BI, in the form of a new Pivot Viewer control, made it into the Day 1 keynote, demoed by Microsoft’s key BI champion, Amir Netz.  And it was good for me socially, as I was able to meet with peers in both camps, and at one location. Speaking of meeting with industry colleagues, I did a lot of that at this show.  Probably for the first time ever, I carefully scheduled and conducted a series of meetings with friends and business acquaintances in the developer tools, data visualization, utilities, publishing and training areas of the Microsoft ecosystem.  Beside the time efficiencies in conducting so many meetings, I discovered another benefit. I got a real handle on the tech industry’s economic health. The news here is good.  First of all, 2010 has been a great year for just about everyone I spoke to.  The mood is positive, energy is high, and people are working really hard.  This is, of course, refreshing to see, and it’s a huge relief.  Add to that the fact that this year’s Tech Ed was about 2.5 times larger in headcount than last year’s (based on numbers from unofficial, but reliable, sources), and the economic prognosis seems excellent.  But there’s more to it than that. Here’s the thing: everyone I talked to seems to be working, and succeeding, at changing their business models to adapt to changes in the industry.  Whether it’s the Internet’s impact on publishing and training, the increased importance of the developer audience in South Asia, the shift of affordable developer and business talent to unfamiliar locales abroad, or even lapses in Microsoft’s performance in the market, partner companies aren’t just rolling with the punches; they’re welcoming the changes and working them to their advantage.  No one seemed downtrodden, or even fatigued.  Even for businesses who have seen core revenue streams become commoditized, everyone seems to be changing their market strategy and winning.  Even Microsoft, of whom I have been critical recently, showed signs of successful hard work and playbook change, in the maturing of their cloud strategy, their commitment to it and their excitement around it.  And the embedded, managed, self-service BI strategy that Microsoft has been touting looks like it’s already being embraced by customers, even though PowerPivot, and other new Microsoft BI products, were released only recently. 
The collective optimism I have witnessed, and that I have felt, tells me good things about this industry and the economy.  The stock market had huge mood swings during my stay, and that may yet subdue the industry recovery I have seen this week.  Nonetheless, I am convinced that a strong foundation of hard work, innovative thinking and, if I may,  true renaissance is underlying this industry’s success. That kind of strength will generate a strong recovery, I am certain, whether now or once we’re past another round of choppy weather in the broader economy.  The fundamentals are good.

    Read the article

  • What is a more "ruby way" to write this code?

    - by steadfastbuck
    This was a homework assignment for my students (I am a teaching assistant) in C, and I am trying to learn Ruby, so I thought I would code it up. The goal is to read integers from a redirected file and print some simple information. The first line in the file is the number of elements, and then each integer resides on its own line. This code works (although perhaps inefficiently), but how can I make the code more Ruby-like?

        #!/usr/bin/ruby -w

        # first line is number of inputs (Don't need it)
        num_inputs = STDIN.gets.to_i

        # read inputs as ints
        h = Hash.new
        STDIN.each do |n|
          n = n.to_i
          h[n] = 1 unless h[n] and h[n] += 1
        end

        # find smallest mode
        h.sort.each do |k,v|
          break puts "Mode is: #{k}", "\n" if v == h.values.max
        end

        # mode unique?
        v = h.values.sort
        print "Mode is unique: "
        puts v.pop == v.pop, "\n"

        # print number of singleton odds,
        # odd elems repeated odd number times in desc order
        # even singletons in desc order
        odd_once = 0
        odd = Array.new
        even = Array.new
        h.each_pair do |k, v|
          odd_once += 1 if v == 1 and k.odd?
          odd << k if v.odd?
          even << k if v == 1 and k.even?
        end
        puts "Number of elements with an odd value that appear only once: #{odd_once}", "\n"
        puts "Elements repeated an odd number of times:"
        puts odd.sort.reverse, "\n"
        puts "Elements with an even value that appear exactly once:"
        puts even.sort.reverse, "\n"

        # print fib numbers in the hash
        class Fixnum
          def is_fib?
            l, h = 0, 1
            while h <= self
              return true if h == self
              l, h = h, l+h
            end
          end
        end

        puts "Fibonacci numbers:"
        h.keys.sort.each do |n|
          puts n if n.is_fib?
        end
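
    As a sketch of the direction an answer might take (not the poster's code), the counting and mode logic can lean on a Hash with a default value instead of the unless trick; this assumes the same stdin format and uses only long-standing core methods:

        # The first line is the element count; read it and ignore it, as before.
        num_inputs = STDIN.gets.to_i

        # Frequency table: the default value 0 removes the need for the unless/and dance.
        h = Hash.new(0)
        STDIN.each { |line| h[line.to_i] += 1 }

        # Smallest mode: take the highest count, then the smallest key that reaches it.
        max_count = h.values.max
        mode = h.select { |k, v| v == max_count }.map { |k, v| k }.min
        puts "Mode is: #{mode}"

        # The mode is unique when only one key reaches that count.
        puts "Mode is unique: #{h.values.count(max_count) == 1}"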

    Read the article

  • When should complexity be removed?

    - by ElGringoGrande
    Prematurely introducing complexity by implementing design patterns before they are needed is not good practice. But if you follow all (or even most) of the SOLID principles and use common design patterns, you will introduce some complexity as features and requirements are added or changed, to keep your design as maintainable and flexible as needed. However, once that complexity is introduced and working like a champ, when do you remove it? Example: I have an application written for a client. When originally created, there were several ways to give raises to employees. I used the strategy pattern and a factory to keep the whole process nice and clean. Over time, certain raise methods were added or removed by the application owner. Time passes and a new owner takes over. This new owner is hard-nosed, keeps everything simple, and has only one single way to give a raise. The flexibility provided by the strategy pattern is no longer needed. If I were to code this from the requirements as they are now, I would not introduce this extra complexity (but I would make sure I could introduce it with little or no work should the need arise). So do I remove the strategy implementation now? I don't think this new owner will ever change how raises are given, but the application itself has demonstrated that this could happen. Of course, this is just one example in an application where a new owner has taken over and simplified many processes. I could remove dozens of classes, interfaces, and factories and make the whole application much simpler. Note that the current implementation works just fine and the owner is happy with it (and surprised, and even happier, that I was able to implement her changes so quickly because of the discussed complexity). I admit that a small part of this doubt is because it is highly likely the new owner isn't going to use me any longer. I don't really care that somebody else will take this over, since it has not been a big income generator. But I do care about two (related) things. I care a bit that the new maintainer will have to think a bit harder when trying to understand the code. Complexity is complexity, and I don't want to anger the psycho maniac coming after me. But even more, I worry about a competitor seeing this complexity and thinking I just implement design patterns to pad my hours on jobs, then spreading this rumor to hurt my other business. (I have heard this mentioned.) So... In general, should previously needed complexity be removed even though it works, there has been a historically demonstrated need for it, but you have no indication that it will be needed in the future? Even if the answer to that is generally "no", is it wise to remove this "un-needed" complexity when handing off the project to a competitor (or stranger)?

    Read the article

  • 12.04 Unity 3D 80% CPU load with Compiz

    - by user39288
    EDIT : I have been able to to determine that the problem is not compiz, but is actually Xorg. I don't know why, but by quickly maximizing terminal and taking a screenshot with top running before the problem went away I am able to see xorg takes up 72% of cpu, with bamfdaemon taking up 18%, and compiz taking up 14%. Seems the nvidia drivers are to blame, will play more with settings and perhaps do a clean nvidia-current install to try to fix the problem. Having a very annoying problem with high CPU usage. Running 12.04 with latest drivers and nvidia-current installed. Have not had any issues for days, now I have a strange problem. Unity 3d runs great most of the time, 1-2% CPU usage with only transmission running in background. Windows open and close smoothly. However,no matter what programs are open, if I minimize all open programs to the unity bar on the left, my CPU jumps to about 80% and slows down all maximize and minimize effects. Mouse movement stays smooth the whole time, but unity becomes unresponsive for up to 30 seconds at times. Hitting alt + tab to bring up even a single window fixes the problem. The window I bring back up doesn't even have to be maximized to solve the problem. Hitting the super button to bring up the dash makes CPU drop back to idle until I close it, then high CPU usage resumes. Believe the problem is compiz, but even just having only terminal running "top", I have to minimize it to the tray for the problem to show, so I can't see the problem process. I can only tell about the high CPU usage using indicator-sysmonitor. Even tried quitting the indicator, but I can still tell very poor performance with all applications when minimized. Reset compiz back to defaults, tried going to the post-release update nvidia drivers, played with vsync settings in both the nvidia settings and compiz. Even forced refresh rate, but cannot solve the problem. The problem does NOT occur in Unity 2D. Specs are core 2 duo 2.0ghz, 4GB ddr2 ram, 2x 320's HDD in RAID 0, and Nvidia GTX 260M graphics card.
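
    One way to catch the offending process without keeping a terminal maximized is to log top in batch mode and read the file afterwards; a small sketch, with the log path, interval, and sample count chosen arbitrarily:

        # Sample the process list every 2 seconds, 30 times, into a log file.
        top -b -d 2 -n 30 > ~/top-log.txt &
        # ...reproduce the problem (minimize everything), then inspect the snapshots:
        grep -A 15 "^top -" ~/top-log.txt | less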

    Read the article

  • What's a good way to get an IT internship? [closed]

    - by user1419715
    I'm a second year CS student who's worked really hard to build and expand my skills. I've spent the past week now trying to find a place to volunteer (i.e. work for FREE) so I can get a little bit of in-the-door experience with web development. I have a portfolio with several decent projects, a handful of languages and other hard/soft skills that employers constantly say they're clamoring for. I can't even get people to take my calls. This is me offering to work for them for FREE, remember. I'm in a reputable program at a respected school, get decent grades and...yeah, I've worked really hard to be presentable. On the rare occassions I actually get to speak to somebody at a design firm they hedge and do everything they can to get me off the phone. Nobody's ever expressed even the slightest interest in taking me on. The answer to the experience problem is supposed to be "you need to spend a year or two building up a big portfolio of projects on your own" so that employers will be impressed. I've done that. Websites, standalone apps, etc.. Nobody will even look at my resume, though. Question: Why does there seem to be so little interest in taking on upaid interns in the world of IT? Update: Sorry you all think I'm too aggressive or angry. It wasn't my intent to be a jerk to people while asking them for their opinions. That said, how would you feel if employer after employer turned you down cold when you offered yourself to them without asking for remuneration? One can't even get an unpaid job in this economy now, it seems. How am I going about my search? I find web firms in my area and contact them via email with a brief sales pitch of myself and a resume attached. Then a couple of days later I follow up with a phone contact. Nobody--anywhere--is advertising for interns of any kind. If there were I'm sure there'd be about 500 resumes per position, even unpaid. I've had good experiences in the past with cold-calling firms for actual paid jobs in other industries (hiring is a pain in the ass process and a call like this can show initiative while reducing a busy employer's need to do all the hiring overhead work), so I thought volunteering would work at least as well. My skills are pretty good for a CS student and include the usual suspects: HTML/CSS/Javascript, Python, Java, C, C#/.Net etc etc. I made a point on my resume to tie each ability claim to a project as well. Oh, and regarding the "working for free still costs the employer money" argument: that's an excellent point I hadn't though of. But it means...what? I have to pay the employer for the privilege of working there now?

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There’s nothing quite as satisfying as the smooth and crisp action of a well built keyboard. If you’re tired of  mushy keys and cheap feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through the paces. What is the CODE Keyboard? The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard. Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those that didn’t start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE. Setting Up the CODE Keyboard Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels in on the underside of the keyboard to  route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripherals standards they’re huge (and there is six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down the rubber feet work so well. 
After you’ve secured the cable and adjusted it to your liking, there is one more task  before plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside: The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac-functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter). Design, Layout, and Backlighting The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right hand side). We identify the layout as traditional because, despite some modern trapping and sneaky shortcuts, the actual form factor of the keyboard from the shape of the keys to the spacing and position is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller than normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting but the key switches the keys themselves are attached to are mounted to a steel plate with white paint. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB board beneath the steel plate, and the thick ABS plastic housing, the keyboard has very solid feel to it. 
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use. Examining The Keys This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, the register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically the CODE features Cherry MX Clear switches. These switches feature the same classic design of the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn’t to say that they keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but that the key mechanism isn’t doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke and provides a welcome speed boost. Even if you’re not trying to break any word-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switch. Rubber dome switch membrane keyboards are typically rated for 5-10 million contacts whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace  and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classicly styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove: Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off; You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard. 
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards and even gaming keyboards typically only have any sort of key roller on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out type the keyboard as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and the CTRL, ALT, and SHIFT don’t count towards the 6) so realistically you still won’t be able to out type the computer as even the more finger twisting keyboard combos and high speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist and that kind of quality construction and high-number key rollover is a fantastic feature.  The Good, The Bad, and the Verdict We’ve put the CODE keyboard through the paces, we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard. The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offer a really nice middle-ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard without sacrificing quality. Touch typists will love the subtle tactile bump feedback. Dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating system and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intesity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra thick rubber feet keep it planted exactly where you place it on the desk. The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be off put by the Fn-key style media controls on the CODE. The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists. 
Whether you’re a programmer, transcriptionist, or just somebody that wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable, you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, back lighting, and hardware-based operating system keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard) there’s no good reason to not pickup a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.       

    Read the article

  • eBooks on iPad vs. Kindle: More Debate than Smackdown

    - by andrewbrust
    When the iPad was presented at its San Francisco launch event on January 28th, Steve Jobs spent a significant amount of time explaining how well the device would serve as an eBook reader. He showed the iBooks reader application and iBookstore and laid down the gauntlet before Amazon and its beloved Kindle device. Almost immediately afterwards, criticism came rushing forth that the iPad could never beat the Kindle for book reading. The curious part of that criticism is that virtually no one offering it had actually used the iPad yet. A few weeks later, on April 3rd, the iPad was released for sale in the United States. I bought one on that day and in the few additional weeks that have elapsed, I’ve given quite a workout to most of its capabilities, including its eBook features. I’ve also spent some time with the Kindle, albeit a first-generation model, to see how it actually compares to the iPad. I had some expectations going in, but I came away with conclusions about each device that were more scenario-based than absolute. I present my findings to you here.   Vital Statistics Let’s start with an inventory of each device’s underlying technology. The iPad has a color, backlit LCD screen and an on-screen keyboard. It has a battery which, on a full charge, lasts anywhere from 6-10 hours. The Kindle offers a monochrome, reflective E Ink display, a physical keyboard and a battery that on my first gen loaner unit can go up to a week between charges (Amazon claims the battery on the Kindle 2 can last up to 2 weeks on a single charge). The Kindle connects to Amazon’s Kindle Store using a 3G modem (the technology and network vary depending on the model) that incurs no airtime service charges whatsoever. The iPad units that are on-sale today work over WiFi only. 3G-equipped models will be on sale shortly and will command a $130 premium over their WiFi-only counterparts. 3G service on the iPad, in the U.S. from AT&T, will be fee-based, with a 250MB plan at $14.99 per month and an unlimited plan at $29.99. No contract is required for 3G service. All these tech specs aside, I think a more useful observation is that the iPad is a multi-purpose Internet-connected entertainment device, while the Kindle is a dedicated reading device. The question is whether those differences in design and intended use create a clear-cut winner for reading electronic publications. Let’s take a look at each device, in isolation, now.   Kindle To me, what’s most innovative about the Kindle is its E Ink display. E Ink really looks like ink on a sheet of paper. It requires no backlight, it’s fully visible in direct sunlight and it causes almost none of the eyestrain that LCD-based computer display technology (like that used on the iPad) does. It’s really versatile in an all-around way. Forgive me if this sounds precious, but reading on it is really a joy. In fact, it’s a genuinely relaxing experience. Through the Kindle Store, Amazon allows users to download books (including audio books), magazines, newspapers and blog feeds. Books and magazines can be purchased either on a single-issue basis or as an annual subscription. Books, of course, are purchased singly. Oddly, blogs are not free, but instead carry a monthly subscription fee, typically $1.99. To me this is ludicrous, but I suppose the free 3G service is partially to blame. Books and magazine issues download quickly. Magazine and blog subscriptions cause new issues or posts to be pushed to your device on an automated basis. 
Available blogs include 9000-odd feeds that Amazon offers on the Kindle Store; unless I missed something, arbitrary RSS feeds are not supported (though there are third party workarounds to this limitation). The shopping experience is integrated well, has an huge selection, and offers certain graphical perks. For example, magazine and newspaper logos are displayed in menus, and book cover thumbnails appear as well. A simple search mechanism is provided and text entry through the physical keyboard is relatively painless. It’s very easy and straightforward to enter the store, find something you like and start reading it quickly. If you know what you’re looking for, it’s even faster. Given Kindle’s high portability, very reliable battery, instant-on capability and highly integrated content acquisition, it makes reading on whim, and in random spurts of downtime, very attractive. The Kindle’s home screen lists all of your publications, and easily lets you select one, then start reading it. Once opened, publications display in crisp, attractive text that is adjustable in size. “Turning” pages is achieved through buttons dedicated to the task. Notes can be recorded, bookmarks can be saved and pages can be saved as clippings. I am not an avid book reader, and yet I found the Kindle made it really fun, convenient and soothing to read. There’s something about the easy access to the material and the simplicity of the display that makes the Kindle seduce you into chilling out and reading page after page. On the other hand, the Kindle has an awkward navigation interface. While menus are displayed clearly on the screen, the method of selecting menu items is tricky: alongside the right-hand edge of the main display is a thin column that acts as a second display. It has a white background, and a scrollable silver cursor that is moved up or down through the use of the device’s scrollwheel. Picking a menu item on the main display involves scrolling the silver cursor to a position parallel to that menu item and pushing the scrollwheel in. This navigation technique creates a disconnect, literally. You don’t really click on a selection so much as you gesture toward it. I got used to this technique quickly, but I didn’t love it. It definitely created a kind of anxiety in me, making me feel the need to speed through menus and get to my destination document quickly. Once there, I could calm down and relax. Books are great on the Kindle. Magazines and newspapers much less so. I found the rendering of photographs, and even illustrations, to be unacceptably crude. For this reason, I expect that reading textbooks on the Kindle may leave students wanting. I found that the original flow and layout of any publication was sacrificed on the Kindle. In effect, browsing a magazine or newspaper was almost impossible. Reading the text of individual articles was enjoyable, but having to read this way made the whole experience much more “a la carte” than cohesive and thematic between articles. I imagine that for academic journals this is ideal, but for consumer publications it imposes a stripped-down, low-fidelity experience that evokes a sense of deprivation. In general, the Kindle is great for reading text. For just about anything else, especially activity that involves exploratory browsing, meandering and short-attention-span reading, it presents a real barrier to entry and adoption. Avid book readers will enjoy the Kindle (if they’re not already). It’s a great device for losing oneself in a book over long sittings. 
Multitaskers who are more interested in periodicals, be they online or off, will like it much less, as they will find compromise, and even sacrifice, to be palpable.   iPad The iPad is a very different device from the Kindle. While the Kindle is oriented to pages of text, the iPad orbits around applications and their interfaces. Be it the pinch and zoom experience in the browser, the rich media features that augment content on news and weather sites, or the ability to interact with social networking services like Twitter, the iPad is versatile. While it shares a slate-like form factor with the Kindle, it’s effectively an elegant personal computer. One of its many features is the iBook application and integration of the iBookstore. But it’s a multi-purpose device. That turns out to be good and bad, depending on what you’re reading. The iBookstore is great for browsing. It’s color, rich animation-laden user interface make it possible to shop for books, rather than merely search and acquire them. Unfortunately, its selection is rather sparse at the moment. If you’re looking for a New York Times bestseller, or other popular titles, you should be OK. If you want to read something more specialized, it’s much harder. Unlike the awkward navigation interface of the Kindle, the iPad offers a nearly flawless touch-screen interface that seduces the user into tinkering and kibitzing every bit as much as the Kindle lulls you into a deep, concentrated read. It’s a dynamic and interactive device, whereas the Kindle is static and passive. The iBook reader is slick and fun. Use the iPad in landscape mode and you can read the book in 2-up (left/right 2-page) display; use it in portrait mode and you can read one page at a time. Rather than clicking a hardware button to turn pages, you simply drag and wipe from right-to-left to flip the single or right-hand page. The page actually travels through an animated path as it would in a physical book. The intuitiveness of the interface is uncanny. The reader also accommodates saving of bookmarks, searching of the text, and the ability to highlight a word and look it up in a dictionary. Pages display brightly and clearly. They’re easy to read. But the backlight and the glare made me less comfortable than I was with the Kindle. The knowledge that completely different applications (including the Web and email and Twitter) were just a few taps away made me antsy and very tempted to task-switch. The knowledge that battery life is an issue created subtle discomfort. If the Kindle makes you feel like you’re in a library reading room, then the iPad makes you feel, at best, like you’re under fluorescent lights at a Barnes and Noble or Borders store. If you’re lucky, you’d be on a couch or at a reading table in the store, but you might also be standing up, in the aisles. Clearly, I didn’t find this conducive to focused and sustained reading. But that may have more to do with my own tendency to read periodicals far more than books, and my neurotic . And, truth be known, the book reading experience, when not explicitly compared to Kindle’s, was still pleasant. It is also important to point out that Kindle Store-sourced books can be read on the iPad through a Kindle reader application, from Amazon, specific to the device. This offered a less rich experience than the iBooks reader, but it was completely adequate. Despite the Kindle brand of the reader, however, it offered little in terms of simulating the reading experience on its namesake device. 
When it comes to periodicals, the iPad wins hands down. Magazines, even if merely scanned images of their print editions, read on the iPad in a way that felt similar to reading hard copy. The full color display, touch navigation and even the ability to render advertisements in their full glory makes the iPad a great way to read through any piece of work that is measured in pages, rather than chapters. There are many ways to get magazines and newspapers onto the iPad, including the Zinio reader, and publication-specific applications like the Wall Street Journal’s and Popular Science’s. The New York Times’ free Editors’ Choice application offers a Times Reader-like interface to a subset of the Gray Lady’s daily content. The completely Web-based but iPad-optimized Times Skimmer site (at www.nytimes.com/timesskimmer) works well too. Even conventional Web sites themselves can be read much like magazines, given the iPad’s ability to zoom in on the text and crop out advertisements on the margins. While the Kindle does have an experimental Web browser, it reminded me a lot of early mobile phone browsers, only in a larger size. For text-heavy sites with simple layout, it works fine. For just about anything else, it becomes more trouble than it’s worth. And given the way magazine articles make me think of things I want to look up online, I think that’s a real liability for the Kindle.   Summing Up What I came to realize is that the Kindle isn’t so much a computer or even an Internet device as it is a printer. While it doesn’t use physical paper, it still renders its content a page at a time, just like a laser printer does, and its output appears strikingly similar. You can read the rendered text, but you can’t interact with it in any way. That’s why the navigation requires a separate cursor display area. And because of the page-oriented rendering behavior, turning pages causes a flash on the display and requires a sometimes long pause before the next page is rendered. The good side of this is that once the page is generated, no battery power is required to display it. That makes for great battery life, optimal viewing under most lighting conditions (as long as there is some light) and low-eyestrain text-centric display of content. The Kindle is highly portable, has an excellent selection in its store and is refreshingly distraction-free. All of this is ideal for reading books. And iPad doesn’t offer any of it. What iPad does offer is versatility, variety, richness and luxury. It’s flush with accoutrements even if it’s low on focused, sustained text display. That makes it inferior to the Kindle for book reading. But that also makes it better than the Kindle for almost everything else. As such, and given that its book reading experience is still decent (even if not superior), I think the iPad will give Kindle a run for its money. True book lovers, and people on a budget, will want the Kindle. People with a robust amount of discretionary income may want both devices. Everyone else who is interested in a slate form factor e-reading device, especially if they also wish to have leisure-friendly Internet access, will likely choose the iPad exclusively. One thing is for sure: iPad has reduced Kindle’s market, and may have shifted its mass market potential to a mere niche play. If Amazon is smart, it will improve its iPad-based Kindle reader app significantly. It can then leverage the iPad channel as a significant market for the Kindle Store. 
After all, selling the eBooks themselves is what Amazon should care most about.

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirement aspects is warranted.

    Aspects of Functionality

    There are two sides to Functionality requirements. It's about what the software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale as the number of operations grows over time. Without automated tests there is no guarantee that formerly correct operations are still correct after more get added. Retesting all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces, Operations and Correctness. With manual tests, more weight is put on the Operations side of the scale. That might be OK for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project.

    Aspects of Quality

    As important as Functionality is, it's not the driver for software development. No software has ever been written just to implement some operation in code. We don't need computers just to do something. Everything computers can do with software, we can do without them - well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could run auctions with millions of people without computers. The only reason we want computers to help us with these and a million other Operations is: we don't want to wait very long for the results. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors. Most important are Primary Qualities. Those are the Qualities the software is truly written for. Take an online auction website, for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only once those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With a password manager software this might be different. There security might be a Primary Quality. Don't get me wrong: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.
Aspects of Security of Investment
Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality, it's bound to suffer under pressure. To look closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency, it will fire back. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If code quality leads to rework, the PE is on an unsatisfactory level. Also if code production leads to waste it's unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still, sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck. 
Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactorings should not be an option. Rather, Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus.
In closing
As you see, requirements are of quite different kinds. Not taking that into account will make it harder to understand the customer and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure to balance all requirement aspects? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don't put them in people's legs just because you're feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically. 
In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough of such tools already exist, like all those UML diagram types or Flow Charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3]
[1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because all of those have a foreseeable end. Software development is like continuously and forever running…
[2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance increases that there is already functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.
[3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.
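As a minimal illustration of the Correctness point above (an editor's sketch, not part of the original post): a tiny automated suite, here written in pytest style against a hypothetical pricing Operation, is what makes re-verifying all previously implemented Operations cheap every time a new one is added.

```python
# Editor's sketch, not from the post: one hypothetical Operation plus two
# automated checks. Running "pytest" re-verifies the Operation after every
# change, which manual retesting cannot do once Operations number in the hundreds.

def calculate_final_price(net: float, vat_rate: float) -> float:
    """Hypothetical Operation: apply VAT to a net price and round to cents."""
    return round(net * (1.0 + vat_rate), 2)


def test_applies_vat():
    assert calculate_final_price(100.0, 0.19) == 119.0


def test_zero_rate_leaves_price_unchanged():
    assert calculate_final_price(42.5, 0.0) == 42.5
```

The point is not the trivial logic but the scaling behavior: the suite grows along with the Operations and keeps all of them checked at roughly constant cost per run.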

    Read the article

  • Case Study: AT&T CFO Looks to Technology to Transform Global Finance

    - by RED League Heroes-Oracle
AT&T is one of the few modern multinationals that has taken part in every earlier stage of telecommunications innovation, from Alexander Graham Bell as a lone inventor, to Bell Labs, to the accelerated launches at the AT&T Foundries. Technology is at the heart of everything AT&T does, including its investments in technological innovation to allow AT&T's finance organization to work more strategically with the business and ensure that investments in growth initiatives succeed.
According to John Stephens, Executive Vice President and Chief Financial Officer of AT&T, the company has laid out a three-year investment plan to improve and expand its wired and wireless IP broadband networks. The plan includes rolling out 4G LTE service to 300 million people in the United States, expanding high-speed IP broadband to some 57 million customer homes, and extending fiber to an additional 1 million business customers in its fixed-line service area. "The need for speed is greater than ever, and this project is our step toward innovation to deliver that speed," says Stephens. As AT&T modernizes its global infrastructure, its operational processes are becoming as powerful as its network. It has been a large and complex task, but Stephens is pleased to say that AT&T's finance department has embraced its role as a corporate catalyst. It began with a simple concept: "Let's get everyone speaking the same language." That led to the consolidation of financial systems inherited from acquired companies. It was no small task, given that the company has gone through more than five major acquisitions and countless other transactions. In 2007, AT&T had 17 applications in the accounts payable function alone. Today, that number is down to two. Likewise, there were 50 official management reporting systems and now there are three, with plans to retire one of them. By having a single financial language across the whole company, AT&T's finance team has eliminated multiple versions of the same data, reducing possible confusion in discussions and in business strategy decisions. These steps have also cut costs and sped up decision making. "The beauty of the systems is that they let talented people with analytical skills spend their time in that area, instead of spending time collecting, aggregating and organizing data," Stephens notes. "We have an efficient and effective process that frees people up to focus on what they are really good at. And we have a high-quality team, and they are at their best when they are able to do their job in support of the business unit."

    Read the article

  • First thoughts on TypeScript (pt-BR)

    - by srecosta
It is much, much too early for it to be really useful, but it is very promising. Everyone who has worked with JavaScript on applications that make real use of JavaScript (I'm not talking about form validation here, ok?) knows how hard it is for one person, let alone an entire team, to maintain it as it grows. It gets much worse when the level of JavaScript knowledge across the team varies a lot and, eventually, everyone has to get their hands dirty. Can you imagine the amount of JavaScript behind browser applications like Google Maps, Gmail or Outlook? It's insane. And even in applications that just use Ajax and the like, with the screens being assembled by hand and the server acting only as a middleman on the way to the database, it's no small thing. The post continues on my blog at http://www.srecosta.com/2012/11/05/primeiras-consideracoes-sobre-typescript/ Best regards, Eduardo Costa

    Read the article

  • On your marks, get set, go - the blog is launching!

    - by user552636
Dear Customers! I am delighted to launch my blog on the topic of support, with the goal of covering the most important things to know and the latest news about Oracle Support. Through this channel I will also share information about our current support seminars and events, and I will make the materials presented at them available as well. I hope you will find the posts and news useful. It is an honor for me, and I thank you in advance, if you give feedback on the topics that appear here. Happy reading! Gruhala Iza

    Read the article

  • Windows boots to Black Screen of Death — Is my HDD faulty?

    - by Flynn1179
My wife's PC suddenly stopped booting up properly. It gets as far as the Windows 'loading' screen with the bar scrolling away, but that's as far as it gets for some time; then it suddenly flashes up a BSoD barely long enough to see, and the display cuts out. We've got identical PCs, and after swapping components I established that my PC suffers the same problem if I swap in her HDD. Even if I plug hers in as a second HDD on my machine, it still does the same thing, even though it's booting from mine. I can't boot her machine from CD or DVD either, so I couldn't even use a recovery disc. I did manage to partially boot my PC into safe mode with the other HDD attached, but it got as far as loading 'crcdisk.sys' and froze. Does anybody know what could be wrong with it, or at least how I can get the data off the disk? I'm assuming there's still data on the disk, given that it at least shows me the Vista 'loading' screen.

    Read the article

  • Does it actually matter whether you have open applications when installing new software?

    - by Dan
    It seems the norm these days is for installers/setup programs to request that you close all open applications before initiating the install process for a piece of new software. I used to obediently follow these directions without fail, even though it could sometimes be frustrating having to close open documents and stop working on things just to get a new, seemingly unrelated application installed. Then at some point I simply stopped bothering. Nowadays if I have a lot of stuff going on I might even run multiple installers at the same time; I can't even recall a time it has ever posed a problem. Why do setup programs even make this request in the first place, then, when it appears to be unnecessary? Is this just to simplify troubleshooting for companies' support people? Has anyone else ever run into problems as a result of trying to install an app while other apps were open?

    Read the article

  • Black screen before booting up Windows 7

    - by Walking Man
I have this problem where, even before booting into Win7, my laptop gets stuck on a black screen with a blinking keyboard cursor. It started after I formatted one of my other partitions. I have tried everything, including bootrec /fixmbr and bootrec /fixboot, and I have even tried using Ubuntu's GRUB to boot Windows, but unfortunately even GRUB does not detect my Windows installation. I really don't want to do a reinstallation of Windows...

    Read the article

  • Merge two parts of a PDF into one

    - by Yurij73
I have two searchable PDF documents, say even.pdf and odd.pdf, which contain respectively the even and the odd pages of a book. I can burst each PDF into separate single-page files 001.pdf, 002.pdf, 003.pdf, ... The question is how to merge them back into one document with the pages interleaved. Both the even and the odd sequences come out numbered 1, 2, 3, ... If the burst stage with pdftk had produced different numbering - 1, 3, 5 for the odd pages and 2, 4, 6 for the even pages - instead of the existing 1, 2, 3, 4 order, I could simply merge the files by name, but I don't know how to get pdftk to number them that way. Maybe I need to approach the task in some other way?
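One way to do the interleaving without fighting the burst numbering at all is to script it. The sketch below is a suggestion rather than something from the question: it assumes the Python pypdf package (pip install pypdf) and the file names used above, and writes the pages alternately (odd, even, odd, even, ...) into a single book.pdf.

```python
# Sketch: collate odd.pdf (book pages 1, 3, 5, ...) and even.pdf (2, 4, 6, ...)
# into book.pdf without bursting them into single-page files first.
from itertools import zip_longest

from pypdf import PdfReader, PdfWriter

odd = PdfReader("odd.pdf")    # odd-numbered book pages, in order
even = PdfReader("even.pdf")  # even-numbered book pages, in order
writer = PdfWriter()

# Alternate pages 1, 2, 3, 4, ...; zip_longest tolerates one file
# having a single extra page (e.g. a book with an odd total page count).
for odd_page, even_page in zip_longest(odd.pages, even.pages):
    if odd_page is not None:
        writer.add_page(odd_page)
    if even_page is not None:
        writer.add_page(even_page)

with open("book.pdf", "wb") as out:
    writer.write(out)
```

If the installed pdftk is reasonably recent (1.44 or later), its shuffle operation should also collate the two files directly, along the lines of pdftk A=odd.pdf B=even.pdf shuffle A B output book.pdf, with no scripting at all.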

    Read the article

  • Computer Locked after period of time

    - by bossDub
I have a Windows XP machine that locks itself after an unknown amount of time. I have disabled the screen saver completely and unchecked "lock computer when coming out of standby". I've even overridden the settings with gpedit. Even so, it locks itself after a while, and it doesn't take long. If it's any sort of hint, the time period for the screensaver is greyed out, even if I choose an actual screensaver.

    Read the article

  • How do I run Firefox OS as a standalone application?

    - by JamesTheAwesomeDude
    I got the add-on for the Firefox OS simulator, and it works great! It even keeps functioning after Firefox is closed, so I can save processing power for other things. I'd like to run it as a standalone application, so that I don't even have to open Firefox in the first place. I've gone to the System Monitor, and it says that the process (I guessed which by CPU usage and filename) was started via /home/james/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g/plugin-container 3386 true tab, so I tried running that in the Terminal (after I'd closed the simulator, of course,) but it gives this: james@james-OptiPlex-GX620:~/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g$ ./plugin-container 3386 true tab ./plugin-container: error while loading shared libraries: libxpcom.so: cannot open shared object file: No such file or directory james@james-OptiPlex-GX620:~/.mozilla/firefox-trunk/vkuuxfit.default/extensions/[email protected]/resources/r2d2b2g/data/linux64/b2g$ What should I do? Is what I'm attempting even possible? (It should be, since the simulator kept running even after Firefox itself was closed...) NOTE: I've tried chmod u+sx plugin-container, but that didn't help.
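A hedged suggestion rather than a confirmed fix: the libxpcom.so error usually means the dynamic loader cannot find b2g's own shared libraries, which sit next to plugin-container, so pointing LD_LIBRARY_PATH at the b2g directory may get past that particular failure. The sketch below only wraps the command from the question; the path and the arguments (3386 true tab) are copied verbatim from it, and whether plugin-container does anything useful without its parent Firefox process is an open question.

```python
# Sketch: relaunch plugin-container with its own directory on the library path.
# Path and arguments are taken from the question; this only addresses the
# "libxpcom.so: cannot open shared object file" error, nothing more.
import os
import subprocess

B2G_DIR = os.path.expanduser(
    "~/.mozilla/firefox-trunk/vkuuxfit.default/extensions/"
    "[email protected]/resources/r2d2b2g/data/linux64/b2g"
)

env = dict(os.environ, LD_LIBRARY_PATH=B2G_DIR)
result = subprocess.run(
    ["./plugin-container", "3386", "true", "tab"],
    cwd=B2G_DIR,
    env=env,
    check=False,  # just observe what happens instead of raising on failure
)
print("plugin-container exited with", result.returncode)
```

The equivalent one-liner in a terminal would simply export LD_LIBRARY_PATH to that directory before running ./plugin-container from it.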

    Read the article

  • Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.

    - by The Geek
Microsoft's Security Essentials has been our favorite anti-malware application for a while - it's free, unobtrusive, and it doesn't slow your PC down - but now it's even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Just to be clear and direct with you: we absolutely recommend Microsoft Security Essentials as your anti-malware / anti-virus utility over any other option - and how can you argue? It's totally free!
New Features in 2.0
Here are all of the new features in the latest release, which make it even more of a must-download:
Network Traffic Inspection integrates into the network system and monitors traffic at a low level without slowing down your PC, so it can actually detect threats before they get to your PC.
Internet Explorer Integration blocks malicious scripts before IE even starts running them - clearly a big security advantage.
Heuristic Scanning Engine finds malware that hasn't been previously detected by scanning for certain types of attacks. This provides even more protection than virus definitions alone.
These new features put MSE on par with other anti-malware applications, especially the heuristic scanning, which has been the only complaint anybody could make against MSE in the past - but now it has it.

    Read the article

  • Windows Azure Database (SQL Azure) Development Tip

    - by BuckWoody
    When you create something in the cloud, it's real, and you're charged for it. There are free offerings, and you even get free resources with your Microsoft Developer Network (MSDN) subscription, but there are limits within those. Creating a 1 GB database - even with nothing in it - is a 1 GB Database. If you create it, drop it, and create it again 2 minutes later, that's 2 GB of space you've used for the month. Wait - how do I develop in this kind of situation? With Windows Azure, you can simply install the free Software Development Kit (SDK) and develop your entire application for free - you need never even log in to Windows Azure to code. Once you're done, you simply deploy the app and you start making money from the application as you're paying for it. Windows Azure Databases (The Artist Formerly Known As SQL Azure) is a bit different. It's not emulated in the SDK - because it doesn't have to be. It's just SQL Server, with some differences in feature set. To develop in this environment, you can use SQL Server, any edition. Be aware of the feature differences, of course, but just develop away - even in the free "Express" or LocalDB flavors - and then right-click in SQL Server Management Studio to script objects. Script the database, but change the "Advanced" selection to the Engine Type of "SQL Azure". Bing. Although most all T-SQL ports directly, one thing to keep in mind is that you need a Clustered Index on every table. Often the Primary Key (PK) is a good choice for that.
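As an illustration of that last point (an editor's sketch, not part of the original tip): before scripting a locally developed database for the SQL Azure engine type, it can be handy to list any user tables that are still heaps, i.e. have no clustered index, since SQL Azure requires one per table. The sketch assumes the Python pyodbc package and a local SQL Server Express / LocalDB instance; the connection string and database name are placeholders to adjust.

```python
# Sketch: flag tables without a clustered index before targeting SQL Azure.
# Assumes pyodbc and a local SQL Server / LocalDB development database.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=(localdb)\\MSSQLLocalDB;"
    "DATABASE=MyDevDb;"          # hypothetical database name
    "Trusted_Connection=yes;"
)

# In sys.indexes, type = 0 means the table is a heap (no clustered index).
HEAP_QUERY = """
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.type = 0
ORDER BY s.name, t.name;
"""

with pyodbc.connect(CONN_STR) as conn:
    heaps = conn.cursor().execute(HEAP_QUERY).fetchall()

for schema_name, table_name in heaps:
    print(f"Missing clustered index: {schema_name}.{table_name}")

if not heaps:
    print("All user tables have a clustered index - ready to script for SQL Azure.")
```

Often the simplest remedy is exactly what the post suggests: make the primary key the clustered index on each flagged table.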

    Read the article

  • C#.NET (AForge) against Java (JavaCV, JMF) for video processing

    - by Leron
I'm starting to get really confused as I look deeper and deeper into the video processing world and search for the best options. That's the reason for posting this and a few other questions: to try to navigate myself in the best possible way. I really like Java - working with Java, coding in Java - but at the same time C# is not that different from Java, and Visual Studio is maybe the best IDE I've worked with. So even though I want to do my projects in Java so I can become a better and better Java programmer, I'm also really attracted to video processing, and even though I'm still at the beginning of this journey I want to take the right path. So I'm really in doubt whether Java can be used in a production environment for serious video processing software. As the title says, I have already been looking at maybe the two most used technologies for video processing in Java - JMF and JavaCV - and I'm starting to think that even though they are used and provide some functionality, when it comes to real work and real projects they are not the first thing that comes to mind, at least not for someone with a professional opinion on the matter. On the other hand, I haven't had the time to investigate the .NET (C# specifically) options, but even AForge looks like a much more serious library than those available for Java. Either way I'm going to spend a lot of time learning some technology and trying to build something meaningful with it, and my plan is for that to eventually become my headline project - something that represents my skills and helps me find a job in the field. So I really don't want to spend time learning something that gives me the programming result I want but is not actually needed in real-world development. What is your opinion: which language and technology is better for this specific purpose? Which one is worth more in the terms I described above?

    Read the article

  • Yahoo search: different results shown in two identical searches

    - by Marco Demaio
Hello, simple question: searching on http://www.yahoo.it for villa matrimonio bologna, I noticed Yahoo shows different results. You may need to retry a few times to reproduce this, maybe exiting the browser and opening it again, or searching once, clearing the browser cookies and then searching again (it's even easier to test if you use two different browsers at the same time to search for the same phrase). Anyway, to make this easy to reproduce, I'm writing down here the queries shown in the address bar after the search, so you can just click on them to see the results returned for each query: http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=yfp-t-709 http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=sfp Note the last parameter fr is different, but it's Yahoo that sets it (not me); I don't even know what it means. You can see in the search box that the searched phrase is IDENTICAL in both cases. So why is Yahoo giving different results for the same search phrase? I used the same browser and performed the test within a few minutes by simply trying more than once. You may also notice that the number of results returned (shown on the left side of the page) is different: the 1st search returns 274K results, the 2nd one 5.38M results. You might think this is just a glitch on Yahoo's side, but for almost a year now, while occasionally checking how some websites rank on Yahoo and also on Google, I have noticed that two searches on the same phrase show different results even on the same day, a few minutes or hours apart. I couldn't reproduce this behaviour on Google, so I can't say for sure, but since it seems to happen from time to time, I was wondering if any of you have noticed it too. Do you know if this is normal behaviour for search engines? Because if it is normal (and it's just me who only noticed it now), I wonder how you can tell how well a site is ranking on a search engine; you could even see one of your customers' websites ranking differently from what your customer sees on his PC.

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >