Search Results

Search found 606 results on 25 pages for 'frequent'.

Page 12/25 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

    The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

    But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software.

    The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter.

    As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. I'm hell bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Updating an ADF Web Service Data Control When Service Structure or Location Change

    - by Shay Shmeltzer
    The web service data control in Oracle ADF gives you a simplified approach to consuming services in ADF applications, and now with ADF Mobile the usage of this data control seems to be growing. A frequent question we get is what happens if the service that I'm consuming changes - how do I update my data control? Well, first we should mention that if you do a good design of your application before you actually code, then things like Web service method signatures shouldn't change. The signature is the contract between the publisher and the consumer, and contracts shouldn't be broken. But in reality things do change during development stages, so here is how you can update both method signatures and service location with the Web service data control (demonstrated in the video in the original post). After watching this video you might be tempted to not copy the WSDLs to your project - which lets you use the right-click update on a data control. However, there is a reason why the copy is on by default: it reduces network traffic when you are actually running your application, since ADF doesn't need to go to the server to find out the service structure. So for runtime performance, you probably should keep the WSDL local. I encourage you to further look into both the connections.xml file, where your service location is saved, and the datacontrols.dcx file, where its definition is kept, to get an even deeper understanding of how ADF works underneath the declarative layers.

    Read the article

  • Group Matchmaking

    - by Simon Kérouack
    Consider different groups (1 or more players) queuing together; we want to make 2 opposing teams, each containing the same number of players, while keeping the groups together. At the same time we want to make both teams' average ranking as close as possible. Now also consider we have as a working set the subset of groups currently queuing within a given ranking range. For an example, let's say we have the following groups, ordered by queuing time:

    Id, playerCount, totalRank, avgRank
    0, 3, 126, 42
    1, 2, 60, 30
    2, 1, 25, 25
    3, 2, 80, 40
    4, 1, 40, 40
    5, 1, 20, 20
    6, 3, 150, 50

    For this specific subset, the expected output should ideally be:

    team1: 0, 1 (total: 186)
    team2: 2, 5, 6 (total: 195)

    Up to now the solution I have been using is to balance out each team by making each team pick the group with the highest ranking within the subset, turn by turn. The team who picks is the one with the currently lowest average rank, unless one is already full. If one team is already full, the other team tries to complete itself with groups that would make the rank gap as small as possible. This solution turns out to have issues with frequent edge cases and I'm looking for a better solution, or some fine-tuning that could be made. In most cases, players seem to want teams of 5 people and queue in groups of 2. Our average subset when 2 teams of 5 are chosen is made of about 14 players, if that may be of any help.
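    For illustration only, here is a minimal Python sketch of the draft heuristic described above (teams take turns picking the highest-ranked remaining group, with the team that currently has the lower average rank picking next). It reflects the current approach rather than the ideal assignment, and the tie-breaking and the final "close the rank gap" step are simplified assumptions.

        from dataclasses import dataclass

        @dataclass
        class Group:
            id: int
            players: int
            total_rank: int

            @property
            def avg_rank(self):
                return self.total_rank / self.players

        def make_teams(groups, team_size=5):
            """Greedy draft: the highest-ranked remaining group goes to the team
            with the lower average rank, unless that team can no longer fit it."""
            pool = sorted(groups, key=lambda g: g.avg_rank, reverse=True)
            teams = ([], [])

            def count(team):
                return sum(g.players for g in team)

            def avg(team):
                n = count(team)
                return sum(g.total_rank for g in team) / n if n else 0.0

            for group in pool:
                # The team with the lower average rank gets first refusal.
                for i in sorted((0, 1), key=lambda i: avg(teams[i])):
                    if count(teams[i]) + group.players <= team_size:
                        teams[i].append(group)
                        break
            return teams

        if __name__ == "__main__":
            sample = [Group(0, 3, 126), Group(1, 2, 60), Group(2, 1, 25),
                      Group(3, 2, 80), Group(4, 1, 40), Group(5, 1, 20),
                      Group(6, 3, 150)]
            for name, team in zip(("team1", "team2"), make_teams(sample)):
                print(name, [g.id for g in team],
                      "total:", sum(g.total_rank for g in team))

    Run on the sample above, this draft yields {6, 4, 2} against {0, 3} and leaves groups 1 and 5 in the queue - a concrete example of the edge cases the heuristic runs into.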

    Read the article

  • Tips for achieving "continual" delivery

    - by Ben
    A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline.

    During the iteration:
    - Developers work on stories on the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch.
    - Developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically.
    - The testers have the ability to auto-deploy integration to a staging environment, and this occurs multiple times per week, enabling continual running of their test suites.

    Every Monday:
    - There is a release planning meeting to determine which stories are "known good" (based on the testers' work), and hence will be in the release. If there is a known issue with a story, the source branch is pulled out of integration.
    - No new code (only bug fixes requested by the testers) may be pulled into integration on this Monday, to ensure the testers have a stable codebase to cut a release from.

    Every Tuesday:
    - The testers have tested the integration branch as much as they possibly can in the time available and there are no known bugs, so a release is cut and pushed out to the production nodes slowly.

    This sounds OK in practice, but we have found that it is incredibly difficult to achieve. The team sees the following symptoms:
    - "subtle" bugs are found on production that were not identified on the staging environment.
    - last-minute hot-fixes continue into the Tuesday.
    - problems on the production environment require roll-backs, which block continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from).

    I think test coverage, code quality, ability to regression test quickly, last-minute changes and environmental differences are at play here. Can anyone offer any advice regarding how best to achieve "continual" delivery?

    Read the article

  • Efficient coding in Visual Studio (or another IDE), with touch typing

    - by cheeesus
    Moving the cursor to another position in code is one of the most frequent actions when coding. I don't write my programs from the beginning to the end, like a letter. However, moving the cursor requires me to move my right hand to the arrow keys or to the mouse, which feels like an interruption to my writing rhythm, since I'm using touch typing. I want my hands to rest on the keyboard. It's difficult to explain what I mean, but I think every coder using touch typing knows what I mean. I tried many things, like defining some shortcuts as surrogate arrow keys (Shift+Alt+J, K, L, I), or buying a keyboard with a TrackPoint, trackpad, or trackball on it, but I have not yet found a satisfying solution to the problem. What is the best solution you know of, regardless of which IDE you use? Edit: Thank you for your answers. I am using a lot of shortcut keys, but I think using a Vim plugin in Visual Studio would interfere too much with the shortcuts I am used to. Also, I have a keyboard with a built-in mouse, but I'm still looking for a better solution.

    Read the article

  • Simulate analogue steering wheel input from third party software like xPadder

    - by David Jensen
    So, I currently have a steering wheel connected to my HTPC. Since it won't work with / get recognized by all games I play, I use Xpadder to play, for example, Grid 3. As far as I have understood, it simply presses the associated arrow key when I turn the wheel. What I would like is software that simulates more and more frequent key presses of the arrow keys the more I turn the wheel. Example:

    - Tilting a little - 100 presses / second
    - Tilting more - 200 presses / second
    - Tilting max - hold down the key

    Because what I currently do with the wheel is constantly shaking it a bit to the right when I just want to turn a little bit right, and so on. OS: Windows 8
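    There may well be an off-the-shelf tool for this, but as an illustration of the tilt-to-tap-rate mapping being asked for, here is a rough Python sketch. It assumes the wheel is visible as joystick 0 / axis 0 through pygame and uses pyautogui to emit the synthetic arrow-key events; both libraries, and the rates and thresholds, are assumptions made for the sake of the example rather than anything from the original question.

        import time
        import pygame
        import pyautogui

        MAX_RATE = 15.0        # taps per second near full tilt (illustrative)
        TAP = 0.03             # how long each tap holds the key down, in seconds
        HOLD_THRESHOLD = 0.95  # beyond this tilt, just hold the key
        DEAD_ZONE = 0.05

        pygame.init()
        pygame.joystick.init()
        wheel = pygame.joystick.Joystick(0)
        wheel.init()

        held_key = None
        while True:
            pygame.event.pump()
            tilt = wheel.get_axis(0)            # -1.0 .. 1.0
            key = "right" if tilt > 0 else "left"
            strength = abs(tilt)

            if strength >= HOLD_THRESHOLD:      # full lock: keep the key down
                if held_key != key:
                    if held_key:
                        pyautogui.keyUp(held_key)
                    pyautogui.keyDown(key)
                    held_key = key
                time.sleep(0.01)
                continue

            if held_key:                        # dropped below full lock: release
                pyautogui.keyUp(held_key)
                held_key = None

            if strength < DEAD_ZONE:            # wheel is centred: do nothing
                time.sleep(0.01)
                continue

            # Partial tilt: tap the key, more often the further the wheel is turned.
            pyautogui.keyDown(key)
            time.sleep(TAP)
            pyautogui.keyUp(key)
            time.sleep(max(1.0 / (strength * MAX_RATE) - TAP, 0.01))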

    Read the article

  • Glimpse: Open Source Web Development

    - by Elizabeth Ayer
    We’re delighted to announce that Red Gate will be backing Glimpse! For those of you who aren’t familiar with the project, Glimpse is an open source tool which does for the server what Firebug does for the client. It’s been in beta for the last year, and we’re very excited to give Glimpse the support and dedicated effort needed to take it to a v1 and beyond. Glimpse’s founders (Nik Molnar and Anthony van der Hoorn) have joined Red Gate, and they’re just as excited as we are about the opportunities that active development of Glimpse will bring. They will continue to write code, support the community and drive the project forward (as they’ve done since its inception). With full-time attention on growing Glimpse and its community, users and developers can expect the project to accelerate, with frequent releases of new functionality. Red Gate is excited about its first major involvement with open source. You may well be wondering, though, why Red Gate is doing this. Glimpse dovetails beautifully with Red Gate’s .NET tools, which makes Glimpse an ideal framework for plugging in advanced, paid-for functionality (like performance analysis) the way web developers want to see it. As a means to this end, we will contribute to the Glimpse open source project in order to broaden its adoption and delight web developers. Since bringing in .NET Reflector in 2008, we’ve learnt sharp lessons from the community about the right and wrong ways to engage with developers, not to mention the enduring value of free. Glimpse further shows what the .NET community can achieve through open source collaboration, and we’re looking forward to working with the Glimpse community to make something enduring and awesome. Nik and Anthony, themselves passionate advocates of community-driven software, will continue to control the Glimpse project, steering it to best meet the needs of its users and contributors. If you have any questions or queries about Glimpse, or Red Gate’s involvement in the project, please tweet with the #glimpse hashtag, contact us at Red Gate on [email protected], or post to the Glimpse Development Forum on Google Groups.

    Read the article

  • VMPlayer 9, Xubuntu 12.10, Rails Development - Freezing frequently

    - by douglasisshiny
    I have a new Vizio Ultrabook that came with Windows 7. I develop Rails applications, and it's a pain to do that in Windows, so I set up a Xubuntu VM with 1GB RAM and 2 CPU cores. I basically keep the VM open all the time and have enough memory not to worry. Sometimes I pause the VM. For the first few days, everything was fine. The fourth day, Xubuntu froze up while running a test (with Guard and RSpec). I didn't think much of it and restarted the VM and went on my way. The freezes started becoming more frequent, though. I don't think they happen only when I run a test, but often they do. It'll happen quickly, too: start up the VM, save a file, the test runs, it freezes, all within 5 minutes. Of note: the VM is using a shared folder from Windows (where the code is). This may be the problem. Has anyone else experienced something like this?

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes. By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can start their journey of cloud enabling their mainframe applications.

    - Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    - Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
    - Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    - Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud enabling their mainframe applications

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Remote Working & Relocation

    - by James Burgess
    Sorry if this question is a duplicate; I did some extensive searching and found nothing on quite the same topic (though a couple on partially-overlapping topics). Recently, whilst on holiday in Munich, Germany, I was taken aback by the sheer number of programming-related postings available in the city that I easily qualify for (both in terms of knowledge and experience). The advertised working environments seemed good and the pay seemed to be at least as good as what I'd expect here in the UK. Probably 80% of the advertisements I saw on the underground were for IT-related jobs, and a good 60% of those I was easily qualified for. At the moment, I work as a freelancer mostly on web and small software projects, but seeing the vast availability of jobs in Munich versus my local area has me thinking about remote working. I'm unable to relocate for a job for the next 3 years (my wife has a contract to continue being a doctor at her current hospital for that time) but would almost certainly be open to it after that (after all, my wife and I both love Munich). In the meanwhile, I would be very interested in remote working. So, my question is this: do companies ever take on remote workers (even with semi-frequent trips to the office) from abroad, with a view to later relocation? And, if so, how do you go about broaching the topic with a recruiter when getting in contact about a job posting? Language isn't a barrier for me here, as 90% of the jobs I've looked up in Munich don't require German speakers (seems they have a big recruiting market abroad). I'm also under no illusions about the disadvantages of remote working, but I'm more interested in the viability of the scenario rather than the intricacies (at least at this point). I'd really appreciate any contributions, especially from those who have experience with working in such a scenario!

    Read the article

  • /lib/i386-linux-gnu/libc.so.6 causing segmentation fault & session crash

    - by Fred Zimmerman
    I am having repeated and frequent crashes that end my session whenever I take certain actions, such as loading Gmail under Chrome. Oddly, the same is not happening when I go to gmail under Chrome. After rooting around in /var/logs it appears to me that the trigger is something to do with libc.so.6 (see below). How can I fix this?

    [ 23936.947]
    [ 23936.947] Backtrace:
    [ 23936.948] 0: /usr/bin/X (xorg_backtrace+0x49) [0xb7745089]
    [ 23936.948] 1: /usr/bin/X (0xb75bf000+0x189d7a) [0xb7748d7a]
    [ 23936.948] 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb759c40c]
    [ 23936.948] 3: /usr/bin/X (0xb75bf000+0xfade7) [0xb76b9de7]
    [ 23936.948] 4: /usr/bin/X (ValidatePicture+0x1d) [0xb76bcb8d]
    [ 23936.949] 5: /usr/bin/X (CompositePicture+0xc3) [0xb76bcc83]
    [ 23936.949] 6: /usr/lib/xorg/modules/drivers/intel_drv.so (0xb6f18000+0xcf542) [0xb6fe7542]
    [ 23936.949] 7: /usr/bin/X (0xb75bf000+0x10b1d7) [0xb76ca1d7]
    [ 23936.949] 8: /usr/bin/X (CompositeGlyphs+0xc4) [0xb76b6d84]
    [ 23936.949] 9: /usr/bin/X (0xb75bf000+0x104956) [0xb76c3956]
    [ 23936.949] 10: /usr/bin/X (0xb75bf000+0xfe6f1) [0xb76bd6f1]
    [ 23936.949] 11: /usr/bin/X (0xb75bf000+0x3798d) [0xb75f698d]
    [ 23936.949] 12: /usr/bin/X (0xb75bf000+0x253ba) [0xb75e43ba]
    [ 23936.950] 13: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf3) [0xb721d4d3]
    [ 23936.950] 14: /usr/bin/X (0xb75bf000+0x256f9) [0xb75e46f9]
    [ 23936.950]
    [ 23936.950] Segmentation fault at address 0x155
    [ 23936.950] Caught signal 11 (Segmentation fault). Server aborting
    [ 23936.950] Please consult the The X.Org Foundation support at http://wiki.x.org for help.
    [ 23936.950] Please also check the log file at "/var/log/Xorg.0.log" for additional information.

    Read the article

  • ArchBeat Link-o-Rama for December 7, 2012

    - by Bob Rhubart
    From XaaS to Java EE – Which damn cloud is right for me in 2012? | Markus Eisele
    Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.

    WebLogic Server Domain Browser App (Android)
    My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.

    Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox | The Old Toxophilist
    "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. BTW: I had to look it up: a toxophilist is one who collects bows and arrows.

    Thought for the Day
    "All models are wrong; some models are useful." — George Box
    Source: SoftwareQuotes.com

    Read the article

  • Oracle Exalogic Customer Momentum @ OOW'12

    - by Sanjeev Sharma
    [Adapted from here] At Oracle Open World 2012, I sat down with some of the Oracle Exalogic early adopters to discuss the business benefits these businesses were realizing by embracing the engineered systems approach to data-center modernization and application consolidation. Below is an overview of the 4 businesses that won the Oracle Fusion Middleware Innovation Award for Oracle Exalogic this year.

    Company: Netshoes
    About: Leading online retailer of sporting goods in Latin America.
    Challenges:
    - Rapid business growth resulted in frequent outages and poor response-time of the online store-front
    - Conventional ad-hoc approach to horizontal scaling resulted in high CAPEX and OPEX
    - Poor performance and unavailability of the online store-front resulted in revenue loss from purchase abandonment
    Solution: Consolidated ATG Commerce and Oracle WebLogic running on Oracle Exalogic.
    Business Impact: Reduced abandonment rates resulting in a two-digit increase in online conversion rates, translating directly into revenue up-lift.

    Company: Claro
    About: Leading communications services provider in Latin America.
    Challenges:
    - Support business growth over the next 3 - 5 years while maximizing re-use of existing middleware and application investments with minimal effort and risk
    Solution: Consolidated Oracle Fusion Middleware components (Oracle WebLogic, Oracle SOA Suite, Oracle Tuxedo) and Java applications onto Oracle Exalogic and Oracle Exadata.
    Business Impact: Improved partner SLAs 7x while improving throughput 5x and response-time 35x for Java applications.

    Company: UL
    About: Leading safety testing and certification organization in the world.
    Challenges:
    - Transition from being a non-profit to a profit-oriented enterprise and grow from $1B to $5B in annual revenues in the next 5 years
    - Undertake a massive business transformation by aligning change strategy with execution
    Solution: Consolidated Oracle Applications (E-Business Suite, Siebel, BI, Hyperion) and Oracle Fusion Middleware (AIA, SOA Suite) on Oracle Exalogic and Oracle Exadata.
    Business Impact: Reduced financial and operating risk in re-architecting IT services to support new business capabilities supporting 87,000 manufacturers.

    Company: Ingersoll Rand
    About: Leading manufacturer of industrial, climate, residential and security solutions.
    Challenges:
    - Business continuity risks due to complexity in enforcing consistent operational and financial controls
    - Re-active business decisions reduced ability to offer differentiation and compete
    Solution: Consolidated Oracle E-Business Suite on Oracle Exalogic and Oracle Exadata.
    Business Impact: Service differentiation with faster order provisioning and a shorter lead-to-cash cycle, translating into higher customer satisfaction and quicker cash conversion.

    Check out the winners of the Oracle Fusion Middleware Innovation awards in other categories here.

    Read the article

  • Visualizing Parallel Branch and Bound Tree Exploration

    - by Akhil
    I have written a parallel program that does a depth-first branch and bound exploration of a tree. I can dump the IDs of the nodes (IDs look like 0, 00, 01, 0000, 0001, etc.) at frequent intervals to know the frontier of the tree that is being explored at that instant. The challenge is to visualize the tree exploration over time. Any ideas? E.g. I can draw trees (e.g. using Graphviz) at different times and create a movie out of them. I'm looking for ideas to facilitate this visualization - better ways to do so, or easy tools that can help me make the visualization.
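    One low-tech way to do this, sketched below in Python on the assumption that each dump is simply a list of frontier IDs, is to expand every ID's prefixes into its ancestors and write one Graphviz DOT file per sampling interval.

        def frontier_to_dot(frontier_ids, path):
            """Write one Graphviz DOT file for a dumped frontier.

            IDs such as "0", "00", "01" encode the path from the root, so every
            proper prefix of an ID is an ancestor node of that frontier node.
            """
            nodes = set()
            for nid in frontier_ids:
                for i in range(1, len(nid) + 1):
                    nodes.add(nid[:i])
            with open(path, "w") as f:
                f.write("digraph frontier {\n  node [shape=circle];\n")
                for n in sorted(nodes):
                    color = "red" if n in frontier_ids else "gray"
                    f.write('  "{}" [color={}];\n'.format(n, color))
                    if len(n) > 1:  # every non-root node hangs off its prefix
                        f.write('  "{}" -> "{}";\n'.format(n[:-1], n))
                f.write("}\n")

        if __name__ == "__main__":
            # Hypothetical dumps: one list of frontier IDs per sampling interval.
            snapshots = [["0"], ["00", "01"], ["000", "001", "01"]]
            for t, frontier in enumerate(snapshots):
                frontier_to_dot(set(frontier), "frontier_{:04d}.dot".format(t))

    Rendering each frame (for example with "dot -Tpng") and joining the images with a video encoder such as ffmpeg then gives the movie, using nothing beyond the tools the question already mentions.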

    Read the article

  • Hurry! See the uncensored OOW videos before they get edited!

    - by rickramsey
    Uploaded so far:

    Which Oracle Solaris 11 Technologies Have Sysadmins Been Using Most?
    Director's Cut - Uncensored - Markus Flierl, VP Solaris Core Engineering, describes how Oracle Solaris 11 customers are taking advantage of the Image Packaging System and the snapshot capability of ZFS to run more frequent updates of not only the OS, but also the applications (agile development, anyone?), and how they're using the network virtualization capabilities in Oracle Solaris 11 to isolate applications and manage workloads on the cloud.

    Watch How Hybrid Columnar Compression Saves Storage Space
    Director's Cut - Uncensored - Art Licht shows how hybrid columnar compression (HCC) compresses data 30x without slowing down other queries that the database is performing. First he shows what happens when he runs database queries without HCC, then he shows what happens when he runs the queries with HCC.

    Security Capabilities and Design in Oracle Solaris 11
    Director's Cut - Uncensored - Compliance reporting. Extended policy. Immutable zones. Three of the best minds in Oracle Solaris security explain what they are, what customers are doing with them, and how they were engineered. Filmed at Oracle Open World 2012.

    Why DTrace and Ksplice Have Made Oracle Linux 6 Popular with Sysadmins
    Use the DTrace scripts you wrote for Oracle Solaris on Oracle Linux without modification. Wim Coekaerts, VP of Engineering for Oracle Linux, explains how this capability of DTrace, the zero-downtime updates enabled by Ksplice, and other performance and stability enhancements have made Oracle Linux 6 popular with sysadmins.

    Why Solaris 11 Is Being Adopted Faster Than Solaris 10
    Sneak Preview - Uncut Version - Lynn Rohrer, Director of Oracle Solaris Product Management, explains why customers are adopting Oracle Solaris 11 at a faster rate than Oracle Solaris 10, and proves why you should never challenge a Montana woman to a test of strength.

    What Forsythe Corp Is Helping Its Customers Do With Oracle Solaris 11
    Director's Cut - Unedited - Lee Diamante, Solutions Architect for Forsythe Corp, an Oracle Solaris Partner, explains why Forsythe has been recommending Oracle Solaris to its customers, and what those customers have been doing with it.

    Lots more to come ...

    - Rick
    Website | Newsletter | Facebook | Twitter

    Read the article

  • How can I access an Apple Xserve with no I/O ports

    - by DigitalJedi805
    I have an Apple Xserve at my place of employment that I stumbled across one day going through some old equipment. The Xserve has one card installed on it, being the NIC, and is labelled with the static IP addresses that it has been assigned, but other than that it is totally 'headless'. I would like to put it to use (by either rolling it over to a Linux or Windows Server 08 environment... 0_o) but have yet to figure out how to get into the system to manage it. I'm a frequent visitor at StackOverflow, but this is my first SuperUser post, so please let me know if this should be on one of the affiliate sites.

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying something like "All committed code must be working". In some articles people even write descriptions of how to create svn or git hooks that compile and test code before commit. In my company we usually create one branch for a feature, and one programmer usually works in this branch. I often (about 1 in 100 commits, I think, and as I think with good reason) make non-compilable commits. It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit in a week than test the whole project's stability/compilability ten times a day. For only compilable code I use tags and some selected branches (trunk etc). I see these reasons to commit not fully working or not compilable code:

    - If I develop a new feature, it is hard to make it work by writing only a few lines of code.
    - If I am editing a feature, it is again sometimes hard to keep the code working every time.
    - If I am changing some function's prototype or interface, I would also make hundreds of changes, not mechanical changes, but intellectual. Sometimes one of them could cause me to carry out hundreds of commits (but if I want all commits to be stable I should commit 1 time instead of 100).

    In all these cases, to make stable commits I would make commits containing many-many-many changes and it will be very-very-very hard to find out "What happened in this commit?". Another aspect of this problem is that compiling code gives no guarantee of proper working. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
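    For reference, the kind of hook those articles describe is only a few lines. Here is a minimal client-side git pre-commit sketch in Python - the build/test command is a placeholder to be replaced with whatever compiles and tests your project - and it is exactly the gate this question argues can discourage frequent commits.

        #!/usr/bin/env python3
        """Minimal pre-commit hook sketch: reject the commit if the build or
        tests fail. Save as .git/hooks/pre-commit and make it executable."""
        import subprocess
        import sys

        BUILD_COMMAND = ["make", "test"]  # placeholder; use your project's own command

        result = subprocess.run(BUILD_COMMAND)
        if result.returncode != 0:
            print("pre-commit: build/tests failed, commit rejected", file=sys.stderr)
        sys.exit(result.returncode)

    Note that git lets an individual developer skip such a hook with "git commit --no-verify", which is one pragmatic middle ground between enforced stability and frequent work-in-progress commits.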

    Read the article

  • Does this licensing clause allow redistribution of this application?

    - by George Edison
    As a software developer, I find a frequent need to create icons for my applications. Nothing has ever worked as well as IcoFX for this purpose. Unfortunately, the program is no longer free - but I still have the installer for an older version. My question is whether or not I can distribute copies of the installer. The license agreement contains the following pertinent clauses:

    6. All redistributions of the Software's files must retain all copyright notices and web site addresses that are currently in place, and must include this list of conditions without modification.
    7. None of the Software's files may be redistributed for profit or as part of another Software package without express written permission of the Author.
    10. The Author reserves his rights to modify this agreement in the future.

    The first two clauses would seem to suggest that I can legally distribute verbatim copies of the installer but the last clause has me confused. If the author modifies the agreement and removes the ability to distribute copies, does it apply to my copy that I downloaded a while back?

    Read the article

  • What are some good seminar topics that can be used to improve designer & developer communication?

    - by tactoth
    Hello guys. What I am about to describe happens in the company I work for, but I know it is more like a common issue in software companies. I'm the development team leader in an internet service company that provides a service very similar to Dropbox. In our company we have mainly two divisions: the tech division and the designers division, both with their own reporting hierarchy. Designers focus on designing the UI and prioritizing features, while developers focus on implementing the designers' ideas (more like being driven, as our big boss has said). Then here comes our issue: the DEV team and the DES team communicate very badly.

    DEV complain about DES for these reasons:
    - Too frequent changing of requirements
    - Too complicated interaction (our DEV team has actually learned many HCI principles)
    - Documents for design are incomplete; usually you just get 'design principles' and it's up to DEV to complete the design details.
    - When you find design defects, you ask the DES team to resolve them, then the DES team quickly changes the principles and you are going to spend another several weeks because the change is so fundamental.

    While DES complain about DEV for these reasons:
    - Code architecture is not good enough to adapt to changing requirements (obviously DES knows something about software development)
    - Product design is about principles, not details. DEV fails to realize this.
    - Communication should be quick and should be mainly oral. Trying to capture most feature discussion in documents for reference is too much overhead and doesn't make sense.

    As you can see, DEV and DES have different ideas on product design, and encourage very different practices. We have this difference because of the way we work. So our solution is that we should plan some seminars to make each part more aware of the way the other part works. Then my question is, what are some good topics for such seminars? Guessing some people may not think seminars can solve this problem, please also suggest your solution.

    Read the article

  • Sending from alternative addresses in Exchange

    - by Sam Cogan
    One of the most frequent requests I get from users with Exchange is to be able to send from one of their alternative email addresses - that is, one of the addresses their account is configured with in Exchange, but that is not their primary address. Unfortunately, as far as I am aware Microsoft has not yet come up with a solution to this. I've used a number of hacks to get round this - separate accounts with POP3 access, using the From field in Outlook - but each has its drawbacks. What have you used in these situations to allow the use of these alternative addresses?

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will be mainly about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from php territory, where things (in terms of quality) tend to look different than in the java world.

    Facts:
    - there will be 2-3 developers,
    - at least one of the developers uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle the test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.

    What I want to achieve is a good workflow on as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are to do this. What I have so far is:

    - developers code locally,
    - there is a Vagrant instance on every development machine, managed by Puppet. It contains the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on Vagrant handles tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on,
    - if everything looks green, Jenkins deploys to the test Tomcat instance.

    Deployment to production is manual (although it can be done using helper scripts) when the business logic is tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things that I'm not sure about:

    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is just wise. Isn't this overkill when using Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
    - Is there sense in having a local Jenkins on Vagrant?

    Read the article

  • Radiation from a UPS

    - by Erel Segal
    In our office, there are frequent power outages that harm my desktop computer, so I wanted to install a UPS. However, my office-mates pointed me to papers talking about hazardous radiation from the UPS. The UPS manufacturers themselves recommend putting the UPS several meters away from humans, which is not possible because our office is small (the power is about 0.5 meters from us). As an alternative to a UPS, my office-mates recommended that I switch to a laptop, which has a battery so it's immune to outages. I have several questions:

    1. Is it true that the radiation from a laptop battery is lower than the radiation from a UPS? They do just the same thing - supply power using a battery!
    2. If the answer to 1 is yes - is there an alternative way to attach a battery, similar to a laptop battery, to a desktop computer?
    3. If the answer to 1 is no - how can I prove this to my office-mates, so that they let me use a UPS?

    Read the article

  • Sudden loss of Wi-Fi connectivity on OS X

    - by GJ.
    Occasionally while I work, without any special provocation, I lose connectivity via Wi-Fi. Other devices connected to the same Wi-Fi network have no interruption, and the problem gets resolved once I reboot my MacBook Air, so it's definitely a local problem. Observations:

    - The Wi-Fi symbol in the menu indicates that I'm still connected, but apps can't actually connect to the Internet or to other devices in the LAN.
    - I can't connect to an alternate Wi-Fi network (e.g. Wi-Fi tethering via iPhone).
    - I can connect to the Internet via iPhone USB tethering, but this seems to only work some of the time.
    - Only a reboot solves the problem, but a regular restart gets stuck on a grey screen with a rotating wheel (after all applications have closed) and I have to do a hard reset.

    How should I go about troubleshooting this? It used to happen very rarely but now is becoming more frequent (approaching once every 2-3 days on average).

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem to be easy to mitigate, but we can't come up with a good mitigation plan for the third type. I proposed to use ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our Ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in the Linux environment?
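    As a rough sketch of the hourly-snapshot idea (not a recommendation over PITR), the Python script below, run hourly from cron, takes a ZFS snapshot of the dataset holding the data folder and prunes older ones. The dataset name and retention are assumptions for illustration, and it presumes the data directory and WAL live on the same dataset so the snapshot is crash-consistent.

        import subprocess
        import time

        DATASET = "tank/pgdata"  # hypothetical dataset holding the PostgreSQL data folder
        KEEP = 48                # retain roughly two days of hourly snapshots

        def take_snapshot():
            name = "{}@hourly-{}".format(DATASET, time.strftime("%Y%m%d%H%M"))
            subprocess.run(["zfs", "snapshot", name], check=True)

        def prune():
            # List this dataset's snapshots, oldest first, and drop the surplus.
            out = subprocess.run(
                ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
                 "-s", "creation", "-r", DATASET],
                check=True, capture_output=True, text=True).stdout.split()
            hourly = [s for s in out if "@hourly-" in s]
            for snap in hourly[:-KEEP]:
                subprocess.run(["zfs", "destroy", snap], check=True)

        if __name__ == "__main__":
            take_snapshot()
            prune()

    Whether rolling back such a snapshot actually beats replaying tens of gigabytes of WAL is exactly the trade-off in question, so either path deserves a timed restore drill before it goes into the plan.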

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi. How do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example: let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 - "Finished". A new requirement adds a new state between the 2nd and the 3rd one. It means that:

    - A constraint on the values 1 - 5 in the database must be changed,
    - Business layer and code contracts must be changed to add the new state,
    - The data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.
    - The application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc.

    When an application was written recently by one developer, it's more or less easy to predict every change to do. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the "How do you deal with changing requirements?" question. In fact, I'm not interested in evaluating the cost of a change, but rather in the way to predict the parts of an application which will be concerned by the change. What those changes will be and how difficult they really are doesn't matter in my question.
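    Not from the question itself, but as a tiny illustration of why such a change fans out: any layer that compares against the literal values has to be touched, whereas code that goes through one centralized mapping mostly survives the renumbering. The intermediate state names below are invented for the example.

        from enum import IntEnum

        class TaskState(IntEnum):
            PENDING = 0
            ASSIGNED = 1        # invented intermediate states, for illustration
            IN_PROGRESS = 2
            WAITING_REVIEW = 3  # the newly inserted state
            REVIEWED = 4
            APPROVED = 5
            FINISHED = 6        # was 5 before the change

        def is_open(state: TaskState) -> bool:
            # Comparing against the enum member, not the literal 5, means this
            # check survives the renumbering; every "state == 5" scattered through
            # the data access layer or the UI is a spot the impact analysis must find.
            return state < TaskState.FINISHED

    The impact-analysis problem is then largely the problem of finding every place where the old literal values, or assumptions about their order, leaked outside such a mapping.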

    Read the article
