Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Excel Matching problem with logic expression

    - by abelenky
    I have a block of data that represents the steps in a process and the possible errors:

        ProcessStep    Status
        FeesPaid       OK
        FormRecvd      OK
        RoleAssigned   OK
        CheckedIn      Not Checked In
        ReadyToStart   Not Ready for Start

    I want to find the first Status that is not "OK". I have attempted this: =MATCH("<>""OK""", StatusRange, 0), which is supposed to return the index of the first element in the range that is NOT EQUAL (<>) to "OK". But this doesn't work, instead returning #N/A. I expect it to return 4 (index #4 in a 1-based index, meaning that CheckedIn is the first non-OK element). Any ideas how to do this?
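
    MATCH with match type 0 only supports wildcards in text criteria, not comparison operators like <>, which is why the formula above returns #N/A. A minimal sketch of the usual fix, comparing the range itself and entered as an array formula (Ctrl+Shift+Enter in pre-dynamic-array Excel); StatusRange is the poster's named range:

        =MATCH(TRUE, StatusRange<>"OK", 0)

    The inner expression produces an array of TRUE/FALSE values, and MATCH then returns the 1-based position of the first TRUE, i.e. 4 for the data above.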

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.)

    I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

    The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem.

    Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

    But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software.

    The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years.

    So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter.

    As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it.

    So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. I'm hell bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • What's in your wallet, er…Inbox?

    - by johndoucette
    Since my first UUCP operation in UNIX to deliver and receive an email, I have always been challenged to find the ultimate email organizer. About a year ago, I switched to a very simple process of managing email and have found the ultimate in organization. On the craziest of days with 250+ emails, I keep my inbox empty. Here is how I do it.

    First, start with the following folders in your mailbox:

      - Inbox
      - Archive
      - FollowUp
      - Hold

    Of course, all inbound emails will start in the Inbox. As you work throughout the day, follow these steps to keep your inbox empty:

      1. Read the email.
      2. Are you responsible for any action? If you are and can do it immediately, then do it. If you need to do it later, move the email to the “FollowUp” folder.
      3. If you are not responsible for any action, move it to the Archive folder. Use Outlook’s search to find them when you need them.
      4. If you will need to reference the email later in the week or for a short term (week or two), then move the email to the “Hold” folder.
      5. As your day progresses, frequently review the FollowUp folder and accomplish the task.

    Notes:

      - If I am waiting for someone to do something for me, I keep it in the FollowUp folder. As I review the folder, I am constantly reminded that there is something I am waiting on, and can send a simple reminder by forwarding the original email.
      - I sometimes send myself a “todo” email and park it in the FollowUp folder.
      - I like to know how many emails are in the folders, so I set the “Show total number of items” property on the folder to show the amount of emails.

    Read the article

  • Data Pump: Consistent Export?

    - by Mike Dietrich
    Ouch ... I have to admit, as I did say in several workshops in the past weeks, that a Data Pump export with expdp is per se consistent. Well ... I thought it is ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment. We were discussing parameters in the expdp's par file, and I did ask my colleagues after doing some research on MOS. And here are the results of my "research": MOS Note 377218.1 has a nice example showing a Data Pump export of a partitioned table, with DELETEs running on that table, as inconsistent.

    Background: Back in the old 9i days when Data Pump was designed, flashback technology wasn't as popular and well known as today, and UNDO usage was the major concern, as a per-default consistent export would have relied heavily on UNDO. That's why, similar to good ol' exp, the export won't operate per default in consistency mode.

    To get a consistent Data Pump export with expdp you'll have to set: FLASHBACK_TIME=SYSTIMESTAMP in your parameter file. Then it will be consistent according to the timestamp when the process has been started. You could use FLASHBACK_SCN instead and determine the SCN beforehand if you'd like to be exact. So sorry if I had proclaimed a feature which unfortunately is not there by default - Mike
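
    For reference, a minimal sketch of such a par file; the directory, dump file, and schema names are placeholders, not from the note:

        # exp_consistent.par -- consistent schema export (names are placeholders)
        DIRECTORY=DATA_PUMP_DIR
        DUMPFILE=scott_consistent.dmp
        LOGFILE=scott_consistent.log
        SCHEMAS=SCOTT
        FLASHBACK_TIME=SYSTIMESTAMP

    invoked as: expdp system parfile=exp_consistent.par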

    Read the article

  • How can I create profiles for the windows that are open on my Mac?

    - by Kyle Hayes
    As a semi-dual-monitor user on my MacBook, I run into a particular issue that drives me nuts. The scenario: at work I run my MacBook Pro's display as the secondary monitor of a dual-monitor setup. I primarily use the secondary view for my social network windows (Adium, Tweetie, Gabbble, and IRC), and I position them exactly as I want them, so I can see them all at the same time. However, when I disconnect my MBP to go home and reconnect to my other monitor at home, all the application windows have shifted onto the main screen (since that is where they went when I disconnected the monitor before I left). With this, I have to go through the whole process again to move and position them on the secondary screen. Is there a utility or application that will allow me to create screen/window profiles that will basically memorize the current position of all the windows and persist it to be re-instantiated later?

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object (this object could be another player). The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario:

      1. Normal packet timing; everything is okay.
      2. The next expected packet is late. That's okay, we'll just extrapolate based on previous positions.
      3. The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information?

    The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all. If we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
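
    One family of answers to exactly this situation is error-offset smoothing: accept each correction immediately in the simulated state, but render through an error term that decays to zero, so there is never a visible backwards jump. A minimal C# sketch under assumed names (nothing here is from the original post or its answers):

        using System;
        using System.Numerics;

        // Sketch: when a late/corrective packet arrives, adopt the authoritative
        // state for simulation, but fold the visible discontinuity into an error
        // offset that is bled off exponentially instead of snapping back.
        class SmoothedEntity
        {
            public Vector3 SimulatedPos;  // tracks authoritative data + extrapolation
            public Vector3 Velocity;      // estimated from the last two packets
            public Vector3 RenderedPos;   // what the player actually sees
            Vector3 _visualError;         // RenderedPos minus SimulatedPos, decaying
            const float CorrectionRate = 8f;  // higher = faster convergence

            public void OnAuthoritativePacket(Vector3 pos, Vector3 vel)
            {
                // (a real implementation would snap RenderedPos on the very first packet)
                _visualError += RenderedPos - pos;  // capture the jump we would see
                SimulatedPos = pos;                 // no rewind, no rubber-banding
                Velocity = vel;
            }

            public void Update(float dt)
            {
                SimulatedPos += Velocity * dt;                    // extrapolate as usual
                _visualError *= MathF.Exp(-CorrectionRate * dt);  // decay the error
                RenderedPos = SimulatedPos + _visualError;        // smooth output
            }
        }

    Because only the rendered offset is smoothed, the underlying motion model can be linear, quadratic, or cubic, which addresses the last concern.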

    Read the article

  • Lossless cutting of MPEG TS files in Windows

    - by Sebastian P.R. Gingter
    I have several HD video files in transport stream (.ts) format, recorded with my satellite receiver. I want to cut them, as in simply remove a few minutes from the beginning, the end, and sometimes a few minutes in the middle (early starts of recordings, late ends, and, for some files, the ads). What is a good GUI tool, ideally but not necessarily free, to do this? Best would be something where you can select points on a timeline and simply cut the elements out. As a resulting file, the same .ts format would be great, but I could also live with putting the video contents into another container, as long as the video is NOT re-encoded/transcoded. The files have additional audio streams and subtitles; these should be retained in the process. My OS is Windows.

    Read the article

  • Were the first assemblers written in machine code?

    - by The111
    I am reading the book The Elements of Computing Systems: Building a Modern Computer from First Principles, which contains projects encompassing the build of a computer from boolean gates all the way to high level applications (in that order). The current project I'm working on is writing an assembler using a high level language of my choice, to translate from Hack assembly code to Hack machine code (Hack is the name of the hardware platform built in the previous chapters). Although the hardware has all been built in a simulator, I have tried to pretend that I am really constructing each level using only the tools available to me at that point in the real process.

    That said, it got me thinking. Using a high level language to write my assembler is certainly convenient, but for the very first assembler ever written (i.e. in history), wouldn't it need to be written in machine code, since that's all that existed at the time?

    And a related question... how about today? If a brand new CPU architecture comes out, with a brand new instruction set, and a brand new assembly syntax, how would the assembler be constructed? I'm assuming you could still use an existing high level language to generate binaries for the assembler program, since if you know the syntax of both the assembly and machine languages for your new platform, then the task of writing the assembler is really just a text analysis task and is not inherently related to that platform (i.e. needing to be written in that platform's machine language)... which is the very reason I am able to "cheat" while writing my Hack assembler in 2012, and use some preexisting high level language to help me out.
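
    Since the question hinges on the claim that assembly is "really just a text analysis task", a toy C# sketch makes that concrete; the mnemonics and encodings below are invented, nothing Hack-specific:

        using System;
        using System.Collections.Generic;

        // Toy sketch of the "text analysis" point: a two-instruction assembler.
        // Mnemonics and encodings are invented for illustration.
        class ToyAssembler
        {
            static readonly Dictionary<string, ushort> Opcodes =
                new Dictionary<string, ushort> { { "LOAD", 0x1000 }, { "ADD", 0x2000 } };

            // "ADD 3" -> 0x2003: parse the text, look up the opcode, merge the operand.
            public static ushort AssembleLine(string line)
            {
                string[] parts = line.Split(' ');
                ushort opcode = Opcodes[parts[0]];
                ushort operand = ushort.Parse(parts[1]);
                return (ushort)(opcode | operand);
            }
        }

    Nothing in the translation depends on the target CPU being able to run the assembler, which is why cross-assembling from an existing platform is the normal answer for a brand new architecture, and why only the earliest assemblers had to be hand-assembled or written directly in machine code.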

    Read the article

  • Oracle Tutor: Document Audit and Maintenance

    - by Emily Chorba
    Perhaps the most critical phase in the process of documenting policies and procedures, and the greatest challenge to owners, is the maintenance of published documents. Documents must reflect current practice and they must be accurate. The most effective way to ensure this is through the regular audit of documents. In the Tutor environment, a Document Owner must audit each of his/her documents once every 6 to 12 months to verify that the document reflects actual practice. If it does not, the document is updated or employees are retrained (depending on the nature of the discrepancy). If a document update is required, the Tutor system enables the owner to modify and redistribute the document within one work day. This is possible because:

      - Documents contain a minimum of detail, thereby reducing the edits.
      - Document format and structure are simple, so changes are easy to identify.
      - The Tutor Author software tool enables the Document Owner or the Document Administrator to update the file quickly.

    The Document Administrator verifies the document format and integration, publishes the document, and distributes it to all affected employees, thereby freeing the Document Owner of the more tedious tasks.

    Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & UPK

    Read the article

  • I am transferring a nameserver domain, what do I need to update?

    - by Mech Software
    Perhaps I am totally overthinking this, but I have a domain name and name servers that are working just fine. I want to transfer the one domain name that I have for my server, which is also the name of the nameserver, e.g. mydomain.com with nameservers ns1.mydomain.com and ns2.mydomain.com. I am transferring mydomain.com from the current registrar to the one I use for all my other domains. The question is: what do I have to update? Once the transfer is complete, mydomain.com will have ns1.mydomain.com and ns2.mydomain.com as its nameservers, as it does today. I was wondering, though, how ns1.mydomain.com and ns2.mydomain.com are resolving if mydomain.com is pointing to ns1 and ns2. Am I overthinking this, or am I missing something in the process here? I always just enter the nameserver names when I configure any domains on my server. Do I have to set up A records somewhere for ns1 and ns2?
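
    What breaks the circular dependency in practice is glue: A records for the nameserver hosts that the parent (.com) zone serves alongside the NS delegation, so resolvers learn the IPs before mydomain.com itself is resolvable. A sketch of what that parent-zone data amounts to (the IPs are documentation placeholders):

        ; delegation plus glue, served by the .com zone rather than by mydomain.com
        mydomain.com.      NS   ns1.mydomain.com.
        mydomain.com.      NS   ns2.mydomain.com.
        ns1.mydomain.com.  A    203.0.113.10   ; glue record
        ns2.mydomain.com.  A    203.0.113.11   ; glue record

    Registrars usually manage these under a name like "registered nameservers" or "host records", so that is the setting to check at the gaining registrar after the transfer.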

    Read the article

  • Application Launcher for Hyper-V Server

    - by peterchen
    We are currently in the process of setting up a Hyper-V R2 Server machine. Though there's not a lot we need to do within the Hyper-V Server itself, the command line is sure minimalistic. There are a few administrative/hardware monitoring tools that we want to run on the machine itself (accessed through remote desktop). I am looking for a simple program/application launcher where we can hook up these maintenance tools (and one to open a new cmd.exe window, in case I habitually close the one I'm working in!). However, all tools I have tried so far more or less assume Explorer is present, and fail in different ways. Before I go and write a simple one myself, any recommendations?
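
    For what it's worth, the "simple one myself" option can be a few lines of batch; a sketch, with the menu entries as placeholders to adapt:

        @echo off
        rem launcher.cmd -- minimal menu sketch for a shell-less server
        :menu
        echo 1. New command window
        echo 2. Task Manager
        echo 3. Notepad
        echo q. Quit
        set /p choice=Select:
        if "%choice%"=="1" start cmd.exe
        if "%choice%"=="2" start taskmgr.exe
        if "%choice%"=="3" start notepad.exe
        if /i not "%choice%"=="q" goto menu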

    Read the article

  • links for 2011-01-03

    - by Bob Rhubart
    Using Solaris zfs + iscsi targets with Oracle VM (Wim Coekaerts Blog) "I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a Solaris box, that I use for Oracle VDI, available." - Wim Coekaerts (tags: oracle otn solaris oraclevm virtualization) DanT's GridBlog: Oracle Grid Engine: Changes for a Bright Future at Oracle "Today, we are entering a new chapter in Oracle Grid Engine’s life. Oracle has been working with key members of the open source community to pass on the torch for maintaining the open source code base to the Open Grid Scheduler project hosted on SourceForge." - Dan Templeton (tags: oracle gridengine) Oracle Fusion Middleware Security: How do I secure my services? "I've been up early for a couple of days talking to a customer about how they should secure their services,' says Chris Johnson. "I'm going to tell you what I told them." (tags: oracle fusionmiddleware security) OldSpice your Innovation - Dangers of Status Quo E2.0 | Enterprise 2.0 Blogs "If organizations only leverage E2.0 technologies in a 'me too' fashion, they are essentially using a bucket to bail water from a leaking ship." - John Brunswick (tags: oracle enteprise2.0) The Aquarium: GlassFish in 2011 - What to expect A look into the Glassfish crystal ball... (tags: oracle glassfish) Andrejus Baranovskis's Blog: Fusion Middleware 11g Security - Retrieve Security Groups from ADF 11g Oracle ACE Director Andrejus Baranovskis shows you what to do when you need to access security information directly from an ADF 11g application. (tags: oracle otn fusionmiddleware security adf) @eelzinga: Book review : Oracle SOA Suite 11g R1 Developer's Guide "What I really liked in this book...was the compare/description of the Oracle Service Bus. The authors did a great job on describing functionality of components existing in the SOA Suite and how to model them in your own process." - Oracle ACE Eric ElZinga (tags: oracle oracleace soa bookreview soasuite)

    Read the article

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all. I've been configuring OpenLDAP on RHEL 6, and it seems you have to run the following to rebuild the config dirs. I'm OK with that, but my issue is: say I want to change the server password. Do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it's been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

      1. Remove the contents of the new /etc/openldap/slapd.d/ directory:

             rm -rf /etc/openldap/slapd.d/*

      2. Run slaptest to check the validity of the configuration file and specify the new configuration directory:

             slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

      3. Configure permissions on the new directory:

             chown -R ldap:ldap /etc/openldap/slapd.d
             chmod -R 000 /etc/openldap/slapd.d
             chmod -R u+rwX /etc/openldap/slapd.d
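
    On the question itself: the cn=config directory that slaptest generates is live-editable over LDAP, so a single attribute such as the root password can be changed with ldapmodify rather than a full rebuild. A sketch, assuming root has local SASL EXTERNAL access and noting that the database DN varies by setup (check yours with slapcat -n 0):

        # generate the new password hash first
        slappasswd
        # then replace olcRootPW in the running config
        ldapmodify -Y EXTERNAL -H ldapi:/// <<EOF
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: {SSHA}the-hash-slappasswd-printed
        EOF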

    Read the article

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is to save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3-v4-v5-v6-v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
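
    The chained conversion described above reduces to a registry of per-step converters plus a loop, sometimes called an upgrade pipeline. A minimal C# sketch with invented names (Document stands in for whatever 'unit' of data carries the version number):

        using System;
        using System.Collections.Generic;

        class Document { public int Version; /* payload fields elided */ }

        class UpgradePipeline
        {
            public const int CurrentVersion = 7;
            readonly Dictionary<int, Func<Document, Document>> _steps =
                new Dictionary<int, Func<Document, Document>>();

            // Register the converter that lifts data from fromVersion to the next one.
            public void Register(int fromVersion, Func<Document, Document> step) =>
                _steps[fromVersion] = step;

            public Document Load(Document doc)
            {
                while (doc.Version < CurrentVersion)
                {
                    if (!_steps.TryGetValue(doc.Version, out var step))
                        throw new InvalidOperationException(
                            $"No converter registered for v{doc.Version}");
                    doc = step(doc);  // each step must bump doc.Version by one
                }
                return doc;
            }
        }

    One common mitigation for the class-drift concern in the post is to make each step operate on a neutral representation (a dictionary or a JSON tree) rather than on the old strongly typed classes, so retired versions of the classes never need to be kept around.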

    Read the article

  • Recover open but deleted file on Linux using ln instead of cp

    - by Yang
    Say I have a file that's downloading (from a source that's hard to re-download from), but accidentally deleted from the filesystem namespace (/tmp/blah), and I'd like to recover this file. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Any way to recover by inode/create a hard link? Any other solutions? If it makes any difference, I'm mainly concerned with ext4. Thanks in advance.
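
    A new hard link to the open inode isn't generally possible from userspace (link() needs a path, and linkat() with AT_EMPTY_PATH needs CAP_DAC_READ_SEARCH), but one workaround sketch is to take your own reference to the inode and stream it as it grows; $PID and $FD are as in the question:

        # follow the still-growing deleted file from byte 0; keep running until
        # the download is complete, then stop with Ctrl-C
        tail -c +1 -f /proc/$PID/fd/$FD > /tmp/blah.recovered

    Because tail opens the inode itself, the data stays reachable even after Chrome closes its descriptor; the remaining caveat is deciding when the download is actually complete, e.g. by comparing the size stat reports for /proc/$PID/fd/$FD against the expected total before the descriptor goes away.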

    Read the article

  • Disable pop-up for "Faulting application" on login - Windows Server 2003

    - by Mikael Svenson
    I have a service running on a Windows 2003 server. The service executes a .exe file to process some data. Sometimes the .exe crashes due to incorrect input and logs an error to the Application Log, which is fine. If I log in to the server remotely, I get a pop-up for the .exe crash, one for each crash which has occurred since I last logged in. The crashes can safely be ignored, and I'd like to ignore these pop-ups too. Is there a way to disable them?

    Read the article

  • What's the best way to generate an NPC's face using web technologies?

    - by Vael Victus
    I'm in the process of creating a web app. I have many randomly-generated non-player characters in a database. I can pull a lot of information about them: their height, weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed with text in the nicest way possible, but I believe it's worth generating these faces for a more... human experience. Problem is, I'm no artist. I wouldn't mind commissioning an artist for this system, but I wouldn't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .pngs to represent various aspects of the player's body: their armor, the face, the skin color. However, those solutions weren't very dynamic and felt very amateur. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?

    Read the article

  • Improved Customer Experience, but at what Cost?

    - by Tony Berk
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is: will it help your organization, and in particular, what are the financial benefits? That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner, across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand its use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution, and a 40% improvement in revenue per click in email marketing. Click here to learn more about Oracle Customer Experience, and stay tuned for more customer spotlights.

    Read the article

  • Toshiba Satellite error 10053A0000 when re-installing Windows XP Home on an existing Windows 7 [closed]

    - by Jayapal Chandran
    I had installed Windows 7 for testing. Now I want to re-install the original Windows XP Home. I am using the Toshiba installation (recovery) disk. The installation process asked a few questions, and I selected the option to retain other partitions and to delete only the C drive. In the next step I got this error: http://web1.toshiba.ca/support//techsupport/tsbs/all/-tsb001404.htm So, what should I do to retain my files on the D drive and only allow the installation to delete the C drive?

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought working life as a programmer was going to be like (back when I was studying Computer Science), and how it's actually turning out to be. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems. So I knew to expect this in the abstract. But I never imagined exactly how overwhelming this would turn out to be. Perhaps it's something I mentally glazed over, and hoped I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance, bug fixing, and support oriented.

    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug being strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop, and the medium sized corporate enterprise. While I wouldn't say that it's always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline that I expected it to be.

    Has anyone else had similar experiences to this? What are the ways in which your expectations of what our profession would be like were different to the reality?

    Read the article

  • Producer-consumer pattern with consumer restrictions

    - by Dan
    I have a processing problem that I am thinking is a classic producer-consumer problem with the two added wrinkles that there may be a variable number of producers and there is the restriction that no more than one item per producer may be consumed at any one time. I will generally have 50-100 producers and as many consumers as CPU cores on the server. I want to maximize the throughput of the consumers while ensuring that there are never more than one work item in process from any single producer. This is more complicated than the classic producer-consumer problem which I think assumes a single producer and no restriction on which work items may be in progress at any one time. I think the problem of multiple producers is relatively easily solved by enqueuing all work items on a single work queue protected by a critical section. I think the restriction on simultaneously processing work items from any single producer is harder because I cannot think of any solution that does not require each consumer to notify some kind of work dispatcher that a particular work item has been completed so as to lift the restriction on work items from that producer. In other words, if Consumer2 has just completed WorkItem42 from Producer53, there needs to be some kind of callback or notification from Consumer2 to a work dispatcher to allow the work dispatcher to release the next work item from Producer53 to the next available consumer (whether Consumer2 or otherwise). Am I overlooking something simple here? Is there a known pattern for this problem? I would appreciate any pointers.
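
    The notification scheme sketched in the last paragraph maps onto a small amount of code. A C# sketch with invented names; Complete() is the "callback or notification" the post describes, and calling it is what releases the producer's next item to whichever consumer is free:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        // Enforces "at most one in-flight work item per producer". Consumers call
        // Take() to get work and must call Complete() when done.
        class PerProducerDispatcher<T>
        {
            readonly Dictionary<int, Queue<T>> _backlog = new Dictionary<int, Queue<T>>();
            readonly HashSet<int> _inFlight = new HashSet<int>();
            readonly Queue<int> _ready = new Queue<int>(); // has work, none in flight
            readonly object _gate = new object();

            public void Enqueue(int producerId, T item)
            {
                lock (_gate)
                {
                    if (!_backlog.TryGetValue(producerId, out var q))
                        _backlog[producerId] = q = new Queue<T>();
                    bool wasIdle = q.Count == 0 && !_inFlight.Contains(producerId);
                    q.Enqueue(item);
                    if (wasIdle) { _ready.Enqueue(producerId); Monitor.Pulse(_gate); }
                }
            }

            public (int ProducerId, T Item) Take()
            {
                lock (_gate)
                {
                    while (_ready.Count == 0) Monitor.Wait(_gate);
                    int id = _ready.Dequeue();
                    _inFlight.Add(id);                 // blocks this producer's other items
                    return (id, _backlog[id].Dequeue());
                }
            }

            public void Complete(int producerId)       // lifts the restriction
            {
                lock (_gate)
                {
                    _inFlight.Remove(producerId);
                    if (_backlog[producerId].Count > 0)
                    {
                        _ready.Enqueue(producerId);
                        Monitor.Pulse(_gate);
                    }
                }
            }
        }

    Each consumer then loops over Take(), processes the item, and calls Complete(id) in a finally block; throughput stays maximal because _ready only ever holds producers that are legal to schedule.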

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have some real experience of this problem. NB our internal document's data structure is quite complex and it could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages/APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to having to write a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives?

    *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
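
    To make the footnoted data-facade idea concrete, a minimal C# sketch in which every type name is invented for illustration; the internal types can be refactored freely as long as the facade property mappings are kept in step:

        using System.Collections.Generic;
        using System.Linq;

        // Internal model: free to refactor between releases (internal visibility).
        class ParagraphNode { public string RawText; public int StyleId; }
        class DocumentModel { public List<ParagraphNode> Nodes = new List<ParagraphNode>(); }

        // Public scripting contract: stable across releases, only additive changes.
        public interface IParagraph { string Text { get; set; } }
        public interface IDocument { IReadOnlyList<IParagraph> Paragraphs { get; } }

        // Facades translate between the two; scripts never see the node types.
        class ParagraphFacade : IParagraph
        {
            readonly ParagraphNode _node;
            public ParagraphFacade(ParagraphNode node) { _node = node; }
            public string Text
            {
                get { return _node.RawText; }
                set { _node.RawText = value; }   // writes flow back to the document
            }
        }

        class DocumentFacade : IDocument
        {
            readonly DocumentModel _model;
            public DocumentFacade(DocumentModel model) { _model = model; }
            public IReadOnlyList<IParagraph> Paragraphs =>
                _model.Nodes.Select(n => (IParagraph)new ParagraphFacade(n)).ToList();
        }

    When the internal model changes shape, only the facade bodies change; user scripts keep compiling because they were written against IDocument and IParagraph.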

    Read the article

  • Drupal on IIS 7 (PHP FastCGI) produces a blank page every hour

    - by Morron
    Hi, I have Drupal running on IIS 7 with PHP FastCGI installed. The FastCGI settings: http://img709.imageshack.us/img709/1748/fastcgisetting.jpg hxxp://img404.imageshack.us/img404/5837/fastcgisetting2.jpg I have Drupal running in an isolated AppPool with the default settings from when I created it: hxxp://img716.imageshack.us/img716/5444/fastcgisetting3.jpg The problem is that after an hour or so, if I browse to hxxp://localhost, there's nothing but a blank page until IIS 7 is restarted. I think this has to do with PHP process recycling behavior or other things that I'm not sure about. Can you tell what's the cause of the problem?

    Read the article

  • Will BIOS boot mode Ubuntu install be able to boot when firmware "Fast Boot" is "Ultra Fast"?

    - by Pro Backup
    I have an ASRock mainboard with UEFI BIOS P1.50 02/14/2014. The firmware "Fast Boot" option is set to "Fast", and Boot Option #1 is set to "AHCI P4: OCZ-VERT...": this is BIOS, not UEFI, boot. This boot disk has an MBR partitioning scheme (# parted -l | grep Partition\ Table:). Therefore Ubuntu 14.04 is installed in BIOS/CSM (Grub-PC) mode. The Ubuntu boot process ends in a text console (no GUI). There is no external graphics card in use. The stock Ubuntu kernel is replaced with Ubuntu-supplied mainline 3.16.0-031600rc6-generic. dmesg outputs lines containing BIOS, like:

        SMBIOS 2.7 present
        Calgary: detecting Calgary via BIOS EBDA area
        Calgary: Unable to locate Rio Grande table in EBDA - bailing!
        [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        BIOS EDD facility v0.16 2004-Jun-25, 0 devices found

    The ASRock BIOS itself displays this help text for "Ultra Fast - Fast Boot": "Ultra Fast mode is only supported by Windows 8 and the VBIOS must support UEFI GOP if you are using an external graphics card. Please notice that Ultra Fast mode will boot so fast that the only way to enter this UEFI Setup Utility is to Clear CMOS or run the Restart to UEFI utility in Windows."

    Assumptions:

      - I suspect that after changing the UEFI setting "Fast Boot" to "Ultra Fast", the machine will no longer boot into Ubuntu's console.
      - I expect that when first exchanging "Grub-pc" with "Grub-efi", the machine will still be able to boot to a grub menu (thus allowing me to change the "Fast Boot" setting back to "Fast" without clearing CMOS).

    Are these two "Fast Boot" assumptions correct, and/or may I expect Ubuntu 14.04 running mainline kernel 3.16rc6 and Grub-efi to still boot to console after enabling UEFI Ultra Fast Boot?

    Read the article

  • Remove Live ID authentication from user account

    - by slugster
    I've just run into a really annoying issue with Windows 8.1: it seems I cannot remove the need to use Live ID credentials from an account without completely deleting that account. I know the process to do it: use the Disconnect link from the Accounts > Your account screen. The trouble comes when you get to the Switch to a local account screen: it will not let you enter the current account for the user name; instead you must enter a new one, thus creating a new user account. Can I revert back to using just a local login without having to recreate the account? It seems quite backwards that I have to recreate the account, as deep down the only change required is which credential provider is used to authenticate the login. (Note that this Live ID linkage was created by using the Windows Store, not as a result of an upgrade from 8 to 8.1.)

    Read the article
