Search Results

Search found 10463 results on 419 pages for 'task tracking'.


  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts on how to implement them correctly using a scheduler. Just a quick recap:
    Standard Behavior Tree
    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.
    Event Driven BT
    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be at most 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic priority implementations) will always be ordered in depth-first order (is this right?).
    Some requirements I think need to be guaranteed by a correct implementation (I'm not sure though) are: the result of the traversal should be independent of which implementation strategy is used, and the traversal result must be deterministic. I'm struggling to guarantee both in the case of parallel nodes. Here's an example:
    Parallel_1
    -->Sequence_1
    ---->leaf_A
    ---->leaf_B
    -->leaf_C
    Considering a FIFO policy in the scheduler, before the leaf_A node terminates the tasks in the scheduler are: P1(suspended), S1(suspended), leaf_A(running), leaf_C(running). When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue becomes: P1(suspended), S1(suspended), leaf_C(running), leaf_B(running). In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C. So I have a couple of questions: have I understood correctly how event-driven BTs work? How can I guarantee that the depth-first order is respected with such an implementation? Is this a common issue or am I missing something?
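    One approach that keeps the event-driven queue consistent with a classic tick (this is an illustration, not part of the original question) is to order the scheduler's ready list by a fixed pre-order index assigned to every node when the tree is built, instead of by insertion time. A re-scheduled task such as leaf_B then sorts ahead of leaf_C no matter when it was enqueued. A minimal Python sketch, assuming hypothetical Task objects that expose preorder_index, parent and update():

    ```python
    import itertools

    class Scheduler:
        """Event-driven BT scheduler whose ready list stays in depth-first order."""

        def __init__(self):
            self._queue = []               # (preorder_index, tie_breaker, task)
            self._tie = itertools.count()

        def schedule(self, task):
            # preorder_index is assigned once per node when the tree is built,
            # so it encodes the classic depth-first evaluation order.
            self._queue.append((task.preorder_index, next(self._tie), task))

        def tick(self):
            frame, self._queue = sorted(self._queue), []
            for _, _, task in frame:
                status = task.update()     # "running", "success" or "failure"
                if status == "running":
                    self.schedule(task)    # still active next frame
                else:
                    # Wake the suspended parent (observer); it may schedule the
                    # next sibling, e.g. leaf_B right after leaf_A succeeds.
                    task.parent.child_completed(task, status)
    ```

    With this ordering the scheduler stays event-driven (suspended nodes are never updated), while each frame still evaluates the running leaves in the same order a full depth-first traversal would.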

    Read the article

  • Small hiccup with VMware Player after upgrading to Ubuntu 12.04

    The upgrade process
    Finally, it was time to upgrade to a new LTS version of Ubuntu - 12.04 aka Precise Pangolin. I scheduled the weekend for this task and, despite the nickname of Mauritius (Cyber Island), it took roughly 6 hours to download nearly 2,400 packages. No problem in general, as I have spare machines to work on, and it was the weekend anyway. All went very smoothly and only a few packages required manual attention due to local modifications in the configuration. With the new kernel 3.2.0-24 it was necessary to reboot the system, and compared to the last upgrade, I got my graphical login as expected.
    Compilation of VMware Player 4.x fails
    A quick test of the installed applications - Firefox, Thunderbird, Chromium, Skype, CrossOver, etc. - reveals that everything is fine in general. Firing up VMware Player displays the familiar kernel module dialog that requires compiling the modules for the newly booted kernel. Usually this isn't a big issue, but this time I was confronted with the situation that vmnet didn't compile as expected ("Failed to compile module vmnet"). Luckily, this issue is already well known, even though "Failed to compile module vmmon" is given as the general reason, and it was very easy and quick to find the solution to this problem. In VMware Communities there are several forum threads related to this topic, and VMware provides the necessary patch file for Workstation 8.0.2 and Player 4.0.2. In case you are still on Workstation 7.x or Player 3.x there is another patch file available. After downloading, extract the file like so: tar -xzvf vmware802fixlinux320.tar.gz and run the patch script as super-user: sudo ./patch-modules_3.2.0.sh This will alter the existing installation and source files of VMware Player on your machine. As a last step, which isn't described in many other resources, you have to restart the vmware service, or for the faint-hearted, just reboot your system: sudo service vmware restart This will load the newly created kernel modules, and after that VMware Player will start as usual.
    Summary
    Upgrading any derivative of Ubuntu, in my case Xubuntu, is quickly and easily done, but it might hold some surprises from time to time. Nonetheless, it is absolutely worth going for. Currently, this patch for VMware is the only obstacle I have had to face so far, and my system feels and looks better than before. Happy upgrade!
    Resources
    I used the following links based on Google search results:
    http://communities.vmware.com/message/1902218#1902218
    http://weltall.heliohost.org/wordpress/2012/01/26/vmware-workstation-8-0-2-player-4-0-2-fix-for-linux-kernel-3-2-and-3-3/
    Update on VMware Player 4.0.3
    Please continue to read my follow-up article in case you upgraded to either VMware Workstation 8.0.3 or VMware Player 4.0.3.

    Read the article

  • Need help on a problemset in a programming contest

    - by topher
    I attended a local programming contest in my country. The name of the contest is "ACM-ICPC Indonesia National Contest 2013". The contest ended on 2013-10-13 15:00:00 (GMT +7) and I am still curious about one of the problems. You can find the original version of the problem here.
    Brief Problem Explanation: There is a set of "jobs" (tasks) that should be performed on several "servers" (computers). Each job should be executed strictly from start time Si to end time Ei. Each server can only perform one task at a time. (The complicated thing goes here.) It takes some time for a server to switch from one job to another. If a server finishes job Jx, then to start job Jy it will need an intermission time Tx,y after job Jx completes. This is the time required by the server to clean up job Jx and load job Jy. In other words, job Jy can run after job Jx if and only if Ex + Tx,y <= Sy. The problem is to compute the minimum number of servers needed to do all jobs.
    Example: For example, let there be 3 jobs:
    S(1) = 3 and E(1) = 6
    S(2) = 10 and E(2) = 15
    S(3) = 16 and E(3) = 20
    T(1,2) = 2, T(1,3) = 5
    T(2,1) = 0, T(2,3) = 3
    T(3,1) = 0, T(3,2) = 0
    In this example, we need 2 servers:
    Server 1: J(1), J(2)
    Server 2: J(3)
    Sample Input:
    Short explanation: The first 3 is the number of test cases, followed by the number of jobs (the second 3 means that there are 3 jobs for case 1), then the Si and Ei pairs, then the T matrix (whose size equals the number of jobs).
    3
    3
    3 6
    10 15
    16 20
    0 2 5
    0 0 3
    0 0 0
    4
    8 10
    4 7
    12 15
    1 4
    0 0 0 0
    0 0 0 0
    0 0 0 0
    0 0 0 0
    4
    8 10
    4 7
    12 15
    1 4
    0 50 50 50
    50 0 50 50
    50 50 0 50
    50 50 50 0
    Sample Output:
    Case #1: 2
    Case #2: 1
    Case #3: 4
    Personal Comments: The time required can be represented as a graph matrix, so I suppose this is a directed acyclic graph problem. The methods I have tried so far are brute force and greedy, but I got Wrong Answer. (Unfortunately I don't have my code anymore.) It could probably be solved by dynamic programming too, but I'm not sure. I really have no clear idea on how to solve this problem, so a simple hint or insight would be very helpful to me.
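    Not part of the original question, but since a hint was requested: with the condition Ex + Tx,y <= Sy the "can follow" relation forms a DAG (edges only go from earlier-starting to later-starting jobs), and the minimum number of servers is a minimum path cover of that DAG, i.e. N minus the size of a maximum bipartite matching. A sketch using Kuhn's augmenting-path matching, which reproduces the sample answers (2, 1, 4):

    ```python
    def min_servers(S, E, T):
        """Minimum path cover of the 'can follow' DAG = N - maximum matching."""
        n = len(S)
        # can_follow[x][y]: job y may run on the same server right after job x
        can_follow = [[x != y and E[x] + T[x][y] <= S[y] for y in range(n)]
                      for x in range(n)]
        match_to = [-1] * n               # match_to[y] = predecessor matched to y

        def augment(x, seen):
            for y in range(n):
                if can_follow[x][y] and not seen[y]:
                    seen[y] = True
                    if match_to[y] == -1 or augment(match_to[y], seen):
                        match_to[y] = x
                        return True
            return False

        matching = sum(augment(x, [False] * n) for x in range(n))
        return n - matching

    # First sample case from the post; expected answer is 2.
    print(min_servers([3, 10, 16], [6, 15, 20],
                      [[0, 2, 5], [0, 0, 3], [0, 0, 0]]))
    ```

    Brute force and greedy tend to fail here because the choice of which job follows which interacts globally; the matching formulation captures exactly that.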

    Read the article

  • Think Before You Leap - Life is Dangerous for Change Agents

    - by technodrone
    So you want to introduce agile methods to your team... The following are some "lessons learned" when from someone who advocated agile/scrum to a group that was not ready for it. "Change agents, in my experience, face negative consequences. Sometimes, most of the time at the beginning, it's painful. This is the question you might have to ask yourself. Do you want to be a developer in scrum project or do you want be a scrum master managing the process? I think with proper mentoring/training, you can become good scrum master. But is that what you want? if yes, you can go ahead, take the training. if you want to be a developer, you may not need to be certified  as scrum master. You can just pick up from a book such as Mike Cohn new book Succeeding with Agile, I am reading it now. It's good. In my experience, I did waste my resources by trying to change the culture. It cost me lot. Instead, I should have focused on technical practices that are core to agile. Then look for teams that are good at agile. I would have saved lot of energy, and time. Try baby steps first yourself in the company, and next with the team, starting with technical practices like writing unit tests, SOLID principles, patterns, refactoring, continuous integration, pairing, and peer code reviews. These have inherent pull that can bring collaboration from a team.  Once you see team adaption in core practices, then you can introduce scrum concepts like user stories/task board etc.  This idea of Leading by example seems to be working for most of the agile folks. You can pitch core practices to the manager, and the team, and start showing them how you are doing.  You can put a road map for agile adaption and you can pitch to your manager. I would include need for scrum master training as part of the road map. " I thought about his advice for a couple of weeks and read about the pitfalls of technical debt and the team not having prior awareness of agile methods. The more I read and think about it the more I think he was right.  What do you think?

    Read the article

  • Develop web site from existing software or cherry pick and use a web framework?

    - by erisco
    A small team and I are tasked with developing a web site. The client has referenced a particular open source project (we'll call it X) when describing some of the features. Because of this, the team wants to start with X and adapt it to satisfy the client. I have looked at X and its code and, in my opinion, it would be unwise. However, my experience is limited, and could really benefit from the insights of others so that I can figure out what I should be asserting as the right direction for the team. My red flags are going up and this is why. X was developed in the earlier days of PHP; 500 line blocks of code are the norm; global variables are abundant; giant switch cases are the norm for switching between which page is shown. There is no clear mapping between URL and where the code for that page sits. From a feature-set standpoint, X is actually software specialized for a different task and has dozens of features we don't need or have use for that come as core assumptions. We will be unable to adapt X through its plugin system. That said, there are a few features which can be mapped, with some modification, to suit our purposes. I believe this is the attraction the team feels. I would feel comfortable if, instead of using X directly, we lifted what is salvageable and useful to us. We can then use that code, and the same 3rd party libraries X is using, in a new code base built on top of a PHP web framework (particularly Agavi, so you understand what I mean by 'web framework'). The web framework gives us a strong MVC structure and provides the common facilities for web development, or adapters to work with 3rd party libraries that do so. We will also have a clean slate feature-wise to work from, which means we can work additively instead of subtractively. Because the code base is better structured, and contains none of what we don't need, it will be easier to document, which is a critical requirement of our client. So to summarize, the team wants to use X, whereas I want to take the bits we can from X and use a web framework instead. I want to bounce this opinion off of other's experiences so that I can be more informed. Thanks for your insight.

    Read the article

  • Eloqua Experience 2013: Mystique, Modern Marketing and Masterful Engagement

    - by Mike Stiles
    The following is a guest post from Erick Mott, a social business leader at Oracle Eloqua. There’s a growing gap between 20th century marketing and a modern marketing way of doing business. I can’t think of a better example of modern marketing in action than what more than 2,000 people experienced in San Francisco at #EE13; customer-obsession, multichannel content, and real-time engagement all coming together at one extraordinary event. This was my first Eloqua Experience as a new Oracle Eloqua employee. In weeks prior, I heard about the mystique but didn’t know what to expect. What I’ve come to understand with more clarity is everything we do revolves around customer success, and we operate and educate at all times with these five tenets in mind: 1. Targeting: Really Know Your Buyer 2. Engagement: Create a 1:1 Relationship 3. Conversion: Visualize Guided Thinking 4. Analysis: Learn What’s Working 5. Marketing Technology: Enable and Extend the Cloud Product News from Eloqua Experience 2013 We made some announcements that John Stetic, VP of Products, Oracle Eloqua covers in this brief ‘Modern Marketing Minute’ video recorded after Wednesday’s keynote; summarized below, too: Oracle Eloqua AdFocus: While understanding the impact of a specific marketing channel was formerly relegated to marketers’ wish lists, the channels we now focus on are digital, social, and mobile. AdFocus gives marketers a single platform to dynamically create, manage and measure display ads alongside owned and earned media. AdFocus enables marketers to target only key accounts or prospects you want to reach with display ads, as well as provide creative content or personalized ad copy based on their persona and activities. Oracle Eloqua Profiler: The details of what we now know about customers have expanded into a universal customer profile, which can be used to create highly targeted segments. Marketers now can take data that’s not even stored in Eloqua to help targeted and score prospects for a complete, multichannel view of the customer. Profiler gives sales reps one, detailed view of the prospect to extend views beyond Oracle Eloqua asset activity (emails, forms, page views) to any external assets stored in Oracle Eloqua. Marketing Resource Management: New capabilities create more secure and controlled access to marketing resources and data. New integrations provide greater insight into campaign resources and management through a central marketing calendar and simplify resource management. Integrated Sales and Marketing Funnel: An integrated sales and marketing funnel view gives marketing and sales users, cross-functional teams, and executive management a consistent and clear view of pipeline performance. It also quickly provides users with historical metrics across different time spans and conditions. Eloqua AppCloud: More than 20 new AppCloud partners have been added to the community, which now includes 100+ apps. Eloqua AppCloud now provides modern marketers with an even broader range of marketing applications that help expand and enrich sales and marketing efforts; easily accessible in the Topliners Community. Social Capabilities: Recent integration between Oracle Eloqua and Oracle Social Relationship Management (SRM) deliver a comprehensive, scalable and integrated modern marketing solution. New capabilities include better tracking of social activities for a more complete customer profile. Engage Facebook custom audiences with AdFocus to deliver ads and meaningful experiences through trusted social networks. 
Biggest and Best Eloqua Experience. There’s a lot of talk in the industry about the Marketing Cloud. At Oracle Eloqua, we have been on a mission of delivering the most advanced and integrated modern marketing technology on the planet. It’s not just a concept but reality with proven execution, as seen first-hand this week in San Francisco. In this video, Kevin Akeroyd, SVP of Oracle Eloqua, provides some highlights of what made this year’s Eloqua Experience, exceptional, including Steve Woods’ presentation about the journey of modern marketers and Andrea Ward’s conversation with Vince Gilligan, creator of the Breaking Bad television series. The 2013 Markie Awards The Oracle Eloqua Marketing Cloud was best exemplified for me as 19 Markies were awarded to customers for their exceptional creativity and results as modern marketers. Wow, what a night to remember with so many committed and talented people working to create an extraordinary experience! To learn more about how to become a modern marketer, check out these resources. We look forward to seeing you next year at Eloqua Experience. More on Erick: 20 years experience at Oracle, Ektron, Sitecore, Lyris, Habeas, Nokia, creatorbase, Mark Monitor, Cisco Systems, GlobalFluency, Sun Microsystems, Philips NV, Elm Products and CBS TV. Patent holder with agency, Fortune 500, media, and startup company expertise. @mikestiles

    Read the article

  • Loose Coupling and UX Patterns for Applications Integrations

    - by ultan o'broin
    I love that software architecture phrase loose coupling. There's even a whole book about it. And, if you're involved in enterprise methodology, you'll know just how important loose coupling is to the smart development of applications integrations too. Whether you are integrating offerings from the Oracle partner ecosystem with Fusion apps or working on applications coexistence scenarios, loose coupling enables the development of scalable, reliable, flexible solutions, with no second-guessing of technology. Another great book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, tells us about the loose coupling benefits of reducing the assumptions that integration parties (components, applications, services, programs, users) make about each other when they exchange information. Eliminating assumptions applies to UI development too. The days of assuming it's enough to hard-code a UI with linking libraries calling code on a desktop PC for an office worker are over. The book predates PaaS development and SaaS deployments, and was written when web services and APIs were emerging. Yet it calls out how using middleware as an assumptions-dissolving technology "glue" is central to applications integration. Realizing integration design through a set of middleware messaging patterns (messaging in the sense of asynchronously communicating data) that enable developers to meet the typical business requirements of enterprises requiring integrated functionality is very Fusion-like. User experience developers can benefit from the loose coupling approach too. User expectations and work styles change all the time, and development is now about integrating SaaS through PaaS. Cloud computing offers a virtual pivot where a single source of truth (customer or employee data, for example) can be experienced through different UIs (desktop, simplified, or mobile), each optimized for the context of the user's world of work and task completion. Smart enterprise applications developers, partners, and customers use design patterns for user experience integration benefits too. The Oracle Applications UX design patterns (and supporting guidelines) enable loose coupling of the optimized UI requirements from code. Developers can get on with the job of creating integrations through web services, APIs and SOA without having to figure out design problems about how UIs should work. Adding the already user-proven UX design patterns (and supporting guidelines) to your toolkit means ADF and other developers can easily offer much more than just functionality and be super productive too. Great-looking application integration touchpoints can be built with our design patterns and guidelines too, for a seamless applications UX. One of Oracle's partners, Innowave Technologies, used a loose coupling architecture and our UX design patterns to create an integration for a customer that was scalable, cost effective, and fast to develop, and kept users productive while paving a roadmap for customers to keep pace with the latest UX designs over time. Innowave CEO Basheer Khan, a Fusion User Experience Advocate, explains how to do it on the Usable Apps blog.

    Read the article

  • Is there a usage count for packages or programs?

    - by math
    Motivation: I want to remove applications I do not use in order to speed up my package processing tasks like dist upgrades and regular updates, but also to save disk space, among other reasons. I know this is a complex topic, so first I will ask my question and second I will give some answers I have already found.
    Question: How do I find out which packages I have not used at all? For example, I always use VLC, so I could remove the totem package. (Which I could have used some day, yes.) Of course package dependencies could force me to have programs installed which I will never use.
    Notes:
    - Find the packages which consume much space via synaptic: select "Status" in the lower left, select "Installed" in the upper left, sort the column on "size" in the upper right. Then you can decide which big packages you really need.
    - Use aptitude autoremove.
    - Use ubuntu-tweak's Janitor for removing old kernel packages, old configs, apt-cache entries, etc.
    - Manually search for applications for a given task that you usually solve with your standard app, e.g. movie player, music player, office program, browser, etc. (BTW: this is what I want help with in my question.)
    - When removing packages I always favour "apt-get purge" over "aptitude remove --purge", as aptitude will often also remove essential packages due to package dependencies. E.g. when removing "evolution" (as I use thunderbird) aptitude wants to remove "ubuntu-desktop" and 756 other packages as well, while apt-get just removes evolution and its helper packages like evolution-common.
    - The Ubuntu lens gives me the most recently used applications, which are candidates for keeping :)
    - Employ deborphan, as I read in this related answer: How do I clean up my harddrive?
    - I should certainly keep essential packages: Keep only essential packages.
    This question is pretty much a duplicate of "How to see what installed packages I have never used for cleaning purposes", but that covers only a few aspects. However, one answer suggests a program called unusedpkg, but the link seems to be down. There is also a program called Kleen (http://code.google.com/p/kleen/) but it won't compile in 11.10. I hacked it to compile, but the results are unusable: for example the g++ package was marked as not used for 203, but I had actually used it seconds ago to compile Kleen itself ;) So don't use this tool. On http://wiki.debian.org/DebianPackageInformation I read that the package popularity-contest will produce log files with usage statistics. Unfortunately I didn't enable the popularity contest, so I can't find this log file.
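    Not part of the original question, but one rough heuristic (my own sketch): on systems where access times are still recorded, flag packages none of whose binaries have been read recently. Note that relatime/noatime mounts make atime unreliable - the Kleen result mentioned above shows exactly that failure mode - so treat the output as a starting point, not an answer. The script only shells out to dpkg-query and dpkg -L:

    ```python
    #!/usr/bin/env python3
    """Rough heuristic: flag packages whose binaries have not been read lately."""
    import os
    import subprocess
    import time

    DAYS = 90
    cutoff = time.time() - DAYS * 86400

    listing = subprocess.check_output(["dpkg-query", "-W"], text=True)
    packages = [line.split("\t")[0] for line in listing.splitlines()]

    for pkg in packages:
        try:
            files = subprocess.check_output(["dpkg", "-L", pkg], text=True).splitlines()
        except subprocess.CalledProcessError:
            continue   # e.g. packages that are removed but not purged
        binaries = [f for f in files if f.startswith(("/usr/bin/", "/usr/games/"))]
        if binaries and all(os.path.isfile(b) and os.path.getatime(b) < cutoff
                            for b in binaries):
            print(pkg)   # no binary of this package accessed in the last DAYS days
    ```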

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don't even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:
    Better Review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
    Always up to date. The attachment becomes a fork instead of an always up-to-date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to share an attachment instead of a link.
    Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser or my collection of links in my OneNote or from the recent/pinned links of the office app on my task bar, etc.
    Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you'd prefer not to have access to it, the location of the document can be protected with specific access control.
    Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions.
    Enable Discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
    Save on storage. This doesn't apply to me given my opening statement, but if your company does have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy).
    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job1. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en-masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary? 1For the record, here's a quick summary of the various standard directories that should be used instead of "Documents": RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here though, because they slow down login/logout in such environments. LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed. ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network. GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble. "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
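    As an illustration of the footnote (my addition, not the original author's): a program can resolve these locations at run time instead of dumping files into Documents. The sketch below takes a shortcut and reads the Windows environment variables APPDATA, LOCALAPPDATA and PROGRAMDATA; the more correct route is the SHGetKnownFolderPath API (or Environment.GetFolderPath in .NET), and the application name is hypothetical:

    ```python
    import os
    import tempfile

    APP = "ExampleApp"   # hypothetical application name

    def data_dirs():
        """Map each kind of data to the folder the post recommends (Windows only)."""
        return {
            "roaming settings":   os.path.join(os.environ["APPDATA"], APP),
            "local/large data":   os.path.join(os.environ["LOCALAPPDATA"], APP),
            "machine-wide data":  os.path.join(os.environ["PROGRAMDATA"], APP),
            "caches, temp files": os.path.join(tempfile.gettempdir(), APP),
        }

    if __name__ == "__main__":
        for kind, path in data_dirs().items():
            print(f"{kind:18} -> {path}")
        # The user's Documents folder is reserved for files the user explicitly
        # chose to save there, e.g. via a Save dialog - never for app data.
    ```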

    Read the article

  • Anticipating JavaOne 2012 – Number 17!

    - by Janice J. Heiss
    As I write this, JavaOne 2012 (September 30-October 4 in San Francisco, CA) is just over a week away -- the seventeenth JavaOne! I’ll resist the impulse to travel in memory back to the early days of JavaOne. But I will say that JavaOne is a little like your birthday or New Year’s in that it invites reflection, evaluation, and comparison. It’s a time when we take the temperature of Java and assess the world of information technology generally. At JavaOne, insight and information flow amongst Java developers like no other time of the year.This year, the status of Java seems more secure in the eyes of most Java developers who agree that Oracle is doing an acceptable job of stewarding the platform, and while the story is still in progress, few doubt that Oracle is engaging strongly with the Java community and wants to see Java thrive. From my perspective, the biggest news about Java is the growth of some 250 alternative languages for the JVM – from Groovy to Jython to JRuby to Scala to Clojure and on and on – offering both new opportunities and challenges. The JVM has proven itself to be unusually flexible, resulting in an embarrassment of riches in which, more and more, developers are challenged to find ways to optimally mix together several different languages on projects.    To the matter at hand -- I can say with confidence that Oracle is working hard to make each JavaOne better than the last – more interesting, more stimulating, more networking, and more fun! A great deal of thought and attention is being devoted to the task. To free up time for the 475 technical sessions/Birds of feather/Hands-on-Labs slots, the Java Strategy, Partner, and Technical keynotes will be held on Sunday September 30, beginning at 4:00 p.m.   Let’s not forget Java Embedded@JavaOne which is being held Wednesday, Oct. 3rd and Thursday, Oct. 4th at the Hotel Nikko. It will provide business decision makers, technical leaders, and ecosystem partners important information about Java Embedded technologies and new business opportunities.   This year's JavaOne theme is “Make the Future Java”. So come to JavaOne and make your future better by:--Choosing from 475 sessions given by the experts to improve your working knowledge and coding expertise --Networking with fellow developers in both casual and formal settings--Enjoying world-class entertainment--Delighting in one of the world’s great cities (my home town) Hope to see you there! Originally published on blogs.oracle.com/javaone.

    Read the article

  • Learning by doing (and programming by trial and error)

    - by AlexBottoni
    How do you learn a new platform/toolkit while producing working code and keeping your codebase clean? When I know what I can do with the underlying platform and toolkit, I usually do this:
    - I create a new branch (with GIT, in my case)
    - I write a few unit tests (with JUnit, for example)
    - I write my code until it passes my tests
    So far, so good. The problem is that very often I do not know what I can do with the toolkit because it is brand new to me. I work as a consultant so I cannot have my preferred language/platform/toolkit. I have to cope with whatever the customer uses for the task at hand. Most often, I have to deal (often in a hurry) with a large toolkit that I know very little about, so I'm forced to "learn by doing" (actually, programming by "trial and error") and this makes me anxious. Please note that, at some point in the learning process, I usually already have:
    - read one or more five-star books
    - followed one or more web tutorials (writing working code a line at a time)
    - created a couple of small experimental projects with my IDE (IntelliJ IDEA at the moment; I use Eclipse, Netbeans and others as well)
    Despite all my efforts, at this point I usually have just a coarse understanding of the platform/toolkit I have to use. I cannot yet grasp each and every detail. This means that each and every new feature that involves some data preparation and some non-trivial algorithm is a pain to implement and requires a lot of trial and error. Unfortunately, working by trial and error is neither safe nor easy. Actually, this is the phase that makes me most anxious: experimenting with a new toolkit while producing working code and keeping my codebase clean. Usually, at this stage I cannot use the Eclipse Scrapbook because the code I have to write is already too large and complex for this small tool. In the same way, I can no longer use an independent small project for my experiments because I need to try the new code in place. I can just write my code in place and rely on GIT for a safe bail-out. This makes me anxious because this kind of intertwined, half-ripe code can rapidly become incredibly hard to manage. How do you face this phase of the development process? How do you learn by doing without making a mess of your codebase? Any tips & tricks, best practices or something like that?
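    One technique that fits this phase (my suggestion, not from the original question) is to keep a separate suite of "learning tests": tiny tests that pin down what you currently believe an unfamiliar API does, committed on the experiment branch but never mixed into production code. Illustrated here in Python's unittest for brevity; the same idea works with JUnit:

    ```python
    import re
    import unittest

    class RegexLearningTests(unittest.TestCase):
        """Executable notes about an unfamiliar API.

        These tests live in their own module and never ship with production
        code; if a library upgrade changes behaviour, the notes fail and tell
        me exactly what changed.
        """

        def test_findall_returns_groups_not_full_matches(self):
            # Something learned by trial and error: with one capture group,
            # findall returns the group text, not the whole match.
            self.assertEqual(re.findall(r"a(b+)", "abb abbb"), ["bb", "bbb"])

        def test_split_keeps_captured_delimiters(self):
            self.assertEqual(re.split(r"(,)", "1,2"), ["1", ",", "2"])

    if __name__ == "__main__":
        unittest.main()
    ```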

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 - Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was how the Cone of Uncertainty looks in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones - one for the waterfall world and one for the agile world.
    Cone of Uncertainty
    The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF). Cone of Uncertainty. Taken from the MSDN Library page Estimating. The closer we are to the project's end, the more accurate our estimates are. When the project ends we know exactly how much time every task took. As we can see, the cone is wide at the point where we usually have to give our estimates - somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates - they just do it every time and don't learn a thing later. This cone is inevitable for software development, and the agile methodologies that try to make the software world better are also able to change the cone.
    Cone of Uncertainty in agile projects
    Agile methodologies usually try to avoid BDUF, waterfalls and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes - don't forget our dear customers either. Agile methodologies treat development as creative work and focus on making it better. One main trick is to focus on small and short iterations. What does this mean? We are estimating functionalities that are easier for us to understand and implement, and therefore our estimates are more accurate. As we move from a few big iterations to many small iterations we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used. Cone of Uncertainty in agile projects. We have more cones to live with, but they are way smaller. I don't have any numbers to put here because I couldn't find any, but this "chart" should still give you the point: more, smaller iterations cause more but much smaller cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't often grow above our heads. One more note: consider that both of the charts given in this posting describe exactly the same phase of the same project - just the uncertainties are different.

    Read the article

  • Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine -- but it is very slow to react in certain situations. Helper questions Could it be that changing away from Bonobo has to do with slowing-down? There might be some trouble with the new engine and "asynchronous actions". What to do about it? Are there e.g. any configuration files? I want to get the previous "working mood" back. How can I speed this thing up? Different scenarios when sending a mail, the composer window hangs there inactive for a couple of seconds, everything grayed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions, the composer window was closing pretty fast, and one could see the message being stored into the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current, where I cannot do anything in the application until the window closes. switching between modules. Coming from mail and switching to the address book takes a couple of seconds. Same for switching to the calendar. I read about different "possible causes" and tried a few things: I only have 3 local address books, so no networking should be involved here. To make sure, I switched to offline mode and then tried to access the address book. No noticeable difference. I use 3 Google Calendars. Switching to offline mode made a minor difference, but so minor that it also could be "imagination" since one might have expected this in this case according to some reports, disabling the tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items stored here) Maybe I should also mention that I'm using the KDE4 desktop (so no Unity or Gnome, though both is installed on the computer). And I did not have this issue before I updated to 12.04.

    Read the article

  • Is my work on a developer test being taken advantage of?

    - by CodeWarrior
    I am looking for a job and have applied to a number of positions. One of them responded, I had a pretty lengthy phone interview (perhaps an hour +) and they then set me up with a developer test. I was told that this test is estimated to take between 6 and 8 hours and that, provided it met with their approval, I could be paid for my work on it. That gave me some pause, but I endeavored. The developer test took place on a VM accessed via RDP. The task was to implement a search page in a web project that requests data from the server, displays it on the screen in a table, has a pretty complicated search filtering scheme (there are about 15 statuses and when sending the search to the server you can search by these statuses) in addition to the string/field search. They want some SVG icons to change color on certain data values, they want some data to be represented differently than how it is in the database, etc. Loooong story short, this took one heck of a lot longer than 6-8 hours. Much of it was due to the very poor VM that I was running on (Visual Studio 2013 took 10 minutes to load, and another 15 minutes to open the 3 GB ginormous solution). After completing, I was told to commit my changes to source control... Hmm, OK. I get an email back that they thought that the SVGs could have their color changed differently, they found a bug in this edge-case, there was an occasional problem with this other thing that I never experienced, etc. So I am 13-14 hours into this thing now, and I have to do bug fixes. I do them, and they come back with some more. This is all apparently going into a production application. I noticed some anomalies in the code that was already in there where it looked like other people had coded all of one functionality and not anything else that I could find. Am I just being used for cheap labor? Even if they pay me the promised 50 dollars and hour for 6 hours, I have committed like 18 hours to this thing now. If I bug fix all of the stuff they keep coming up with, I will have worked at least 16 hours for free. I have taken a number of developer tests. I have never taken one where I worked on code that was destined for production. I have never taken one where I implemented a feature that was in the pipeline for development (it was planned for, and I implemented it through the course of the test). And I have never taken one that took 4 rounds and a total of 20+ hours. I get the impression that they are using their developer test to field some of the functionality, that they don't have time for in their normal team, on the cheap. Also, I wouldn't mind a 'devtest' tag.

    Read the article

  • Enterprise with eyes on NoSQL

    - by thegreeneman
    Since joining Oracle a few months back, I have had the fortune of being able to interact with a number of large enterprise organizations and discuss their current state of adoption of NoSQL database technology. It is worth noting that a large percentage of these organizations do have some NoSQL use and have been steadily increasing their understanding of its applicability for certain data management workloads. Through those discussions I've learned that one of the biggest issues confronting enterprise adoption of NoSQL databases seems to be the lack of standards for access, administration and monitoring. This was not so much of an issue with the early adopters of NoSQL technology because they employed a highly DevOps-centric approach to application deployment, leaving a select few highly qualified developers with the task of managing the production of the system that they designed and implemented. However, as NoSQL technology moves out of the startup world and into the hands of larger corporate entities, developers with a broad skill set who are capable of both development and IT-type production management are in short supply and quickly get moved on to new projects, often moving to different roles within the company. This difference between the way smaller, more agile startups operate and the way more established companies operate is revealing a gap in the NoSQL technology segment that needs to be addressed. This is one of the places where a company such as Oracle has a leg up on the NoSQL database front. A combination of having gone through a past database maturation process, together with a vast set of corporate relationships that have grown hand in hand to solve these types of issues, puts Oracle in a great place to lead the way in closing the requirements gap for NoSQL technology. Oracle's understanding of the needs specific to mature organizations has already made its way into Oracle's NoSQL Database offering with features such as: one-click cluster deployment with visual topology planning, standards-based monitoring protocols such as SNMP, support for data access for reporting via standard SQL, and integration with emerging standards for data access such as MapReduce. Given the exciting developments we're driving in the Oracle NoSQL Database group, I will have a lot more to say about this topic as we move into the second half of the year.

    Read the article

  • How do you take into account usability and user requirements for your application?

    - by voroninp
    Our team supports a BackOffice application: a mix of WinForms and WPF windows (about 80, including dialogs). Really a kind of Swiss Army knife. It is used by developers, tech writers, security developers, and testers. The requirements for new features come quite often, and sometimes we play Wizard of Oz to decide which GUI our users like the most. And it usually happens (I admit it can be just my subjective interpretation of reality) that one tiny detail giving the flavor of good usability to our app requires a lot of time. This time is spent on 'fighting' with the GUI framework, making it act like we need. And it is very difficult to make estimations for this type of task (at least for me and most members of our team). Scrum poker is no help either. Management often considers this usability perfectionism to be a waste of time. On the other hand, the accumulated effect of features where each has some little usability flaw frustrates users. But the same users want frequent releases and instant bug fixes. Hence, there is no way to get positive feedback: there is always somebody who is snuffy. I constantly feel as if I am competing with ourselves: more features - more bugs/tasks/architecture. We are trying to outrun the cart we are pushing. New technologies arrive and some of them can potentially help to improve the design or decrease task implementation time, but these technologies require learning, prototyping and so on. Well, that was a story. And now the question: How do you balance time pressure, product quality, and user and management satisfaction? When and how do you decide to leave a problem with a not perfect but to some extent acceptable solution, and how often do you make these decisions? How do you deal with your own satisfaction? What are your priorities? P.S. Please keep in mind, we are a BackOffice team; we have neither a dedicated technical writer nor a GUI designer. The tester joined us recently. We have much work to do and much freedom concerning 'how'. I like it because it fosters creativity, but I don't want to become a too nerdy perfectionist.

    Read the article

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application:
    - A Windows Service that monitors the recording folder. Once a new recording that meets specific criteria is completed, it will call several 3rd-party CLI applications to remove the commercials and re-encode the video into a more hard-drive-friendly format.
    - A controller GUI to modify settings of the service, specifically to add new shows to watch for and to modify parameters for the CLI applications.
    - A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows service, except manually on specific files instead of automatically based on specific criteria.
    (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.) Since the 1st and 3rd bullets share similar functionality, my design plan is to pull the common functionality into a separate library shared by both parts of the application, but these 2 components do not need to interact otherwise. The 2nd and 3rd bullets also seem to share some common functionality: both will have a GUI, and both will have to help define similar parameters (one to send to the service and the other to send directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (bullet #3) really does not need to interact with the service at all, except possibly to share a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate. The controller GUI (2nd bullet) is where I am stuck at the moment. Do I just roll this functionality (allowing user interaction with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows Service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction, maybe just adding a new task once in a while and occasionally tweaking a parameter, but when something is changed, I do expect the Windows service to immediately use the new settings.

    Read the article

  • Why is everything crashing?

    - by Kopkins
    I've been using Ubuntu for a while now and I love it, I wouldn't think of using another OS unless I can't fix this issue. The install I'm on is only around a month and a half old. I'm running 12.04 64bit on a 8,1 MBP. Up until around 2 weeks ago everything was running smoothly. Around then applications started crashing and weird things started happening. At first I thought it was just certain applications. The first thing to start giving me trouble was compiz. Occasionally compiz will stop decorating windows and lost many other functionalities. running compiz --replace fixes this, but I don't feel like doing it usually once a day. The other thing with this is that after running compiz --replace, my conky window gets lost somewhere and so I run killall conky && conky -c .conkyrc. But this isn't with just a couple applications, it seems like it is proliferating through my system. Last week fontforge started crashing while doing whatever task. So I ended up unable to finish what I was working on to completeness. Didn't find a fix. Today rhythmbox started crashing. Whenever I try to play anything, Rhythmbox becomes unresponsive and needs to be forced to close. When I try to do certain things with the disk utility, it crashes. I get the Ubuntu has experienced an internal error message much more often than I would like. Frequently applications stop appearing in the launcher. Wine almost never does anymore. After not being active for a little while, thunderbird can only fetch my mail after restarting wireless, sudo rmmod b43 && sudo modprobe b43 Occasionally some of my startup apps don't start. What is my best option here? Could they be bugs? I don't want to submit a ton of vague bug reports. Reinstall? switch OS? Thank you to anyone who responds. Kopkins

    Read the article

  • ATG Live Webcast Nov. 8th: Advanced Management of EBS with Oracle Enterprise Manager

    - by Bill Sawyer
    The task of managing and monitoring Oracle E-Business Suite environments can be very challenging. The Application Management Pack plug-in is part of Oracle Enterprise Manager 12c Application Management Suite for Oracle E-Business Suite. The Application Management Pack plug-in is designed to monitor and manage all the different technologies that constitute Oracle E-Business Suite applications, including midtier, configuration, host, and database management—to name just a few. Customers that have implemented Oracle Enterprise Manager have experienced dramatic improvements in system visibility, diagnostic capability, and administrator productivity. This webcast will highlight the key features and benefits of Oracle Enterprise Manager, the latest version of the Oracle Application Management Suite for Oracle E-Business Suite.
    Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager
    Date: Thursday, November 8, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenters: Angelo Rosado, Principal Product Manager, E-Business Suite ATG; Lauren Cohn, Principal Curriculum Developer, E-Business Suite ATG
    Webcast Registration Link (Preregistration is optional but encouraged)
    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103191
    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 591460967
    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP’s processes and to meet our members’ expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization) to a follow-on JSR. The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms. Continuing the momentum to move Java and the JCP forward we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more. The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow. The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged. Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348) we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • Help writing server script to ban IP's from a list

    - by Chev_603
    I have a VPS that I use as an openvpn and web server. For some reason, my apache log files are filled with thousands of these hack attempts: "POST /xmlrpc.php HTTP/1.0" 404 395 These attack attempts fill up 90% of my logs. I think it's a WordPress vulnerability they're looking for. Obviously they are not successful (I don't even have Wordpress on my server), but it's annoying and probably resource consuming as well. I am trying to write a bash script that will do the following: Search the apache logs and grab the offending IP's (even if they try it once), Sort them into a list with each unique IP on a seperate line, And then block them using the IP table rules. I am a bash newb, and so far my script does everything except Step 3. I can manually block the IP's, but that's tedious and besides, this is Linux and it's perfectly capable of doing it for me. I also want the script to be customizable so that I (or anyone else who wants to use it) can change the variables to suit whatever situation I/they may deal with in the future. Here is the script so far: #!/bin/bash ##IP LIST GENERATOR ##Author Chev Young ##Script to search Apache logs and list IP's based on custom filters ## ##Define our variables: DIRECT=~/Script ##Location of script&where to put results/temp files LOGFILE=/var/log/apache2/access.log ## Logfile to search for offenders TEMPLIST=xml_temp ## Temporary file name IP_LIST=ipstoban ## Name of results file FILTER1=xmlrpc ## What are we looking for? (Requests we want to ban) cd $DIRECT if [ ! -f $TEMPLIST ];then touch $TEMPLIST ##Create temp file fi cat $LOGFILE | grep $FILTER1 >> $DIRECT/$TEMPLIST ## Only interested in the IP's, so: sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d $DIRECT/$TEMPLIST | sort | uniq > $DIRECT/$IP_LIST rm $TEMPLIST ## Clean temp file echo "Done. Results located at $DIRECT/$IP_LIST" So I need help with the next part of the script, which should ban the IP's (incoming and perhaps outgoing too) from the resulting $IP_LIST file. I don't care if it utilizes UFW or IPTables directly, as long as it bans the IP's. I'd probably run it as a cron task. What I'm having trouble with is understanding how to use line of the result file as a seperate variable to do something like: ufw deny $IP1 $IP2 $IP3, ect Any ideas? Thanks.
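    A sketch of the missing step 3, continuing the script above (it assumes one IP per line in the results file and that the script runs as root; the ufw and iptables commands used are standard, but review the rules they create before putting this in cron):

    ```bash
    ## Step 3: block every IP in the results file
    while read -r ip; do
        [ -z "$ip" ] && continue            # skip blank lines
        # UFW route (use "ufw insert 1 ..." if deny rules must precede allows):
        ufw deny from "$ip" to any
        # ...or raw iptables instead:
        # iptables -A INPUT -s "$ip" -j DROP
    done < "$DIRECT/$IP_LIST"
    echo "Blocked $(wc -l < "$DIRECT/$IP_LIST") addresses."
    ```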

    Read the article

  • Forced to be trained [closed]

    - by steeb
    Is it OK to force employees to take a training course in order to make them sign a contract to stay in the job for X years? I'm the only developer in the company, and before this job I developed in VB6, VB.Net and other languages. There are other technicians here, but they query the database server to build reports and write small programs; they do not spend all day programming. I was hired to manage some legacy data that had to be migrated to a new system, and I used my experience to accomplish the task. I have also written some utilities and other tools. Since we went live with the new system I've been developing a lot of side programs and add-ons, and have changed its source code; I've basically been learning from it ever since, and I've become a jack of all trades whenever some feature needs to be changed or corrected. I've been at this company for a year and a half, and for the last 8 months I've been programming in the system full time. There have been times when even the implementers ask me how I accomplish certain things.

    The issue is that the company has told me and some co-workers (who do not program in the system but know basic programming and databases) to take a basic training course for the new system's platform, and when we finish it we will sign a contract to stay with the company for an unspecified time. They have also offered an unspecified better salary. I'm feeling very suspicious, because I already know the basics of the system and I don't understand why I have to take the course when everybody knows about the development work I've done. I met with my boss and told him that I shouldn't take the course, because I feel I can learn more from the daily requirements I'm given and they should save the money. But he replied that I have to take it and sign afterwards. I think they want me to stay with them for a long time, and I'd like to, but I want to stay because I feel comfortable in every aspect of the job (including salary), not because I signed a contract that forces me to take on more tasks and to say no if I get other, better job offers. What advice can you give me in this kind of situation?

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All around awesome tool, but not the only gadget in your toolbox.

    I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with the Oracle database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool 'goes away'? Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Select statements, or nine other mechanisms. Sometimes change brings an opportunity to improve the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Simply replicating your process in another tool exactly as it was done in the 'old tool' doesn't always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a 'proper' server process built on SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities.

    Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but is also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you had in mind starting out, but I had no idea I'd be a Product Manager when I started college either.

    What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?
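    To make the sidebar concrete, here is a minimal sketch of what a 'proper' server-side load could look like with SQL*Loader. It is not from the original post, and the table, columns, file names, and connect string are invented for illustration:

        #!/bin/bash
        ## Hypothetical example: bulk-load a CSV on the database server with SQL*Loader
        ## instead of driving millions of single-row inserts from a desktop tool.
        ## The table, columns, and credentials below are made up for illustration.

        ## Write a small control file describing the input format
        printf '%s\n' \
            "LOAD DATA" \
            "INFILE 'employees.csv'" \
            "APPEND" \
            "INTO TABLE employees" \
            "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'" \
            "(employee_id, first_name, last_name, hire_date DATE \"YYYY-MM-DD\")" \
            > employees.ctl

        ## Run the load; direct=true uses the direct-path engine, bypassing most SQL overhead
        sqlldr userid=scott/tiger@orcl control=employees.ctl log=employees.log direct=true

    The point isn't the exact syntax - it's that the load runs next to the data, scales, and can be scheduled and logged like any other server job.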

    Read the article

  • Organization &amp; Architecture UNISA Studies &ndash; Chap 13

    - by MarkPearl
    Learning Outcomes

    - Explain the advantages of using a large number of registers
    - Discuss the way in which compilers optimize register usage
    - Discuss the evolution of CISC machines
    - Describe the characteristics of RISC architecture
    - Discuss the RISC vs. CISC controversy
    - Describe the way in which RISC and CISC design principles can be combined

    Instruction Execution Characteristics

    To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include:

    - Operations
    - Operands
    - Procedure calls

    These three sections can be studied in depth in the textbook at pages 503 - 505. A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, three main characteristics came up to improve performance:

    - Use a large number of registers, or use a compiler to optimize register usage
    - Pay careful attention to the design of instruction pipelines
    - Use a simplified (reduced) instruction set

    The use of a large register file

    One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507 - 510 for a detailed explanation.

    Compiler-based register optimization

    When the hardware provides only a modest number of registers, it is up to the compiler to map the program's symbolic (virtual) registers onto the real ones, typically using a graph coloring approach, so that values that are live at the same time end up in different physical registers.

    Reduced Instruction Set Architecture

    There are two advantages to smaller programs:

    - Because the program takes up less memory, there is a saving in that resource (this was more compelling when memory was more expensive).
    - Smaller programs should improve performance, and this happens in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults.

    Certain characteristics are common to RISC processors:

    - One instruction per cycle
    - Register-to-register operations
    - Simple addressing modes
    - Simple instruction formats

    RISC vs. CISC

    After initial enthusiasm for RISC machines, there has been a growing realization that:

    - RISC designs may benefit from the inclusion of some CISC features
    - CISC designs may benefit from the inclusion of some RISC features

    Read the article

< Previous Page | 322 323 324 325 326 327 328 329 330 331 332 333  | Next Page >