Search Results

Search found 3096 results on 124 pages for 'scope creep'.

  • Oracle JDeveloper 11gR2 Cookbook book review

    - by Chris Muir
    I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook, published by Packt Publishing, for review. Readers of technical cookbooks will know the genre: problems that developers will hit, followed by the prescribed solutions, in this case for Oracle's Application Development Framework (ADF). Books like this distinguish themselves through excellent coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code. This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning. Each recipe has a clear introduction, and I especially enjoyed the "There's more" follow-up sections for some recipes, which lead the reader to related ideas and issues the reader really needs to be aware of. Also worthy of comment: having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before; this book gets bonus points for that. As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization." ADF is such a large and sophisticated technology that this book, with 100 recipes, barely scrapes the surface. Don't expect all your ADF problems to be solved here. In turn, there is inconsistency in the level of problems and solutions. I felt at the beginning the book was pitching itself at advanced problems (that's great for me), but then it introduces topics like building a static View Object or train. These topics, in my opinion, are fairly simple and are covered just as well by the Oracle documentation; they shouldn't have been included here. In conclusion, ADF beginners will find this book worthwhile as it will open their eyes to the wider problems and solutions required for ADF, and experts will value it if only for the fact that they can point junior programmers at the book for certain problems and say "get on with it". Is there scope for more ADF tomes like this? Yes! I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit
    ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit is actually a form submit that does not require a page refresh and only updates components in the view that are referenced from the command component's PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfFacesContext.getCurrentInstance().addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of those fields, because the full form gets submitted.
    Autosubmit
    An input component that has its autosubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form but only the component that triggers the submit, plus the components that reference it in their PartialTriggers property. For example, consider a form that has three input fields inpA, inpB and inpC, with autosubmit=true set on inpA and required=true set on inpB and inpC.
    Use case 1: running the view, entering data into inpA and then tabbing out of the field will submit the content for inpA but not for inpB and inpC. Furthermore, none of the required field settings on inpB and inpC causes an error.
    Use case 2: you change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields and thus show an error for the missing required value on inpC.
    Internally, using autosubmit=true on an input component sets the event root to just this field, which is good to have in the case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on these components to point to the input field that has autosubmit=true defined.
    Partial Submit vs. Auto Submit
    Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI component is refreshed in response. Client-side required field validation (as well as the server-side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Auto submit is a functionality on the input components and also performs a partial form submit; in addition, however, an event root is defined that narrows the scope of the submitted data and thus the components that are validated on the request. To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF
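
    A minimal markup sketch of use case 2 (my illustration, not from the original post; component names follow the example above, attribute spellings as in the ADF Faces rich client tag library):

        <af:form>
          <!-- tabbing out of inpA submits inpA plus anything referencing it in partialTriggers -->
          <af:inputText id="inpA" label="inpA" autoSubmit="true"/>
          <!-- outside inpA's event root: its required check does not fire (use case 1) -->
          <af:inputText id="inpB" label="inpB" required="true"/>
          <!-- referencing inpA pulls inpC into the event root: its required check now fires (use case 2) -->
          <af:inputText id="inpC" label="inpC" required="true" partialTriggers="inpA"/>
        </af:form>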

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us creates more and more content. Sometimes it is brand-new material and many times it is iterations of existing content, but no one would dispute that the volume of information and content is growing at an almost exponential rate. With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File Syndrome", or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor or downright wrong business decisions being made. When content or files are unavailable or incorrect within the scope of key business processes, the chance of erroneous and costly business decisions is magnified even further. For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, to one common content repository. Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page". Sounds simple, but for many organizations the proliferation of file shares, SharePoint sites, and other storage silos of content keeps the dream of a more efficient business a distant one. We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool. It's quick, easy and just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new Infographic that outlines common issues faced by companies today. Feel free to save our informative Infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever. Additionally, we hope you will take a few minutes to visit our new and informative webpages dedicated to the value of a well-connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user's perception of the "ideal" time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren't noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn't just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I'd now have had all sorts of animations, whizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed up the processes needed to complete a business procedure. Never mind the weight, the response time's great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I'm a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it's no longer the virtue it once was, and other factors such as user experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work - just as long as it doesn't go faster than my brain.

  • What shall I include in a 10-week web technologies course?

    - by Iain
    In September I will be teaching a university module on web technologies. This session will be available to 1st year (freshman) students who don't necessarily have any programming knowledge or know how the web works. In the 2nd semester I will be teaching Flash, which is my specialism, so I know exactly what I am going to teach, but in the 1st semester I will be teaching them web standards technologies: HTML, CSS, JS, jQuery, PHP and MySQL. Where I need advice is how to proportion the emphasis for each part, and which parts of each technology to cover. Another real issue I'm struggling with is how much of the bad old ways I should teach them. Do they need to know about bold as well as strong, etc.?
    UPDATE: based on your feedback I will only be teaching the latest version of everything - CSS3, HTML5 etc. I'm not sure exactly how long the semester will be, but I'm guessing about 10-12 weeks. Each session is a 2-hour lab. Obviously there's only so much I can cover in that time, and it will be up to the students to go and research this stuff properly on W3Schools etc. My ideas so far were:
    - Lesson 0 - Course intro and overview of the current tech landscape. What is out there, what will we be learning, what won't we. What is a web server, URL etc. Looking at different example websites and discussing how they work.
    - Lesson 1 - HTML basics (head, body, title, img, table, a, lists, h1, strong etc)
    - Lesson 2 - CSS for styling and layout - fonts, webfonts, float etc
    - Lesson 3 - Intro to programming JS (variables, loops, conditionals, functions)
    - Lesson 4 - More JS programming fundamentals, DOM manipulation
    - Lesson 5 - jQuery - making things fly about and look cool
    - Lesson 6 - XML and Ajax
    - Lesson 7 - PHP basics - syntax, server-side principles
    - Lesson 8 - PHP and MySQL - forms, logins, saving user info
    - Lesson 9 - don't know
    - Lesson 10 - don't know
    Please let me know if you think this is the right order, what I have missed, how to use any spare sessions, etc. Thanks :)
    UPDATE BASED ON RESPONSES: Thanks for all your responses - some great stuff. To be absolutely clear, this is not a computer science course; it is a practical module on a creative technology course. The emphasis definitely has to be on making cool things work rather than understanding how the backbone of the internet works. That can come later, if the students are interested. At the end of the module I would like the students to be able to produce a web page or pages that do something cool, using some or all of the technologies I cover. Many of these topics are of course far beyond the scope of a 2-hour session; however, I do not have the option of reducing the syllabus, so I will just have to explain what the technology does and encourage the students to research it in their own time.

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?
    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care about) specific aspects of the system we're retiring (i.e., they think it was just fine):
    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.
    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy. Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question; but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

  • Oracle PeopleSoft PeopleTools 8.53 Release Value Proposition (RVP) published

    - by Greg Kelly
    The Oracle PeopleSoft PeopleTools 8.53 Release Value Proposition (RVP) can be found at: https://supporthtml.oracle.com/epmos/faces/ui/km/DocumentDisplay.jspx?id=1473194.1
    The PeopleSoft PeopleTools 8.53 release continues Oracle's commitment to protect and extend the value of your PeopleSoft implementation, provide additional technology options and enhancements that reduce ongoing operating costs, and give the applications user a dramatically improved experience. Across the PeopleSoft product development organization we have defined three design principles: Simplicity, Productivity and Total Cost of Ownership. These development principles have directly influenced the PeopleTools product direction during the past few releases. The scope of the PeopleTools 8.53 release again builds additional functionality into the product as a result of direct customer input, industry analysis and internal feature design. New features, bug fixes and new certifications found in PeopleTools 8.53 combine to offer customers an improved application user experience, better page interaction, and greater cost-effectiveness. Key PeopleTools 8.53 features include:
    - PeopleSoft Styles and User Interaction Model
    - PeopleSoft Data Migration Workbench
    - PeopleSoft Update Manager
    - Secure by Default Initiative
    Be sure to check out the PeopleSoft Update Manager. Many other things are also happening in this time frame:
    - See the posting on the PeopleSoft Interaction Hub: https://blogs.oracle.com/peopletools/entry/introducing_the_peoplesoft_interaction_hub
    - The application 9.2 RVPs will also be published over the next few months.
    - If you haven't seen it, check out John Webb's posting on the PeopleSoft Information Portal: https://blogs.oracle.com/peoplesoft/entry/peoplesoft_information_find_it_quickly

  • Announcing Key Functional White Papers for SIM and ReIM

    - by Oracle Retail Documentation Team
    Oracle Retail has published two new documents on My Oracle Support (https://support.oracle.com) that provide partners and retailers with deeper functional information about two products: Oracle Retail Store Inventory Management (SIM) and Oracle Retail Invoice Matching (ReIM).
    Oracle Retail Store Inventory Management Item Configuration White Paper (Doc ID 1507221.1)
    There is functionality within the Store Inventory Management system related to item configuration that spans multiple concepts which apply to the application as a whole rather than to a specific area. This white paper covers numerous topics around item configuration, including:
    - Item Transaction Levels
    - Item Long Description
    - Pack Size
    - Standard Unit of Measure
    - Standard Unit of Measure Conversion
    - Pack Items
    - Simple Pack Conversion Items (Notional Packs)
    - Ranging Items
    - Item Status
    - Non-Sellable Items
    - Type-2 Item Recognition
    - UPC-E Barcodes
    - Non-Inventory Items
    - Consignment and Concession Items
    - Quick Response Codes
    Oracle Retail Invoice Matching Financial Transactions (Doc ID 1500209.1)
    This document explains the financial transactions that are posted by Oracle Retail Invoice Matching (ReIM). The scope of the document is limited to ReIM transactions only; it does not explain Retail Merchandising System (RMS), Finance, or Accounts Receivable transactions. ReIM follows the double-entry accounting standard, which works by recording the debit and the credit of each financial transaction belonging to each party involved. Each transaction means a profit to one account (debit) and a loss to another account (credit). Full invoice match processing is completed in ReIM, with payment recommendations communicated to Oracle Accounts Payable. ReIM matches merchandise orders and receipts against merchandise invoices, performing automated and manual matching as well as discrepancy-resolution processing. Matched invoices are posted to interface staging tables specifying the amount and date to pay, vendor, site ID, General Ledger Chart of Accounts (GL CoA) information, and payment terms. Other payables documents, including debit memos, credit memos and credit notes, are also interfaced to Accounts Payable through the ReIM staging tables (IM_AP_STAGE_HEAD and IM_AP_STAGE_DETAIL). For information about how ReIM engages in this processing, see the latest Oracle Retail Invoice Matching Operations Guide. Certain ReIM transactions are not interfaced to Oracle Payables but are instead interfaced to Oracle General Ledger through the IM_FINANCIAL_STAGE table. When analyzing transactions posted through the staging tables, retailers should note the transaction type, Standard/Credit, as well as the sign of the amount field. Technically, a negative sign on a credit transaction changes the transaction to a debit entry, and vice versa. This document is concerned with the financial meaning of the transactions and avoids a discussion of negative numbers in T-charts.

  • Perfect is the enemy of “Good Enough”

    - by Daniel Moth
    This is one of the quotes that I used to be against, but it is now totally part of my core beliefs: "Perfect is the enemy of Good Enough". Folks used to share this quote a lot with me in my early career, and my frequent interpretation was that they were incompetent people satisfied with mediocrity, i.e. I ignored them and their advice. (Yes, I went through an arrogance phase.) I later "grew up" and "realized" that they were missing the point, so instead of ignoring them I would retort: "Of course we have to aim for perfection, because as human beings we'll never achieve perfection, so by aiming for perfection we will indeed achieve good enough results". (Yes, I went through a smart-ass phase.) Later I grew up a bit more and "understood" that what I was really being told is to finish my work earlier and move on to other things, because by trying to perfect that one thing, another N things that I was responsible for were suffering from not getting my attention. All the things on my plate need to move beyond the line, not just one of them way over the line. It is really a statement of increasing scale and scope. To put it in other words, getting PASS grades on 10 things is better than getting an A+ with distinction on 1-2 and a FAIL on the rest. Instead of saying "I am able to do very well on these X items", it is best if you can say "I can do well enough on these X * Y items", where Y > 1. That is how breadth impact is achieved. In the future, I may grow up again and have a different interpretation, but for now, even though I secretly try to "perfect" things, I try not to do that at the expense of other responsibilities. This means that I haven't had anybody quote that saying to me in a while (or perhaps my quality of work has dropped so much that it doesn't apply to me any more - who knows :-)). Wikipedia attributes the quote to Voltaire and also makes connections to the "Law of diminishing returns" and to the "80-20 rule" or "Pareto principle": it commonly takes 20% of the full time to complete 80% of a task, while completing the last 20% of a task takes 80% of the effort. Check out the Wikipedia entry on "Perfect is the enemy of Good" and its links. Also use your favorite search engine to search and see what others are saying (Bing, Google); it is worth internalizing this in a way that makes sense to you. Comments about this post by Daniel Moth are welcome at the original blog.

  • returning a heap block by reference in c++

    - by basicR
    I was trying to brush up my C++ skills. I have two functions:
    - concat_HeapVal() returns the output heap variable by value
    - concat_HeapRef() returns the output heap variable by reference
    When main() runs it will be on the stack; s1 and s2 will be on the stack. I pass them by reference only, and in each of the functions below I create a variable on the heap and concatenate the two strings. When concat_HeapVal() is called it returns the correct output. When concat_HeapRef() is called it returns some memory address (wrong output). Why? I use the new operator in both functions, so both allocate on the heap. So when I return by reference, the heap allocation will still be VALID even when my main() stack memory goes out of scope, and it is left to the OS to clean up the memory. Right?

        #include <iostream>
        #include <string>
        using namespace std;

        string& concat_HeapRef(const string& s1, const string& s2) {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return *temp;
        }

        string* concat_HeapVal(const string& s1, const string& s2) {
            string *temp = new string();
            temp->append(s1);
            temp->append(s2);
            return temp;
        }

        int main() {
            string s1, s2;
            string heapOPRef;
            string *heapOPVal;
            cout << "String Concat Experimentations\n";
            cout << "Enter s-1 : ";
            cin >> s1;
            cout << "Enter s-2 : ";
            cin >> s2;
            heapOPRef = concat_HeapRef(s1, s2);
            heapOPVal = concat_HeapVal(s1, s2);
            cout << heapOPRef << " " << heapOPVal << " " << endl;
            return -9;
        }
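
    As a side note on the code above (my addition, not part of the original question): streaming a string* through cout prints the pointer value via the void* overload, so the text requires a dereference, and each new string() needs an explicit delete. A minimal usage sketch, assuming the two functions above:

        string& ref = concat_HeapRef(s1, s2);   // bind a reference; assigning to a plain string would copy and leak
        string* val = concat_HeapVal(s1, s2);

        cout << ref << " " << *val << endl;     // dereference val to print the text, not the address

        delete &ref;   // release the heap string behind the reference
        delete val;    // release the heap string behind the pointer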

  • Oracle Could Lead In Cloud Business Apps Within Year

    - by Richard Lefebvre
    Below is a reprint of an article written by Pete Barlas, Investor's Business Daily, published on Investors.com: Oracle (ORCL) is all but destined to become the largest seller of cloud business-software applications, analysts say, and perhaps within a year. What that means in the long run is much debated, though, as analysts aren't sure whether pricing competition might cut into profit or what other issues might develop in the fast-emerging cloud software field. But the database leader, which is either No. 1 or No. 2 to SAP (SAP) in business apps overall, simply has the size and scope to overtake the current cloud business-app leader, Salesforce.com (CRM), analysts say. Oracle rolled out its first full suite of cloud applications on June 6. Cloud computing lets companies store data and apps on the Internet "cloud" and access them quickly and easily. The applications run the gamut from customer relationship management software to social networking sites for employees, partners and customers. For longtime software giants like Oracle, the cloud is a big switch. They get the great bulk of revenue from companies and other enterprises buying or licensing software that the customers keep on their own computer systems. Vendors also get annual maintenance fees. Analysts estimate Oracle is taking in a mere $1 billion or so a year from cloud-based software sales and services now. But while that's just a sliver of the company's $37 billion in sales last year, it's already about a third of the total sales for Salesforce, which is expected to end this year with some $3 billion in revenue. Operates in 145 Countries: Oracle operates in more than 145 countries vs. about 70 for Salesforce. And Oracle has far more apps than Salesforce. Revenue doesn't equate to profit, but it's inevitable that huge Oracle will become the largest seller of cloud applications, says Trip Chowdhry, an analyst for Global Equities Research. "What Oracle has is global presence," he said. "They have two things driving the revenue: breadth of the offering and breadth of the distribution. You put those applications in those sales reps' hands and you get deployments not in just one country but several countries." At the June 6 event, Oracle CEO Larry Ellison emphasized that his company could and would beat Salesforce.com in head-to-head battles for customers. Oracle makes software to help companies manage such tasks as customer relationships, recruiting, supply chains, projects, finances and more. That range gives it an edge over all rivals, says Michael Fauscette, an analyst for research firm IDC.

  • How can I cleanly and elegantly handle data and dependencies between classes

    - by Neophyte
    I'm working on a 2D top-down game in SFML 2, and need to find an elegant way in which everything will work and fit together. Allow me to explain. I have a number of classes that inherit from an abstract base that provides a draw method and an update method to all the classes. In the game loop, I call update and then draw on each class; I imagine this is a pretty common approach. I have classes for tiles, collisions, the player, and a resource manager that contains all the tiles/images/textures. Due to the way input works in SFML, I decided to have each class handle input (if required) in its update call. Up until now I have been passing in dependencies as needed; for example, in the player class, when a movement key is pressed, I call a method on the collision class to check if the position the player wants to move to will be a collision, and only move the player if there is no collision. This works fine for the most part, but I believe it can be done better; I'm just not sure how. I now have more complex things I need to implement, e.g. a player is able to walk up to an object on the ground, press a key to pick it up/loot it, and it will then show up in inventory. This means that a few things need to happen:
    - Check if the player is in range of a lootable item on keypress; if not, do not proceed.
    - Find the item.
    - Update the sprite texture on the item from its default texture to a "looted" texture.
    - Update the collision for the item: it might have changed shape or been removed completely.
    - Update the inventory with the added item.
    How do I make everything communicate? With my current system I will end up with my classes going out of scope and method calls to each other all over the place. I could tie up all the classes in one big manager and give each one a reference to the parent manager class, but this seems only slightly better. Any help/advice would be greatly appreciated! If anything is unclear, I'm happy to expand on things.
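
    One commonly suggested decoupling for this kind of cross-class communication (a sketch of a generic pattern, not something from the original post) is a minimal event bus: the player publishes an "item looted" event, and the inventory and collision classes subscribe without the player holding references to them.

        #include <functional>
        #include <map>
        #include <string>
        #include <vector>

        // Minimal event bus: systems subscribe handlers to named events;
        // publishers fire events without knowing who is listening.
        class EventBus {
        public:
            using Handler = std::function<void(const std::string&)>;

            void subscribe(const std::string& event, Handler handler) {
                handlers[event].push_back(std::move(handler));
            }

            void publish(const std::string& event, const std::string& payload) {
                for (auto& handler : handlers[event]) handler(payload);
            }

        private:
            std::map<std::string, std::vector<Handler>> handlers;
        };

        // Usage sketch (hypothetical names):
        //   bus.subscribe("item_looted", [&](const std::string& id) { inventory.add(id); });
        //   bus.publish("item_looted", "rusty_sword");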

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper on Edition-Based Redefinition. Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter of the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION, and a later version of the application with a data source that references the later EDITION. Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature; see Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch: you need to be running with a versioned database and a versioned application initially, so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war, so that everything is self-contained. There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR); you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application; use a global scope. See Configuring JDBC Application Modules for Deployment for more details. There's one known problem: it doesn't work correctly with an XA data source (patch available with bug 14075837).
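
    A minimal WLST sketch of that Init SQL setting (the data source name myDS and the edition name patch_edition are assumptions for illustration):

        # connect('user', 'password', 't3://adminhost:7001') to the admin server first
        edit()
        startEdit()
        cd('/JDBCSystemResources/myDS/JDBCResource/myDS/JDBCConnectionPoolParams/myDS')
        # the leading "SQL " prefix tells WLS to run the literal statement on each new physical connection
        cmo.setInitSQL('SQL ALTER SESSION SET EDITION = patch_edition')
        save()
        activate()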

  • Criteria for a programming language to be considered "mature"

    - by Giorgio
    I was recently reading an answer to this question, and I was struck by the statement "The language is mature". So I was wondering what we actually mean when we say that "a programming language is mature". Normally, a programming language is initially developed out of a need, e.g.:
    - Try out / implement a new programming paradigm or a new combination of features that cannot be found in existing languages.
    - Try to solve a problem or overcome a limitation of an existing language.
    - Create a language for teaching programming.
    - Create a language that solves a particular class of problems (e.g. concurrency).
    - Create a language and an API for a special application field, e.g. the web (in this case the language might reuse a well-known paradigm, but the whole API must be new).
    - Create a language to push your competitor out of the market (in this case the creator might want the new language to be very similar to an existing one, in order to attract developers to the new programming language and platform).
    Regardless of the original motivation and scenario in which a language has been created, eventually some languages are considered mature. In my intuition, this means that the language has achieved (at least one of) its goals, e.g. "We can now use language X as a reliable tool for writing web applications." This is however a bit vague, so I wanted to ask what you consider the most important criteria (if any) that are applied when saying that a language is mature.
    IMPORTANT NOTE: This question is (on purpose) language-agnostic because I am only interested in general criteria. Please write only language-agnostic answers and comments! I am not asking whether any specific "language X is mature" or "which programming languages can be considered mature", or whether "language X is more mature than language Y": please avoid posting any opinions or references about any specific languages because these are out of the scope of this question.
    EDIT: To make the question more precise, by criteria I mean such things as "tool support", "adoption by the industry", "stability", "rich API", "large user community", "successful application record", "standardization", "clean and uniform semantics", and so on.

  • Simulate 'Shock absorption' with tire rubber in PhysX (2.8.x)

    - by Mungoid
    This is a kinda tricky question and I fear there is no easy enough solution, but I figured I'd hit SE up before giving up on it and just doing what I can. A machine I am working on has no suspension, shocks, or springs of any sort in the real machine, so you would think that when it drives over bumps it would shake like crazy; but because its tires (6 of them) are quite large, they seem to absorb a lot of shock from the bumps. Part of this is because the machine is around 30k lbs and it just smashes/compresses any bumps in the ground down (this is another issue I'm still working on), and the other part is that the tires seem to have a lot of flex to them, with a lot of air as well. So my current task is to simulate shock absorption in PhysX without visibly separating the tires from the spindle/axle. I have been messing with all kinds of NxMaterial, NxSpring, joints, etc. and have had no luck getting this to work. The main problem is that the spindle attached to the tire is directly in the center, and the axle is basically solidly attached to the chassis, so if I give it any spring or suspension travel, the spindle on the tires will move upwards or downwards, looking very odd because it is no longer in the center of the tire. I tried giving it a higher restitution, but that just makes it bouncy without any shock absorption. Another avenue I am messing with is to actively smooth the terrain in front of the tires, so that before the machine hits a bumpy patch, that patch is smoothed and it doesn't bounce; a sketch of this idea follows below. The only issue with this is that it is pretty expensive to do with 6 tires, high tessellation of the terrain, and other complex things going on at the same time in this simulation. I am still working on this, but I am hoping to mix and match a few different aspects to get the best possible outcome. This is a bit of a complex issue, so I'm not expecting anyone to have a definitive answer; just hoping someone may think of something I haven't =-) Side note: yes, I know PhysX 2.8.x is quite outdated, but we have to stick with it for this implementation. We are in the process of moving to another physics engine, but it is out of scope to apply that engine to this project.
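
    For the terrain-smoothing idea, a minimal sketch in plain C++ (a hypothetical heightfield helper, not a PhysX 2.8.x call): box-filter the height samples in the patch ahead of each tire so the contact sees gentler slopes.

        // Hypothetical helper: smooth a rectangular patch of a heightfield in place.
        // Plain array math over height samples; nothing PhysX-specific.
        void smoothPatch(float* heights, int width, int x0, int y0, int x1, int y1) {
            for (int y = y0 + 1; y < y1 - 1; ++y) {
                for (int x = x0 + 1; x < x1 - 1; ++x) {
                    // weighted average of each sample and its four neighbours
                    heights[y * width + x] =
                        (4.0f * heights[y * width + x]
                         + heights[y * width + x - 1] + heights[y * width + x + 1]
                         + heights[(y - 1) * width + x] + heights[(y + 1) * width + x]) / 8.0f;
                }
            }
        }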

  • Encapsulating code in F# (Part 1)

    - by MarkPearl
    I have been looking at F# for a while now and seen a few really interesting samples and snippets of how-tos. These have been great for showing the basic outline of the language and the possibilities; however, a nagging question in the back of my mind has been: what does an F# project look like? How do I group code in F# so that it can be modularized and brought in and out of a project easily? My Expert F# book has an entire chapter (7) dedicated to this, and after browsing the other chapters of the book I decided that this topic was something I really wanted to know more about now! Because of my C# background I keep trying to think in F# of objects. So to try and get a clearer idea of how to do things the F# way, I am first going to take a very simplified C# example and try to "translate" it.

        using System;

        namespace ConsoleApplication1
        {
            namespace ExampleOfEncapsulationInCSharp
            {
                class Program
                {
                    static void EncapsulatedVariableInAMethod()
                    {
                        int count = 10;
                        Console.WriteLine(count);
                    }

                    static void Main(string[] args)
                    {
                        EncapsulatedVariableInAMethod();
                        Console.ReadLine();
                    }
                }
            }
        }

    From the above example, the count integer is encapsulated within the EncapsulatedVariableInAMethod method. You couldn't access the count variable from outside the scope of its parent method, but you have full access to it within the method. Let's look at my F# equivalent…

        open System

        let EncapsulatedVariableInAMethod =
            let count = 10
            Console.WriteLine(count)
            ()

        EncapsulatedVariableInAMethod
        Console.ReadLine()

    Now, when I first attempted to write the F# code I got stuck… I didn't have the Console.WriteLine calls but had the following…

        open System

        let EncapsulatedVariableInAMethod =
            let count = 10

        EncapsulatedVariableInAMethod
        Console.ReadLine()

    The compiler didn't like the let before count = 10. This is because every F# expression must evaluate to a value. If I did not want to make the Console call, I would still need to evaluate the expression to something, and for this reason the unit type is provided. I could have done something like…

        open System

        let EncapsulatedVariableInAMethod =
            let count = 10
            ()

        EncapsulatedVariableInAMethod
        Console.ReadLine()

    Which the compiler would be happy with…
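
    One caveat worth adding (my note, not from the original post): the F# binding above is a value, evaluated once when the module loads, not a method. Giving it a unit parameter turns it into a true function that runs on every call; a minimal sketch:

        open System

        // With a unit parameter this is a function, re-evaluated on each call,
        // rather than a value computed once at binding time.
        let encapsulatedVariableInAFunction () =
            let count = 10
            Console.WriteLine(count)

        encapsulatedVariableInAFunction ()   // prints 10
        encapsulatedVariableInAFunction ()   // prints 10 again
        Console.ReadLine() |> ignore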

  • Is a factory pattern to prevent multiple instances of the same object (instances that are Equal) good design?

    - by dsollen
    I have a number of objects storing state. There are essentially two types of fields: the ones that uniquely define what the object is (what node, what edge, etc.), and the others that store state describing how these things are connected (this node is connected to these edges, this edge is part of these paths), etc. My model updates the state variables using package methods, so all these objects act as immutable to anyone not in Model scope. All objects extend one base type. I've toyed with the idea of a Factory approach which accepts a Builder object and constructs the applicable object. However, if an instance of the object already exists (i.e., the equals method of the existing instance would return true if I created the object defined by the builder and passed it in), the factory returns the current object instead of creating a new instance. Because the equals method would compare only what uniquely defines the type of object (this is node A to node B) but won't check the dynamic state stuff (node A is currently connected to nodes C and E), this would be a way of ensuring that anyone who wants my node A automatically knows its state connections. More importantly, it would prevent aliasing nightmares of someone trying to pass an instance of node A with different state than the node A in my model has. I've never heard of this pattern before, and it's a bit odd. I would have to do some overriding of serialization methods to make it work (ensure that when I read in a serialized object I add it to my factory's list of known instances, and/or return the existing instance in its place), as well as using a WeakHashMap as if it were a WeakHashSet to know whether an instance exists, without worrying about a quasi-memory leak occurring. I don't know if this is too confusing or prone to its own obscure bugs. One thing I know is that the plugins interface with the lowest-level hardware. The plugins have to be able to return state that is different from my memory; to tell my memory when its own state is inconsistent. I believe this is possible despite their fetching objects that exist in my memory; we allow building of objects without checking their consistency with the model until addToModel is called anyway, and the existing plugin design was written before all this extra state existed and worked fine without ever being aware of it. Should I just be using some other design to avoid this craziness? (I have another question to that effect that I'm posting.)
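
    For what it's worth, a minimal Java sketch of the interning factory described above (Node, its identity field, and the factory method are hypothetical names for illustration):

        import java.lang.ref.WeakReference;
        import java.util.Map;
        import java.util.WeakHashMap;

        final class Node {
            private final String id;     // uniquely defines the node
            String connections = "";     // mutable state, updated by package methods

            Node(String id) { this.id = id; }

            // equals/hashCode use only the identity field, never the dynamic state
            @Override public boolean equals(Object o) {
                return o instanceof Node && ((Node) o).id.equals(id);
            }
            @Override public int hashCode() { return id.hashCode(); }
        }

        final class NodeFactory {
            // A WeakHashMap used as a weak set: key and value name the same canonical
            // instance, so nodes no longer referenced elsewhere can still be collected.
            private static final Map<Node, WeakReference<Node>> CACHE = new WeakHashMap<>();

            static synchronized Node of(String id) {
                Node candidate = new Node(id);
                WeakReference<Node> ref = CACHE.get(candidate);
                Node canonical = (ref == null) ? null : ref.get();
                if (canonical == null) {
                    CACHE.put(candidate, new WeakReference<>(candidate));
                    return candidate;
                }
                return canonical;   // every caller sees the same stateful instance
            }
        }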

  • Should the 12-String be in its own class and why?

    - by MayNotBe
    This question is regarding a homework project in my first Java programming class (online program). The assignment is to create a "stringed instrument" class using (among other things) an array of String names representing instrument string names ("A", "E", etc). The idea for the 12-string is beyond the scope of the assignment (it doesn't have to be included at all), but now that I've thought of it, I want to figure out how to make it work. Part of me feels like the 12-string should have its own class, but another part of me feels that it should be in the Guitar class because it's a guitar. I suppose this will become clear as I progress, but I thought I would see what kind of response I get here. Also, why would they ask for a String[] for the instrument string names? It seems like a char[] makes more sense. Thank you for any insight. Here's my code so far (it's a work in progress):

        public class Guitar {
            private int numberOfStrings = 6;
            private static int numberOfGuitars = 0;
            private String[] stringNotes = {"E", "A", "D", "G", "B", "E"}; // standard tuning E A D G B E
            private boolean tuned = false;
            private boolean playing = false;

            public Guitar() {
                numberOfGuitars++;
            }

            public Guitar(boolean twelveString) {
                if (twelveString) {
                    stringNotes[0] = "E, E";
                    stringNotes[1] = "A, A";
                    stringNotes[2] = "D, D";
                    stringNotes[3] = "G, G";
                    stringNotes[4] = "B, B";
                    stringNotes[5] = "E, E";
                    numberOfStrings = 12;
                }
            }

            public int getNumberOfStrings() {
                return numberOfStrings;
            }

            public void setNumberOfStrings(int strings) {
                if (strings == 12 || strings == 6) {
                    if (strings == 12) {
                        stringNotes[0] = "E, E";
                        stringNotes[1] = "A, A";
                        stringNotes[2] = "D, D";
                        stringNotes[3] = "G, G";
                        stringNotes[4] = "B, B";
                        stringNotes[5] = "E, E";
                        numberOfStrings = strings;
                    }
                    if (strings == 6)
                        numberOfStrings = strings;
                } // if
                else
                    System.out.println("***ERROR***Guitar can only have 6 or 12 strings***ERROR***");
            }

            public void getStringNotes() {
                for (int i = 0; i < stringNotes.length; i++) {
                    if (i == stringNotes.length - 1)
                        System.out.println(stringNotes[i]);
                    else
                        System.out.print(stringNotes[i] + ", ");
                } // for
            }
        }

  • BSOD Dump - EXCEPTION_DOUBLE_FAULT - on Windows 2008 Server 64-bit

    - by Mark K
    Hello, my Windows 2008 Server (Datacenter edition, 64-bit) has recently produced a series of BSODs in different applications. The error message is in general EXCEPTION_DOUBLE_FAULT. Can anyone please help with the analysis of the dump file below? Best regards, Mark

        2: kd> !analyze -v
        * Bugcheck Analysis *

        UNEXPECTED_KERNEL_MODE_TRAP (7f)
        This means a trap occurred in kernel mode, and it's a trap of a kind that
        the kernel isn't allowed to have/catch (bound trap) or that is always
        instant death (double fault). The first number in the bugcheck params is
        the number of the trap (8 = double fault, etc). Consult an Intel x86
        family manual to learn more about what these traps are. Here is a portion
        of those codes:
        If kv shows a taskGate, use .tss on the part before the colon, then kv.
        Else if kv shows a trapframe, use .trap on that value
        Else .trap on the appropriate frame will show where the trap was taken
        (on x86, this will be the ebp that goes with the procedure KiTrap)
        Endif
        kb will then show the corrected stack.
        Arguments:
        Arg1: 0000000000000008, EXCEPTION_DOUBLE_FAULT
        Arg2: 0000000080050033
        Arg3: 00000000000006f8
        Arg4: fffff800018b1678

        Debugging Details:
        BUGCHECK_STR: 0x7f_8
        CUSTOMER_CRASH_COUNT: 1
        DEFAULT_BUCKET_ID: DRIVER_FAULT_SERVER_MINIDUMP
        PROCESS_NAME: CustomerService.
        CURRENT_IRQL: 1

        EXCEPTION_RECORD: fffffa6004e45568 -- (.exr 0xfffffa6004e45568)
        ExceptionAddress: fffff800018a0150 (nt!RtlVirtualUnwind+0x0000000000000250)
        ExceptionCode: 10000004
        ExceptionFlags: 00000000
        NumberParameters: 2
        Parameter[0]: 0000000000000000
        Parameter[1]: 00000000000000d8

        TRAP_FRAME: fffffa6004e45610 -- (.trap 0xfffffa6004e45610)
        NOTE: The trap frame does not contain all registers. Some register values
        may be zeroed or incorrect.
        rax=0000000000000050 rbx=0000000000000000 rcx=0000000000000004
        rdx=00000000000000d8 rsi=0000000000000000 rdi=0000000000000000
        rip=fffff800018a0150 rsp=fffffa6004e457a0 rbp=fffffa6004e459e0
        r8=0000000000000006  r9=fffff8000181e000 r10=ffffffffffffff88
        r11=fffff80001a1c000 r12=0000000000000000 r13=0000000000000000
        r14=0000000000000000 r15=0000000000000000
        iopl=0 nv up ei pl zr na po nc
        nt!RtlVirtualUnwind+0x250:
        fffff800018a0150 488b02 mov rax,qword ptr [rdx] ds:00000000000000d8=????????????????

        Resetting default scope

        LAST_CONTROL_TRANSFER: from fffff800018781ee to fffff80001878450

        STACK_TEXT:
        fffffa6001768a68 fffff800018781ee : 000000000000007f 0000000000000008 0000000080050033 00000000000006f8 : nt!KeBugCheckEx
        fffffa6001768a70 fffff80001876a38 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiBugCheckDispatch+0x6e
        fffffa6001768bb0 fffff800018b1678 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDoubleFaultAbort+0xb8
        fffffa6004e44e30 fffff800018782a9 : fffffa6004e45568 0000000000000001 fffffa6004e45610 000000000000023b : nt!KiDispatchException+0x34
        fffffa6004e45430 fffff800018770a5 : 0000000000000000 0000000000000000 0000000000000000 0000000000000001 : nt!KiExceptionDispatch+0xa9
        fffffa6004e45610 fffff800018a0150 : fffffa6004e46638 fffffa6004e46010 fffff80001965190 fffff8000181e000 : nt!KiPageFault+0x1e5
        fffffa6004e457a0 fffff800018a3f78 : fffffa6000000001 0000000000000000 0000000000000000 ffffffffffffff88 : nt!RtlVirtualUnwind+0x250
        fffffa6004e45810 fffff800018b1706 : fffffa6004e46638 fffffa6004e46010 fffffa6000000000 0000000000000000 : nt!RtlDispatchException+0x118
        fffffa6004e45f00 0000000000000000 : 0000000000000000 0000000000000000 0000000000000000 0000000000000000 : nt!KiDispatchException+0xc2

        STACK_COMMAND: kb

        FOLLOWUP_IP:
        nt!KiDoubleFaultAbort+b8
        fffff800`01876a38 90 nop

        SYMBOL_STACK_INDEX: 2
        SYMBOL_NAME: nt!KiDoubleFaultAbort+b8
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: nt
        IMAGE_NAME: ntkrnlmp.exe
        DEBUG_FLR_IMAGE_TIMESTAMP: 4a7801eb
        FAILURE_BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
        BUCKET_ID: X64_0x7f_8_nt!KiDoubleFaultAbort+b8
        Followup: MachineOwner

  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with Postfix.

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))"

    delivers the correct entry. The LDAP config file looks like:

        root@server2:/etc/postfix/ldap# cat mailbox_maps.cf
        server_host = localhost
        search_base = ou=users,ou=people,[domain]
        scope = sub
        bind = yes
        bind_dn = cn=postfix,ou=users,ou=system,[domain]
        bind_pw = postfix
        query_filter = (&(objectclass=inetOrgPerson)(mail=%s))
        result_attribute = uid
        debug_level = 2

    The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object

    If I change the LDAP configuration so that anonymous users have complete access to LDAP

        olcAccess: {-1}to * by * read

    then it works:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        [user-id]

    But when I restrict this access to the postfix user:

        olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break

    it doesn't work, but produces the error printed above (although ldapsearch works; only postmap doesn't). Why doesn't it work when binding with a postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
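
    One hedged debugging idea (my addition, not from the original post): error 32 on the search base suggests the bound DN cannot see the base entry itself, so comparing a base-scope search as the postfix DN against the working anonymous case may show where the ACL stops:

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix \
            -b "ou=users,ou=people,[domain]" -s base "(objectclass=*)"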

  • Bind9 configured to start at boot, has to be started manually

    - by antik
    I've configured bind9 on my system and it works great when it runs. It's currently configured to be run at runlevel 2 by setting:

        $ sudo update-rc.d bind9 enable 2

    This appears to have done its work:

        $ tree -f /etc/rc?.d | grep -e ".*bind9$"
        |-- /etc/rc0.d/K85bind9 -> ../init.d/bind9
        |-- /etc/rc2.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc3.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc4.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc5.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc6.d/K85bind9 -> ../init.d/bind9

    Booting the system, I believe I am at runlevel 2:

        $ runlevel
        N 2

    Given the above configuration, when the system is rebooted, bind does not come up. Only on occasion, for some reason, can I resolve hostnames immediately after startup; far more often than not, I cannot. I can interrogate the service's status:

        $ sudo /etc/init.d/bind9 status
        * could not access PID file for bind9

    When the service doesn't start, I can start it successfully from a terminal by issuing:

        $ sudo /etc/init.d/bind9 start

    and it works great from then on. Loopback configuration:

        $ ifconfig lo
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1872 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1872 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:220205 (220.2 KB)  TX bytes:220205 (220.2 KB)

    Do I have my startup misconfigured? (I'm used to Gentoo, so Ubuntu's model is still a little new to me.) I'm not seeing any log indication of a failed attempt to start at boot in syslog. Is there someplace else I should be looking? What else should I look into to get bind working at startup?

  • windows 8.1 upgrade fails with error code 0xc1900101-0x20017

    - by cmorse
    I just tried to install Windows 8.1 on my laptop, but it fails to install with the message:

        Sorry we couldn't complete the update to Windows 8.1. We restored your previous version of Windows to this PC.
        0xC1900101 - 0x20017

    It looks like on the first boot the laptop is going to the PC restore screen (it asks what kind of keyboard I have, and then what repair options I would like to take). Thus far I have just been selecting "Continue to Windows 8." I'm running a Lenovo X220i tablet. I've got 43 GB of free disk space. It installed just fine on my desktop. The primary difference between the two machines is that the desktop has Media Center installed and isn't using TrueCrypt.
    Full WindowsUpdate.log: http://pastebin.com/hGmAW4Q1
    Most important portion of WindowsUpdate.log:

        2013-10-17 10:41:06:671 964 694 Agent *************
        2013-10-17 10:41:06:671 964 694 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2013-10-17 10:41:06:671 964 694 Agent *********
        2013-10-17 10:41:06:671 964 694 Agent * Online = No; Ignore download priority = No
        2013-10-17 10:41:06:671 964 694 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"
        2013-10-17 10:41:06:671 964 694 Agent * ServiceID = {7971F918-A847-4430-9279-4A52D1EFE18D} Third party service
        2013-10-17 10:41:06:671 964 694 Agent * Search Scope = {Machine & All Users}
        2013-10-17 10:41:06:671 964 694 Agent * Caller SID for Applicability: S-1-5-18
        2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {AD47FBDC-F7F9-4E7F-BAF4-DBA3784C7101} 2013-10-17 10:41:06:436-0600 1 202 [AU_REBOOT_COMPLETED] 102 {00000000-0000-0000-0000-000000000000} 0 0 AutomaticUpdates Success Content Install Reboot completed.
        2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {8D4E7A67-9526-4702-A897-5BE5F97497AF} 2013-10-17 10:41:06:639-0600 1 204 [AGENT_INSTALLING_FAILED_POST_REBOOT] 101 {8951E70D-4332-4F7C-B92D-D9362E384959} 1 c1900101 WSAcquisition Failure Content Install Installation Failure Post Reboot.
        2013-10-17 10:41:07:249 964 870 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2013-10-17 10:41:07:249 964 870 Report WER Report sent: 7.8.9200.16715 0xc1900101(0x20017) 8951E70D-4332-4F7C-B92D-D9362E384959 Install 101 Unmanaged
        2013-10-17 10:41:07:249 964 870 Report CWERReporter finishing event handling. (00000000)

  • can't find port 22 traffic under VirtualBox

    - by telliott99
    I'm trying to learn to use tcpdump. I thought I'd eavesdrop on my ssh login. The setup is a bit unusual: I have OS X Lion running VirtualBox, with Ubuntu running in the VM. I have ssh enabled and can log in from OS X normally:

        > ssh -p 22 10.0.1.2 -l telliott
        Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-17-generic i686)
        * Documentation: https://help.ubuntu.com/
        0 packages can be updated.
        0 updates are security updates.
        Last login: Sat Mar 31 19:54:36 2012 from toms-mac-mini.local
        telliott@U32:~$ logout
        Connection to 10.0.1.2 closed.
        >

    I have not obfuscated the ssh port on Ubuntu. From OS X, stroke gives what I expect:

        > ./stroke 10.0.1.2 22 22
        Port Scanning host: 10.0.1.2
        Open TCP Port: 22 ssh

    So from OS X I do:

        > sudo tcpdump -i en1 -v port 22
        Password:
        tcpdump: listening on en1, link-type EN10MB (Ethernet), capture size 65535 bytes

    Then I log in from OS X to Ubuntu using ssh, but I see nothing with tcpdump. Here is ifconfig from Ubuntu:

        telliott@U32:~$ ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr 08:00:27:d7:ba:0e
                  inet addr:10.0.1.2  Bcast:10.0.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fed7:ba0e/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:799 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:465 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:96863 (96.8 KB)  TX bytes:68638 (68.6 KB)

    Where are the packets I was hoping to see? Thanks for any help.
