Search Results

Search found 4015 results on 161 pages for 'packet capture'.

Page 145/161

  • Can anyone help with this (Javascript arrays)?

    - by Rich
    Hi, I am new to NetUI and JavaScript, so go easy on me please. I have a form that is populated with container.item data returned from a database. I am adding a checkbox beside each repeater item returned, and I want to add the container item data to an array when one of the checkboxes is checked, for future processing. The old code used an anchor tag to capture the data, but that does not work for me:

        <!--netui:parameter name="lineupNo" value="{container.item.lineupIdent.lineupNo}" />

    Here is my checkbox, which is in a repeater:

        <netui:checkBox dataSource="{pageFlow.checkIsSelected}" onClick="checkBoxClicked()" tagId="pceChecked"/>

    This is my JavaScript function so far, but I want a way to store container.item.lineupIdent.lineupNo in the array:

        function checkBoxClicked() {
            var checkedPce = [];
            var elem = document.getElementById("PceList").elements;
            for (var i = 0; i < elem.length; i++) {
                if (elem[i].name == netui_names.pceChecked) {
                    if (elem[i].checked == true) {
                        // do some code.
                    }
                }
            }
        }

    I hope this is enough info for someone to help me. I have searched the web but could not find any examples. Thanks.

    Read the article

  • Embedded Development Board

    - by ALF3130
    I'm new to the embedded development world and am looking to get my very first board. After some research, I realize that there aren't many choices with FPUs. This is important for my project, as I'm going to be doing quite a bit of floating-point computation. I found the Mini2440, which seems to run on the ARM920T core. This particular unit is perfect for my needs (decent price, all the right I/O ports, and a touch screen to boot), but it seems that it doesn't have an FPU. I don't know how big a penalty I'd be paying for FP emulation, so I'm unsure whether to pull the trigger on this one. That said: Can someone please confirm whether this product (Mini2440) has an FPU or not? My project will do image capture and analysis. Does anyone have any experience with running things like OpenMP on such platforms? Please suggest any other similar boards around the $200 price range that have an FPU. This world is new to me, so any other advice or things I should be aware of is much appreciated.

    Read the article

  • Disable home button in android toddler app?

    - by cmerrell
    I've developed an app that is a slide show of pictures, each of which plays a sound when you tap it. It's like a picture book for ages 2-4. The problem is, since Android won't let you capture a home button press and essentially disable it, when parents give the phone to their child to play with unattended (brave parent), the child can inadvertently exit the app and then make calls or otherwise tweak the phone. There are two other apps that currently have a pseudo fix for this issue: Toddler Lock and ToddlePhone. I've tried contacting the developers of these apps for some guidance, but they haven't been willing to disclose anything, which is fine, but does anyone here have any suggestions? It looks like both of those other apps are acting as a home screen replacement. When you enable the "childproof mode" in those apps, the user is prompted to choose an app for the action, and the choices are "Launcher, LauncherPro, etc." plus the toddler app. You then have to make the toddler app the default and voila, the phone is "locked" and can only be "unlocked" using a key combination or touching the four corners of the screen, etc. When you "unlock" the phone, your normal home screen default is restored, and you don't even have to make the toddler app the default the next time you enable the "childproof mode". I have read that these two apps have problems with Samsung phones and can cause an infinite crash-and-restart loop that requires a factory reset to fix. Obviously this is not the ideal solution to the problem, but it looks like the only one available at this point. Does anyone have any ideas on how to implement a "childproof mode"?

    Read the article

  • Obtaining XML from U.S. Postal Service (USPS) rate calculator API with PHP

    - by Chris F
    Hoping somebody here can help me. I'm attempting to pull an XML page from the U.S. Postal Service (USPS) rate calculator, using PHP. Here is the code I am using (with my API login and password replaced, of course):

        <?
        $api = "http://production.shippingapis.com/ShippingAPI.dll?API=RateV4&XML=<RateV4Request ".
               "USERID=\"MYUSERID\" PASSWORD=\"MYPASSWORD\"><Revision/><Package ID=\"1ST\">".
               "<Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType>".
               "<ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination>".
               "<Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size></Package></RateV4Request>";
        $xml_string = file_get_contents($api);
        $xml = simplexml_load_string($xml_string);
        ?>

    Pretty straightforward. However, it never returns anything. I can paste the URL directly into my browser's address bar:

        http://production.shippingapis.com/ShippingAPI.dll?API=RateV4&XML=<RateV4Request USERID="MYUSERID" PASSWORD="MYPASSWORD"><Revision/><Package ID="1ST"><Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType><ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination><Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size></Package></RateV4Request>

    and it returns the XML I need, so I know the URL is valid. But I cannot seem to capture it using PHP. Any help would be tremendously appreciated. Thanks in advance.
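
    Not part of the original question, but for comparison, here is roughly the same request sketched in Python, with the XML percent-encoded before it goes onto the query string; an un-encoded XML payload (spaces, quotes, angle brackets) is one common reason a raw file_get_contents-style fetch returns nothing. The user ID, password, and field values are placeholders copied from the snippet above.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        # Build the RateV4 request body exactly as in the PHP snippet above.
        xml = (
            '<RateV4Request USERID="MYUSERID" PASSWORD="MYPASSWORD">'
            '<Revision/><Package ID="1ST">'
            '<Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType>'
            '<ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination>'
            '<Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size>'
            '</Package></RateV4Request>'
        )

        # urlencode() percent-encodes the spaces, quotes, and angle brackets
        # so the final URL is legal before it is fetched.
        url = "http://production.shippingapis.com/ShippingAPI.dll?" + urlencode(
            {"API": "RateV4", "XML": xml}
        )

        with urlopen(url) as response:
            print(response.read().decode("utf-8"))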

    Read the article

  • HTML + Button with text and image on it.

    - by lucky
    Hello, I have a problem creating a button with both text and an image on it:

        <td>
            <button type="submit" name="report" value="Report" <?php if($tab == 'Excel') echo "id=\"tab_inactive\""; else echo "id=\"tab_active\"" ; ?>>
                <img src="images/report.gif" alt="Report"/>Report
            </button>
            <button type="submit" name="excel" value="Excel" <?php if($tab == 'Excel' ) echo "id=\"tab_active\""; else echo "id=\"tab_inactive\"" ; ?>>
                <img src="images/Excel.gif" alt="Excel" width="16" height="16" /> Excel
            </button>
        </td>

    Here $tab is:

        $tab = strlen(trim($_POST[excel]))>0 ? $_POST[excel] : $_POST[report];

    I tried this way, but it is behaving strangely. On clicking the button, the submit works properly in Firefox, but not in IE. Instead of submitting the value (in this example the values are 'Report' and 'Excel'), IE submits the label of the button. That is, if I check the value of the array with print_r($_POST), the value is:

        Array ( [report] => (Icon that I used) Report [excel] => (Icon that I used) Excel [frm_analysis] => [to_analysis] => )

    Since I have more than one button, all the labels are submitted even though only one of them is pressed, so I don't know how to capture which button was pressed. I even tried changing the button to type="button" with onclick="document.formname.submit()", but that gives the same result. Can you please help me solve this?

    Read the article

  • null reference problems with c#

    - by alex
    Hi: In one of my Windows Forms, I created an instance of a class to do some work in the background. I wanted to capture the debug messages in that class and display them in a textbox on the form. Here is what I did:

        class A // window form class
        {
            public void startBackGroundTask()
            {
                B backGroundTask = new B(this);
            }

            public void updateTextBox(string data)
            {
                if (data != null)
                {
                    if (this.Textbox.InvokeRequired)
                    {
                        appendUIDelegate updateDelegate = new appendUIDelegate(updateUI);
                        try
                        {
                            this.Invoke(updateDelegate, data);
                        }
                        catch (Exception e)
                        {
                            Console.WriteLine(e.Message);
                        }
                    }
                    else
                    {
                        updateUI(data);
                    }
                }
            }

            private void updateUI(string data)
            {
                if (this.Textbox.InvokeRequired)
                {
                    this.Textbox.Invoke(new appendUIDelegate(this.updateUI), data);
                }
                else
                {
                    // update the text box
                    this.Textbox.AppendText(data);
                    this.Textbox.AppendText(Environment.NewLine);
                }
            }

            private delegate void appendUIDelegate(string data);
        }

        class B // background task
        {
            A curUI;

            public B(A UI)
            {
                curUI = UI;
            }

            private void test()
            {
                // do some work here, then log the debug message to the UI.
                curUI.updateTextBox("message");
            }
        }

    I keep getting a null reference exception after this.Invoke(updateDelegate, data) is called. I know passing "this" as a parameter is strange, but I want to send the debug message to my window form. Please help. Thanks

    Read the article

  • Why does LogonUser place user profiles in c:\users of the server?

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front end application uses a normal login form to capture the user name and password and leverages the Win32 LogonUser method to authenticate the user's credentials. When we call LogonUser, we use LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. The folder seems to be created when a user who has never logged on before logs in for the first time. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders being created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable it? I have tried LOGON32_LOGON_BATCH, but it gave error 1385 when authenticating the user. I need either of the following: 1) a way to stop the folder generation, or 2) the parameter I need to pass to make this work. Thanks
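
    For reference only (this sketch is an editorial illustration, not from the original question): the same Win32 call made from Python via ctypes, showing where the logon-type parameter sits. The credentials are placeholders, and the call by itself does not change the profile-folder behaviour described above.

        import ctypes
        from ctypes import wintypes

        LOGON32_LOGON_NETWORK = 3      # the logon type used in the question
        LOGON32_PROVIDER_DEFAULT = 0

        advapi32 = ctypes.WinDLL("advapi32", use_last_error=True)
        token = wintypes.HANDLE()

        # BOOL LogonUserW(user, domain, password, logonType, logonProvider, &token)
        ok = advapi32.LogonUserW(
            "someuser",                 # placeholder user name
            "SOMEDOMAIN",               # placeholder domain
            "secret",                   # placeholder password
            LOGON32_LOGON_NETWORK,
            LOGON32_PROVIDER_DEFAULT,
            ctypes.byref(token),
        )
        if not ok:
            raise ctypes.WinError(ctypes.get_last_error())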

    Read the article

  • Ignoring an XML tag in the middle of the file in Regex (with a non-capturing group?)

    - by schmirrwurst
    I have an XML document with embedded tags, and I would like to capture everything but the FType tags, using a Python regex:

        <xml>
        <EType>
        <E></E>
        <F></F>
        <FType><E1></E1><E2></E2></FType>
        <FType><E1></E1><E2></E2></FType>
        <FType><E1></E1><E2></E2></FType>
        <G></G>
        </EType>
        </xml>

    I tried:

        (?P<xml>.*(?=<FType>.*<FType>).*)

    but it gives me everything ;-( I expect:

        <xml>
        <EType>
        <E></E>
        <F></F>
        <G></G>
        </EType>
        </xml>
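
    As an illustrative alternative (not part of the original question), the FType elements can simply be removed with ElementTree and the rest of the document kept, which avoids the look-ahead regex entirely; the sample below reuses the XML shown above.

        import xml.etree.ElementTree as ET

        sample = """<xml>
        <EType>
        <E></E>
        <F></F>
        <FType><E1></E1><E2></E2></FType>
        <FType><E1></E1><E2></E2></FType>
        <FType><E1></E1><E2></E2></FType>
        <G></G>
        </EType>
        </xml>"""

        root = ET.fromstring(sample)
        # Materialise the element list first so removal does not disturb iteration.
        for parent in list(root.iter()):
            for ftype in parent.findall("FType"):
                parent.remove(ftype)

        print(ET.tostring(root, encoding="unicode"))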

    Read the article

  • What is the most challenging part of a project like MyLifeBits

    - by kennyzx
    I have made a small utility to keep track of my daily expenses and my coffee consumption (I always want to quit coffee but never succeed), and of course, this kind of utility is quite simple, just involving an XML file and a few hundred lines of code to manipulate the file. Then I found the MyLifeBits project. It is very interesting, but I think it must require a lot of effort to achieve its goal - that is, to record everything about a person that can be digitally recorded. So I wonder, is it possible to write an advanced version of my own utility - a tiny version of MyLifeBits - that can capture: every webpage I've read, no matter what browser I am using, by downloading its contents for offline reading; auto-archiving of emails/documents/notes that I edited; auto-archiving of code that I run or write. Well, these are basically what I do on my PC. And the captured records should be easy to search. My question is: what do you think is the most challenging part? Interoperating with Visual Studio/Office/Lotus Notes/web browsers is one thing, the database is another thing given that "everything" is recorded, advanced programming patterns since it is not a "toy project"? And what else have I overlooked that could be very difficult to handle?

    Read the article

  • Python - Submit Information on a Website to Extract Data from Resulting Page

    - by bloodstorm17
    So I am trying to figure out how to post to a website that uses a drop-down menu holding values like this (based on the page source):

        <td valign="top" align="right"><span class="emphasis">Select Item Option : </span></td>
        <td align="left">
        <span class="notranslate">
        <select name="ItemOption1">
        <option value="">Select Item Option</option>
        <option value="321_cba">Item Option 1</option>
        <option value="123_abcd">Item Option 2</option>
        ...

    There are two of these drop-down menus, one on top of the other. I want to be able to select an item from drop-down menu 1 and drop-down menu 2 and then submit the page. Based on the code, the page submits the information using the following:

        <td colspan="2" align="center">
        <input type="submit" value="View Result" onclick="return check()">
        </td>
        </tr>
        </table>
        <input type="hidden" name="ItemOption1" value="">
        <input type="hidden" name="ItemOption2" value="">

    I have no idea how to select the items in the drop-down menus, submit the page, and capture the information on the resulting page into a text file. Can someone please help me with this?
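
    A rough, purely illustrative sketch (not from the original question) of submitting the two select values with Python's standard library and saving the response to a text file. The endpoint URL is a placeholder, the option values ("321_cba", "123_abcd") are taken from the snippet above, and the real form may also expect the hidden ItemOption1/ItemOption2 fields, other inputs, or a session cookie, which would have to be added to the payload.

        from urllib.parse import urlencode
        from urllib.request import urlopen

        url = "http://example.com/form-handler"      # placeholder endpoint
        payload = urlencode({
            "ItemOption1": "321_cba",    # value of the choice in drop-down 1
            "ItemOption2": "123_abcd",   # value of the choice in drop-down 2
        }).encode("utf-8")

        # Passing data= makes urlopen issue a POST with the form-encoded body.
        with urlopen(url, data=payload) as response:
            html = response.read().decode("utf-8", errors="replace")

        with open("result.txt", "w", encoding="utf-8") as out:
            out.write(html)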

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
Abstract

Reference architecture provides needed architectural information that can be provided in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates the IT systems based on Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture provides customers with the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. This paper attempts to describe the Reference Architecture for Telecom Enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and moves on to explain how to achieve an Enterprise Reference Architecture by using SOA.

Introduction

A Reference Architecture provides a methodology, set of practices, template, and standards based on a set of successful solutions implemented earlier. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns that describe observations in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparison for the various departments/business entities of a company, or for the various companies of an enterprise. It provides multiple views for multiple stakeholders.

Major artifacts of the Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.

Purpose of Reference Architecture

In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel, as their peers in other organizations or even the same organization have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be provided in advance to project team members to enable consistent architectural best practices.   Enterprise Reference Architecture helps an enterprise to achieve the following at the abstract level:   ·       Reference architecture is more of a communication channel to an enterprise ·       Helps the business owners to accommodate to their strategies, vision, objectives, and principles. ·       Evaluates the IT systems based on Reference Architecture Principles ·       Reduces IT spending through increasing functionality, availability, scalability, etc ·       A Real-time Integration Model helps to reduce the latency of the data updates Is used to define a single source of Information ·       Provides a clear view on how to manage information and security ·       Defines the policy around the data ownership, product boundaries, etc. ·       Helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets ·       Has a shorter implementation time and cost   Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensure that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).     Common pitfalls for Telecom Service Providers   Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments/experiences with telecom players, we have come across the following observations – Some of these indicate a lack of maturity of the telecom service provider:   ·       In markets that are growing and not so mature, it has been observed that telcos have a significant amount of in-house or home-grown applications. In some of these markets, the growth has been so rapid that IT has been unable to cope with business demands. Telcos have shown a tendency to come up with workarounds in their IT applications so as to meet business needs. ·       Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications. ·       Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality. ·       Applications face interoperability issues with other applications in the operator's landscape. Integrating a new application or network element requires considerable effort on the part of the other applications. ·       Application boundaries are not clear, and functionality that is not in the initial scope of that application gets pushed onto it. This results in the development of the multiple, small applications without proper boundaries. ·       Usage of Legacy OSS/BSS systems, poor Integration across Multiple COTS Products and Internal Systems. Most of the Integrations are developed on ad-hoc basis and Point-to-Point Integration. ·       Redundancy of the business functions in different applications • Fragmented data across the different applications and no integrated view of the strategic data • Lot of performance Issues due to the usage of the complex integration across OSS and BSS systems   However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum. 
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These could be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos are facing business and IT challenges.

“The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail.” – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

·       Transformation of business structures to align with customer requirements
·       Adoption of more Internet-like technical architectures; the Web 2.0 concept is increasingly being used
·       Virtualization of the traditional operations support system (OSS)
·       Adoption of SOA to support development of IP-based services
·       Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
·       Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
·       Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are the Reference Architecture Goals, Principles, and the Enterprise Vision and Telecom Transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today’s telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems and implications on the service provider’s organizational requirements and structure.   Telecom reference architectures are today expected to:   ·       Integrate voice, messaging, email and other VAS over fixed and mobile networks, back end systems ·       Be able to provision multiple services and service bundles • Deliver converged voice, video and data services ·       Leverage the existing Network Infrastructure ·       Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. ·       Support charging of advanced data services such as VoIP, On-Demand, Services (e.g.  Video), IMS/SIP Services, Mobile Money, Content Services and IPTV. ·       Help in faster deployment of new services • Serve as an effective platform for collaboration between network IT and business organizations ·       Harness the potential of converging technology, networks, devices and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP) ·       Ensure better service delivery and zero revenue leakage through real-time balance and credit management ·       Lower operating costs to drive profitability   Enterprise Reference Architecture   The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper attempts to describe the Reference Architecture for the Telecom Application Usage and how to achieve the Enterprise Level Reference Architecture using SOA.   • Telecom Reference Architecture • Enterprise SOA based Reference Architecture   Telecom Reference Architecture   Tele Management Forum’s New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:   ·       The enhanced Telecom Operations Map (eTOM) is a business process framework. ·       The Shared Information Data (SID) model provides a comprehensive information framework that may be specialized for the needs of a particular organization. ·       The Telecom Application Map (TAM) is an application framework to depict the functional footprint of applications, relative to the horizontal processes within eTOM. ·       The Technology Neutral Architecture (TNA) is an integrated framework. TNA is an architecture that is sustainable through technology changes.   NGOSS Architecture Standards are:   ·       Centralized data ·       Loosely coupled distributed systems ·       Application components/re-use  ·       A technology-neutral system framework with technology specific implementations ·       Interoperability to service provider data/processes ·       Allows more re-use of business components across multiple business scenarios ·       Workflow automation   The traditional operator systems architecture consists of four layers,   ·       Business Support System (BSS) layer, with focus toward customers and business partners. Manages order, subscriber, pricing, rating, and billing information. 
·       Operations Support System (OSS) layer, built around product, service, and resource inventories.
·       Networks layer – consists of Network elements and 3rd Party Systems.
·       Integration Layer – to maximize application communication and overall solution flexibility.

Reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any Telecom Service Provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

·       Manage the end-to-end lifecycle of a customer request for products.
·       Create and manage customer profiles.
·       Manage all interactions with customers – inquiries, requests, and responses.
·       Provide updates to Billing and other southbound systems on customer/account related updates such as customer/account creation, deletion, modification, request bills, final bill, duplicate bills, and credit limits, through Middleware.
·       Work with Order Management System, Product, and Service Management components within CRM.
·       Manage customer preferences – involve all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
·       Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. Requests by customers are sent via fulfillment/provisioning to the billing system for order processing.

2.
Billing and Revenue Management   Billing and Revenue Management handles the collection of appropriate usage records and production of timely and accurate bills – for providing pre-bill usage information and billing to customers; for processing their payments; and for performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.   The key functionalities provided by these applications are   ·       To ensure that enterprise revenue is billed and invoices delivered appropriately to customers. ·       To manage customers’ billing accounts, process their payments, perform payment collections, and monitor the status of the account balance. ·       To ensure the timely and effective fulfillment of all customer bill inquiries and complaints. ·       Collect the usage records from mediation and ensure appropriate rating and discounting of all usage and pricing. ·       Support revenue sharing; split charging where usage is guided to an account different from the service consumer. ·       Support prepaid and post-paid rating. ·       Send notification on approach / exceeding the usage thresholds as enforced by the subscribed offer, and / or as setup by the customer. ·       Support prepaid, post paid, and hybrid (where some services are prepaid and the rest of the services post paid) customers and conversion from post paid to prepaid, and vice versa. ·       Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, account receivable, GL Interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, contract violation penalty, etc. ·       Initiate direct debit to collect payment against an invoice outstanding. ·       Send notification to Middleware on different events; for example, payment receipt, pre-suspension, threshold exceed, etc.   Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can directly get usage details from network elements.   3. Mediation   Mediation systems transform/translate the Raw or Native Usage Data Records into a general format that is acceptable to billing for their rating purposes.   The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution.   ·       Collect Usage Data Records from different data sources – like network elements, routers, servers – via different protocol and interfaces. ·       Process Usage Data Records – Mediation will process Usage Data Records as per the source format. ·       Validate Usage Data Records from each source. ·       Segregates Usage Data Records coming from each source to multiple, based on the segregation requirement of end Application. ·       Aggregates Usage Data Records based on the aggregation rule if any from different sources. ·       Consolidates multiple Usage Data Records from each source. ·       Delivers formatted Usage Data Records to different end application like Billing, Interconnect, Fraud Management, etc. 
·       Generates audit trail for incoming Usage Data Records and keeps track of all the Usage Data Records at various stages of mediation process. ·       Checks duplicate Usage Data Records across files for a given time window.   4. Fulfillment   This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order, and ensures completion on time, as well as ensuring a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer update on customer order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.   The key functionalities provided by these applications are   ·       Issuing new customer orders, modifying open customer orders, or canceling open customer orders; ·       Verifying whether specific non-standard offerings sought by customers are feasible and supportable; ·       Checking the credit worthiness of customers as part of the customer order process; ·       Testing the completed offering to ensure it is working correctly; ·       Updating of the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled; ·       Assigning and tracking customer provisioning activities; ·       Managing customer provisioning jeopardy conditions; and ·       Reporting progress on customer orders and other processes to customer.   These applications typically get orders from CRM systems. They interact with network elements and billing systems for fulfillment of orders.   5. Enterprise Management   This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that   ·       Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, process, cost, and quality management, etc.;   ·       Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;   ·       Occur throughout the enterprise, including processes for project management, performance assessments, cost assessments, etc.     (i) Enterprise Risk Management:   Enterprise Risk Management focuses on assuring that risks and threats to the enterprise value and/or reputation are identified, and appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts. 
Two key areas covered in Risk Management by telecom operators are:   ·       Revenue Assurance: Revenue assurance system will be responsible for identifying revenue loss scenarios across components/systems, and will help in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution. o   Identify all usage information dropped when networks are being upgraded. o   Interconnect bill verification. o   Identify where services are routinely provisioned but never billed. o   Identify poor sales policies that are intensifying collections problems. o   Find leakage where usage is sent to error bucket and never billed for. o   Find leakage where field service, CRM, and network build-out are not optimized.   ·       Fraud Management: Involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.   The key roles and responsibilities of the system component are as follows:   o   Fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically. o   Fraud management will be able to detect the unauthorized access to services for certain subscribers. These subscribers may have been provided unauthorized services by employees. The component will raise the alert to the operator the very first time of such illegal calls or calls which are not billed. o   The solution will be to have an alarm management system that will deliver alarms to the operator/provider whenever it detects a fraud, thus minimizing fraud by catching it the first time it occurs. o   The Fraud Management system will be capable of interfacing with switches, mediation systems, and billing systems   (ii) Knowledge Management   This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.   Key responsibilities of knowledge base management are to   ·       Maintain knowledge base – Creation and updating of knowledge base on ongoing basis. ·       Search knowledge base – Search of knowledge base on keywords or category browse ·       Maintain metadata – Management of metadata on knowledge base to ensure effective management and search. ·       Run report generator. ·       Provide content – Add content to the knowledge base, e.g., user guides, operational manual, etc.   (iii) Document Management   It focuses on maintaining a repository of all electronic documents or images of paper documents relevant to the enterprise using a system.   (iv) Data Management   It manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.   Key responsibilities of Data Management are   ·       Using ETL, extract the data from CRM, Billing, web content, ERP, campaign management, financial, network operations, asset management info, customer contact data, customer measures, benchmarks, process data, e.g., process inputs, outputs, and measures, into Enterprise Data Warehouse. 
·       Management of data traceability with source, data related business rules/decisions, data quality, data cleansing data reconciliation, competitors data – storage for all the enterprise data (customer profiles, products, offers, revenues, etc.) ·       Get online update through night time replication or physical backup process at regular frequency. ·       Provide the data access to business intelligence and other systems for their analysis, report generation, and use.   (v) Business Intelligence   It uses the Enterprise Data to provide the various analysis and reports that contain prospects and analytics for customer retention, acquisition of new customers due to the offers, and SLAs. It will generate right and optimized plans – bolt-ons for the customers.   The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the Enterprise Level:   ·       It will do Pattern analysis and reports problem. ·       It will do Data Analysis – Statistical analysis, data profiling, affinity analysis of data, customer segment wise usage patterns on offers, products, service and revenue generation against services and customer segments. ·       It will do Performance (business, system, and forecast) analysis, churn propensity, response time, and SLAs analysis. ·       It will support for online and offline analysis, and report drill down capability. ·       It will collect, store, and report various SLA data. ·       It will provide the necessary intelligence for marketing and working on campaigns, etc., with cost benefit analysis and predictions.   It will advise on customer promotions with additional services based on loyalty and credit history of customer   ·       It will Interface with Enterprise Data Management system for data to run reports and analysis tasks. It will interface with the campaign schedules, based on historical success evidence.   (vi) Stakeholder and External Relations Management   It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.   (vii) Enterprise Resource Planning   It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform and enterprise wide system environment.   The key roles and responsibilities for Enterprise System are given below:   ·        It will handle responsibilities such as core accounting, financial, and management reporting. ·       It will interface with CRM for capturing customer account and details. ·       It will interface with billing to capture the billing revenue and other financial data. ·       It will be responsible for executing the dunning process. Billing will send the required feed to ERP for execution of dunning. ·       It will interface with the CRM and Billing through batch interfaces. Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems. 
E.g., an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.   6. External Interfaces/Touch Points   The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a Service Provider to other parties can be achieved by a variety of mechanisms, including:   ·       Exchange of emails or faxes ·       Call Centers ·       Web Portals ·       Business-to-Business (B2B) automated transactions   These applications provide an Internet technology driven interface to external parties to undertake a variety of business functions directly for themselves. These can provide fully or partially automated service to external parties through various touch points.   Typical characteristics of these touch points are   ·       Pre-integrated self-service system, including stand-alone web framework or integration front end with a portal engine ·       Self services layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment ·       Portlets driven connectivity exposing data and services interoperability through a portal engine or web application   These touch points mostly interact with the CRM systems for requests, inquiries, and responses.   7. Middleware   The component will be primarily responsible for integrating the different systems components under a common platform. It should provide a Standards-Based Platform for building Service Oriented Architecture and Composite Applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution.   ·       As an integration framework, covering to and fro interfaces ·       Provide a web service framework with service registry. ·       Support SOA framework with SOA service registry. ·       Each of the interfaces from / to Middleware to other components would handle data transformation, translation, and mapping of data points. ·       Receive data from the caller / activate and/or forward the data to the recipient system in XML format. ·       Use standard XML for data exchange. ·       Provide the response back to the service/call initiator. ·       Provide a tracking until the response completion. ·       Keep a store transitional data against each call/transaction. ·       Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems; e.g., customer profile and customer history, etc. ·       Provide the data in a common unified format to the SOA calls across systems, and follow the Enterprise Architecture directive. ·       Provide an audit trail for all transactions being handled by the component.   8. Network Elements   The term Network Element means a facility or equipment used in the provision of a telecommunications service. Such terms also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.   Typical network elements in a GSM network are Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value added services like Push-to-talk (PTT), Ring Back Tone (RBT), etc.   
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems like mediation and billing system for rating and billing. They also integrate with provisioning systems for order/service fulfillment.   9. 3rd Party Applications   3rd Party systems are applications like content providers, payment gateways, point of sale terminals, and databases/applications maintained by the Government.   Depending on applicability and the type of functionality provided by 3rd party applications, the integration with different telecom systems like CRM, provisioning, and billing will be done.   10. Service Delivery Platform   A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in network and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.   SOA Reference Architecture   SOA concept is based on the principle of developing reusable business service and building applications by composing those services, instead of building monolithic applications in silos. It’s about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.   In an SOA, resources are made available to participants in a value net, enterprise, line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization’s business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:   ·       The business to specify processes as orchestrations of reusable services ·       Technology agnostic business design, with technology hidden behind service interface ·       A contractual-like interaction between business and IT, based on service SLAs ·       Accountability and governance, better aligned to business services ·       Applications interconnections untangling by allowing access only through service interfaces, reducing the daunting side effects of change ·       Reduced pressure to replace legacy and extended lifetime for legacy applications, through encapsulation in services   ·       A Cloud Computing paradigm, using web services technologies, that makes possible service outsourcing on an on-demand, utility-like, pay-per-usage basis   The following section represents the Reference Architecture of logical view for the Telecom Solution. The new custom built application needs to align with this logical architecture in the long run to achieve EA benefits.   Packaged implementation applications, such as ERP billing applications, need to expose their functions as service providers (as other applications consume) and interact with other applications as service consumers.   COT applications need to expose services through wrappers such as adapters to utilize existing resources and at the same time achieve Enterprise Architecture goal and objectives.   
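
To make the wrapper/adapter idea above concrete, here is a minimal, purely illustrative sketch (added to this text, not part of the original article) of exposing an in-process legacy lookup as a small XML-over-HTTP service that other systems could consume through the integration layer. Every name in it (get_balance, the port, the element names) is hypothetical.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    import xml.etree.ElementTree as ET

    def get_balance(account_id: str) -> float:
        # Stand-in for a call into a packaged or legacy billing application.
        return 42.50

    class BalanceAdapter(BaseHTTPRequestHandler):
        def do_GET(self):
            query = parse_qs(urlparse(self.path).query)
            account = query.get("account", ["unknown"])[0]

            # Wrap the legacy result in a small XML payload for consumers.
            root = ET.Element("BalanceResponse")
            ET.SubElement(root, "Account").text = account
            ET.SubElement(root, "Balance").text = str(get_balance(account))
            body = ET.tostring(root, encoding="utf-8")

            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # e.g. GET http://localhost:8080/balance?account=12345
        HTTPServer(("localhost", 8080), BalanceAdapter).serve_forever()
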
The following are the various layers for Enterprise-level deployment of SOA. This diagram captures the abstract view of Enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers. However, there is no strict rule that top layers should not directly communicate with bottom layers.

The diagram below represents the important logical pieces that would result from an overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM, ERP, custom built applications, COTS based applications like Billing, Revenue Management, Fulfillment, and the Enterprise databases that are essential and contribute directly or indirectly to the Enterprise OSS/BSS Transformation.

ERP holds the data of Asset Lifecycle Management, Supply Chain, Advanced Procurement, and Human Capital Management, etc.

CRM holds the data related to Order, Sales, and Marketing, Customer Care, Partner Relationship Management, Loyalty, etc.

Content Management handles Enterprise Search and Query. The Billing application consists of the following components:

·       Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges
·       Enterprise databases will hold both the application and service data, whether structured or unstructured.

MDM – Master data mainly consists of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies such as application servers to implement the components, workload management, high availability, and load balancing.

Application Services: This Service Layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. This is a basic service layer, which exposes application functionalities and data as reusable services.
The three types of the Application access services are:   ·       Application Access Service: This Service Layer exposes application level functionalities as a reusable service between BSS to BSS and BSS to OSS integration. This layer is enabled using disparate technology such as Web Service, Integration Servers, and Adaptors, etc.   ·       Data Access Service: This Service Layer exposes application data services as a reusable reference data service. This is done via direct interaction with application data. and provides the federated query.   ·       Network Access Service: This Service Layer exposes provisioning layer as a reusable service from OSS to OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.   Common Services encompasses management of structured, semi-structured, and unstructured data such as information services, portal services, interaction services, infrastructure services, and security services, etc.   3.          Integration Layer:   This consists of service infrastructure components like service bus, service gateway for partner integration, service registry, service repository, and BPEL processor. Service bus will carry the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary based routing, distributed caching of routing information, transformations, and all qualities of service for messaging-like reliability, scalability, and availability, etc. Service registry will hold all contracts (wsdl) of services, and it helps developers to locate or discover service during design time or runtime.   • BPEL processor would be useful in orchestrating the services to compose a complex business scenario or process. • Workflow and business rules management are also required to support manual triggering of certain activities within business process. based on the rules setup and also the state machine information. Application, data, and service mediation layer typically forms the overall composite application development framework or SOA Framework.   4.          Business Process Layer: These are typically the intermediate services layer and represent Shared Business Process Services. At Enterprise Level, these services are from Customer Management, Order Management, Billing, Finance, and Asset Management application domains.   5.          Access Layer: This layer consists of portals for Enterprise and provides a single view of Enterprise information management and dashboard services.   6.          Channel Layer: This consists of various devices; applications that form part of extended enterprise; browsers through which users access the applications.   7.          Client Layer: This designates the different types of users accessing the enterprise applications. The type of user typically would be an important factor in determining the level of access to applications.   8.          Vertical pieces like management, monitoring, security, and development cut across all horizontal layers Management and monitoring involves all aspects of SOA-like services, SLAs, and other QoS lifecycle processes for both applications and services surrounding SOA governance.     9.          EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices:   EA Governance is important in terms of providing the overall direction to SOA implementation within the enterprise. 
This requires board-level involvement, in addition to business and IT executives. At a high level, it involves managing the implementation of SOA projects, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information and Related Technologies).

Promoting a reuse culture and the SOA way of doing things requires devising supporting tools and techniques, establishing competency centers, and training the workforce to take up the new roles suited to the SOA journey.

Conclusions

Reference Architectures can serve as the basis for disparate architecture efforts throughout the organization, even if those efforts use different tools and technologies. Reference Architectures provide best practices and approaches in a way that is independent of vendors, technologies, and standards. They model the abstract architectural elements for an enterprise independently of the technologies, protocols, and products that are used to implement an SOA.

Telecom enterprises today are facing significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. The use of an SOA-based architecture for communication with each of the external systems, such as Billing and CRM, in the OSS/BSS system makes the architecture loosely coupled and gives it greater flexibility: any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of the business processes easier, with better performance in terms of lead time, quality, and cost. Since the architecture is based on standards, it will lower the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • Agile Development

    - by James Oloo Onyango
A lot of literature has been and is being written about agile development and its surrounding philosophies. In my quest to find the best way to express the importance of agile methodologies, I have found Robert C. Martin's "A Satire Of Two Companies" to be both the most concise and thorough! Enjoy the read!

Rufus Inc: Project Kick-Off

Your name is Bob. The date is January 3, 2001, and your head still aches from the recent millennial revelry. You are sitting in a conference room with several managers and a group of your peers. You are a project team leader. Your boss is there, and he has brought along all of his team leaders. His boss called the meeting.

"We have a new project to develop," says your boss's boss. Call him BB. The points in his hair are so long that they scrape the ceiling. Your boss's points are just starting to grow, but he eagerly awaits the day when he can leave Brylcream stains on the acoustic tiles. BB describes the essence of the new market they have identified and the product they want to develop to exploit this market.

"We must have this new project up and working by fourth quarter, October 1," BB demands. "Nothing is of higher priority, so we are cancelling your current project." The reaction in the room is stunned silence. Months of work are simply going to be thrown away. Slowly, a murmur of objection begins to circulate around the conference table.

His points give off an evil green glow as BB meets the eyes of everyone in the room. One by one, that insidious stare reduces each attendee to quivering lumps of protoplasm. It is clear that he will brook no discussion on this matter.

Once silence has been restored, BB says, "We need to begin immediately. How long will it take you to do the analysis?" You raise your hand. Your boss tries to stop you, but his spitwad misses you and you are unaware of his efforts. "Sir, we can't tell you how long the analysis will take until we have some requirements."

"The requirements document won't be ready for 3 or 4 weeks," BB says, his points vibrating with frustration. "So, pretend that you have the requirements in front of you now. How long will you require for analysis?" No one breathes. Everyone looks around to see whether anyone has some idea. "If analysis goes beyond April 1, we have a problem. Can you finish the analysis by then?"

Your boss visibly gathers his courage: "We'll find a way, sir!" His points grow 3 mm, and your headache increases by two Tylenol. "Good." BB smiles. "Now, how long will it take to do the design?"

"Sir," you say. Your boss visibly pales. He is clearly worried that his 3 mm are at risk. "Without an analysis, it will not be possible to tell you how long design will take." BB's expression shifts beyond austere. "PRETEND you have the analysis already!" he says, while fixing you with his vacant, beady little eyes. "How long will it take you to do the design?"

Two Tylenol are not going to cut it. Your boss, in a desperate attempt to save his new growth, babbles: "Well, sir, with only six months left to complete the project, design had better take no longer than 3 months." "I'm glad you agree, Smithers!" BB says, beaming. Your boss relaxes. He knows his points are secure. After a while, he starts lightly humming the Brylcream jingle.

BB continues, "So, analysis will be complete by April 1, design will be complete by July 1, and that gives you 3 months to implement the project. This meeting is an example of how well our new consensus and empowerment policies are working.
Now, get out there and start working. I'll expect to see TQM plans and QIT assignments on my desk by next week. Oh, and don't forget that your crossfunctional team meetings and reports will be needed for next month's quality audit." "Forget the Tylenol," you think to yourself as you return to your cubicle. "I need bourbon."   Visibly excited, your boss comes over to you and says, "Gosh, what a great meeting. I think we're really going to do some world shaking with this project." You nod in agreement, too disgusted to do anything else. "Oh," your boss continues, "I almost forgot." He hands you a 30-page document. "Remember that the SEI is coming to do an evaluation next week. This is the evaluation guide. You need to read through it, memorize it, and then shred it. It tells you how to answer any questions that the SEI auditors ask you. It also tells you what parts of the building you are allowed to take them to and what parts to avoid. We are determined to be a CMM level 3 organization by June!"   You and your peers start working on the analysis of the new project. This is difficult because you have no requirements. But from the 10-minute introduction given by BB on that fateful morning, you have some idea of what the product is supposed to do.   Corporate process demands that you begin by creating a use case document. You and your team begin enumerating use cases and drawing oval and stick diagrams. Philosophical debates break out among the team members. There is disagreement as to whether certain use cases should be connected with <<extends>> or <<includes>> relationships. Competing models are created, but nobody knows how to evaluate them. The debate continues, effectively paralyzing progress.   After a week, somebody finds the iceberg.com Web site, which recommends disposing entirely of <<extends>> and <<includes>> and replacing them with <<precedes>> and <<uses>>. The documents on this Web site, authored by Don Sengroiux, describes a method known as stalwart-analysis, which claims to be a step-by-step method for translating use cases into design diagrams. More competing use case models are created using this new scheme, but again, people can't agree on how to evaluate them. The thrashing continues. More and more, the use case meetings are driven by emotion rather than by reason. If it weren't for the fact that you don't have requirements, you'd be pretty upset by the lack of progress you are making. The requirements document arrives on February 15. And then again on February 20, 25, and every week thereafter. Each new version contradicts the previous one. Clearly, the marketing folks who are writing the requirements, empowered though they might be, are not finding consensus.   At the same time, several new competing use case templates have been proposed by the various team members. Each template presents its own particularly creative way of delaying progress. The debates rage on. On March 1, Prudence Putrigence, the process proctor, succeeds in integrating all the competing use case forms and templates into a single, all-encompassing form. Just the blank form is 15 pages long. She has managed to include every field that appeared on all the competing templates. She also presents a 159- page document describing how to fill out the use case form. All current use cases must be rewritten according to the new standard.   You marvel to yourself that it now requires 15 pages of fill-in-the-blank and essay questions to answer the question: What should the system do when the user presses Return? 
The corporate process (authored by L. E. Ott, famed author of "Holistic Analysis: A Progressive Dialectic for Software Engineers") insists that you discover all primary use cases, 87 percent of all secondary use cases, and 36.274 percent of all tertiary use cases before you can complete analysis and enter the design phase. You have no idea what a tertiary use case is. So in an attempt to meet this requirement, you try to get your use case document reviewed by the marketing department, which you hope will know what a tertiary use case is.

Unfortunately, the marketing folks are too busy with sales support to talk to you. Indeed, since the project started, you have not been able to get a single meeting with marketing, which has provided a never-ending stream of changing and contradictory requirements documents.

While one team has been spinning endlessly on the use case document, another team has been working out the domain model. Endless variations of UML documents are pouring out of this team. Every week, the model is reworked. The team members can't decide whether to use <<interfaces>> or <<types>> in the model. A huge disagreement has been raging on the proper syntax and application of OCL. Others on the team just got back from a 5-day class on catabolism, and have been producing incredibly detailed and arcane diagrams that nobody else can fathom.

On March 27, with one week to go before analysis is to be complete, you have produced a sea of documents and diagrams but are no closer to a cogent analysis of the problem than you were on January 3.

****

And then, a miracle happens.

****

On Saturday, April 1, you check your e-mail from home. You see a memo from your boss to BB. It states unequivocally that you are done with the analysis! You phone your boss and complain. "How could you have told BB that we were done with the analysis?" "Have you looked at a calendar lately?" he responds. "It's April 1!" The irony of that date does not escape you. "But we have so much more to think about. So much more to analyze! We haven't even decided whether to use <<extends>> or <<precedes>>!" "Where is your evidence that you are not done?" inquires your boss, impatiently. "Whaaa . . . ." But he cuts you off. "Analysis can go on forever; it has to be stopped at some point. And since this is the date it was scheduled to stop, it has been stopped. Now, on Monday, I want you to gather up all existing analysis materials and put them into a public folder. Release that folder to Prudence so that she can log it in the CM system by Monday afternoon. Then get busy and start designing."

As you hang up the phone, you begin to consider the benefits of keeping a bottle of bourbon in your bottom desk drawer.

They threw a party to celebrate the on-time completion of the analysis phase. BB gave a colon-stirring speech on empowerment. And your boss, another 3 mm taller, congratulated his team on the incredible show of unity and teamwork. Finally, the CIO takes the stage to tell everyone that the SEI audit went very well and to thank everyone for studying and shredding the evaluation guides that were passed out. Level 3 now seems assured and will be awarded by June. (Scuttlebutt has it that managers at the level of BB and above are to receive significant bonuses once the SEI awards level 3.)

As the weeks flow by, you and your team work on the design of the system. Of course, you find that the analysis that the design is supposedly based on is flawed; no, useless; no, worse than useless.
But when you tell your boss that you need to go back and work some more on the analysis to shore up its weaker sections, he simply states, "The analysis phase is over. The only allowable activity is design. Now get back to it."

So, you and your team hack the design as best you can, unsure of whether the requirements have been properly analyzed. Of course, it really doesn't matter much, since the requirements document is still thrashing with weekly revisions, and the marketing department still refuses to meet with you.

The design is a nightmare. Your boss recently misread a book named The Finish Line in which the author, Mark DeThomaso, blithely suggested that design documents should be taken down to code-level detail. "If we are going to be working at that level of detail," you ask, "why don't we simply write the code instead?" "Because then you wouldn't be designing, of course. And the only allowable activity in the design phase is design!" "Besides," he continues, "we have just purchased a companywide license for Dandelion! This tool enables 'Round the Horn Engineering!' You are to transfer all design diagrams into this tool. It will automatically generate our code for us! It will also keep the design diagrams in sync with the code!" Your boss hands you a brightly colored shrinkwrapped box containing the Dandelion distribution. You accept it numbly and shuffle off to your cubicle. Twelve hours, eight crashes, one disk reformatting, and eight shots of 151 later, you finally have the tool installed on your server. You consider the week your team will lose while attending Dandelion training. Then you smile and think, "Any week I'm not here is a good week."

Design diagram after design diagram is created by your team. Dandelion makes it very difficult to draw these diagrams. There are dozens and dozens of deeply nested dialog boxes with funny text fields and check boxes that must all be filled in correctly. And then there's the problem of moving classes between packages. At first, these diagrams are driven from the use cases. But the requirements are changing so often that the use cases rapidly become meaningless. Debates rage about whether VISITOR or DECORATOR design patterns should be used. One developer refuses to use VISITOR in any form, claiming that it's not a properly object-oriented construct. Someone refuses to use multiple inheritance, since it is the spawn of the devil. Review meetings rapidly degenerate into debates about the meaning of object orientation, the definition of analysis versus design, or when to use aggregation versus association.

Midway through the design cycle, the marketing folks announce that they have rethought the focus of the system. Their new requirements document is completely restructured. They have eliminated several major feature areas and replaced them with feature areas that they anticipate customer surveys will show to be more appropriate. You tell your boss that these changes mean that you need to reanalyze and redesign much of the system. But he says, "The analysis phase is over. The only allowable activity is design. Now get back to it."

You suggest that it might be better to create a simple prototype to show to the marketing folks and even some potential customers. But your boss says, "The analysis phase is over. The only allowable activity is design. Now get back to it."

Hack, hack, hack, hack. You try to create some kind of a design document that might reflect the new requirements documents.
However, the revolution of the requirements has not caused them to stop thrashing. Indeed, if anything, the wild oscillations of the requirements document have only increased in frequency and amplitude.   You slog your way through them.   On June 15, the Dandelion database gets corrupted. Apparently, the corruption has been progressive. Small errors in the DB accumulated over the months into bigger and bigger errors. Eventually, the CASE tool just stopped working. Of course, the slowly encroaching corruption is present on all the backups. Calls to the Dandelion technical support line go unanswered for several days. Finally, you receive a brief e-mail from Dandelion, informing you that this is a known problem and that the solution is to purchase the new version, which they promise will be ready some time next quarter, and then reenter all the diagrams by hand.   ****   Then, on July 1 another miracle happens! You are done with the design!   Rather than go to your boss and complain, you stock your middle desk drawer with some vodka.   **** They threw a party to celebrate the on-time completion of the design phase and their graduation to CMM level 3. This time, you find BB's speech so stirring that you have to use the restroom before it begins. New banners and plaques are all over your workplace. They show pictures of eagles and mountain climbers, and they talk about teamwork and empowerment. They read better after a few scotches. That reminds you that you need to clear out your file cabinet to make room for the brandy. You and your team begin to code. But you rapidly discover that the design is lacking in some significant areas. Actually, it's lacking any significance at all. You convene a design session in one of the conference rooms to try to work through some of the nastier problems. But your boss catches you at it and disbands the meeting, saying, "The design phase is over. The only allowable activity is coding. Now get back to it."   ****   The code generated by Dandelion is really hideous. It turns out that you and your team were using association and aggregation the wrong way, after all. All the generated code has to be edited to correct these flaws. Editing this code is extremely difficult because it has been instrumented with ugly comment blocks that have special syntax that Dandelion needs in order to keep the diagrams in sync with the code. If you accidentally alter one of these comments, the diagrams will be regenerated incorrectly. It turns out that "Round the Horn Engineering" requires an awful lot of effort. The more you try to keep the code compatible with Dandelion, the more errors Dandelion generates. In the end, you give up and decide to keep the diagrams up to date manually. A second later, you decide that there's no point in keeping the diagrams up to date at all. Besides, who has time?   Your boss hires a consultant to build tools to count the number of lines of code that are being produced. He puts a big thermometer graph on the wall with the number 1,000,000 on the top. Every day, he extends the red line to show how many lines have been added. Three days after the thermometer appears on the wall, your boss stops you in the hall. "That graph isn't growing quickly enough. We need to have a million lines done by October 1." "We aren't even sh-sh-sure that the proshect will require a m-million linezh," you blather. "We have to have a million lines done by October 1," your boss reiterates. 
His points have grown again, and the Grecian formula he uses on them creates an aura of authority and competence. "Are you sure your comment blocks are big enough?" Then, in a flash of managerial insight, he says, "I have it! I want you to institute a new policy among the engineers. No line of code is to be longer than 20 characters. Any such line must be split into two or more, preferably more. All existing code needs to be reworked to this standard. That'll get our line count up!"

You decide not to tell him that this will require two unscheduled work months. You decide not to tell him anything at all. You decide that intravenous injections of pure ethanol are the only solution. You make the appropriate arrangements.

Hack, hack, hack, and hack. You and your team madly code away. By August 1, your boss, frowning at the thermometer on the wall, institutes a mandatory 50-hour workweek.

Hack, hack, hack, and hack. By September 1st, the thermometer is at 1.2 million lines and your boss asks you to write a report describing why you exceeded the coding budget by 20 percent. He institutes mandatory Saturdays and demands that the project be brought back down to a million lines. You start a campaign of remerging lines. Hack, hack, hack, and hack. Tempers are flaring; people are quitting; QA is raining trouble reports down on you. Customers are demanding installation and user manuals; salespeople are demanding advance demonstrations for special customers; the requirements document is still thrashing, the marketing folks are complaining that the product isn't anything like they specified, and the liquor store won't accept your credit card anymore. Something has to give.

On September 15, BB calls a meeting. As he enters the room, his points are emitting clouds of steam. When he speaks, the bass overtones of his carefully manicured voice cause the pit of your stomach to roll over. "The QA manager has told me that this project has less than 50 percent of the required features implemented. He has also informed me that the system crashes all the time, yields wrong results, and is hideously slow. He has also complained that he cannot keep up with the continuous train of daily releases, each more buggy than the last!" He stops for a few seconds, visibly trying to compose himself. "The QA manager estimates that, at this rate of development, we won't be able to ship the product until December!" Actually, you think it's more like March, but you don't say anything. "December!" BB roars with such derision that people duck their heads as though he were pointing an assault rifle at them. "December is absolutely out of the question. Team leaders, I want new estimates on my desk in the morning. I am hereby mandating 65-hour work weeks until this project is complete. And it better be complete by November 1." As he leaves the conference room, he is heard to mutter: "Empowerment... bah!"

* * *

Your boss is bald; his points are mounted on BB's wall. The fluorescent lights reflecting off his pate momentarily dazzle you. "Do you have anything to drink?" he asks. Having just finished your last bottle of Boone's Farm, you pull a bottle of Thunderbird from your bookshelf and pour it into his coffee mug. "What's it going to take to get this project done?" he asks. "We need to freeze the requirements, analyze them, design them, and then implement them," you say callously. "By November 1?" your boss exclaims incredulously. "No way! Just get back to coding the damned thing." He storms out, scratching his vacant head.
A few days later, you find that your boss has been transferred to the corporate research division. Turnover has skyrocketed. Customers, informed at the last minute that their orders cannot be fulfilled on time, have begun to cancel their orders. Marketing is re-evaluating whether this product aligns with the overall goals of the company. Memos fly, heads roll, policies change, and things are, overall, pretty grim. Finally, by March, after far too many sixty-five hour weeks, a very shaky version of the software is ready. In the field, bug-discovery rates are high, and the technical support staff are at their wits' end, trying to cope with the complaints and demands of the irate customers. Nobody is happy.   In April, BB decides to buy his way out of the problem by licensing a product produced by Rupert Industries and redistributing it. The customers are mollified, the marketing folks are smug, and you are laid off.     Rupert Industries: Project Alpha   Your name is Robert. The date is January 3, 2001. The quiet hours spent with your family this holiday have left you refreshed and ready for work. You are sitting in a conference room with your team of professionals. The manager of the division called the meeting. "We have some ideas for a new project," says the division manager. Call him Russ. He is a high-strung British chap with more energy than a fusion reactor. He is ambitious and driven but understands the value of a team. Russ describes the essence of the new market opportunity the company has identified and introduces you to Jane, the marketing manager, who is responsible for defining the products that will address it. Addressing you, Jane says, "We'd like to start defining our first product offering as soon as possible. When can you and your team meet with me?" You reply, "We'll be done with the current iteration of our project this Friday. We can spare a few hours for you between now and then. After that, we'll take a few people from the team and dedicate them to you. We'll begin hiring their replacements and the new people for your team immediately." "Great," says Russ, "but I want you to understand that it is critical that we have something to exhibit at the trade show coming up this July. If we can't be there with something significant, we'll lose the opportunity."   "I understand," you reply. "I don't yet know what it is that you have in mind, but I'm sure we can have something by July. I just can't tell you what that something will be right now. In any case, you and Jane are going to have complete control over what we developers do, so you can rest assured that by July, you'll have the most important things that can be accomplished in that time ready to exhibit."   Russ nods in satisfaction. He knows how this works. Your team has always kept him advised and allowed him to steer their development. He has the utmost confidence that your team will work on the most important things first and will produce a high-quality product.   * * *   "So, Robert," says Jane at their first meeting, "How does your team feel about being split up?" "We'll miss working with each other," you answer, "but some of us were getting pretty tired of that last project and are looking forward to a change. So, what are you people cooking up?" Jane beams. "You know how much trouble our customers currently have . . ." And she spends a half hour or so describing the problem and possible solution. "OK, wait a second" you respond. "I need to be clear about this." 
And so you and Jane talk about how this system might work. Some of her ideas aren't fully formed. You suggest possible solutions. She likes some of them. You continue discussing.   During the discussion, as each new topic is addressed, Jane writes user story cards. Each card represents something that the new system has to do. The cards accumulate on the table and are spread out in front of you. Both you and Jane point at them, pick them up, and make notes on them as you discuss the stories. The cards are powerful mnemonic devices that you can use to represent complex ideas that are barely formed.   At the end of the meeting, you say, "OK, I've got a general idea of what you want. I'm going to talk to the team about it. I imagine they'll want to run some experiments with various database structures and presentation formats. Next time we meet, it'll be as a group, and we'll start identifying the most important features of the system."   A week later, your nascent team meets with Jane. They spread the existing user story cards out on the table and begin to get into some of the details of the system. The meeting is very dynamic. Jane presents the stories in the order of their importance. There is much discussion about each one. The developers are concerned about keeping the stories small enough to estimate and test. So they continually ask Jane to split one story into several smaller stories. Jane is concerned that each story have a clear business value and priority, so as she splits them, she makes sure that this stays true.   The stories accumulate on the table. Jane writes them, but the developers make notes on them as needed. Nobody tries to capture everything that is said; the cards are not meant to capture everything but are simply reminders of the conversation.   As the developers become more comfortable with the stories, they begin writing estimates on them. These estimates are crude and budgetary, but they give Jane an idea of what the story will cost.   At the end of the meeting, it is clear that many more stories could be discussed. It is also clear that the most important stories have been addressed and that they represent several months worth of work. Jane closes the meeting by taking the cards with her and promising to have a proposal for the first release in the morning.   * * *   The next morning, you reconvene the meeting. Jane chooses five cards and places them on the table. "According to your estimates, these cards represent about one perfect team-week's worth of work. The last iteration of the previous project managed to get one perfect team-week done in 3 real weeks. If we can get these five stories done in 3 weeks, we'll be able to demonstrate them to Russ. That will make him feel very comfortable about our progress." Jane is pushing it. The sheepish look on her face lets you know that she knows it too. You reply, "Jane, this is a new team, working on a new project. It's a bit presumptuous to expect that our velocity will be the same as the previous team's. However, I met with the team yesterday afternoon, and we all agreed that our initial velocity should, in fact, be set to one perfectweek for every 3 real-weeks. So you've lucked out on this one." "Just remember," you continue, "that the story estimates and the story velocity are very tentative at this point. We'll learn more when we plan the iteration and even more when we implement it."   Jane looks over her glasses at you as if to say "Who's the boss around here, anyway?" and then smiles and says, "Yeah, don't worry. 
I know the drill by now."Jane then puts 15 more cards on the table. She says, "If we can get all these cards done by the end of March, we can turn the system over to our beta test customers. And we'll get good feedback from them."   You reply, "OK, so we've got our first iteration defined, and we have the stories for the next three iterations after that. These four iterations will make our first release."   "So," says Jane, can you really do these five stories in the next 3 weeks?" "I don't know for sure, Jane," you reply. "Let's break them down into tasks and see what we get."   So Jane, you, and your team spend the next several hours taking each of the five stories that Jane chose for the first iteration and breaking them down into small tasks. The developers quickly realize that some of the tasks can be shared between stories and that other tasks have commonalities that can probably be taken advantage of. It is clear that potential designs are popping into the developers' heads. From time to time, they form little discussion knots and scribble UML diagrams on some cards.   Soon, the whiteboard is filled with the tasks that, once completed, will implement the five stories for this iteration. You start the sign-up process by saying, "OK, let's sign up for these tasks." "I'll take the initial database generation." Says Pete. "That's what I did on the last project, and this doesn't look very different. I estimate it at two of my perfect workdays." "OK, well, then, I'll take the login screen," says Joe. "Aw, darn," says Elaine, the junior member of the team, "I've never done a GUI, and kinda wanted to try that one."   "Ah, the impatience of youth," Joe says sagely, with a wink in your direction. "You can assist me with it, young Jedi." To Jane: "I think it'll take me about three of my perfect workdays."   One by one, the developers sign up for tasks and estimate them in terms of their own perfect workdays. Both you and Jane know that it is best to let the developers volunteer for tasks than to assign the tasks to them. You also know full well that you daren't challenge any of the developers' estimates. You know these people, and you trust them. You know that they are going to do the very best they can.   The developers know that they can't sign up for more perfect workdays than they finished in the last iteration they worked on. Once each developer has filled his or her schedule for the iteration, they stop signing up for tasks.   Eventually, all the developers have stopped signing up for tasks. But, of course, tasks are still left on the board.   "I was worried that that might happen," you say, "OK, there's only one thing to do, Jane. We've got too much to do in this iteration. What stories or tasks can we remove?" Jane sighs. She knows that this is the only option. Working overtime at the beginning of a project is insane, and projects where she's tried it have not fared well.   So Jane starts to remove the least-important functionality. "Well, we really don't need the login screen just yet. We can simply start the system in the logged-in state." "Rats!" cries Elaine. "I really wanted to do that." "Patience, grasshopper." says Joe. "Those who wait for the bees to leave the hive will not have lips too swollen to relish the honey." Elaine looks confused. Everyone looks confused. "So . . .," Jane continues, "I think we can also do away with . . ." And so, bit by bit, the list of tasks shrinks. Developers who lose a task sign up for one of the remaining ones.   The negotiation is not painless. 
Several times, Jane exhibits obvious frustration and impatience. Once, when tensions are especially high, Elaine volunteers, "I'll work extra hard to make up some of the missing time." You are about to correct her when, fortunately, Joe looks her in the eye and says, "When once you proceed down the dark path, forever will it dominate your destiny."   In the end, an iteration acceptable to Jane is reached. It's not what Jane wanted. Indeed, it is significantly less. But it's something the team feels that can be achieved in the next 3 weeks.   And, after all, it still addresses the most important things that Jane wanted in the iteration. "So, Jane," you say when things had quieted down a bit, "when can we expect acceptance tests from you?" Jane sighs. This is the other side of the coin. For every story the development team implements,   Jane must supply a suite of acceptance tests that prove that it works. And the team needs these long before the end of the iteration, since they will certainly point out differences in the way Jane and the developers imagine the system's behaviour.   "I'll get you some example test scripts today," Jane promises. "I'll add to them every day after that. You'll have the entire suite by the middle of the iteration."   * * *   The iteration begins on Monday morning with a flurry of Class, Responsibilities, Collaborators sessions. By midmorning, all the developers have assembled into pairs and are rapidly coding away. "And now, my young apprentice," Joe says to Elaine, "you shall learn the mysteries of test-first design!"   "Wow, that sounds pretty rad," Elaine replies. "How do you do it?" Joe beams. It's clear that he has been anticipating this moment. "OK, what does the code do right now?" "Huh?" replied Elaine, "It doesn't do anything at all; there is no code."   "So, consider our task; can you think of something the code should do?" "Sure," Elaine said with youthful assurance, "First, it should connect to the database." "And thereupon, what must needs be required to connecteth the database?" "You sure talk weird," laughed Elaine. "I think we'd have to get the database object from some registry and call the Connect() method. "Ah, astute young wizard. Thou perceives correctly that we requireth an object within which we can cacheth the database object." "Is 'cacheth' really a word?" "It is when I say it! So, what test can we write that we know the database registry should pass?" Elaine sighs. She knows she'll just have to play along. "We should be able to create a database object and pass it to the registry in a Store() method. And then we should be able to pull it out of the registry with a Get() method and make sure it's the same object." "Oh, well said, my prepubescent sprite!" "Hay!" "So, now, let's write a test function that proves your case." "But shouldn't we write the database object and registry object first?" "Ah, you've much to learn, my young impatient one. Just write the test first." "But it won't even compile!" "Are you sure? What if it did?" "Uh . . ." "Just write the test, Elaine. Trust me." And so Joe, Elaine, and all the other developers began to code their tasks, one test case at a time. The room in which they worked was abuzz with the conversations between the pairs. The murmur was punctuated by an occasional high five when a pair managed to finish a task or a difficult test case.   As development proceeded, the developers changed partners once or twice a day. 
Each developer got to see what all the others were doing, and so knowledge of the code spread generally throughout the team.   Whenever a pair finished something significant whether a whole task or simply an important part of a task they integrated what they had with the rest of the system. Thus, the code base grew daily, and integration difficulties were minimized.   The developers communicated with Jane on a daily basis. They'd go to her whenever they had a question about the functionality of the system or the interpretation of an acceptance test case.   Jane, good as her word, supplied the team with a steady stream of acceptance test scripts. The team read these carefully and thereby gained a much better understanding of what Jane expected the system to do. By the beginning of the second week, there was enough functionality to demonstrate to Jane. She watched eagerly as the demonstration passed test case after test case. "This is really cool," Jane said as the demonstration finally ended. "But this doesn't seem like one-third of the tasks. Is your velocity slower than anticipated?"   You grimace. You'd been waiting for a good time to mention this to Jane but now she was forcing the issue. "Yes, unfortunately, we are going more slowly than we had expected. The new application server we are using is turning out to be a pain to configure. Also, it takes forever to reboot, and we have to reboot it whenever we make even the slightest change to its configuration."   Jane eyes you with suspicion. The stress of last Monday's negotiations had still not entirely dissipated. She says, "And what does this mean to our schedule? We can't slip it again, we just can't. Russ will have a fit! He'll haul us all into the woodshed and ream us some new ones."   You look Jane right in the eyes. There's no pleasant way to give someone news like this. So you just blurt out, "Look, if things keep going like they're going, we're not going to be done with everything by next Friday. Now it's possible that we'll figure out a way to go faster. But, frankly, I wouldn't depend on that. You should start thinking about one or two tasks that could be removed from the iteration without ruining the demonstration for Russ. Come hell or high water, we are going to give that demonstration on Friday, and I don't think you want us to choose which tasks to omit."   "Aw forchrisakes!" Jane barely manages to stifle yelling that last word as she stalks away, shaking her head. Not for the first time, you say to yourself, "Nobody ever promised me project management would be easy." You are pretty sure it won't be the last time, either.   Actually, things went a bit better than you had hoped. The team did, in fact, have to drop one task from the iteration, but Jane had chosen wisely, and the demonstration for Russ went without a hitch. Russ was not impressed with the progress, but neither was he dismayed. He simply said, "This is pretty good. But remember, we have to be able to demonstrate this system at the trade show in July, and at this rate, it doesn't look like you'll have all that much to show." Jane, whose attitude had improved dramatically with the completion of the iteration, responded to Russ by saying, "Russ, this team is working hard, and well. When July comes around, I am confident that we'll have something significant to demonstrate. It won't be everything, and some of it may be smoke and mirrors, but we'll have something."   Painful though the last iteration was, it had calibrated your velocity numbers. 
The next iteration went much better. Not because your team got more done than in the last iteration but simply because the team didn't have to remove any tasks or stories in the middle of the iteration.   By the start of the fourth iteration, a natural rhythm has been established. Jane, you, and the team know exactly what to expect from one another. The team is running hard, but the pace is sustainable. You are confident that the team can keep up this pace for a year or more.   The number of surprises in the schedule diminishes to near zero; however, the number of surprises in the requirements does not. Jane and Russ frequently look over the growing system and make recommendations or changes to the existing functionality. But all parties realize that these changes take time and must be scheduled. So the changes do not cause anyone's expectations to be violated. In March, there is a major demonstration of the system to the board of directors. The system is very limited and is not yet in a form good enough to take to the trade show, but progress is steady, and the board is reasonably impressed.   The second release goes even more smoothly than the first. By now, the team has figured out a way to automate Jane's acceptance test scripts. The team has also refactored the design of the system to the point that it is really easy to add new features and change old ones. The second release was done by the end of June and was taken to the trade show. It had less in it than Jane and Russ would have liked, but it did demonstrate the most important features of the system. Although customers at the trade show noticed that certain features were missing, they were very impressed overall. You, Russ, and Jane all returned from the trade show with smiles on your faces. You all felt as though this project was a winner.   Indeed, many months later, you are contacted by Rufus Inc. That company had been working on a system like this for its internal operations. Rufus has canceled the development of that system after a death-march project and is negotiating to license your technology for its environment.   Indeed, things are looking up!

    Read the article

  • SMB2 traffic crashes network?

    - by Phil Cross
We've been having significant network slowdown issues over the past few weeks, primarily on a Friday morning. We run Windows 7 client machines with Windows Server 2008 R2 servers. What generally happens is that the network starts to slow down massively at 08:55 and resumes normal speeds at around 09:20. This affects everything on the network, from logging on and resetting passwords to opening programs and files. On my client machine, physical memory usage remains at around 40% (normal) and CPU usage hovers around 0-10%, mostly idle. On the servers, memory usage spikes massively and remains quite high during the times mentioned above.

I have taken several Wireshark captures, both during the slowdown and when the network operates fine. One of the main things I noticed is the increase in SMB2 entries in the Wireshark log during the slowdown.

Record  Time         Source       Destination  Protocol  Length  Info
382     3.976460000  10.47.35.11  10.47.32.3   SMB2      362     Create Request File: pcross\My Documents
413     4.525047000  10.47.35.11  10.47.32.3   SMB2      146     Close Request File: pcross\My Documents
441     5.235927000  10.47.32.3   10.47.35.11  SMB2      298     Create Response File: pcross\My Documents\Downloads
442     5.236199000  10.47.35.11  10.47.32.3   SMB2      260     Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *;Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *
573     6.327634000  10.47.35.11  10.47.32.3   SMB2      146     Close Request File: pcross\My Documents\Downloads
703     7.664186000  10.47.35.11  10.47.32.3   SMB2      394     Create Request File: pcross\My Documents\Downloads\WestlandsProspectus\P24 __ P21.pdf

These are some of the SMB2 records, from a list of a couple of hundred, which originate from my computer with a destination of the file server. One of the interesting things to note is that the last entry in the examples above is for a PDF file. That file was not open anywhere on my computer, or on anyone else's. No folders containing the file were open either. When I took another capture while the network was running fine, there were hardly any SMB2 entries, and the ones that were displayed were mainly from Wireshark itself.

We currently have around 800 computers, 90 Macs, and 200 laptops and netbooks. Our concern is: if this traffic is happening on my computer, is it happening on other computers, and if so, would those computers be adding to the slow network issues? Again, this only happens during certain times. We're pretty sure it's not our antivirus. Is there any way to narrow down what is initiating this SMB traffic during those particular times? Any extra advice or links to resources would also be appreciated.
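For anyone willing to dig into the captures, here is a rough sketch of how they could be summarised from the command line (assuming tshark is available on the machine that took the captures; smb2.cmd and smb2.filename are the standard Wireshark SMB2 field names, and older tshark builds take -R where newer ones take -Y):

# Count SMB2 packets per source/destination pair and command type
tshark -r slowdown.pcap -R "smb2" -T fields -e ip.src -e ip.dst -e smb2.cmd | sort | uniq -c | sort -rn | head -20

# For the busiest client, see which files/folders it is actually touching
tshark -r slowdown.pcap -R "smb2 && ip.src == 10.47.35.11" -T fields -e smb2.filename | sort | uniq -c | sort -rn | head -20

Running the same summary against a capture taken during a quiet period would show whether the Friday-morning spike comes from a handful of machines or from everywhere at once.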

    Read the article

  • Fedora 12 Wireless problems (Intel Wireless 4965AGN Card)

    - by Ninefingers
    Hi All, I'm having an interesting experience with my wireless card at the moment. Basically, it does like this: I connect to the local wireless network (netgear router) It works, briefly, allowing me to browse a webpage or maybe two, if I'm lucky. It then stops working / sending any packets, whilst reported still connected. Now, me being me I've had a look to see what I can find. wpa_supplicant.log looks like this: Trying to associate with valid_mac:a2:30 (SSID='vennardwireless' freq=2462 MHz) Associated with valid_mac:a2:30 WPA: Key negotiation completed with valid_mac:a2:30 [PTK=CCMP GTK=TKIP] CTRL-EVENT-CONNECTED - Connection to valid_mac:a2:30 completed (reauth) [id=0 id_str=] CTRL-EVENT-DISCONNECTED - Disconnect event - remove keys So that's working fine. dmesg | grep "*iwl*" spits out this: iwlagn: Intel(R) Wireless WiFi Link AGN driver for Linux, 1.3.27kds iwlagn: Copyright(c) 2003-2009 Intel Corporation iwlagn 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 iwlagn 0000:03:00.0: setting latency timer to 64 iwlagn 0000:03:00.0: Detected Intel Wireless WiFi Link 4965AGN REV=0x4 iwlagn 0000:03:00.0: Tunable channels: 13 802.11bg, 19 802.11a channels iwlagn 0000:03:00.0: irq 32 for MSI/MSI-X phy0: Selected rate control algorithm 'iwl-agn-rs' iwlagn 0000:03:00.0: firmware: requesting iwlwifi-4965-2.ucode iwlagn 0000:03:00.0: loaded firmware version 228.61.2.24 Registered led device: iwl-phy0::radio Registered led device: iwl-phy0::assoc Registered led device: iwl-phy0::RX Registered led device: iwl-phy0::TX iwlagn 0000:03:00.0: iwl_tx_agg_start on ra = 00:24:b2:32:a3:30 tid = 0 iwlagn 0000:03:00.0: iwl_tx_agg_start on ra = 00:24:b2:32:a3:30 tid = 0 So that's working too. I can also ping 192.168.0.1 -I wlan0 and arping 192.168.0.1 -I wlan0 the router until the network falls over. uname -r:2.6.32.10-90.fc12.x86_64. Laptop is a Core2 Duo (2Ghz) with 3GB RAM. Other symptoms I've noticed are that wireshark freezes when I capture on the "broken" interface until I disconnect. Am using networkmanager as per normal. Stupidly, I can connect to the same router via eth0/a cat6 cable just fine. Everyone else can connect to the AP fine (from Windows). Yes, I'm sat right next to it and not trying to access a hotspot the other side of the world. Any ideas? Is this a broken update? (I intend to reboot and test an older kernel later)? Anyone else come across this? Edit: iwconfig wlan0 rate auto is the settings I'm using for rates. Also, according to networkmanager the network is still connected. Thanks for any pointers / advice.
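Update: one workaround I've seen suggested for 4965 cards that stall shortly after associating (I have not verified it on this kernel, so treat the parameter name as an assumption) is to rule out 802.11n frame aggregation by disabling 11n in the driver:

# /etc/modprobe.d/iwlagn.conf -- disable 802.11n on the Intel 4965 driver
options iwlagn 11n_disable=1

# then reload the driver (or reboot)
rmmod iwlagn && modprobe iwlagn

If the connection stays up with 11n off, that would at least narrow the problem down to aggregation rather than WPA or NetworkManager.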

    Read the article

  • Simple Backup Strategy for Amazon EC2 instances / volumes?

    - by minerj
    You have entered Introductory Backups for Amazon EC2 EBS-backed Windows Images 010... I have been browsing my brains out to find a simple backup strategy for our single windows 2008 server running SharePoint Services. This is an EBS-backed image of one server with one data volume. I don’t need anything exotic. I only need a “daily” backup (losing a day’s worth of data is not catastrophic). We have created and saved an EBS backed AMI image (Windows 2008) we are comfortable using. We started off making backups by simply creating a new EBS AMI image. This is really simple, but the running server is put offline during the first 10 – 15 minutes of creating the image – not ideal. The standard way of creating backups would seem to be creating snapshots of volumes attached to a running instance. Again it’s pretty simple and the server remains usable during the snapshot generation. The apparent Catch-22 is that you can’t simply launch a new instance directly from a snapshot. I know how to bundle a running instance to S3 storage and then register the AMI from the S3 bucket. This does allow me to capture a backup of a running instance and, if the running instance is lost, register the AMI from the S3 bucket and launch the new AMI to recover the instance, but this seems really convoluted and it seems ridiculous to have to juggle back and forth between the AWS Console and the S3 Organizer plug-in for Firefox to get this accomplished. (Please don't mention the command line approach, this is an 010 level course). From playing around with EBS-backed images, the following approach appears to work for me (all done within the AWS Console): 1.For your backups, simply snapshot the system volume (/dev/sda1) as needed. 2.If you lose your running instance, do the following: a.Create a new volume from your last snapshot backup b.Launch another instance of your starting AMI (must be EBS-backed) c.Stop this instance. d.Detach the existing system volume from the new stopped instance and discard. e.Attach the newly created volume as system volume (/dev/sda1) to the stopped instance. f.Re-start the new instance. I have tested this out a couple of times and it seems to work for me. Question: Is there anything wrong with this approach?

    Read the article

  • free open-source linux screenshot & ocr tool

    - by Gryllida
    I'm looking for a tool which would be able to capture a screen region, pass it to OCR and put the result into clipboard. "import ppm:- | gocr -i - | xclip -selection c" works, but gocr is unreliable: simple text on a webpage has errors. It is a clear font but the OCR tool always misses "r" and replaces it with underscore. "import ppm:- | ocrad -i - | xclip -selection c" says "ocrad: maxval 255 in ppm "P6" file." tesseract needs an image file and does not accept piping input to it. xfce4-screenshooter does not do OCR. ABBYY Screenshot Reader is proprietary. tessnet2 is freeware running on a proprietary platform. Google Docs can OCR screenshots in a batch. But my data is confidential and better not put online. Graphical interface solutions would be acceptable for this question, too. There is a number of existing SuperUser questions about OCR. They fall in several categories. Questions just about OCR without the "screenshot taking" part. Open Source OCR for linux Free OCR for Arabic text Looking for recommendations on OCR problem - tabular numeric data Which has better OCR applications: Ubuntu, or Mac/iPad, or Windows? How can I preform OCR from the command line? OCR solution on linux machine from command line (duplicate) Free OCR software OCR for Sanskrit ( OR devanagari) Copy image and paste to OCR (windows) File processing OCR instead of screenshot. Online OCR website for processing an entire pdf file at one time? Practical OCR solution for converting a large book to a digital format? How to extract text with OCR from a PDF on Linux? Batch-OCR many PDFs OCR Image based PDF Copy image and paste to OCR Extract OCR text from Evernote OCR in Word 2013 Replace (OCR) garbled text in PDF? Process files prior to running OCR. How can I make OCR recognize my documents' text better? Tesseract OCR recognition bilingual document. mistakes tolerance level setup OCR for low quality images How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would be the best for screenshots? OCR training. Training Tesseract-OCR for english language fonts None of them answer this question.
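The closest I have come so far is a temp-file wrapper around tesseract, since it will not read from a pipe. This is only a sketch (region selection via ImageMagick's import, result onto the clipboard), and the recognition quality still depends heavily on the font:

#!/bin/sh
# Select a screen region, OCR it with tesseract, put the text on the clipboard.
tmp=$(mktemp --suffix=.png)        # tesseract needs a real image file
import "$tmp"                      # drag to select the region
tesseract "$tmp" "${tmp%.png}"     # writes ${tmp%.png}.txt
xclip -selection clipboard < "${tmp%.png}.txt"
rm -f "$tmp" "${tmp%.png}.txt"

Upscaling the screenshot before OCR (for example, convert "$tmp" -resize 300% "$tmp") sometimes improves recognition of small screen fonts, but that is exactly the kind of fiddling I was hoping a ready-made tool would hide.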

    Read the article

  • How to know the source of certain TCP traffic on AIX

    - by A.Rashad
    We have two AIX boxes, one for production system and another for testing. both systems are running ATM machine switches, where the ATM device is connected via TCP socket. we had an issue on production system where the machine would power off or get disconnected but the netstat -na | grep <IP of machine > would still mention that the socket is up when simulated that case on the UAT environment, the problem did not happen, where the socket would terminate in 3 to 5 minutes. when sniffed on the traffic between the machine and ATM we found that no traffic takes place on production while there is some sort of heartbeat on UAT. but it is not initiated by the application. $>tcpdump | grep -v "10.2.2.71" | grep -v "HSRP" | grep "10.3.1.30" tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on en6, link-type 1, capture size 96 bytes 09:08:13.323421 IP server073.afs3-callback > 10.3.1.30.impera: . 278204201:278204202(1) ack 3307884029 win 164 09:08:13.335334 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180 09:08:23.425771 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180 09:08:23.425789 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535 09:09:13.628985 IP server073.afs3-callback > 10.3.1.30.impera: . 0:1(1) ack 1 win 164 09:09:13.633900 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180 09:09:23.373634 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180 09:09:23.373647 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535 while on production, that traffic is not there. we want to know where this traffic is initiated from to implement on production to sense disconnection our comms parameters are: tcp_keepcnt = 2 tcp_keepidle = 100 tcp_keepinit = 150 tcp_keepintvl = 150 tcp_finwait2 = 1200 can anyone help? Editing Question: One point I missed because I was rushing to a meeting. the difference between the Production and UAT in setup is that in Production we have an application called F5 working as load balancer between the ATMs and the AIX box, while it is a direct connection through MPLS in case of UAT. note: we had one MPLS and one GPRS connected ATMs on UAT, and both connections terminated when unplugged in about 4 minutes Edit 2 the no -o tcp_timewait command returns 1 in both Production and UAT

    Read the article

  • Why can I not get a WDS-originated PXE boot to progress past the first file download?

    - by Jeff Shattock
    I'm trying to work out an automated Windows install process, and thought I'd give WDS a look. After some promising initial progress, I seem to have hit a wall. I imported the boot and install WIMs, and created the capture WIM successfully. However, whenever I try to PXE boot the reference machine against the WDS server, it kinda craps out. It finds the server and downloads WDSNBP.COM successfully, and then gives the message "TFTP download failed." According to WireShark, the only communication between the WDS box and the client box is the successful TFTP request and download of boot\x86\WDSNBP.COM. No further requests are sent. The WDS log on the server shows the same thing, one successful download and no more activity. I've tried every combination of the following, with exactly zero change in behaviour: Win Server 2008R2 vs 2012 vs 2012R2 WDS virtualized on KVM, ESXi, VirtualBox, VMWare Workstation Client virtualized on KVM, ESXi, VirtualBox, VMWare Workstation Every network adaptor type offered by the virtualization platforms. "Actual" network vs isolated, virtual network. MS DHCP server vs Linux isc-dhcp-server Joined to a domain vs Stand-alone I tried changing the boot filename in DHCP to pxeboot.com instead, and it has no problem downloading that file instead, but it then crabs about Boot\BCD being corrupted. Also, with 2012, it doesnt appear that WDSNBP.com does the architecture detection, or at least does'nt report that it did. 2008 reports that it found x64, and then errors. I find myself out of things to check, and I dont see anything immediately wrong. Where do I go from here? WDS server is at 192.168.1.50, DHCP/DNS at 192.168.1.7. Console of the client computer after the boot: MAC: 52:54:00:28:94:0E UUID: blah blah Searching for server (DHCP)..... Me: 192.168.1.155, DHCP: 192.168.1.7, Gateway 192.168.1.1 Loading 192.168.1.50:boot\x86\wdsnbp.com ...(PXE).................done Downloaded WDSNCP... TFPT download failed Interesting parts of /etc/dhcp/dhcpd.conf on the Linux DHCP server: allow booting; allow bootp; option option-60 code 60 = string; option option-66 code 66 = string; option option-67 code 67 = string; subnet 192.168.1.0 netmask 255.255.255.0 { range 192.168.1.110 192.168.1.253; next-server 192.168.1.50; option tftp-server-name "192.168.1.50"; option option-60 "PXEClient"; filename "boot\\x86\\wdsnbp.com"; option bootfile-name "boot\\x86\\wdsnbp.com"; }

    Read the article

  • Using FFMPEG to reliably convert videos to mp4 for iphone/ipod and flash players

    - by Jake Stevenson
    I need to convert videos for use in both a flash player and the iphone/ipod touch. I'm using the following batch script with ffmpeg: @echo off ffmpeg.exe -i %1 -s qvga -acodec libfaac -ar 22050 -ab 128k -vcodec libx264 -threads 0 -f ipod %2 This always outputs an mp4 file, and I can always play it on my PC. The videos also seem to play fine on my iphone 3GS. But with some input files it won't work for older iphone versions (3G and iPod touch). Here's the ffmpeg output from one such file: D:\ffmpeg>encode.bat d:\temp\recording.flv d:\temp\out.m4v FFmpeg version SVN-r18709, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-ming w32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --e nable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enabl e-libfaac --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheo ra --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid - -enable-libschroedinger --enable-libx264 libavutil 50. 3. 0 / 50. 3. 0 libavcodec 52.27. 0 / 52.27. 0 libavformat 52.32. 0 / 52.32. 0 libavdevice 52. 2. 0 / 52. 2. 0 libswscale 0. 7. 1 / 0. 7. 1 built on Apr 28 2009 04:04:42, gcc: 4.2.4 [flv @ 0x187d650]skipping flv packet: type 18, size 164, flags 0 Input #0, flv, from 'd:\temp\recording.flv': Duration: 00:00:07.17, start: 0.001000, bitrate: N/A Stream #0.0: Video: flv, yuv420p, 320x240, 1k tbr, 1k tbn, 1k tbc Stream #0.1: Audio: nellymoser, 44100 Hz, mono, s16 [libx264 @ 0x13518b0]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE 4.2 [libx264 @ 0x13518b0]profile Baseline, level 4.2 Output #0, ipod, to 'd:\temp\out.m4v': Stream #0.0: Video: libx264, yuv420p, 320x240, q=2-31, 200 kb/s, 1k tbn, 1k tbc Stream #0.1: Audio: libfaac, 22050 Hz, mono, s16, 128 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Press [q] to stop encoding frame= 90 fps= 0 q=-1.0 Lsize= 128kB time=6.87 bitrate= 152.4kbits/s video:92kB audio:32kB global headers:1kB muxing overhead 2.620892% [libx264 @ 0x13518b0]slice I:8 Avg QP:29.62 size: 7047 [libx264 @ 0x13518b0]slice P:82 Avg QP:30.83 size: 467 [libx264 @ 0x13518b0]mb I I16..4: 17.9% 0.0% 82.1% [libx264 @ 0x13518b0]mb P I16..4: 0.6% 0.0% 0.0% P16..4: 23.1% 0.0% 0.0% 0.0% 0.0% skip:76.3% [libx264 @ 0x13518b0]final ratefactor: 57.50 [libx264 @ 0x13518b0]SSIM Mean Y:0.9544735 [libx264 @ 0x13518b0]kb/s:8412.6 My suspicion is that it has something to do with the audio encoding. If so, does anyone know how to force it to reencode the audio to the proper format? Any other ideas?
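    For what it's worth, the encode log above shows libx264 producing Baseline profile at level 4.2, and the older iPhone 3G and iPod touch models are generally limited to Baseline at roughly level 3.0, so the video level is at least as suspect as the audio. Below is a hedged sketch using a more recent ffmpeg build; the option names are the modern ones and differ from the SVN-r18709 build in the log, and the input/output filenames are placeholders.
    # Sketch only: constrain H.264 to Baseline/level 3.0 and use plain AAC-LC.
    ffmpeg -i input.flv -s qvga \
      -c:v libx264 -profile:v baseline -level 3.0 -b:v 200k \
      -c:a aac -b:a 128k -ar 22050 -ac 1 \
      -f ipod output.m4v
    Inspecting the resulting file with a tool such as MediaInfo to confirm the profile and level actually written is a quick way to verify whether this is the difference between the files that play and the ones that don't.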

    Read the article

  • Android stream to Wowza

    - by Curtis Kiu
    I am confused about streaming from Android to Wowza. I am building a cross-platform video conference using RTMP, but Android does not support RTMP, so I need another way to handle the upstream. I found an open-source app called spydroid-ipcamera. It uses RTP, sending UDP packets to the computer, and the stream is described by the following SDP:
    v=0
    s=Unnamed
    m=video 5006 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=;
    Opening it directly in VLC with that SDP did not work, but after following the Wowza tutorial and streaming to Wowza, playing it back in VLC does work. I wrote it up in http://code.google.com/p/spydroid-ipcamera/issues/detail?id=2. However, when I try to add audio to the stream it fails. I changed the code in http://code.google.com/p/spydroid-ipcamera/source/browse/trunk/src/net/mkp/spydroid/CameraStreamer.java to:
    mr.setAudioSource(MediaRecorder.AudioSource.MIC);
    mr.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mr.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    mr.setVideoFrameRate(20);
    mr.setVideoSize(640, 480);
    mr.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    mr.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    mr.setPreviewDisplay(holder.getSurface());
    I now think the problem is in the SDP, but I don't know how to work with SDP. I am streaming H.264/AAC in an MP4 container, and I don't understand SDP well. So how can I build the upstream part of the video conference with this app?
    Android ----(UDP Port:5006)----> PC (SDP file), then Wowza reads the SDP file ------> VLC
    I think this way the system cannot handle more than one client, since the SDP can only hold one port. Is that right, or will it not work at all? Also, Wowza needs the stream to be defined before we stream to it, so does that mean I should not take this approach?
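    An SDP can describe more than one media stream, each with its own m= line and port. Below is a sketch of what a combined description could look like, written as a shell heredoc so it can be dropped into a file: the video section is copied from the question, while the audio port, payload type, and config value (1588 corresponds to AAC-LC, 8 kHz, mono) are illustrative placeholders that must match what the phone actually transmits. Note also that enabling the AAC encoder in MediaRecorder is not enough by itself; the app would also need to packetize the AAC track into its own RTP stream (RFC 3640), the way it already packetizes the H.264 video.
    # Write an example SDP describing one video and one audio RTP stream.
    cat > android-stream.sdp <<'EOF'
    v=0
    s=Unnamed
    m=video 5006 RTP/AVP 96
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=;
    m=audio 5008 RTP/AVP 97
    a=rtpmap:97 mpeg4-generic/8000/1
    a=fmtp:97 streamtype=5; profile-level-id=15; mode=AAC-hbr; config=1588; sizeLength=13; indexLength=3; indexDeltaLength=3;
    EOF
    Some players and servers also expect o=, c=, and t= lines; whether Wowza accepts this exact form, and whether one SDP per client scales for a conference, would still need to be tested.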

    Read the article

  • Forward all traffic through an ssh tunnel

    - by Eamorr
    I hope someone can follow this; I'll explain as best I can. I'm trying to forward all traffic from port 6999 on x.x.x.224, through an ssh tunnel, and on to port 7000 on x.x.x.218. Here is some ASCII art:
    |browser|-----|Squid on x.x.x.224|------|ssh tunnel|------<satellite link>-----|Squid on x.x.x.218|-----|www|
                  3128                      6999              7000                                      80
    When I remove the ssh tunnel, everything works fine. The idea is to turn off encryption on the ssh tunnel (to save bandwidth) and turn on maximum compression (to save more bandwidth), because it's a satellite link. Here's the ssh tunnel I've been using:
    ssh -C -f -C -o CompressionLevel=9 -o Cipher=none [email protected] -L 7000:172.16.1.224:6999 -N
    The trouble is, I don't know how to get data from Squid on x.x.x.224 into the ssh tunnel. Am I going about this the wrong way? Should I create the ssh tunnel on x.x.x.218 instead? I use iptables to stop Squid on x.x.x.224 from reading port 80 and to feed it from port 6999 instead (i.e. via the ssh tunnel). Do I need another iptables rule? Any comments greatly appreciated. Many thanks in advance.
    Update 1: Regarding Eduardo Ivanec's question, here is a tcpdump -i any port 7000 -nn dump from x.x.x.218:
    14:42:15.386462 IP 172.16.1.224.40006 > 172.16.1.218.7000: Flags [S], seq 2804513708, win 14600, options [mss 1460,sackOK,TS val 86702647 ecr 0,nop,wscale 4], length 0
    14:42:15.386690 IP 172.16.1.218.7000 > 172.16.1.224.40006: Flags [R.], seq 0, ack 2804513709, win 0, length 0
    Update 2: When I run the second command, I get the following error in my browser:
    ERROR
    The requested URL could not be retrieved
    The following error was encountered while trying to retrieve the URL: http://109.123.109.205/index.php
    Zero Sized Reply
    Squid did not receive any data for this request.
    Your cache administrator is webmaster.
    Generated Fri, 01 Jul 2011 16:06:06 GMT by remote-site (squid/2.7.STABLE9)
    remote-site is 172.16.1.224. When I do a tcpdump -i any port 7000 -nn I get the following:
    root@remote-site:~# tcpdump -i any port 7000 -nn
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
    channel 2: open failed: connect failed: Connection refused
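    One way to wire this up, sketched below under a few assumptions (the username, that the tunnel is started on x.x.x.224, and that Squid on x.x.x.218 listens on 7000 and is allowed to act as a parent proxy), is to run the forward on the near side and point the near Squid at the far Squid through the loopback end of the tunnel. Note that stock OpenSSH does not accept a "none" cipher for protocol 2, so a cheap cipher such as arcfour (where the installed OpenSSH version still offers it) is usually the practical compromise on constrained links.
    # Sketch, run on x.x.x.224: local port 6999 forwards across the satellite
    # link to Squid listening on port 7000 on x.x.x.218.
    ssh -f -N -C -o CompressionLevel=9 -c arcfour \
        -L 6999:127.0.0.1:7000 [email protected]
    # squid.conf on x.x.x.224: send everything to the far Squid through the
    # loopback end of the tunnel instead of going direct.
    #   cache_peer 127.0.0.1 parent 6999 0 no-query default
    #   never_direct allow all
    With this layout the iptables rules are only needed to get browser traffic into Squid on port 3128 (if intercepting); the cache_peer directive is what moves the data into the tunnel.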

    Read the article

  • Bladecenter-E Power Module fault

    - by Lihnjo
    We have problem on IBM Bladecenter-E Critical Events Power module 2 is off. DC fault. Power module 4 is off. DC fault. Warnings and System Events Insufficient chassis power to support redundancy What is the best solution for this problem? Thanks AMM Service Data Help SPAPP Capture Available 10/13/2010 17:03:47 1090347 bytes Time: 11/19/2012 11:02:31 UUID: 42E1 5D2F D7BF 41A6 A4A2 48D1 3FB7 0540 MAC Address xx:xx:xx:xx:xx:xx MM Information Name: nnnnn Contact: aaa, bbb, ccc, England Location: [email protected] IP address: 111.222.333.444 Date Time Information GMT offset: +1:00 - Central Europe Time (Western Europe, Algeria, Nigeria, Angola) Adjust for DST: Yes NTP: Enabled NTP Hostname/IP: 111.222.333.444 System Health: Critical System Status Summary One or more monitored parameters are abnormal. Critical Events Power module 2 is off. DC fault. Power module 4 is off. DC fault. Warnings and System Events Insufficient chassis power to support redundancy CHASSIS (BladeCenter-E) in CHASSIS slot: 01 TopoPath is "CHASSIS[1]". Description : BladeCenter-E Width : 1 Sub Type : BladeCenter (BC) Power Mode : 220 v KVM Owner : CHASSIS[1]/BLADE[9] MT Owner : CHASSIS[1]/MGMT_MOD[1] Component Type : CHASSIS Inventory: VPD ID: 336 (decimal) POS ID EXT: 0 (decimal) POS ID: 8 (decimal) Machine Type/Model: 86773RG Machine Serial Number: 99ZL816 Part Number: 39R8561 FRU Number: 39R8563 FRU Serial Number: YK109174W1HV Manufacturer ID: IBM Hardware Revision: 3 (decimal) Manufacture Date: 18 (wk), 07 (yr) UUID: 42E1 5D2F D7BF 41A6 A4A2 48D1 3FB7 0540 (hex) Type Code: 97 (decimal) Sub-type Code: 0 (decimal) IANA Num: 336 (decimal) Product ID: 8 (decimal) Manufacturer Sub ID: FOXC Enviroment data: -------------- Type: : POWER_USAGE Unit: : WATTS Reading: : 0xa Sensor Label: : Midplane Sensor ID: : 0x0 MGMT MOD (Advanced Management Module) in MGMT_MOD slot: 01 TopoPath is "CHASSIS[1]/MGMT_MOD[1]". 
Description : Advanced Management Module Name : kant Width : 1 Component Role : Primary Component Type : MGMT MOD Insert Time : 28050132 Inventory: VPD ID: 288 (decimal) POS ID EXT: 0 (decimal) POS ID: 4 (decimal) Part Number: 39Y9659 FRU Number: 39Y9661 FRU Serial Number: YK11836CE2RC Manufacturer ID: IBM Hardware Revision: 4 (decimal) Manufacture Date: 50 (wk), 06 (yr) UUID: 1D95 9937 8CA5 11DB 9499 0014 5EDF 1C98 (hex) Type Code: 81 (decimal) Sub-type Code: 1 (decimal) IANA Num: 20301 (decimal) Product ID: 65 (decimal) Manufacturer Sub ID: ASUS Firmware data: Type : AMM firmware Build ID : BPET50P File Name : CNETCMUS.PKT Release Date : 03/26/2010 Release Level : 50 Revision - Major: 80 Port info: ======================================================== Topology Path ID : 1 Label : External Phy Orientation : EXTERNAL Port Number : 1 Type : MGT Physical Meidum : Copper Number of Link Intferfaces : 1 ------------------------------------ Link Ifc ID Number : 1 Link Ifc Transport Protocol : ENET Link Ifc Addr Type : MAC Link Ifc Burned-in Addr : xx:xx:xx:xx:xx:xx Link Ifc Admin Addr : 00:00:00:00:00:00 Link Ifc Addr in use : xx:xx:xx:xx:xx:xx ---------------------------------------------------------- Configuration behaviors: Save Only Enviroment data: -------------- Type: : TEMPERATURE Unit: : DEGREES_C Reading: : 38.00 Sensor Label: : MM Ambient Sensor ID: : 0x0 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +4.81 Sensor Label: : +5V Sensor ID: : 0x1b -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +3.26 Sensor Label: : +3.3V Sensor ID: : 0x19 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +11.97 Sensor Label: : +12V Sensor ID: : 0x16 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : -4.88 Sensor Label: : -5V Sensor ID: : 0x1e -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +2.47 Sensor Label: : +2.5V Sensor ID: : 0x18 -------------- Type: : VOLTAGE Unit: : VOLTS Reading: : +1.76 Sensor Label: : +1.8V Sensor ID: : 0x15 -------------- Type: : POWER_USAGE Unit: : WATTS Reading: : 0x19 Sensor Label: : kant Sensor ID: : 0x0

    Read the article

  • Profiles and using the local profile for a domain user

    - by Harry
    I’m having some trouble with profiles and would like to reach out for some help. I’ve tried to do some research on my own, but I’m not making much progress. I’ve pretty much taken over the sysadmin duties for my small lab; I don’t have much experience to justify it, other than being the only one with the time and dedication to take it on (the environment was in a state of disrepair). The network and domain I look after are extremely small by most standards, about 10 users at a time, but activity on the network is fairly intensive and we work with fairly large files. None of the network is online, which is nice at the moment because it spares me another headache. On to my profile problem: I have set up roaming profiles for the users on the network. After a little research I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice, and I don’t want users waiting a long time to log on if they have a bloated profile. I’ve finally got a build working using MDT. We have Mac Pros, and it wasn’t fun getting everything to play nicely. I did this by setting up a reference computer, installing all the software and tools each user would need, and editing the settings and preferences to how we need them. I then used MDT to do a sysprep and capture to create the image of my reference computer, and with that reference image I can push out images to the rest of the desktops in my environment. The issue I’m having is when we join the computer to the domain. The user can log in and operate the computer fine, but I’d like more. When the user logs on with their domain user name they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have users log on with their domain user name and see the icons and desktop environment exactly as I set them up on the reference computer. I’m not sure if this is possible, or whether I'm missing something simple, but any help would be greatly appreciated!

    Read the article

  • Workflows not starting after fresh install

    - by Greg McGuffey
    I just installed Dynamics CRM 4.0. It is working nicely except for workflows. They won't start. I turned on tracing and it appears that there is an IO error. The server is setup with IFD and SSL. No issues accessing it internally or externally. Here is the trace: # CRM Tracing Version 2.0 # LocalTime: 2010-06-08 11:34:58.2 # Categories: # CallStackOn: No # ComputerName: FOX-CRM1 # CRMVersion: 4.0.7333.2741 # DeploymentType: OnPremise # ScaleGroup: # ServerRole: AppServer, AsyncService, DiscoveryService, WebService, ApiServer, HelpServer, DeploymentService [2010-06-08 11:34:58.2] Process:CrmAsyncService |Organization:821a137e-7191-49a4-86cc-69101e2b6d20 |Thread: 24 |Category: Platform.Async |User: 00000000-0000-0000-0000-000000000000 |Level: Error | AsyncOperationCommand.Execute >Exception while trying to execute AsyncOperationId: {DF68F483-2C73-DF11-9A34-18A9053B7B38} AsyncOperationType: 1 - System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send. ---> System.IO.IOException: The handshake failed due to an unexpected packet format. at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartReceiveBlob(Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.CheckCompletionBeforeNextReceive(ProtocolToken message, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.StartSendBlob(Byte[] incoming, Int32 count, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ForceAuthentication(Boolean receiveFirst, Byte[] buffer, AsyncProtocolRequest asyncRequest) at System.Net.Security.SslState.ProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.TlsStream.CallProcessAuthentication(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Net.TlsStream.ProcessAuthentication(LazyAsyncResult result) at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size) at System.Net.ConnectStream.WriteHeaders(Boolean async) --- End of inner exception stack trace --- at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at Microsoft.Crm.SdkTypeProxy.CrmService.Retrieve(String entityName, Guid id, ColumnSetBase columnSet) at Microsoft.Crm.Asynchronous.SdkTypeProxyCrmServiceWrapper.Retrieve(String entityName, Guid id, ColumnSetBase columnSet) at Microsoft.Crm.Asynchronous.SdkPluginDescriptionProvider.GetPluginTypeDescription(Guid pluginTypeId, IOrganizationContext context) at Microsoft.Crm.Caching.PluginTypeCacheLoader.LoadCacheData(Guid key, IOrganizationContext context) at Microsoft.Crm.Caching.CrmMultiOrgCache`2.CreateEntry(TKey key, IOrganizationContext context) at Microsoft.Crm.Caching.CrmSharedMultiOrgCache`2.LookupEntry(TKey key, IOrganizationContext context) at Microsoft.Crm.Caching.PluginTypeCache.LookupEntry(Guid pluginTypeId, IOrganizationContext context) at Microsoft.Crm.Asynchronous.AsyncOperationCommand.GetPluginType(Guid pluginTypeId) at Microsoft.Crm.Asynchronous.EventOperation.InternalExecute(AsyncEvent asyncEvent) at Microsoft.Crm.Asynchronous.AsyncOperationCommand.Execute(AsyncEvent asyncEvent) The only thing I've 
tried is to update the AsyncSdkRootDomain row in the Deployment table to match the ADSdkRootDomain and ADApplicationRootDomain values (it was blank), but that didn't appear to help. After some more research, I think this might be caused by the async service being unable to reach the SDK web services over SSL. If that is correct, how would one configure a CRM server for secure internal and external (IFD) access while still allowing the async service to reach the web site? Thanks for your help!
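    The "handshake failed due to an unexpected packet format" error in the trace typically means one end spoke plain HTTP where the other expected TLS (or the reverse), so it is worth confirming what the async service's SDK URL actually serves. A rough check, assuming a machine with OpenSSL available and using a placeholder hostname and port; substitute whatever endpoint the deployment rows above resolve to:
    # Placeholder host/port: replace with the SDK endpoint the async service uses.
    openssl s_client -connect crm.example.internal:443 < /dev/null
    # A certificate chain in the output means the port really is TLS; an immediate
    # error or a plain-text response suggests an HTTPS URL pointing at a non-SSL
    # binding (or vice versa), which would match the exception in the trace.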

    Read the article

< Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >