Search Results

Search found 4463 results on 179 pages for 'review board'.


  • Blend for Visual Studio 2013 Prototyping Applications with SketchFlow

    - by T
    Originally posted on: http://geekswithblogs.net/tburger/archive/2014/08/10/blend-for-visual-studio-2013-prototyping-applications-with-sketchflow.aspx

    SketchFlow enables rapid creation of dynamic interface mockups. The SketchFlow workspace is the same as the standard Blend workspace with the inclusion of three panels: the SketchFlow Feedback panel, the SketchFlow Animation panel and the SketchFlow Map panel. By using SketchFlow to prototype, you can get feedback early in the process. It helps to surface possible issues, reduce development iterations, and increase stakeholder buy-in. SketchFlow prototypes not only provide an initial look but also provide a way to add additional ideas and input and make sure the team is on track prior to investing in complete development. When you have completed the prototyping, you can discard the prototype and use the lessons learned to design the application, or extract individual elements from your prototype and include them in the application. I don't recommend trying to transition the entire project into a development project.

    Objects that you add with the SketchFlow style have a hand-sketched look. The sketch style is used to remind stakeholders that this is a prototype, which encourages them to focus on the flow and functionality without getting distracted by design details. The SketchFlow assets are under SketchFlow in the Assets panel and are identifiable by the suffix "-Sketch", for example "Button-Sketch". You can mix sketch and standard controls in your interface if required. Be creative: if a control is missing, or your interface has a different look and feel than the out-of-the-box one, reuse other sketch controls to mimic the functionality or look and feel. Only use standard controls if it doesn't distract from the idea that this is a prototype and not a standard application.

    The SketchFlow Map panel provides information about the structure of your application. To create a new screen in your prototype, right-click the map surface and choose "Create a Connected Screen". Name the screens with names that are meaningful to the stakeholders. The start screen is the one that has the green arrow. To change the start screen, right-click any other screen and set it as the start screen; only one screen can be the start screen at a time. Rounded screens are component screens that mimic reusable custom controls that will be built into the final application. You can change the colors of all of the boxes and should use colors to create functional groupings. The groupings can be identified in the SketchFlow Project Settings.

    To add connections between screens in the SketchFlow Map panel, move the mouse over a screen and a menu will appear at the bottom of the screen node. In the menu, click "Connect to an existing screen" and drag the arrow to another screen on the map. You add navigation to your prototype by adding connections on the SketchFlow map or by adding navigation directly to items on your interface. To add navigation from objects on the artboard, right-click the item and, from the menu, choose "Navigate to". This will expose a sub-menu with the available screens, backward, or forward.

    When the map has connected screens, the SketchFlow Player displays the connected screens in the Navigate sidebar, and all screens show in the SketchFlow Player map. To see the SketchFlow Player, run your SketchFlow prototype. The Navigation sidebar is meant to show the desired user workflow.
    The map can be used to view the different screens regardless of the suggested navigation in the navigation bar, and it can be hidden and shown. As mentioned, a component screen is a shared screen that is used in more than one screen and generally represents what will be a custom object in the application. To create a component screen, you can create a screen, right-click it in the SketchFlow Map and choose "Make into component screen", or you can mouse over a screen and, from the menu that appears underneath, choose to create and insert a component screen. To use an existing component screen, select it from the Assets panel under SketchFlow, Components.

    You can use Storyboards and Visual State animations in your SketchFlow project. However, SketchFlow also offers its own animation technique that is simpler and better suited for prototyping. The SketchFlow Animation panel is above your artboard by default. In SketchFlow animation, you create frames and then position the elements on your interface for each frame. You then specify the elapsed time and any effects you want to apply to the transition. The + at the top creates new frames. Once you have a new frame, select it and change the property you want to animate. In the example above, I changed the Text of the result box. You can adjust the time between frames in the lower area between the frames. The easing and effects functions are changed in the center between each frame. You edit the hold time for frames by clicking the clock icon in the lower left; the hold time will then appear on each frame and can be edited. The FluidLayout icon (also located in the lower left) will create smooth transitions. Next to the FluidLayout icon is the name of that animation. You can rename the animation by clicking on it and editing the name. The down arrow chevrons next to the name allow you to view the list of all animations in this prototype and select them for editing.

    To add the animation to an interface object (such as a button to start the animation), select the PlaySketchFlowAnimationAction from the SketchFlow behaviors in the Assets menu and drag it to an object on your interface. With the PlaySketchFlowAnimationAction that you just added selected in the Objects and Timeline panel, edit the properties to change the EventName to the event you want and choose the SketchFlowAnimation you want from the drop-down list.

    You may want to add additional information to your screens that isn't really part of the prototype but is relevant information or a request for clarification or feedback from the reviewer. You do this with annotations or notes. Both appear on the user interface; however, annotations can be switched on or off at design and review time, while notes cannot be switched off. To add an annotation, choose Create Annotation from the Tools menu. The annotation appears on the UI, where you can add your notes. To display or hide annotations, click the annotation toggle at the bottom right of the artboard. After you toggle annotations on, the identifier of the person who created them appears on the artboard and you must click that to expand the notes. To add a note to the artboard, simply select the Note-Sketch from Assets -> SketchFlow -> Styles -> Sketch Styles, then drag and drop it onto the artboard and place it where you want it.

    When you are ready for users to review the prototype, you have a few options available. Click File -> Export and choose one of the options from the list: Publish to SharePoint, Package SketchFlow Project, Export to Microsoft Word, or Export as Images.
    I suggest you play with as many of the options as you can to see what they do. Both the SharePoint and packaged SketchFlow project options allow you to collect feedback from one or more users that you can import into the project. The user can make notes on the UI and in the Feedback area in the bottom left corner of the player. When the user is done adding feedback, it is exported from the rightmost folder icon in the My Feedback panel. Feedback is imported on a panel named SketchFlow Feedback. To get that panel to show up, select Window -> SketchFlow Feedback. Once you have the panel showing, click the + in the upper right of the panel and find the notes you exported. When imported, they will show up in a list and on the artboard. To document your prototype, use the Export to Microsoft Word option from the File menu. That should get you started with prototyping.

    Read the article

  • Video on Architecture and Code Quality using Visual Studio 2012 – interview with Marcel de Vries and Terje Sandstrom by Adam Cogan

    - by terje
    Find the video HERE. Adam Cogan did a great Web TV interview with Marcel de Vries and me on the topics of architecture and code quality. It was real fun participating in this session. Although we know each other from the MVP ALM community, Marcel, Adam and I hadn't worked together before. It was very interesting to see how we agreed on so many terms, and how alike we were thinking. The basics of ensuring you have a good architecture and how you could document it is one thing. There was also the same agreement on the importance of having a high quality code base, and on how we used the Visual Studio 2012 tools, and some others (NDepend for example), to measure and ensure that the code quality was where it should be. As the tools, methods and thinking popped up during the interview it was a lot of "Hey! I do that too!". The tools are not only for "after the fact" work; we use them during the coding. That way the tools become an integrated part of our coding work, and help us to find issues we may have overlooked. The video has a bunch of call outs, pinpointing important things to remember. These are also listed on the corresponding web page. I haven't seen that touch before, but really liked this way of doing it – it makes it much easier to spot the highlights. Titus Maclaren and Raj Dhatt from SSW have done a terrific job producing this video. And thanks to Lei Xu for doing the camera and recording job. Thanks guys!

    Also, if you are at TechEd Amsterdam 2012, go and listen to Adam Cogan's session on "A modern architecture review: Using the new code review tools" (Friday 29th, 10.15-11.30) and Marcel de Vries' session on "Intellitrace, what is it and how can I use it to my benefit" (Wednesday 27th, 5-6.15).

    The highlights point out some important practices. I'll elaborate on a few of them here:

    Add instructions on how to compile the solution. You do this by adding a text file with instructions to the solution, and keeping it under source control. These instructions should contain what is needed on top of a standard install of Visual Studio. I do a lot of code reviews, and more often than not, I am not even able to compile the program, because some tool or library that needs to be installed has been used. The same applies to any new developer who joins the team, so do this to increase your productivity when the team changes, or a team member switches computer. Don't forget to document what you have to configure on the computer, IIS being a common one. The more automatic you can make this, the better. Use NuGet to pull down libraries. When the text document gets more than, say, half a page, with a bunch of different things to do, convert it into a PowerShell script instead.

    The metrics warning levels. These are very conservatively set by Microsoft. You rarely see anything but green, and besides, you should have color scales for each of the metrics. I have a blog post describing a more appropriate set of levels, based on both research work and industry "best practices". The essential limits, on method level, are listed here (they are also encoded in the short code sketch at the end of this entry):

    Cyclomatic complexity and coupling (higher numbers are worse):
    - Green: from 0 to 10.
    - Yellow: from 10 to 20 (some say 15). Acceptable, but have a look to see if there is something unneeded here.
    - Red: from 20 to 40. Action required, get these down.
    - Bleeding red: above 40. This is the real red alert, immediate action! (My invention, as people have asked what I do when I have a cyclomatic complexity of 150. The only answer I could think of was: RUN!)

    Maintainability index (lower numbers are worse, scale from 0 to 100):
    - Green: 60 to 100.
    - Yellow: 40 to 60. You will always have methods here too; accept the higher ones, and take a look at those that are down towards the lower limit. Check up against the other metrics.
    - Red: 20 to 40. Action required, fix these.
    - Bleeding red: below 20. Immediate action required.

    When doing metrics analysis, you should leave the generated code out. You do this by adding attributes; unfortunately Microsoft has "forgotten" to add these to all their stuff, so you might have to add them to some of the code. In most cases it can be done so that it is not overwritten by a new round of code generation. Take a look at my blog post here for details on how to do that. Class level metrics might also be useful, at least for coupling and maintenance, but it is much more difficult to set any fixed limits on those. Any metric aggregations at a higher level tend to be pretty useless, as the number of methods varies quite a bit, and there is little science on what number of methods can be regarded as good or bad. NDepend has a recommendation, but they say it may vary too. And in these days of data binding, the number might be pretty high, as properties count as methods. However, if you take the worst case situations, classes with more than 20 methods are suspicious, and coupling and cyclomatic complexity go red above 20, so any classes with more than 20x20 = 400 for these measures should be checked over.

    In the video we mention the SOLID principles, coined by "Uncle Bob" (Robert C. Martin). One of them, the Dependency Inversion principle, we discuss in the video. It is important to note that this principle is NOT about whether you should use a Dependency Inversion Container or not; it is about how you design the interfaces and interactions between your classes. The Dependency Inversion Container is just one technique based on this principle, but its main purpose is to isolate things you would like to change at runtime, for example if you implement a plug-in architecture. Overuse of a Dependency Inversion Container is, however, NOT a good thing. It should be used for a purpose and not as a general DI solution. The general DI solution and thinking, however, is useful far beyond the DIC. You should always "program to an abstraction", and not to the concreteness.

    We also talk a bit about the GRASP patterns, a term coined by Craig Larman in his book Applying UML and Patterns. GRASP stands for General Responsibility Assignment Software Patterns, and these patterns describe fundamental principles of object design and responsibility assignment. What I find great about these patterns is that they are another way to focus on the responsibility of a class. One of the things I have most often found broken in software designs is that classes lack responsibility, and as a result there are a lot of classes mucking around in the internals of other classes.

    We also discuss the term "Code Smells". This term was invented by Kent Beck and Martin Fowler when they worked on Fowler's "Refactoring" book. A code smell is a set of "bad" coding practices, which are the drivers behind a corresponding set of refactorings. Here is a good list of the smells, and their corresponding refactor patterns. See also this.
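
    As a footnote to the metrics discussion, here is a minimal C# sketch that encodes the method-level limits exactly as listed above. The type and method names are my own illustrative choices, not part of Visual Studio or NDepend, and the boundary values are interpreted inclusively on the lower band:

        using System;

        enum MetricRating { Green, Yellow, Red, BleedingRed }

        static class MetricLimits
        {
            // Cyclomatic complexity (and coupling): higher numbers are worse.
            public static MetricRating RateComplexity(int value) =>
                value <= 10 ? MetricRating.Green :
                value <= 20 ? MetricRating.Yellow :      // some say 15
                value <= 40 ? MetricRating.Red :
                              MetricRating.BleedingRed;  // above 40: immediate action

            // Maintainability index: lower numbers are worse, scale 0-100.
            public static MetricRating RateMaintainability(int value) =>
                value >= 60 ? MetricRating.Green :
                value >= 40 ? MetricRating.Yellow :
                value >= 20 ? MetricRating.Red :
                              MetricRating.BleedingRed;  // below 20: immediate action

            static void Main()
            {
                Console.WriteLine(RateComplexity(150));      // BleedingRed - RUN!
                Console.WriteLine(RateMaintainability(55));  // Yellow
            }
        }

    Treat the exact boundaries as approximate; the point is to automate the coloring consistently rather than argue over a value of 19 versus 21.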

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real-time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)

    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".

    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real-time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or on a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - View Solaris services status details
    - Drill down into a process's details
    - View the busiest zones, if applicable

    Where to start

    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real-time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: one of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors.

    On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab.

    Next, click the "Processes" tab to see real-time information about all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file).
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
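
    As an aside, because the historical data is stored as small text files named after the epoch timestamp at which they were collected (inside os.zip, and guests.zip for zones), it can also be inspected outside the UI. The following is a rough, hypothetical C# sketch of that idea; it assumes the entry names are plain Unix-epoch seconds, which you should verify on your own agents before relying on it:

        using System;
        using System.IO;
        using System.IO.Compression;

        class AnalyticsSnapshotLister
        {
            static void Main()
            {
                // Path described in the post; adjust for your own agent.
                const string archive = "/var/opt/sun/xvm/analytics/historical/os.zip";

                using (ZipArchive zip = ZipFile.OpenRead(archive))
                {
                    foreach (ZipArchiveEntry entry in zip.Entries)
                    {
                        // Assumption: each entry is a small text file named after
                        // the Unix epoch second at which the sample was taken.
                        string name = Path.GetFileNameWithoutExtension(entry.Name);
                        if (long.TryParse(name, out long epoch))
                        {
                            DateTimeOffset taken = DateTimeOffset.FromUnixTimeSeconds(epoch);
                            Console.WriteLine($"{taken:u}  {entry.Name}  ({entry.Length} bytes)");
                        }
                    }
                }
            }
        }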

    Read the article

  • Master Data Management and Cloud Computing

    - by david.butler(at)oracle.com
    Cloud Computing is all the rage these days. There are many reasons why this is so. But like its predecessor, Service Oriented Architecture, it can fall on hard times if the underlying data is left unmanaged. Master Data Management is the perfect Cloud companion. It can materially increase the chances for successful Cloud initiatives. In this blog, I'll review the nature of the Cloud and show how MDM fits in.   Here's the National Institute of Standards and Technology Cloud definition: •          Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction.   Cloud architectures have three main layers: applications or Software as a Service (SaaS), Platforms as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS generally refers to applications that are delivered to end-users over the Internet. Oracle CRM On Demand is an example of a SaaS application. Today there are hundreds of SaaS providers covering a wide variety of applications including Salesforce.com, Workday, and Netsuite. Oracle MDM applications are located in this layer of Oracle's On Demand enterprise Cloud platform. We call it Master Data as a Service (MDaaS). PaaS generally refers to an application deployment platform delivered as a service. They are often built on a grid computing architecture and include database and middleware. Oracle Fusion Middleware is in this category and includes the SOA and Data Integration products used to connect SaaS applications including MDM. Finally, IaaS generally refers to computing hardware (servers, storage and network) delivered as a service.  This typically includes the associated software as well: operating systems, virtualization, clustering, etc.    Cloud Computing benefits are compelling for a large number of organizations. These include significant cost savings, increased flexibility, and fast deployments. Cost advantages include paying for just what you use. This is especially critical for organizations with variable or seasonal usage. Companies don't have to invest to support peak computing periods. Costs are also more predictable and controllable. Increased agility includes access to the latest technology and experts without making significant up front investments.   While Cloud Computing is certainly very alluring with a clear value proposition, it is not without its challenges. An IDC survey of 244 IT executives/CIOs and their line-of-business (LOB) colleagues identified a number of issues:   Security - 74% identified security as an issue involving data privacy and resource access control. Integration - 61% found that it is hard to integrate Cloud Apps with in-house applications. Operational Costs - 50% are worried that On Demand will actually cost more given the impact of poor data quality on the rest of the enterprise. Compliance - 49% felt that compliance with required regulatory, legal and general industry requirements (such as PCI, HIPAA and Sarbanes-Oxley) would be a major issue. When control is lost, the ability of a provider to directly manage how and where data is deployed, used and destroyed is negatively impacted.  There are others, but I singled out these four top issues because Master Data Management, properly incorporated into a Cloud Computing infrastructure, can significantly ameliorate all of these problems. Cloud Computing can literally rain raw data across the enterprise.   
    According to fellow blogger Mike Ferguson, "the fracturing of data caused by the adoption of cloud computing raises the importance of MDM in keeping disparate data synchronized." David Linthicum, CTO of Blue Mountain Labs, blogs that "the lack of MDM will become more of an issue as cloud computing rises. We're moving from complex federated on-premise systems, to complex federated on-premise and cloud-delivered systems."

    Left unmanaged, non-standard, inconsistent, ungoverned data with questionable quality can pollute analytical systems, increase operational costs, and reduce the ROI in Cloud and On-Premise applications. As cloud computing becomes more relevant, and more data, applications, services, and processes are moved out to cloud computing platforms, the need for MDM becomes ever more important. Oracle's MDM suite is designed to deal with all four of the Cloud issues listed in the IDC survey above:

    - Security: MDM manages all master data attribute privacy and resource access control issues.
    - Integration: MDM pre-integrates Cloud Apps with each other and with On Premise applications at the data level.
    - Operational Costs: MDM significantly reduces operational costs by increasing data quality, thereby improving enterprise business process efficiency.
    - Compliance: MDM, with its built-in Data Governance capabilities, ensures that the data is governed according to organizational standards. This facilitates rapid and accurate reporting for compliance purposes.

    Oracle MDM creates governed, high quality master data, producing a unified, cleansed and standardized data view. The Oracle Customer Hub creates a single view of the customer. The Oracle Product Hub creates high quality product data designed to support all go-to-market processes. Oracle Supplier Hub dramatically reduces the chances of 'supplier exceptions'. Oracle Site Hub masters locations. And Oracle Hyperion Data Relationship Management masters financial reference data and manages enterprise hierarchies across operational areas from ERP to EPM and CRM to SCM. Oracle Fusion Middleware connects Cloud and On Premise applications to MDM Hubs and brings high quality master data to your enterprise business processes.

    An independent analyst once said, "Poor data quality is like dirt on the windshield. You may be able to drive for a long time with slowly degrading vision, but at some point, you either have to stop and clear the windshield or risk everything." Cloud Computing has the potential to significantly degrade data quality across the enterprise over time. Deploying a Master Data Management solution prior to or in conjunction with a move to the Cloud can ensure that the data flowing into the enterprise from the Cloud is clean and governed. This will in turn ensure that expected returns on the investment in Cloud Computing will be realized.

    Oracle MDM has proven its mettle in this area and has the customers to back that up. In fact, I will be hosting a webcast on Tuesday, April 10th at 10 am PT with one of our top Cloud customers, the Church Pension Group. They have moved all mainline applications to a hosted model and use Oracle MDM to ensure the master data is managed and cleansed before it is propagated to other cloud and internal systems. I invite you to join Martin Hossfeld, VP, IT Operations, and Danette Patterson, Enterprise Data Manager, as they review business drivers for MDM and hosted applications, how they did it, the benefits achieved, and lessons learned. You can register for this free webcast here.
Hope to see you there.

    Read the article

  • Building KPIs to monitor your business – It's not really about the Technology

    When I have discussions with people about Business Intelligence, one of the questions the inevitably come up is about building KPIs and how to accomplish that. From a technical level the concept of a KPI is very simple, almost too simple in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass of the iceberg that is hidden beneath the surface upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex. The disparity in size in meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means. From a technical perspective the KPI consists of primarily the following items: Actual Value This is the actual value data point that is being measured. An example would be something like the amount of sales. Target Value This is the target goal for the KPI. This is a number that can be measured against Actual Value. An example would be $10,000 in monthly sales. Target Indicator Range This is the definition of ranges that define what type of indicator the user will see comparing the Actual Value to the Target Value. Most often this is defined by stoplight, but can be any indicator that is going to show a status in a quick fashion to the user. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = Within 5% of target either direction; Green Light = More than 5% higher than Target Value Status\Trend Indicator This is an optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators are used to show some type of progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of what the ranges are for your status indictor, but then also what value the number needs to be compared against. So now we have an idea of what data points a KPI consists of from a technical perspective lets talk a bit about tools. As you can see technically there is not a whole lot to them and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different types of tools in the Microsoft BI stack that you can use to expose your KPI to the business. These include Performance Point, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology and the right technology is based a lot on your goals and how you want to deliver the information to the users. Additionally, there are other non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of KPI is in the business definition of the values and benchmarks for that KPI. The discussion about KPIs is very dependent on the history of an organization and how much they are exposed to the attributes of a KPI. Often times when discussing KPIs with a business contact who has not been exposed to KPIs the discussion tends to also be a session educating the business user about what a KPI is and what goes into the definition of a KPI. The majority of times the business user has an idea of what their actual values are and they have been tracking those numbers for some time, generally in Excel and all manually. 
So they will know the amount of sales last month along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value is answering the What and How much questions. When you are talking about the Target values you are asking the question Is this number good or bad. Typically, the user will know whether or not the value is good or bad, but most of the time they are not able to quantify what is good or bad. Their response is usually something like I just know. Because they have been watching the sales quantity for years now, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break the business people of this habit. One of the fears generally is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But, on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month sales and the target amount for this month would have any idea if $10,000 in sales is good or not. Here is where a key point about KPIs needs to be communicated to both the business user and any user who might be viewing the results of that KPI. The KPI is just one tool that is used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question Why. Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potential negative or neutral KPI. So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the Why question. The main question here to ask is, Okay, you see the indicator and you need to discover why the number is what is, where do you go?. The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal (Marketing report showing a decrease in the promotional discounts for the month, Pricing Report showing the reduction of prices of older models, an Inventory Report showing the discontinuation of a particular product line, or a memo showing the ending of a large affiliate partnership. The answers to the question Why are never as simple as a single indicator value. Bring able to quickly get to this information is all about designing how a user accesses the KPIs and then also how easily they can get to the additional information they need. This is where a Dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs, but also has links to some of the common reports that they run regarding Sales Data. The users boss may have the same KPIs on their dashboard, but instead of links to individual reports they are going to have a link to a status report that was created by the user that pulls together all the data about the KPI in a summary format the users boss can review. 
    So some of the key things to think about when building or evaluating KPIs for your organization:

    - Technology should not be the driving factor.
    - KPIs are of little value without some indicator of whether a value is good, bad or neutral.
    - KPIs only give an answer to the "Is this number good or bad?" question.
    - Make sure the ability to drill into the Why of a KPI is close at hand and relevant to the user who is viewing the KPI.

    The KPI is a key business tool that, when defined properly, helps monitor business performance across the enterprise in an objective and consistent manner. While the process of defining the business aspects of a KPI can sometimes feel arduous, the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of the key metrics for an organization and how those metrics are measured, and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization regardless of the technology or the implementation.
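
    For reference, the stoplight logic described at the start of this article (an actual value, a target value, and an indicator range of roughly ±5% around the target) is small enough to sketch in a few lines. The following C# sketch uses illustrative names of my own and is not tied to PerformancePoint, SharePoint, Excel or Reporting Services:

        using System;

        enum Stoplight { Red, Yellow, Green }

        // Illustrative KPI shape: an actual value measured against a target,
        // with a stoplight derived from the example ranges in the article.
        class Kpi
        {
            public string Name { get; set; }
            public decimal ActualValue { get; set; }   // e.g. monthly sales to date
            public decimal TargetValue { get; set; }   // e.g. 10,000 in monthly sales

            public Stoplight Indicator
            {
                get
                {
                    // Percent variance of actual vs. target.
                    decimal variance = (ActualValue - TargetValue) / TargetValue * 100m;

                    if (variance < -5m) return Stoplight.Red;    // more than 5% below target
                    if (variance > 5m) return Stoplight.Green;   // more than 5% above target
                    return Stoplight.Yellow;                     // within 5% either direction
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                var sales = new Kpi { Name = "Monthly Sales", ActualValue = 9200m, TargetValue = 10000m };
                Console.WriteLine($"{sales.Name}: {sales.Indicator}");   // Red (8% below target)
            }
        }

    A real dashboard would also carry the optional status/trend indicator by applying the same comparison against a prior period's value.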

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #052

    - by Pinal Dave
    Let us continue with the final episode of the Memory Lane Series. Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.

    2007

    - Set Server Level FILLFACTOR Using T-SQL Script: Specifies a percentage that indicates how full the Database Engine should make the leaf level of each index page during index creation or alteration. fillfactor must be an integer value from 1 to 100. The default is 0.
    - Limitation of Online Index Rebuild Operation: An online operation means that while the database remains in its normal operational condition, the processes participating in the online operation do not require exclusive access to the database.
    - Get Permissions of My Username / Userlogin on Server / Database: A few days ago, I was invited to one of the largest database companies. I was asked to review their database schema and propose changes to it. A special username, or user login, was created for me so I could review their database. I was very much interested to know what kind of permissions I was assigned at the server level and database level. I did not feel like asking the Sr. DBA about my permissions.
    - Simple Example of WHILE Loop With CONTINUE and BREAK Keywords: This question is one of those questions which is very simple and most of the users get it correct; however, a few users find it confusing the first time. I have tried to explain the usage of a simple WHILE loop in the first example. The BREAK keyword will stop the WHILE loop and control is moved to the next statement after the loop. The CONTINUE keyword skips all the statements after it and control is sent back to the first statement of the WHILE loop.
    - Forced Parameterization and Simple Parameterization – T-SQL and SSMS: When the PARAMETERIZATION option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE or DELETE statement is converted to a parameter during query compilation. When the PARAMETERIZATION database option is set to SIMPLE, the SQL Server query optimizer may choose to parameterize the queries.

    2008

    - Transaction and Local Variables – Swap Variables – Update All At Once Concept: Summary: Transactions have no effect on memory variables. When an UPDATE statement is applied to any table (physical or memory), all the updates are applied together when the statement is committed. First of all, I suggest that you read the article listed above about the effect of transactions on local variables. As seen there, local variables are independent of any transaction effect.
    - Simulate INNER JOIN using LEFT JOIN statement – Performance Analysis: Just a day ago, while I was working with JOINs, I found one interesting observation, which prompted me to create the following example. Before we continue further, let me make very clear that INNER JOIN should be used where it can be used, and simulating INNER JOIN using any other JOINs will degrade the performance. If there is scope to convert any OUTER JOIN to an INNER JOIN, it should be done with priority.

    2009

    - Introduction to Business Intelligence – Important Terms & Definitions: Business intelligence (BI) is a broad category of application programs and technologies for gathering, storing, analyzing, and providing access to data from various data sources, thus providing enterprise users with reliable and timely information and analysis for improved decision making.
    - Difference Between Candidate Keys and Primary Key: A Candidate Key can be any column or combination of columns that can qualify as a unique key in the database. There can be multiple Candidate Keys in one table, and each Candidate Key can qualify as the Primary Key. A Primary Key is a column or a combination of columns that uniquely identifies a record. Only one Candidate Key can be the Primary Key.

    2010

    - Taking Multiple Backup of Database in Single Command – Mirrored Database Backup: I recently had a very interesting experience. In one of my recent consultancy engagements, I was told by our client that they are going to take a backup of the database and will also make a copy of it at the same time. I explained that it was surely possible if they were going to use a mirror command. In addition, they told me that whenever they take two copies of the database, the size of the database is always reduced. Now this was something not clear to me; I said it was not possible, and so I asked them to show me the script.
    - Corrupted Backup File and Unsuccessful Restore: The CTO, who was also present at the location, got very upset with this situation. He then asked when the last successful restore test was done. As expected, the answer was NEVER. There were no successful restore tests done before. During that time, I was present and I could clearly see the stress, confusion, carelessness and anger around me. I did not appreciate the feeling, and I was pretty sure that no one in there wanted that atmosphere any more than I did.

    2011

    - TRACEWRITE – Wait Type – Wait Related to Buffer and Resolution: SQL Trace is a SQL Server database engine technology which monitors specific events generated when various actions occur in the database engine. When any event is fired it goes through various stages as well as various routes. One of the routes is the Trace I/O Provider, which sends data to its final destination either as a file or a rowset.
    - DATEDIFF – Accuracy of Various Dateparts: If you want to have accuracy in seconds, you need to use a different approach. In the first example, the accurate method is to find the number of seconds first and then divide it by 60 to convert it into minutes.
    - Dedicated Access Control for SQL Server Express Edition: http://www.youtube.com/watch?v=1k00z82u4OI
    - Book Signing at SQLPASS

    2012

    - Who I Am And How I Got Here – True Story as Blog Post: If there was a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn, and there is not enough time to learn everything we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier.
    - Vacation, Travel and Study – A New Concept: Even those who have advanced degrees and went to college for years, or even decades, find studying hard. There is a difference between studying for a career and studying for a certification. At least to get a degree there is a variety of subjects, with labs, exams, and practice problems to make things more interesting.
    - Order By Numeric Values Formatted as String: We have a table which has a column containing alphanumeric data. The data always has an integer as the first part and a string as the later part. The business need is to order the data based on the first part of the alphanumeric data, which is an integer. Now the problem is that no matter how we use ORDER BY, the result is not produced as expected. Let us understand this with an example.
    - Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video: One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most of the time developers have worked with SQL Server and know pretty much every error they face during development. However, they rarely install a fresh SQL Server. As the installation of SQL Server is a rare occasion, unless you are a DBA who is responsible for such an instance, the errors faced during installation are pretty rare as well. http://www.youtube.com/watch?v=1k00z82u4OI

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
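
    As an aside on the "Order By Numeric Values Formatted as String" item above, the underlying problem (ordering alphanumeric values by their leading integer rather than as plain text) is easy to sketch outside T-SQL as well. A hypothetical C# version of the same idea might look like this:

        using System;
        using System.Linq;

        class LeadingNumberSort
        {
            // Extracts the leading integer of a value such as "12 Main St".
            static int LeadingNumber(string value)
            {
                string digits = new string(value.TakeWhile(char.IsDigit).ToArray());
                return digits.Length == 0 ? 0 : int.Parse(digits);
            }

            static void Main()
            {
                string[] data = { "10 Main St", "2 Elm St", "1 Oak Ave", "21 Pine Rd" };

                // A plain string sort would give 1, 10, 2, 21; ordering by the
                // parsed leading integer gives the business-expected order.
                foreach (string row in data.OrderBy(LeadingNumber))
                    Console.WriteLine(row);
            }
        }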

    Read the article

  • CodePlex Daily Summary for Monday, May 17, 2010

    CodePlex Daily Summary for Monday, May 17, 2010New Projects.NET Essentials Course: .NET Essentials course @ Telerik Academy Training project for the studentsAU/NZ Office 2010 Launch Demos: The AU/NZ Office 2010 Launch Demos are a collection of code samples that were used as part of the Office/SharePoint 2010 launch parties in Australi...CybennyCMS: Very simple CMS system for building sites with ASP.NET with templates for lay-out, content pages with only html content and a xml file for the site...essionPIM: essionPIMGIStance: A library for finding "nearest neighbor" among an in-memory set of positions, in C# and F#. A radius must be specified for making a meaningful s...IP Informer: IP Informer is IP Informer.Kurumsal Ofis Paketi: Kurumsal Ofis Paketi (KOP), Microsoft Ofis 2010 ürünleri için geliştirilmiş eklenti yazılımıdır. KOP, Word ve Excel’de bulunan işlevlerinin genişle...Mockup to XAML: Convert Balsamiq Mockups to XAML. This project supports BMML mockup control conversion using plugins. A standard set of controls are included wit...Open XML Validator: This WPF app give you a brief resume about errors in your Open XML documents.Paint.NET Bulk Image Processor: PDNBulkUpdater is a plug-in for Paint.NET that allows you to efficiently perform operations such as resizing and converting multiple images at the ...PiPiBugNet: PiPiBugNet是一套全新的开源Bug管理系统Roleplay character generator: The roleplay character generator allows the creation of characters for different roleplaying gamesSharePoint User Search WebParts: This project contains SharePoint webparts which provide advanced search configuration and experience for SharePoint 2007. It will be upgrade in few...Spodi: Spodi is created on 22-04-2010TfsPolicyPack: This project will provide a few checkin policies for VS 2010.vccodesandobx: vccodesandobxvccodesandobxvccodesandobxWhiteNile: test project using codeplexNew ReleasesAnimeStore.Net: 1.0.3.0: Build 1.0.3.0 Changes Move some functionality to features (MEF) Filter / Search functionality. Anime hard-copy records storage (e.g Disk Storage ...AU/NZ Office 2010 Launch Demos: Twitter map web part: This is the main twitter map web part download, see the Twitter Map web part page for all the information.Blueset Studio Opensource Projects: 推来: 稳定版本BUtil: BUtil 5.0 Alpha2: The initial implementation of multitasking (except ghost)CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta: Beta 2 is released here: url http://cassinidev.codeplex.com/releases/view/45456 New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. ...CBM-Command: 2010-05-16: Release Notes - 2010-05-16New Features New navigation options: Page Up, Page Down, Top of Directory, Bottom of Directory. See documentation (http:...CCNet Conditional Plugin: CCNet Conditional for CCNet 1.5: A (quick) build of the plugin for CCNet 1.5 to fix the 17365 bug reported by Beakster. This also adds a new condition "timeCondition"CybennyCMS: Cybenny CMS beta 1: The first beta. Includes a small demo site.Data Extracting SDK: Data Extracting SDK v.1.1 RTM: RTM version of Data Extracting SDK.Duckworth Lewis Professional Edition Calculator: DLcalc 2.0: This software can perform all D/L calculations 100% accurately. 
From version 2.0 onwards, tables for par scores can also be produced.EPiServer CMS Page Type Builder: Page Type Builder 1.2: Release notes can be found in this blog post.Floe IRC Client: Floe IRC Client 2010-05 R5: - Many new context menu options for @s - Ability to select multiple users in the nick list for some operations (kick, ban) - Bunch of minor bug fix...Graffiti CMS Events Plugin: Version 1.0.1: Minor update to previous version to fix bug where deleted posts were still showing in the calendar.Microsoft Research Boogie: 2010-05-16: Binary release of Boogie and Dafny. (Note, Chalice is not pre-built as part of this binary release. To obtain it, you need to build it yourself f...MSBuild Launch Pad (mPad): 1.0 Beta 2: Basic support for sln, csproj, vbproj, vcxproj, shfbproj, ccproj, oxygene and proj files are added. Basic settings (Show Prompt, and Auto Hide) are...Multi-Language Words Memorizer: Memorizer 1.1: Issues fix, XML db update with new words.NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.1: New release of NShader! New : - a Visual Studio 2010 port can be installed through the new extension manager : you just have to download NShaderV...PHPExcel: PHPExcel 1.7.3 Production: Want to contribute?Please refer the Contribute page. DonationsDonate via PayPal. If you want to, we can also add your name / company on our Donati...Rollback - A social backup tool.: Rollback Setup 0.5.1.2 Build 48360: Bug fixes for backing up files which are hidden/system. Changes to make builds on 64 bit Windows 7 using VS 2010 Express edition.Rollback - A social backup tool.: Rollback Setup 0.5.1.3: Updated version number.Shake - C# Make: Shake v0.1.20: New: Simple console logger Changes: Command line params helper writes out syntax and samples (like msbuild) Fixes: Assembly info, file task and r...SharePoint User Search WebParts: v0.1 Friendly MOSS 2007 Search WebPart: Very first version of this webpart. A more stabilized version will follow in few days.Team Deploy: Team Deploy 2010 Beta 1: This is the initial release for Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comp...Team Foundation Server Administration Tool: 2.0: TFS Administration Tool 2.0 TFS Administration Tool 2.0 is built on top of the Team Foundation Server 2008 object model and in order to connect to...The Ping Master: v0.9.0.0: Installer for The Ping Master binariesUseful Office Macros: All Macro Downloads: Please find above the downloads related to this project. Each Excel Workbook below works independently of the others, so you only need to download...VCC: Latest build, v2.1.30516.0: Automatic drop of latest buildVisual Studio DSite: Advanced Digital Board Game (Visual C++ 2008): An advanced digital board game made in visual c 2008.YUI Compressor Custom Tool for Visual Studio: YUI Compressor Custom Tool Full Version: Version 1.0 The following changes have been made: Merged classes to automatically sense if the target file is Javascript or CSS. 
Cleaned up setu...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryPHPExcelBlogEngine.NETRawrMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersDotNetZip LibraryCaliburn: An Application Framework for WPF and SilverlightSQL Server PowerShell Extensions

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In properly functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time.
The Iteration Burndown and Cumulative Flow Charts
This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations.
Ideal
Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never one that matches perfectly. If your iteration matches perfectly, chances are someone is playing with the numbers. Reality is just too messy for a burndown chart to match this exactly.
Late Planning
The iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined on days one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days.
Testing Starved
Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more than developers not completing stories early enough, or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but testing didn’t start until the end of the iteration and didn’t cover all of the stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined.
No Testing
With this one, both graphs show the same thing: the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today.
Late Development
With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours, or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burndown to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however.
No Tool Updating
When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration.
Scope Increase
I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably o.k., but estimation needs to get better. However, with the charts above, that’s clearly not what happened and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no.
Scope Decrease
Scope decreases are just as bad as scope increases. Usually, graphs like the ones above show that the team did a poor job of estimating their stories and partway through had to reduce the scope of the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and a high estimate. Uncertainty should not just be an arbitrary buffer. It must correlate to real uncertainty in the tasks that have been defined.
Stories are too Big
Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large, high-hour stories into smaller pieces that can be completed independently and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration, and you’re much more likely to have the entire iteration fail because of the limited amount of things that can be completed.
Summary
There are other charts that can be useful when doing Scrum.
If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions. Technorati Tags: Agile
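To make the “ideal” line above concrete, here is a minimal C sketch (not taken from the spreadsheet mentioned in the post; the day count, total hours, and the sample actual[] readings are made-up values) that prints an ideal burndown next to a set of end-of-day remaining-hours readings, so you can eyeball the same patterns discussed here:

    #include <stdio.h>

    #define ITERATION_DAYS 10

    int main(void)
    {
        double total_hours = 200.0;  /* sum of all task estimates at planning */
        /* hypothetical end-of-day readings from the team's tracking tool;
           note the flat start, which is the "late planning" pattern above */
        double actual[ITERATION_DAYS + 1] = { 200, 200, 195, 180, 160, 150, 130, 95, 60, 25, 0 };

        printf("Day  Ideal  Actual\n");
        for (int day = 0; day <= ITERATION_DAYS; day++) {
            /* ideal burndown: a straight line from total_hours on day 0 to zero on the last day */
            double ideal = total_hours * (ITERATION_DAYS - day) / ITERATION_DAYS;
            printf("%3d  %5.0f  %6.0f\n", day, ideal, actual[day]);
        }
        return 0;
    }

Plotting the two columns gives you the burndown chart; the day-by-day gap between them is where patterns like late planning, late development, and scope increase become visible.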

    Read the article

  • How to create a virtual network with Azure Connect

    - by Herve Roggero
    If you are trying to establish a virtual network between machines located in disparate networks, you can either use VPN, Virtual Network or Azure Connect. If you want to establish a connection between machines located in Windows Azure, you should consider using the Virtual Network service. If you want to establish a connection between local machines and Virtual Machines in Windows Azure, you may be able to use your existing VPN device (assuming you have one), as long as the device is supported by Microsoft. If the VPN device you are using isn’t supported, or if you are trying to create a virtual network between machines from disparate networks (such as machines located in another cloud provider), you can use Azure Connect. This blog post explains how Azure Connect can help you create virtual networks between multiple servers in the cloud, various servers in different cloud environments, and on-premise. Note: Azure Connect is currently in Technical Preview. About Azure Connect Let’s do a quick review of Azure Connect. This technology implements an IPSec tunnel from machines to to a relay service located in the Microsoft cloud (Azure). So in essence, Azure Connect doesn’t provide a point-to-point connection between machines; the network communication is tunneled through the relay service. The relay service in turn offers a mechanism to enforce basic communication rules that you define through Groups. We will review this later. You could network two or more VMs in the Azure cloud (although you should consider using a Virtual Network if you go this route), or servers in the Azure cloud and other machines in the Amazon cloud for example, or even two or more on-premise servers located in different locations for which a direct network connection is not an option. You can place any number of machines in your topology. Azure Connect gives you great flexibility on how you want to build your virtual network across various environments. So Azure Connect makes sense when you want to: Connect machines located in different cloud providers Connect on-premise machines running in different locations Connect Azure VMs with on-premise (if you do not have a VPN device, or if your device is not supported) Connect Azure Roles (Worker Roles, Web Roles) with on-premise servers or in other cloud providers The diagram below shows you a high level network topology that involves machines in the Windows Azure cloud, other cloud providers and on-premise. You should note that the only required component in this diagram is the Relay itself. The other machines are optional (although your network is useful only if you have two or more machines involved). Relay agents are currently available in three geographic areas: US, Europe and Asia. You can change which region you want to use in the Windows Azure management portal. High Level Network Topology With Azure Connect Azure Connect Agent Azure Connect establishes a virtual network and creates virtual adapters on your machines; these virtual adapters communicate through the Relay using IPSec. This is achieved by installing an agent (the Azure Connect Agent) on all the machines you want in your network topology. However, you do not need to install the agent on Worker Roles and Web Roles; that’s because the agent is already installed for you. Any other machine, including Virtual Machines in Windows Azure, needs the agent installed.  To install the agent, simply go to your Windows Azure portal (http://windows.azure.com) and click on Networks on the bottom left panel. 
You will see a list of subscriptions under Connect. If you select a subscription, you will be able to click on the Install Local Endpoint icon on top. Clicking on this icon will begin the download and installation process for the agent. Activating Roles for Azure Connect As previously mentioned, you do not need to install the Azure Connect Agent on Worker Roles and Web Roles because it is already loaded. However, you do need to activate them if you want the roles to participate in your network topology. To do this, you will need to click on the Get Activation Token icon. The activation token must then be copied and placed in the configuration file of your roles. For more information on how to perform this step, visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg432964.aspx. Firewall Rules Note that specific firewall rules must exist to allow the agent to communicate through the Relay. You will need to allow TCP 443 and ICMPv6. For additional information, please visit MSDN at http://msdn.microsoft.com/en-us/library/windowsazure/gg433061.aspx. CA Certificates You can optionally require agents to sign their activation request with the Relay using a trusted certificate issued by a Certificate Authority (CA). Click on Activation Options to learn more. Groups To create your network topology you must first create a group. A group represents a logical container of endpoints (or machines) that can communicate through the Relay. You can create multiple groups allowing you to manage network communication differently. For example you could create a DEVELOPMENT group and a PRODUCTION group. To add an endpoint you must first install an agent that will create a virtual adapter on the machine on which it is installed (as discussed in the previous section). Once you have created a group and installed the agents, the machines will appear in the Windows Azure management portal and you can start assigning machines to groups. The next figure shows you that I created a group called LocalGroup and assigned two machines (both on-premise) to that group. Groups and Computers in Azure Connect As I mentioned previously you can allow these machines to establish a network connection. To do this, you must enable the Interconnected option in the group. The following diagram shows you the definition of the group. In this topology I chose to include local machines only, but I could also add worker roles and web roles in the Azure Roles section (you must first activate your roles, as discussed previously). You could also add other Groups, allowing you to manage inter-group communication. Defining a Group in Azure Connect Testing the Connection Now that my agents have been installed on my two machines, the group defined and the Interconnected option checked, I can test the connection between my machines. The next screenshot shows you that I sent a PING request to DEVLAP02 from DEVDSK02. The PING request was successful. Note however that the time is in the hundreds of milliseconds on average. That is to be expected because the machines are connecting through the Relay located in the cloud. Going through the Relay introduces an extra hop in the communication chain, so if your systems rely on high performance, you may want to conduct some basic performance tests. Sending a PING Request Through The Relay Conclusion As you can see, creating a network topology between machines using the Azure Connect service is simple. 
It took me less than five minutes to create the above configuration, including the time it took to install the Azure Connect agents on the two machines. The flexibility of Azure Connect allows you to create a virtual network between disparate environments, as long as your operating systems are supported by the agent. For more information on Azure Connect, visit the MSDN website at http://msdn.microsoft.com/en-us/library/windowsazure/gg432997.aspx.
About Herve Roggero
Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.
Special Thanks
I would like to thank those that helped me figure out how Azure Connect works:
Marcel Meijer - http://blogs.msmvps.com/marcelmeijer/
Michael Wood - http://www.mvwood.com
Glenn Block - http://www.codebetter.com/glennblock
Yves Goeleven - http://cloudshaper.wordpress.com/
Sandrino Di Mattia - http://fabriccontroller.net/
Mike Martin - http://techmike2kx.wordpress.com

    Read the article

  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct?  Well I was reading my monthly MSDN magazine last last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow style scripts and have them generate EasyTest scripts for use with XAF.  Let me outline and show a few things below.  I plan on releasing this code in a short while, I just wanted to preview what I was thinking. Before we begin… First, if you have not read the article in MSDN, here is the link to the article that I found my inspiration.  It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax and how use the “Steps” logic to create your own tests. Second, if you have not heard of EasyTest from DevExpress I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text based files to execute automated commands within your application. Why would we do this?  Because as you will see below, the cucumber syntax is easier for business analysts to interpret and digest the business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here.  The basics of the syntax are that Given X When Y Then Z.  For example, Given I am at the login screen When I enter my login credentials Then I expect to see the home screen.  Pretty easy syntax to follow. Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed then let’s write our first test. Let’s get started… So where to start.  Create a new testing project within your solution.  I typically call this with a similar naming convention as used by XAF, my project name .FunctionalTests (i.e.  AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature file as you do your class files after the test they are performing. Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above as it provides valuable resources on how to write your test syntax.  In this test below you can see how I defined the documentation in the Feature section.  This is strictly for our purposes of readability and do not effect the test.  The next section is the Scenario Outline which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below you can see that Given I am starting a new test and the application is open.  This means I want a new EasyTest file and the windows application generated by XAF is open.  Next When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button, tells XAF to click the new button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  
Then I verify results tests the input data against what is visible in the grid to ensure that your record was created. The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data. Almost there.  Now we must save the feature file and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow so just save the file.  What you will see in your Test List Editor is a unit test for each of the above scenarios you just built. You can now use standard unit testing frameworks to execute the test as you desire.  As you would expect then, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time. How does it work? What we have done is to intercept the testing logic at runtime to interpret the SpecFlow syntax into EasyTest syntax.  This is the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override and make your own rules as you see fit for your project.  Follow the MSDN magazine above to start your own.  You can see part of our implementation below. As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax. The code implementation for these rules basically saves your information from the feature file into an EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds the test is passed.  If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review. Again we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all details about usage and setup. Thanks

    Read the article

  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency. EFFECTIVENESS The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate or am I still operating at the previous level)? Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc, change of direction of team/org/etc, re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those at each annual review anyway. If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment, don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way! That's my view on effectiveness. You have to fix that before moving to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it. EFFICIENCY The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and it can change with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... 
All you need to really be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at. If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take so that you can have repeatable success. That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: You spend more time than necessary dealing with email, and you still end up with dropped email threads and with slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you. Parting Thoughts Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess what things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of “Good Enough”. My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work it is because of two reasons: They are investing energy in stuff that they enjoy doing which the business regards as having a lower priority than a lot of other things on their plate. They are completing tasks to a level of higher quality than what is required (due to personal pride) missing the big picture which almost always mandates completing three tasks at good enough quality than knocking only one of them out of the park while the other two come in late or not at all. There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • What's the best/most efficient way to create a semi-intelligent AI for a tic-tac-toe game?

    - by Link
    basically I am attempting to make a a efficient/smallish C game of Tic-Tac-Toe. I have implemented everything other then the AI for the computer so far. my squares are basically structs in an array with an assigned value based on the square. For example s[1].value = 1; therefore it's a x, and then a value of 3 would be a o. My question is whats the best way to create a semi-decent game playing AI for my tic-tac-toe game? I don't really want to use minimax, since It's not what I need. So how do I avoid a a lot of if statments and make it more efficient. Here is the rest of my code: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> struct state{ // defined int state; // 0 is tie, 1 is user loss, 2 is user win, 3 is ongoing game int moves; }; struct square{ // one square of the board int value; // 1 is x, 3 is o char sign; // no space used }; struct square s[9]; //set up the struct struct state gamestate = {0,0}; //nothing void setUpGame(){ // setup the game int i = 0; for(i = 0; i < 9; i++){ s[i].value = 0; s[i].sign = ' '; } gamestate.moves=0; printf("\nHi user! You're \"x\"! I'm \"o\"! Good Luck :)\n"); } void displayBoard(){// displays the game board printf("\n %c | %c | %c\n", s[6].sign, s[7].sign, s[8].sign); printf("-----------\n"); printf(" %c | %c | %c\n", s[3].sign, s[4].sign, s[5].sign); printf("-----------\n"); printf(" %c | %c | %c\n\n", s[0].sign, s[1].sign, s[2].sign); } void getHumanMove(){ // get move from human int i; while(1){ printf(">>:"); char line[255]; // input the move to play fgets(line, sizeof(line), stdin); while(sscanf(line, "%d", &i) != 1) { //1 match of defined specifier on input line printf("Sorry, that's not a valid move!\n"); fgets(line, sizeof(line), stdin); } if(s[i-1].value != 0){printf("Sorry, That moves already been taken!\n\n");continue;} break; } s[i-1].value = 1; s[i-1].sign = 'x'; gamestate.moves++; } int sum(int x, int y, int z){return(x*y*z);} void getCompMove(){ // get the move from the computer } void checkWinner(){ // check the winner int i; for(i = 6; i < 9; i++){ // check cols if((sum(s[i].value,s[i-3].value,s[i-6].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;} if((sum(s[i].value,s[i-3].value,s[i-6].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;} } for(i = 0; i < 7; i+=3){ // check rows if((sum(s[i].value,s[i+1].value,s[i+2].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;} if((sum(s[i].value,s[i+1].value,s[i+2].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;} } if((sum(s[0].value,s[4].value,s[8].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;} if((sum(s[0].value,s[4].value,s[8].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;} if((sum(s[2].value,s[4].value,s[6].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;} if((sum(s[2].value,s[4].value,s[6].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;} } void playGame(){ // start playing the game gamestate.state = 3; //set-up the gamestate srand(time(NULL)); int temp = (rand()%2) + 1; if(temp == 2){ // if two comp goes first temp = (rand()%2) + 1; if(temp == 2){ s[4].value = 2; s[4].sign = 'o'; gamestate.moves++; }else{ s[2].value = 2; s[2].sign = 'o'; gamestate.moves++; } } displayBoard(); while(gamestate.state == 3){ if(gamestate.moves<10); getHumanMove(); if(gamestate.moves<10); getCompMove(); checkWinner(); if(gamestate.state == 3 && gamestate.moves==9){ printf("The game is a tie :p\n"); break; } displayBoard(); } } int main(int argc, const char *argv[]){ 
printf("Welcome to Tic Tac Toe\nby The Elite Noob\nEnter 1-9 To play a move, standard numpad\n1 is bottom-left, 9 is top-right\n"); while(1){ // while game is being played printf("\nPress 1 to play a new game, or any other number to exit;\n>>:"); char line[255]; // input whether or not to play the game fgets(line, sizeof(line), stdin); int choice; // user's choice about playing or not while(sscanf(line, "%d", &choice) != 1) { //1 match of defined specifier on input line printf("Sorry, that's not a valid option!\n"); fgets(line, sizeof(line), stdin); } if(choice == 1){ setUpGame(); // set's up the game playGame(); // Play a Game }else {break;} // exit the application } printf("\nThank's For playing!\nHave a good Day!\n"); return 0; }

    Read the article

  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they’re back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what’s now the Sun Systems Group division of Oracle. Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an “oldfind” program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to workaround. In Solaris 11 though, we still ship the find descended from SVR4 as /usr/bin/find and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of if we should add oldfind, and if so what should we call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case however, that seemed like it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind not of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as “gold find” than as “g old find”. One of the concerns we often discuss in ARC is if a change is likely to be understood by users or if it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn’t resist putting forth a hypothetical support call for this command: “Hello, Oracle Solaris Support, how may I help you?” “My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn’t find gold.” “Did he get the binutils package too?” “No he just said findutils, do we need binutils?” “Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package.” “How much does Oracle charge for that package?” “It’s free for Solaris users.” “You mean Oracle ships packages of gold to customers for free?” “Yes, if you get the binutils package, it includes GNU gold.” “New gold? Is that some sort of alchemy, turning stuff into gold?” “Not new gold, gold from the GNU project.” “Oracle’s taking gold from the GNU project and shipping it to me?” “Yes, if you get binutils, that package includes gold along with the other tools from the GNU project.” “And GNU doesn’t mind Oracle taking their gold and giving it to customers?” “No, GNU is a non-profit whose goal is to share their software.” “Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?” “Oh, Google donated it to them.” “Ah! So Oracle will give me the gold that GNU got from Google!” “Yes, if you get the package from us.” “How do I get the package with the gold?” “Just run pkg install binutils and it will put it on your disk.” “We’ve got multiple disks here - which one will it put it on?” “The one with the system image - do you know which one that is? 
“Well the note from the admin says the system is on the first disk and the users are on the second disk.” “Okay, so it should go on the first disk then.” “And where will I find the gold?” “It will be in the /usr/bin directory.” “In the user’s bin? So thats on the second disk?” “No, it would be on the system disk, with the other development tools, like make, as, and what.” “So what’s on the first disk?” “Well if the system image is there the commands should all be there.” “All the commands? Not just what?” “Right, all the commands that come with the OS, like the shell, ps, and who.” “So who’s on the first disk too?” “Yes. Did your admin say when he’d be back?” “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.” “I can’t imagine why.” “Oh, is why a command too?” “No, _why was a Ruby programmer.” “Ruby? Do you give those away with the gold too?” “Yes, but it comes in the ruby package, not binutils.” “Oh, I’ll have to have my admin get that package too! Thanks!” Needless to say, we decided this might not be the best idea. Since the GNU package hasn’t had to release a serious bug fix in the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so it didn’t seem that adding oldfind was really necessary, so we passed on including it when we update to the new findutils release. [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]
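As an aside, the fts() interface mentioned above (the one GNU find 4.3 switched to, which prompted the whole oldfind/goldfind question) is the BSD/Solaris-style file tree walker declared in <fts.h>. The following is a generic C sketch of walking a tree with it, purely for illustration and not code from findutils; error handling is kept to the bare minimum.

    #include <fts.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* fts_open takes a NULL-terminated, argv-style list of starting paths */
        char *paths[] = { argc > 1 ? argv[1] : ".", NULL };

        /* FTS_PHYSICAL: don't follow symlinks; FTS_NOCHDIR: don't chdir() while walking */
        FTS *tree = fts_open(paths, FTS_PHYSICAL | FTS_NOCHDIR, NULL);
        if (tree == NULL) { perror("fts_open"); return 1; }

        FTSENT *node;
        while ((node = fts_read(tree)) != NULL) {
            /* FTS_D = directory visited pre-order, FTS_F = regular file */
            if (node->fts_info == FTS_D || node->fts_info == FTS_F)
                printf("%s\n", node->fts_path);
        }
        fts_close(tree);
        return 0;
    }

Centralizing the tree walk in fts() is what replaced find's older hand-rolled directory recursion, which is why upstream kept oldfind around as a fallback in case the new walker misbehaved.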

    Read the article

  • Data Modeling: Logical Modeling Exercise

    - by swisscheese
    In trying to learn the art of data storage I have been trying to take in as much solid information as possible. PerformanceDBA posted some really helpful tutorials/examples in the following posts among others: is my data normalized? and Relational table naming convention. I already asked a subset question of this model here. So to make sure I understood the concepts he presented and I have seen elsewhere I wanted to take things a step or two further and see if I am grasping the concepts. Hence the purpose of this post, which hopefully others can also learn from. Everything I present is conceptual to me and for learning rather than applying it in some production system. It would be cool to get some input from PerformanceDBA also since I used his models to get started, but I appreciate all input given from anyone. As I am new to databases and especially modeling I will be the first to admit that I may not always ask the right questions, explain my thoughts clearly, or use the right verbage due to lack of expertise on the subject. So please keep that in mind and feel free to steer me in the right direction if I head off track. If there is enough interest in this I would like to take this from the logical to physical phases to show the evolution of the process and share it here on Stack. I will keep this thread for the Logical Diagram though and start new one for the additional steps. For my understanding I will be building a MySQL DB in the end to run some tests and see if what I came up with actually works. Here is the list of things that I want to capture in this conceptual model. Edit for V1.2 The purpose of this is to list Bands, their members, and the Events that they will be appearing at, as well as offer music and other merchandise for sale Members will be able to match up with friends Members can write reviews on the Bands, their music, and their events. There can only be one review per member on a given item, although they can edit their reviews and history will be maintained. BandMembers will have the chance to write a single Comment on Reviews about the Band they are associated with. Collectively as a Band only one Comment is allowed per Review. Members can then rate all Reviews and Comments but only once per given instance Members can select their favorite Bands, music, Merchandise, and Events Bands, Songs, and Events will be categorized into the type of Genre that they are and then further subcategorized into a SubGenre if necessary. It is ok for a Band or Event to fall into more then one Genre/SubGenre combination. Event date, time, and location will be posted for a given band and members can show that they will be attending the Event. An Event can be comprised of more than one Band, and multiple Events can take place at a single location on the same day Every party will be tied to at least one address and address history shall be maintained. Each party could also be tied to more then one address at a time (i.e. billing, shipping, physical) There will be stored profiles for Bands, BandMembers, and general members. So there it is, maybe a bit involved but could be a great learning tool for many hopefully as the process evolves and input is given by the community. Any input? EDIT v1.1 In response to PerformanceDBA U.3) That means no merchandise other than Band merchandise in the database. Correct ? That was my original thought but you got me thinking. Maybe the site would want to sell its own merchandise or even other merchandise from the bands. 
Not sure a mod to make for that. Would it require an entire rework of the Catalog section or just the identifying relationship that exists with the Band? Attempted a mod to sell both complete albums or song. Either way they would both be in electronic format only available for download. That is why I listed an Album as being comprised of Songs rather then 2 separate entities. U.5) I understand what you bring up about the circular relation with Favorite. I would like to get to this “It is either one Entity with some form of differentiation (FavoriteType) which identifies its treatment” but how to is not clear to me. What am I missing here? u.6) “Business Rules This is probably the only area you are weak in.” Thanks for the honest response. I will readdress these but I hope to clear up some confusion in my head first with the responses I have posted back to you. Q.1) Yes I would like to have Accepted, Rejected, and Blocked. I am not sure what you are referring to as to how this would change the logical model? Q.2) A person does not have to be a User. They can exist only as a BandMember. Is that what you are asking? Minor Issue Zero, One, or More…Oops I admit I forgot to give this attention when building the model. I am submitting this version as is and will address in a future version. I need to read up more on Constraint Checking to make sure I am understanding things. M.4) Depends if you envision OrderPurchase in the future. Can you expand as to what you mean here? EDIT V1.2 In response to PerformanceDBA input... Lessons learned. I was mixing the concept of Identifying / Non-Identifying and Cardinality (i.e. Genre / SubGenre), and doing so inconsistently to make things worse. Associative Tables are not required in Logical Diagrams as their many-to-many relationships can be depicted and then expanded in the Physical Model. I was overlooking the Cardinality in a lot of the relationships The importance of reading through relationships using effective Verb Phrases to reassure I am modeling what I want to accomplish. U.2) In the concept of this model it is only required to track a Venue as a location for an Event. No further data needs to be collected. With that being said Events will take place on a given EventDate and will be hosted at a Venue. Venues will host multiple events and possibly multiple events on a given date. In my new model my thinking was that EventDate is already tied to Event . Therefore, Venue will not need a relationship with EventDate. The 5th and 6th bullets you have listed under U.2) leave me questioning my thinking though. Am I missing something here? U.3) Is it time to move the link between Item and Band up to Item and Party instead? With the current design I don't see a possibility to sell merchandise not tied to the band as you have brought up. U.5) I left as per your input rather than making it a discrete Supertype/Subtype Relationship as I don’t see a benefit of having that type of roll up. Additional Revisions AR.1) After going through the exercise for FavoriteItem, I feel that Item to Review requires a many-to-many relationship so that is indicated. Necessary? Ok here we go for v1.3 I took a few days on this version, going back and forth with my design. Once the logical process is complete, as I want to see if I am on the right track, I will go through in depth what I had learned and the troubles I faced as a beginner going through this process. The big point for this version was it took throwing in some Keys to help see what I was missing in the past. 
Going through the process of doing a matrix proved to be of great help also. Regardless of anything, if it wasn't for the input given by PerformanceDBA I would still be a lost soul wandering in the dark. Who knows, my current design might reaffirm that I still am, but I have learned a lot, so I know I at least have a flashlight in my hand. At this point in time I admit that I am still confused about identifying and non-identifying relationships. In my model I had to use non-identifying relationships with non-nulls just to join the relationships I wanted to model. In reading a lot on the subject there seems to be a lot of disagreement and indecisiveness about it, so I did what I thought represented the right things in my model. When to force (identifying) and when to be free (non-identifying)? Anyone have inputs?
EDIT V1.4
OK, I took the V1.3 inputs and cleaned things up for this V1.4. Currently working on a V1.5 to include attributes.

    Read the article

  • WordPress contact form email as PDF

    - by lock
    I am using the below code for my WordPress site which is emailing all the form details as an HTML text but I need the details to be written into a PDF first and then have to email the PDF as an attachment. How can I achieve this? This is not a PHP code to use PHP's writePDF modules. So, any idea or any code to implement this? <div style="padding-left: 100px;"> [raw] [contact-form subject="Best Aussie Broker" to="[email protected]"] <div id="main34" style="border: 1px solid black; border-radius: 15px; width: 720px; padding: 15px;"> &nbsp; <h2><span style="color: #ff6600;">Express Application</span></h2> &nbsp; [contact-field label="First Name" type="name" required="true" /] [contact-field label="Last Name" type="text" /] [contact-field label="Email" type="email" required="true" /] [contact-field label="Purpose of Finance?" type="select" options="Home Loan,Refinance,Investment Loan,Debt Consolidation,Other" /] [contact-field label="Your deposit amount" type="text" /] [contact-field label="Amount you need to borrow?" type="text" /] [contact-field label="Brief description of the purpose for finance" type="textarea" required="true" /] <div><label></label> <input class="radio" type="radio" name="19" value="Single Application" onchange="showsingle();" /> <label class="radio">Single Application</label> <div class="clear-form"></div> <input class="radio" type="radio" name="19" value="Joint Application" onchange="showjoint();" /> <label class="radio">Joint Application</label> <div class="clear-form"></div> [contact-field label="Privacy Act" type="checkbox" required="true" /] I have read the Privacy Act 1988 (as Amended) and understand that by selecting the submit button I/we Authorize Best Aussie Broker to act on my/our behalf and manage personal information in relation to this application.<br> <a href="http://googleplex.com.au/pdf.pdf"><img src="http://googleplex.com.au/pdf.png" alt="" /> </a> </div> </div> <div id="single" style="display: none; width: 720px; border: 1px solid black; border-radius: 15px; padding: 15px; margin-top: 10px;"> <div style="padding-top: 10px; width: 720px; text-align: left;"> <h4><span style="color: #ff6600;">Last step then we will get all listed Australian vendors to fight it out for your best deal</span></h4> </div> <div> <label class="select" for="19-date-of-birth">Date of Birth</label> [contact-field label="Day" type="select" options="1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" /] [contact-field label="Month" type="select" options="January,February,March,April,May,June,July,August,September,October,November,December" /] [contact-field label="Year" type="select" options="2000,1999,1998,1997,1996,1995,1994,1993,1992,1991,1990,1989,1988,1987,1986,1985,1984,1983,1982,1981,1980,1979,1978,197,1976,1975,1974,1973,1972,1971,1970,1969,1968,1967,1966,1965,1964,1963,1962,1961,1960,1959,1958,1957,1956,1955,1954,1953,1952,1951,1950,1949,1948,1947,1946,1945,1944,1943,1942,1941,1940,1939,1938,1937,1936,1935,1934,1933,1932,1931,1930,1929,1928,1927,1926,1925,1924,1923,1922,1921,1920, 1919,1918,1917,1916,1915,1914,1913,1912,1911,1910,1909" /] </div> [contact-field label="Address" type="text" /] [contact-field label="Suburb" type="text" /] [contact-field label="Postcode" type="text" /] <div> [contact-field label="State" type="select" options="VIC,NSW,QLD,SA,WA,TAS,NZ,Other" /] </div> [contact-field label="Best Contact" type="radio" options="Landline,Mobile" /] [contact-field label="Phone Number" type="text" /] [contact-field label="Marital 
Status" type="select" options="Married,Single,Other" /] [contact-field label="Residential Status" type="select" options="Renting, Home Owned, Home Mortgage, Board, Other" /] [contact-field label="Children/Dependents" type="select" options="0,1,2,3,4,5,6" /] <div></div> [contact-field label="Gross Yearly Income" type="text" /] [contact-field label="Current Employer" type="text" /] <div> <label class="select" for="19-year-of-empl">Time at this employer</label> [contact-field label="Year" type="select" options="0,1,2,3,4,5,6,7,8,9,10,More" /] [contact-field label="Month" type="select" options="0,1,2,3,4,5,6,7,8,9,10,11,12" /] </div> <div style="padding-right: 15px;"></div> </div> <div id="joint" style="display: none; width: 720px; border: 1px solid black; border-radius: 15px; padding: 15px; margin-top: 10px;"> <div style="padding-top: 10px; width: 720px; text-align: left;"> <h4><span style="color: #ff6600;">Last step then we will get all listed Australian vendors to fight it out for your best deal</span></h4> </div> <div style="float: left; width: 320px;"> <div> <label class="select" for="19-date-of-birth1">Date of Birth</label> [contact-field label="Day" type="select" options="1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" /] [contact-field label="Month" type="select" options="January,February,March,April,May,June,July,August,September,October,November,December" /] [contact-field label="Year" type="select" options="2000,1999,1998,1997,1996,1995,1994,1993,1992,1991,1990,1989,1988,1987,1986,1985,1984,1983,1982,1981,1980,1979,1978,197,1976,1975,1974,1973,1972,1971,1970,1969,1968,1967,1966,1965,1964,1963,1962,1961,1960,1959,1958,1957,1956,1955,1954,1953,1952,1951,1950,1949,1948,1947,1946,1945,1944,1943,1942,1941,1940,1939,1938,1937,1936,1935,1934,1933,1932,1931,1930,1929,1928,1927,1926,1925,1924,1923,1922,1921,1920, 1919,1918,1917,1916,1915,1914,1913,1912,1911,1910,1909" /] </div> [contact-field label="Address" type="text" /] [contact-field label="Suburb" type="text" /] [contact-field label="Postcode" type="text" /] <div> [contact-field label="State" type="select" options="VIC,NSW,QLD,SA,WA,TAS,NZ,Other" /] </div> [contact-field label="Best Contact" type="radio" options="Landline,Mobile" /] [contact-field label="Phone Number" type="text" /] <div></div> <div></div> [contact-field label="Marital Status" type="select" options="Married,Single,Other" /] [contact-field label="Residential Status" type="select" options="Renting, Home Owned, Home Mortgage, Board, Other" /] [contact-field label="Children/Dependents" type="select" options="0,1,2,3,4,5,6" /] <div></div> <div><label class="text" for="netincome">Net Income</label> <input id="netincome" type="text" name="netincome" /> <select id="netincome-dropdown" name="netincome-dropdown"> <option>Monthly</option> <option>Yearly</option> </select></div> [contact-field label="Current Employer" type="text" /] <div> <label class="select" for="19-year-of-empl2">Time at this employer</label> [contact-field label="Year" type="select" options="0,1,2,3,4,5,6,7,8,9,10,More" /] [contact-field label="Month" type="select" options="0,1,2,3,4,5,6,7,8,9,10,11,12" /] </div> </div> <div style="float: right; width: 320px; padding-right: 50px;"> <div> <label class="select" for="19-date-of-birth3">Date of Birth</label> [contact-field label="Day" type="select" options="1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31" /] [contact-field label="Month" type="select" 
options="January,February,March,April,May,June,July,August,September,October,November,December" /] [contact-field label="Year" type="select" options="2000,1999,1998,1997,1996,1995,1994,1993,1992,1991,1990,1989,1988,1987,1986,1985,1984,1983,1982,1981,1980,1979,1978,197,1976,1975,1974,1973,1972,1971,1970,1969,1968,1967,1966,1965,1964,1963,1962,1961,1960,1959,1958,1957,1956,1955,1954,1953,1952,1951,1950,1949,1948,1947,1946,1945,1944,1943,1942,1941,1940,1939,1938,1937,1936,1935,1934,1933,1932,1931,1930,1929,1928,1927,1926,1925,1924,1923,1922,1921,1920, 1919,1918,1917,1916,1915,1914,1913,1912,1911,1910,1909" /] </div> [contact-field label="Address" type="text" /] [contact-field label="Suburb" type="text" /] [contact-field label="Postcode" type="text" /] <div> [contact-field label="State" type="select" options="VIC,NSW,QLD,SA,WA,TAS,NZ,Other" /] </div> [contact-field label="Best Contact" type="radio" options="Landline,Mobile" /] [contact-field label="Phone Number" type="text" /] <div></div> <div></div> [contact-field label="Marital Status" type="select" options="Married,Single,Other" /] [contact-field label="Residential Status" type="select" options="Renting, Home Owned, Home Mortgage, Board, Other" /] [contact-field label="Children/Dependents" type="select" options="0,1,2,3,4,5,6" /] <div></div> <div><label class="text" for="netincome">Net Income</label> <input id="netincome" type="text" name="netincome" /> <select id="netincome-dropdown" name="netincome-dropdown"> <option>Monthly</option> <option>Yearly</option> </select></div> [contact-field label="Current Employer" type="text" /] <div> <label class="select" for="19-year-of-empl">Time at this employer</label> [contact-field label="Year" type="select" options="0,1,2,3,4,5,6,7,8,9,10,More" /] [contact-field label="Month" type="select" options="0,1,2,3,4,5,6,7,8,9,10,11,12" /] </div> </div> <div style="clear: both;"></div> <div></div> </div> &nbsp; [/contact-form][/raw] </div>

    Read the article

  • undefined reference to function, despite giving reference in c

    - by Jamie Edwards
    I'm following a tutorial, but when it comes to compiling and linking the code I get the following error: /tmp/cc8gRrVZ.o: In function `main': main.c:(.text+0xa): undefined reference to `monitor_clear' main.c:(.text+0x16): undefined reference to `monitor_write' collect2: ld returned 1 exit status make: *** [obj/main.o] Error 1 What that is telling me is that I haven't defined both 'monitor_clear' and 'monitor_write'. But I have, in both the header and source files. They are as follows: monitor.c: // monitor.c -- Defines functions for writing to the monitor. // heavily based on Bran's kernel development tutorials, // but rewritten for JamesM's kernel tutorials. #include "monitor.h" // The VGA framebuffer starts at 0xB8000. u16int *video_memory = (u16int *)0xB8000; // Stores the cursor position. u8int cursor_x = 0; u8int cursor_y = 0; // Updates the hardware cursor. static void move_cursor() { // The screen is 80 characters wide... u16int cursorLocation = cursor_y * 80 + cursor_x; outb(0x3D4, 14); // Tell the VGA board we are setting the high cursor byte. outb(0x3D5, cursorLocation >> 8); // Send the high cursor byte. outb(0x3D4, 15); // Tell the VGA board we are setting the low cursor byte. outb(0x3D5, cursorLocation); // Send the low cursor byte. } // Scrolls the text on the screen up by one line. static void scroll() { // Get a space character with the default colour attributes. u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F); u16int blank = 0x20 /* space */ | (attributeByte << 8); // Row 25 is the end, this means we need to scroll up if(cursor_y >= 25) { // Move the current text chunk that makes up the screen // back in the buffer by a line int i; for (i = 0*80; i < 24*80; i++) { video_memory[i] = video_memory[i+80]; } // The last line should now be blank. Do this by writing // 80 spaces to it. for (i = 24*80; i < 25*80; i++) { video_memory[i] = blank; } // The cursor should now be on the last line. cursor_y = 24; } } // Writes a single character out to the screen. void monitor_put(char c) { // The background colour is black (0), the foreground is white (15). u8int backColour = 0; u8int foreColour = 15; // The attribute byte is made up of two nibbles - the lower being the // foreground colour, and the upper the background colour. u8int attributeByte = (backColour << 4) | (foreColour & 0x0F); // The attribute byte is the top 8 bits of the word we have to send to the // VGA board. u16int attribute = attributeByte << 8; u16int *location; // Handle a backspace, by moving the cursor back one space if (c == 0x08 && cursor_x) { cursor_x--; } // Handle a tab by increasing the cursor's X, but only to a point // where it is divisible by 8. else if (c == 0x09) { cursor_x = (cursor_x+8) & ~(8-1); } // Handle carriage return else if (c == '\r') { cursor_x = 0; } // Handle newline by moving cursor back to left and increasing the row else if (c == '\n') { cursor_x = 0; cursor_y++; } // Handle any other printable character. else if(c >= ' ') { location = video_memory + (cursor_y*80 + cursor_x); *location = c | attribute; cursor_x++; } // Check if we need to insert a new line because we have reached the end // of the screen. if (cursor_x >= 80) { cursor_x = 0; cursor_y ++; } // Scroll the screen if needed. scroll(); // Move the hardware cursor. move_cursor(); } // Clears the screen, by copying lots of spaces to the framebuffer. 
void monitor_clear() { // Make an attribute byte for the default colours u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F); u16int blank = 0x20 /* space */ | (attributeByte << 8); int i; for (i = 0; i < 80*25; i++) { video_memory[i] = blank; } // Move the hardware cursor back to the start. cursor_x = 0; cursor_y = 0; move_cursor(); } // Outputs a null-terminated ASCII string to the monitor. void monitor_write(char *c) { int i = 0; while (c[i]) { monitor_put(c[i++]); } } void monitor_write_hex(u32int n) { s32int tmp; monitor_write("0x"); char noZeroes = 1; int i; for (i = 28; i > 0; i -= 4) { tmp = (n >> i) & 0xF; if (tmp == 0 && noZeroes != 0) { continue; } if (tmp >= 0xA) { noZeroes = 0; monitor_put (tmp-0xA+'a' ); } else { noZeroes = 0; monitor_put( tmp+'0' ); } } tmp = n & 0xF; if (tmp >= 0xA) { monitor_put (tmp-0xA+'a'); } else { monitor_put (tmp+'0'); } } void monitor_write_dec(u32int n) { if (n == 0) { monitor_put('0'); return; } s32int acc = n; char c[32]; int i = 0; while (acc > 0) { c[i] = '0' + acc%10; acc /= 10; i++; } c[i] = 0; char c2[32]; c2[i--] = 0; int j = 0; while(i >= 0) { c2[i--] = c[j++]; } monitor_write(c2); } monitor.h: // monitor.h -- Defines the interface for monitor.h // From JamesM's kernel development tutorials. #ifndef MONITOR_H #define MONITOR_H #include "common.h" // Write a single character out to the screen. void monitor_put(char c); // Clear the screen to all black. void monitor_clear(); // Output a null-terminated ASCII string to the monitor. void monitor_write(char *c); #endif // MONITOR_H common.c: // common.c -- Defines some global functions. // From JamesM's kernel development tutorials. #include "common.h" // Write a byte out to the specified port. void outb ( u16int port, u8int value ) { asm volatile ( "outb %1, %0" : : "dN" ( port ), "a" ( value ) ); } u8int inb ( u16int port ) { u8int ret; asm volatile ( "inb %1, %0" : "=a" ( ret ) : "dN" ( port ) ); return ret; } u16int inw ( u16int port ) { u16int ret; asm volatile ( "inw %1, %0" : "=a" ( ret ) : "dN" ( port ) ); return ret; } // Copy len bytes from src to dest. void memcpy(u8int *dest, const u8int *src, u32int len) { const u8int *sp = ( const u8int * ) src; u8int *dp = ( u8int * ) dest; for ( ; len != 0; len-- ) *dp++ =*sp++; } // Write len copies of val into dest. void memset(u8int *dest, u8int val, u32int len) { u8int *temp = ( u8int * ) dest; for ( ; len != 0; len-- ) *temp++ = val; } // Compare two strings. Should return -1 if // str1 < str2, 0 if they are equal or 1 otherwise. int strcmp(char *str1, char *str2) { int i = 0; int failed = 0; while ( str1[i] != '\0' && str2[i] != '\0' ) { if ( str1[i] != str2[i] ) { failed = 1; break; } i++; } // Why did the loop exit? if ( ( str1[i] == '\0' && str2[i] != '\0' || (str1[i] != '\0' && str2[i] =='\0' ) ) failed =1; return failed; } // Copy the NULL-terminated string src into dest, and // return dest. char *strcpy(char *dest, const char *src) { do { *dest++ = *src++; } while ( *src != 0 ); } // Concatenate the NULL-terminated string src onto // the end of dest, and return dest. char *strcat(char *dest, const char *src) { while ( *dest != 0 ) { *dest = *dest++; } do { *dest++ = *src++; } while ( *src != 0 ); return dest; } common.h: // common.h -- Defines typedefs and some global functions. // From JamesM's kernel development tutorials. #ifndef COMMON_H #define COMMON_H // Some nice typedefs, to standardise sizes across platforms. // These typedefs are written for 32-bit x86. 
typedef unsigned int u32int; typedef int s32int; typedef unsigned short u16int; typedef short s16int; typedef unsigned char u8int; typedef char s8int; void outb ( u16int port, u8int value ); u8int inb ( u16int port ); u16int inw ( u16int port ); #endif //COMMON_H main.c: // main.c -- Defines the C-code kernel entry point, calls initialisation routines. // Made for JamesM's tutorials <www.jamesmolloy.co.uk> #include "monitor.h" int main(struct multiboot *mboot_ptr) { monitor_clear(); monitor_write ( "hello, world!" ); return 0; } here is my makefile: C_SOURCES= main.c monitor.c common.c S_SOURCES= boot.s C_OBJECTS=$(patsubst %.c, obj/%.o, $(C_SOURCES)) S_OBJECTS=$(patsubst %.s, obj/%.o, $(S_SOURCES)) CFLAGS=-nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders LDFLAGS=-Tlink.ld -melf_i386 --oformat=elf32-i386 ASFLAGS=-felf all: kern/kernel .PHONY: clean clean: -rm -f kern/kernel kern/kernel: $(S_OBJECTS) $(C_OBJECTS) ld $(LDFLAGS) -o $@ $^ $(C_OBJECTS): obj/%.o : %.c gcc $(CFLAGS) $< -o $@ vpath %.c source $(S_OBJECTS): obj/%.o : %.s nasm $(ASFLAGS) $< -o $@ vpath %.s asem Hopefully this will help you understand what is going wrong and how to fix it :L Thanks in advance. Jamie.
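
    One thing worth checking, assuming the makefile above is used exactly as shown: the C compile rule (gcc $(CFLAGS) $< -o $@) never passes -c, so gcc does not stop after producing an object file - it tries to compile and link each .c on its own, and linking main.c by itself cannot resolve monitor_clear or monitor_write, which matches the errors quoted at the top. Below is a minimal sketch of the intended two-stage build, with the flags copied from CFLAGS/LDFLAGS and the source paths flattened for brevity (the real makefile finds them via vpath):

        # Compile each translation unit to an object file; -c stops gcc before the link step.
        gcc -nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders -c main.c -o obj/main.o
        gcc -nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders -c monitor.c -o obj/monitor.o
        gcc -nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders -c common.c -o obj/common.o
        nasm -felf boot.s -o obj/boot.o
        # Only the final ld step sees all of the objects, so this is where the cross-file references resolve.
        ld -Tlink.ld -melf_i386 --oformat=elf32-i386 -o kern/kernel obj/boot.o obj/main.o obj/monitor.o obj/common.o

    In makefile terms that corresponds to changing the compile rule to gcc $(CFLAGS) -c $< -o $@; the link rule can stay as it is.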

    Read the article

  • Dell PowerEdge R720xd stuck in BIOS

    - by G_P
    I have a Dell PowerEdge R720xd that gets stuck in the BIOS when booting. It successfully gets past the "configuring memory" and "configuring iDRAC" screens, but once it shows "CPLD version : 103" with the various management engine versions/patches, it just hangs. No error messages are displayed. This started happening when we tried adding additional RAM to the machine. Since then, we have tried re-seating the new memory, which resulted in the same issue. Then we took out all the new memory, and the problem persists. We have also tried pressing F2 to get into System Setup, but it just indicates "Entering System Setup" and hangs at the same point. Has anybody seen this issue before, or does anyone have ideas on what to try next? UPDATE: After troubleshooting and trying to isolate the issue (stripping things down to a single CPU and a single DIMM, same problem; swapping to the other CPU and a different DIMM, same problem), Dell support will be coming out to swap the system board.

    Read the article

  • (0xC03A0014) Failed to add device 'Microsoft Virtual Hard Disk'

    - by maniargaurav
    We had a Windows Server 2008 SP2 host. It crashed due to a motherboard problem. After we got a new motherboard, we installed Windows Server 2008 R2. Now, when we try to attach the old VHD file, we get the following error: Failed to add device 'Microsoft Virtual Hard Disk'. Cannot open attachment 'D:\Test\test.vhd'. Error: 'A virtual disk support provider for the specified file was not found.' 'TestVM': Cannot open attachment 'D:\Test\test.vhd'. Error: 'A virtual disk support provider for the specified file was not found.' (0xC03A0014). (Virtual machine ID 5626AAB2-C21C-48FF-8B70-40671CBC573B)

    Read the article

  • Sharepoint 2010 - One or more services have started or stopped unexpectedly

    - by Mr Shoubs
    Does anyone know why I keep getting the following message in the SharePoint 2010 "Review problems and solutions" list: The following services are managed by SharePoint, but their running state does not match what SharePoint expects: SPAdminV4. This can happen if a service crashes or if an administrator starts or stops a service using a non-SharePoint interface. If SharePoint-managed services do not match their expected running state, SharePoint will be unable to correctly distribute work to the service. Failing Services: SPTimerService (SPTimerV4) I assume this refers to the SharePoint 2010 Timer Service, but I can only see it in services.msc (where it is running), not under Application Management > Manage services on server. Does anyone know why this keeps occurring and how I can stop it? Note: I looked in the event log and all I can see is that same error.
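
    A quick first check, sketched below on the assumption that the health rule is simply reporting a stopped timer service: query the Windows service named in the message (SPTimerV4) from an elevated command prompt and start it if it is not running. If it keeps stopping, the System and Application event logs around the time it stops are the next place to look.

        rem Check the state of the SharePoint 2010 Timer service, then start it if it is stopped.
        sc query SPTimerV4
        net start SPTimerV4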

    Read the article

  • HP SmartArray P400 has slow read and write speed

    - by Tadas D.
    I have a desktop in which I installed an HP SmartArray P400 controller with two HP DF0146B8052 hard drives. I made a RAID0 logical volume from them, but I am getting a 20 MB/s write speed and roughly 120-140 MB/s read speed. There is also very little scatter in the benchmark results (the plot is close to a flat line), so it looks like the controller is "capping" my speeds. I tried resetting the controller's configuration, and I haven't found any settings in HP ACU (Array Configuration Utility) that help. I am using Windows 7 Ultimate and an M4A78 board. Does anyone have ideas about what could be wrong? I am also attaching diagnostic results.
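
    A write ceiling around 20 MB/s on a P400 is commonly a symptom of the write cache being unavailable (no battery-backed cache module installed, or array acceleration disabled for the logical drive). A useful first step, sketched below on the assumption that the command-line version of ACU (hpacucli) is installed, is to dump the controller, cache and logical drive status and look at the cache-related lines:

        rem Show controller, cache module and logical drive details, including cache status.
        hpacucli ctrl all show config detail
        hpacucli ctrl all show status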

    Read the article

  • iis7 .net webservice 404 error

    - by user11457
    I have a web service /test/Service1.asmx in the same folder as a page /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I have added the script module and managed handler as well. If anyone knows what I'm missing here, I'd appreciate it. Server Error in '/' Application. The resource cannot be found. Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly. Requested URL: /test/Service1.asmx Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016
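
    When an .aspx page in the same folder works but the .asmx comes back 404, a missing or broken *.asmx handler mapping for the application pool's pipeline mode is a common cause. Two checks worth trying from an elevated command prompt are sketched below; "Default Web Site" is only a placeholder for the actual site name, and the framework path assumes 32-bit .NET 2.0 (use Framework64 for a 64-bit application pool):

        rem List the effective handler mappings for the site and look for the *.asmx entries.
        %windir%\system32\inetsrv\appcmd list config "Default Web Site" /section:system.webServer/handlers
        rem Re-register ASP.NET 2.0 with IIS if the *.asmx handler mappings are missing.
        %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i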

    Read the article

  • kernel panic with exitcode=0x00000004 and no call trace

    - by litmusconfig
    A bit of background first: I'm trying to configure a MicroBlaze Linux (big-endian) system on a Xilinx ML506 eval board. The goal is to use the second partition of a CompactFlash card attached to the Xilinx SystemACE controller as the root filesystem. So far, root in an initramfs works, and after boot I can mount and use said partition with no problem. But if I try to use it right from the get-go with the "root=/dev/xsa2" kernel command line parameter, the system hangs with [...] Freeing unused kernel memory: 143k freed Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004 And that's it - no register dump, no call trace, nothing further from the serial console, even though the kernel has been configured with debugging enabled. I'm pretty new at this, so is there something else I should be doing to see something more informative from the kernel?
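
    One way to squeeze more information out of a boot like this is to pad the kernel command line with generic (not MicroBlaze-specific) switches; the line below is only an illustrative guess, and rootfstype=ext2 in particular is an assumption about what is actually on /dev/xsa2:

        root=/dev/xsa2 rootwait rootfstype=ext2 loglevel=8 initcall_debug init=/bin/sh

    loglevel=8 and initcall_debug make the console considerably chattier, rootwait covers the case where the SystemACE device appears late, and init=/bin/sh is a smoke test: exitcode=0x00000004 normally means init was killed by signal 4 (SIGILL), which often points at userspace binaries built for the wrong target, so it is worth seeing whether a shell from that partition runs at all.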

    Read the article

  • eeePC 1001HA/1101HA max resolution when connected to external display?

    - by Marco Demaio
    Hello, I would like to buy a new Eee PC 1001HA or 1101HA. I know the max display resolution is 1024x600 for the Eee PC 1001 and 1366x768 for the Eee PC 1101. But what is the max resolution of the graphics board when connecting these two types of Eee PC to an external LCD monitor? Let's say the external LCD monitor supports a full HD resolution of 1920x1080 - are these Eee PC graphics boards able to go up to such a resolution? It's really incredible to me how such useful information is missing from every ASUS website. Eee PCs are very well suited to being connected to an external monitor, so I can't believe how difficult it is to find this information. I also downloaded the manual, but it isn't in there either. So I was hoping someone who has one knows the answer. Thanks!

    Read the article

  • Debian 3.1 (Sarge) init.d boot order

    - by Adam Lewis
    I am using a TS-7800 single-board computer from Technologic Systems that ships with Debian 3.1 (Sarge). I had updated it to Squeeze, but due to various driver issues I have been forced to roll back to Sarge. I am attempting to load the drivers and apply the configuration my application services need before those services start. Ideally I would call one init.d script that handles the drivers and configuration, and then call the other init.d scripts (one for each process). I am left scratching my head over how to guarantee the boot sequence. I know that in later versions of Debian I could use the LSB header to achieve this, but is there anything comparable to the LSB header in Sarge?
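
    In Sarge there is no LSB header or dependency-based boot; sysv-rc simply runs the /etc/rcN.d/S??* symlinks in lexical order, so the guarantee comes from the two-digit sequence numbers. A sketch using made-up script names (hwsetup for the driver/configuration script, app-foo and app-bar for the per-process scripts), assuming they already exist in /etc/init.d:

        # Register the scripts with explicit sequence numbers; lower numbers start earlier.
        update-rc.d hwsetup defaults 10
        update-rc.d app-foo defaults 50
        update-rc.d app-bar defaults 51

    The result is nothing more than symlinks such as /etc/rc2.d/S10hwsetup and /etc/rc2.d/S50app-foo, so the ordering can also be adjusted after the fact just by renaming the links.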

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >