Search Results

Search found 3574 results on 143 pages for 'difficult'.


  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In properly functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time.

    The Iteration Burndown and Cumulative Flow Charts
    This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information about how the iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations.

    Ideal
    Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never one that matches perfectly. If your iteration matches perfectly, chances are someone is playing with the numbers. Reality is just too messy for a burndown chart to match this exactly.

    Late Planning
    The iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined on days one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than ideal to meet the iteration because of the late planning. This often results in long days and weeks.

    Testing Starved
    Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more than developers not completing stories early enough, or it could be caused by stories being too big. With the cumulative flow, however, you can see that only small bites are in progress and stories were completed early, but the testers didn’t start testing until the end of the iteration and didn’t finish testing all of the stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined.

    No Testing
    With this one, both graphs show the same thing: the team needs testers and testing! Without testing, what was completed cannot be verified to make sure it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today.

    Late Development
    With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not estimating their hours correctly, or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burndown to happen more slowly than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. This could be because of illness, technical barriers, or simply poor estimation. Testing was able to keep up with everything that was completed, however.

    No Tool Updating
    When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require each team member to update the tool at least once daily. You should also check how well the team is breaking down stories into tasks. If they’re creating a few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration.

    Scope Increase
    I always encourage team members to enter however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably OK, but estimation needs to get better. However, with the charts above, that’s clearly not what happened, and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no.

    Scope Decrease
    Scope decreases are just as bad as scope increases. Usually, graphs like the ones above show that the team did a poor job of estimating their stories and partway through had to reduce scope to save the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate what could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and a high estimate. Uncertainty should not just be an arbitrary buffer; it must correlate to real uncertainty in the tasks that have been defined.

    Stories are too Big
    Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at the cumulative flow, you can see that at one point everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team that can only handle one or two stories in an iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large, high-hour stories into smaller pieces that can be completed and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration, and you’re much more likely to have the entire iteration fail because of the limited number of things that can be completed.

    Summary
    There are other charts that can be useful when doing Scrum. If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in them. I always want this information to be useful, so please let me know if you have other questions.
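    To make the ideal-versus-actual lines concrete, here is a minimal sketch of the two series an iteration burndown plots. It is not from the original newsletter; the iteration length and daily totals are hypothetical placeholders for the remaining task hours a team would report in its tracking tool.

```csharp
using System;

class BurndownSketch
{
    static void Main()
    {
        double committedHours = 120;   // hours planned at iteration start (hypothetical)
        // Total remaining task hours reported at the end of each day of a ten-day iteration.
        double[] actualRemaining = { 120, 118, 112, 95, 90, 72, 60, 41, 30, 12, 0 };
        int days = actualRemaining.Length - 1;

        for (int day = 0; day <= days; day++)
        {
            // Ideal line: burn the committed hours down evenly across the iteration.
            double ideal = committedHours * (days - day) / days;
            Console.WriteLine($"Day {day,2}: ideal {ideal,6:F1} h   actual {actualRemaining[day],6:F1} h");
        }
    }
}
```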

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/

    When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate), and the latter a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold.

    Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and organizations are used to paying for servers, software licenses, and other infrastructure as an up-front cost, and for power, the people who run the systems, and so on as an ongoing (and sometimes unfactored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs.

    The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs), and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much the air conditioning costs, or the true cost of having a team of system administrators? This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business.

    There are three primary methods that I’ve been successful with in estimating the cost. None are perfect; all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but using those metrics in a calculation becomes more detailed. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

    Simple Estimation
    The simplest way to calculate costs is to architect the application (even in UML or on paper, no coding involved) and then estimate which of the components you’ll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a “Return on Investment” (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with columns like: Program Element, Azure Component, Unit of Measure, Cost Per Unit, Estimated Use of Component, Total Cost Per Component, and Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn’t even been developed. Which brings us to the next model type.

    Measure and Project
    A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK), which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.

    Sample and Estimate
    Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it to do analysis, using standard statistical and other predictive methods such as regression to project what the costs will be in the future. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost “per click” or “per transaction”, as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend using the entire population, not a smaller sample).

    References and Tools
    Articles:
    http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
    http://technet.microsoft.com/en-us/magazine/gg213848.aspx
    http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
    http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
    http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
    Other Tools:
    http://cloud-assessment.com/
    http://communities.quest.com/community/cloud_tools
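    As a concrete illustration of the components-by-units-by-cost-per-unit matrix described above, here is a minimal C# sketch. It is not from the original post, and every price and usage figure in it is a hypothetical placeholder; substitute the currently published Azure rates and your own estimates.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One row of the cost matrix: a program element mapped to an Azure component,
// its unit of measure, an assumed cost per unit, and the estimated units used.
record CostLine(string ProgramElement, string AzureComponent, string Unit,
                decimal CostPerUnit, decimal EstimatedUnits)
{
    public decimal Total => CostPerUnit * EstimatedUnits;
}

class CostModelSketch
{
    static void Main()
    {
        // Placeholder figures only - not real Azure pricing.
        var lines = new List<CostLine>
        {
            new("Web front end",  "Compute instance", "instance-hours", 0.12m, 720m),
            new("Order data",     "SQL Azure",        "GB-months",      9.99m, 5m),
            new("Product images", "Blob storage",     "GB-months",      0.15m, 50m),
            new("Page traffic",   "Bandwidth",        "GB out",         0.15m, 200m),
            new("Storage access", "Transactions",     "per 10,000",     0.01m, 300m),
        };

        decimal cumulative = 0m;
        foreach (var line in lines)
        {
            cumulative += line.Total;
            Console.WriteLine(
                $"{line.ProgramElement,-15} {line.AzureComponent,-17} {line.Total,9:F2}  (cumulative {cumulative,9:F2})");
        }

        Console.WriteLine($"Estimated monthly cost: {lines.Sum(l => l.Total):F2}");
    }
}
```
    Each row maps directly onto the spreadsheet columns listed above, so the same structure can live in code or in a worksheet.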

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx
    Kent Beck’s XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you’ll see from the above link, the list has slightly evolved over time. I find today they are usually listed as:
    1. All Tests Pass
    2. Don’t Repeat Yourself (DRY)
    3. Express Intent
    4. Minimalistic
    These are prioritized. If your code doesn’t work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else. Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I’ve debated it myself. This recent post got me thinking about this again, hence this post.

    I don’t think it is fair to compare “Expressing Intent” against “DRY”. This is a comparison of apples to oranges. “Expressing Intent” is a principle of code quality. “Repeating Yourself” is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred. For example, “using nouns for method names”, “using verbs for property names”, or “using Booleans for parameters” are all code smells that indicate that code probably isn’t doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good an indicator is it?

    Duplication in the code base is bad for a couple of reasons. If you need to make a change and that change needs to be made in a number of locations, it is difficult to know if you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication, you will be left with only one place to change, thereby eliminating this problem.

    With most projects the code becomes the single source of truth for a project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared the current reality (or truth). Requirements or design documents of this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

    The code smell of Duplication points to a violation of the “Single Source of Truth” principle. Let me define that as:
    A stakeholder’s requirement for a software change should never cause more than one class to change.
    Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.

    To illustrate this, let’s look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer. Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and so on. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

    Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer’s privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer’s loyalty program point total.

    After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle:
    There should never be more than one reason for a class to change.
    In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That’s a problem because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn’t exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.

    The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility; other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

    In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should lead to the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you’ll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
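    Here is a minimal C# sketch of where the retail example ends up (the class names are hypothetical and not from the original post): the shared sale data stays in one place, while each stakeholder owns its own copy of the downstream data shaping, so each class has exactly one reason to change.

```csharp
using System;

// Data captured once at the point of sale and shared by both features.
record Sale(string CustomerName, string AccountNumber, decimal Amount, DateTime Date);

// Owned by the bank stakeholder: changes only for bank requirements
// (e.g. later adding a customer photo, signature, or PIN capture).
class BankTransactionData
{
    public string AccountNumber { get; }
    public decimal Amount { get; }
    public DateTime Date { get; }

    public BankTransactionData(Sale sale)
    {
        AccountNumber = sale.AccountNumber;
        Amount = sale.Amount;
        Date = sale.Date;
    }
}

// Owned by the receipt stakeholder: changes only for receipt requirements
// (e.g. masking the account number, adding loyalty program points).
class ReceiptData
{
    public string MaskedAccountNumber { get; }
    public decimal Amount { get; }
    public DateTime Date { get; }

    public ReceiptData(Sale sale)
    {
        // Mask the account number for the printed receipt (here, everything but
        // the last four digits, as one common convention).
        string acct = sale.AccountNumber;
        string visible = acct.Length <= 4 ? acct : acct.Substring(acct.Length - 4);
        MaskedAccountNumber = new string('*', Math.Max(0, acct.Length - 4)) + visible;
        Amount = sale.Amount;
        Date = sale.Date;
    }
}

class Demo
{
    static void Main()
    {
        var sale = new Sale("Jane Rice", "4111222233334444", 42.50m, DateTime.Now);
        Console.WriteLine(new ReceiptData(sale).MaskedAccountNumber); // ************4444
    }
}
```
    The duplication between the two classes is deliberate: it is what lets the bank and the receipt owner evolve their own truths independently.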

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies. And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services). In short, it has been a fortnight of Microsoft cloud computing. On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles. But I’ve also looked at Azure Storage’s REST API (including coding to it directly); I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure Labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel. Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials. And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and Live Meeting. I have to say I like a lot of what I’ve been seeing. Azure’s not perfect, and BPOS certainly isn’t either. But there’s good stuff in all these products, and there’s a lot of value.

    Azure Goes Deep
    Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now. The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes. It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable. Whether with Azure Storage or SQL Azure, Azure does data. And OData is everywhere. Azure Table Storage supports an OData interface. So does SQL Azure and so does DataMarket (the former project “Dallas”). That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner. And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more.

    Snapping in Place
    Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand. I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now. The products bear this out, and so do my observations of the product teams’ motivation and high morale. It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so. With Azure, both requirements are in place.

    BPOS: Bad Acronym, Easy Setup
    BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server. As such, it’s hard not to be underwhelmed by BPOS. Until you realize how easy it makes it to get all that stuff set up. I would say that from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I want from my Azure Christmas Next Year
    Not everything about Microsoft’s cloud is good. I close this post with a list of things I’d like to see addressed:
    - BPOS offerings are still based on the 2007 wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises one. Office 365 can’t come fast enough.
    - Azure’s Internet tooling and domain naming is scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure Storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there’s appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
    - Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces, and do so much CTRL-C/CTRL-V work. I’d like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
    - Beta programs, if they’re open, should onboard me quickly. I know the process is difficult and everyone’s going as fast as they can. But I don’t know why it’s so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.
    - Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now, but we need more. WebMatrix has none, and that’s just silly now that the Extra Small Instance is being introduced.
    - The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better, and I want them to evangelize it much more aggressively. There’s a lot of good material on Azure development out there, but it’s scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

    Should Old Acquaintance Be Forgot
    All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • Best Method to SFTP or FTPS Files via SSIS

    - by Registered User
    What is the best method using SSIS (SQL Server Integration Services) to upload a file to either a remote SFTP (secure FTP with the SSH2 protocol) or FTPS (FTP over SSL) site? I've used the following methods, but each has shortcomings I would like to avoid:

    COZYROC LIBRARY
    Method: Install the CozyRoc library on each development and production server and use the SFTP task to upload the files.
    Pros: Easy to use. It looks, smells, and feels like a normal SSIS task. SSIS also recognizes the password as sensitive information and allows you all the normal options for protecting the sensitive information instead of just storing it in clear text in a non-secure manner. Works well with other SSIS tasks such as ForEach Loop Containers. Errors out when uploads and downloads fail. Works well when you don't know the names of the files on the remote FTP site to download or when you won't know the name of the file to upload until run-time.
    Cons: Costs money to license in a production environment. Makes you dependent upon the vendor to update their libraries between each version. Although they already have a 2008 version, this caused me a problem during the CTPs of 2008. Requires installing the libraries on each development and production machine.

    COMMAND LINE SFTP PROGRAM
    Method: Install a free command-line SFTP application such as PuTTY and execute it either by running a batch file or an operating system process task.
    Pros: Free, free, and free. You can be sure it is secure if you are using PuTTY, since numerous GUI FTP clients appear to use PuTTY under the covers. You DEFINITELY know you are using SSH2 and not SSH.
    Cons: The two command-line utilities I tried (PuTTY and Cygwin) required storing the SFTP password in a non-secure location. I haven't found a good way to capture failures or errors when uploading files. The process doesn't look and smell like SSIS. Most of the code is encapsulated in text files instead of SSIS itself. Difficult to use if you don't know the exact name of the file you are uploading or downloading.

    A 3RD PARTY C# or VB.NET LIBRARY
    Method: Install an SFTP or FTPS library and use a Script Task that references the library to upload the files. (I've never tried this, so I'm going to guess at the pros and cons; see the sketch below.)
    Pros: Probably easy to capture errors. Should work well with variables, so it would probably be easy to use even when you don't know the exact name of the file you are uploading or downloading.
    Cons: It's a script task combined with .NET libraries. If you are using SSIS, then you probably are more comfortable with SSIS tasks than .NET code. Script tasks are also difficult to troubleshoot since they don't have the same debugging tools and features as regular .NET projects. Creates a dependency on 3rd party code that may not work between different versions of SQL Server. To be fair, it is probably MORE likely to work between different versions of SQL Server than a 3rd party SSIS task library. Another huge con -- I haven't found a free C# or VB.NET library that does this as of yet. So if anyone knows of one, then please let me know!
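    For what it's worth, here is a rough sketch (not from the question) of what the third option could look like inside a Script Task, using the open-source SSH.NET library (Renci.SshNet) as one example of a free .NET SFTP client. The host, credentials, and paths are placeholders; in a real package they would come from package variables so the password can stay marked as sensitive.

```csharp
using System.IO;
using Renci.SshNet;   // open-source SSH/SFTP client library referenced by the Script Task

public static class SftpUploadSketch
{
    // Uploads a single local file to the remote SFTP server, overwriting any existing file.
    public static void Upload(string host, int port, string user, string password,
                              string localPath, string remotePath)
    {
        using (var client = new SftpClient(host, port, user, password))
        {
            client.Connect();
            using (var fileStream = File.OpenRead(localPath))
            {
                client.UploadFile(fileStream, remotePath, true);
            }
            client.Disconnect();
        }
    }
}
```
    Any exception thrown by Connect or UploadFile can be caught in the Script Task and used to fail the task, which would address the error-capture concern raised above.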

    Read the article

  • How to copy a path on a Mac?

    - by AngryHacker
    Windows refugee here. On Windows you can easily copy the path and paste it elsewhere to get to the directory. Here is the situation on the Mac: I am in the Finder, 20 folders deep, and I see the file I want. I go to my application and want to open it, so I pick Open Document from the File menu. However, it is exceedingly difficult and time-consuming to get to the place I want. Is there a way to copy the path in the Finder and paste it into the File Open dialog of my application?

    Read the article

  • eeePC 1001HA/1101HA max resolution when connected to external display?

    - by Marco Demaio
    Hello, I would like to buy a new Eee PC 1001HA or 1101HA. I know the max display resolution is 1024x600 for the Eee PC 1001 and 1366x768 for the Eee PC 1101. But what's the max resolution of the graphics board when connecting these two types of Eee PC to an external LCD monitor? Let's say the external LCD monitor supports a full HD resolution of 1920x1080; are these Eee PC graphics boards able to go up to such a resolution? It's really incredible to me how such useful information is missing everywhere on every ASUS website. Eee PCs are very well suited to being connected to an external monitor, so I can't believe how difficult it is to find out this information. I also downloaded the manual, but it's not in there either. So I was hoping someone has got one and knows the answer. Thanks!

    Read the article

  • Easy software installation on Linux (Ubuntu / Linux Mint)

    - by Aash Maharoon
    When software is installed, it often has to be done with the terminal and typed commands. There are several steps involved, which can be difficult for a new user (coming from Windows). In Windows, we can choose the installation directory, uninstall easily, and the software ships with a GUI installer that can be driven with the mouse alone. I am new to Linux and love using it. Are there any methods or tools that make application installation user-friendly with GUI support, or should only the terminal be used?

    Read the article

  • Why does my ReadyNas 4200 have a constant beep post disk failure

    - by swagner88
    I went to check on a high-pitched constant beep coming from the server room and discovered that the LED lights on all the disks were dark except one, which showed constant green. After a reboot, nothing changed. The console indicated that that particular drive had failed. I pulled the drive out and BAM, everything is fine. Except the high-pitched beep remains. The current plan is to replace the drive with a new drive of the same size that we happened to have purchased for expansion. My question is: what will this do? Will the NAS accept the new drive as a replacement for the failed one and decide to shut up? The manual for the device is almost too straightforward and says that I can just swap drives out as needed, but I find that difficult to believe: http://www.readynas.com/download/documentation/HM/RN4200_HW_24May10.pdf

    Read the article

  • Rewriting 'From' headers in Postfix

    - by inxilpro
    I want to configure Postfix to replace the 'From' header in all forwarded/aliased messages with a custom email address, and the 'Reply-To' header with the original sender's address. Is that something that can be done with a simple configuration change, or am I looking at a more complex problem? For example, this original message:
    From: "John Smith" <[email protected]>
    To: "Jane Rice" <[email protected]>
    would get translated to:
    From: "My Email Forwarding Service" <[email protected]>
    Reply-To: "John Smith" <[email protected]>
    To: "Jane Rice" <[email protected]>
    Ideally, I would also have it rewrite the message body (adding something about how the message was forwarded for them), but I know that's much more difficult. We have a number of email aliases, and every time someone reports spam they received through their alias, our server gets flagged. I'm trying to minimize that damage as much as possible. Any help is greatly appreciated!

    Read the article

  • How much effort is SQL Server 2008 Administration?

    - by Adrian Grigore
    Hi, I am looking for a suitable hosting environment for an ASP.NET MVC application. One of the options I have is renting a Hyper-V server and installing my license of SQL Server 2008 on it. I'm a bit wary of shared hosting, since the one I have tried so far did not seem to have very consistent performance. One potential problem is that I do not know much about SQL Server administration, so I am not sure if this is a good option. I've been running a failover cluster of two dedicated Linux servers for over 5 years now and MySQL never gave me any trouble. But that was Linux, and it might be different with a Windows system. Is running a reasonably efficient MS SQL Server 2008 instance difficult? Does it require any in-depth administration knowledge? Or perhaps recurring administration effort (such as keeping the server up to date with the latest patches)? Or is it rather an "install and forget" experience similar to MySQL?

    Read the article

  • pfSense with two WANs, routing Skype traffic over a specific WAN

    - by Eric
    I have a pfSense setup with two WANs (WAN1 and WAN2) and one LAN network. The two WANs are set up for failover. However, QoS has recently been an issue for Skype calls in our office (about 30 people), so we want to dedicate WAN2 to Skype traffic (we use Skype for all VoIP calls, etc.). As Skype is notoriously difficult to deal with, does anyone have any suggestions on how I should handle this? A simple rule based on ports will not work, and using layer 7 inspection with a Skype profile on all incoming LAN packets doesn't seem like the way to go either. Here is a related pfSense forum post: http://forum.pfsense.org/index.php/topic,50406.msg268520.html#msg268520

    Read the article

  • What are industry standards and professional best practices in network host naming? [closed]

    - by Ivan
    Possible Duplicate: Naming convention for computers
    How to name network hosts seems like an important and difficult dilemma to me: routers; servers (a server can be a router and host diverse services at the same time); virtual machines (they host important services and can migrate); workstations and notebooks (using pc-username is not the best idea, as users may change); printers and MFUs; surveillance IP cameras; and so on. Are there known and accepted best practices for this task? Excuse me if there was already a similar question here (I think there probably was); I haven't found it.

    Read the article

  • Redirect as a backup trick? w/o modifying DNS?

    - by acidzombie24
    I specifically looked up how to do something like this (Can you set a backup IP for your server in DNS?) and the answer basically was that you can't. If I, say, specify two IP addresses, could I somehow use an HTTP response header to ignore one temporarily (say, for 5 minutes) and go to the other IP address? Or maybe I could play dead; however, I'm unsure how to play dead using nginx. I would then like the backup box to notice that the main box is down and become available as some kind of read-only server. I'm sure something like this has been implemented; I am just wondering how I might implement it with two boxes. I'm sure it isn't very difficult? How might I redirect traffic from a backup box to my main server without modifying the DNS?

    Read the article

  • Networking Windows 7 and Vista

    - by foosion
    How do you set up a Windows Vista Home Basic desktop (wired) and a Windows 7 Home Premium laptop (wifi) on a home network so that they can share files and the printers connected to the desktop? The Win7 laptop may also be used on other networks. I'm going to have to set this up for my parents when I visit shortly. I currently use XP and am not really familiar with Vista or Win7 network setup. Would it be difficult to share between the Vista and Win7 machines and an XP machine? If not, how do I add that?

    Read the article

  • Sound card / microphone impedance mismatch

    - by axk
    First of all, I'm not completely sure this is an impedance mismatch, but from what I found on the Internet I believe it is. It seems to be a common problem. The question is not as much about solving the problem as about why it is happening (if I'm right about the cause of the problem, of course). I had this quiet-microphone problem with several built-in cards and microphones, and now with a Creative Audigy SE. There's a microphone boost option, but it introduces a lot of noise along with the volume increase, and even with the boost the sound doesn't seem loud enough in some cases. The mic on my current headphones is very quiet with the Audigy SE without the boost, but is very loud and low-noise with an external Sound Blaster Connect. So the question is: have I just been unlucky with my sound cards and microphones, or is it a common problem? And if it is a common problem, why is it so difficult for the vendors to standardize on the sound card / microphone impedance? Edit: the OS is Windows (XP/7), but I don't believe the problem is OS-specific.

    Read the article

  • Active Directory using Samba/Open LDAP for user accounts

    - by Jon Rhoades
    I know this is the wrong way round... but is it possible to use AD in front of Samba for our PC clients, so that the user accounts remain in Samba/OpenLDAP? Managing our fleet of Windows PCs is becoming more and more difficult with just Samba v3. Until Samba v4 comes along, it would be great if we could leverage Active Directory but have the accounts stored in Samba/OpenLDAP. Windows PCs are a minority in our organisation, and Samba/OpenLDAP is used for just about every service (Zimbra/RADIUS/Intranet/SAN/Printing/...), so it will have to remain the definitive account source. Anyway, it probably can't be done, but I thought I would ask for ideas anyway.

    Read the article

  • Mac OS X - Force screen resolution

    - by wjlafrance
    Hello! I'm trying to use the latest version of a certain development tool, and it's sort of difficult to use on a 1200x800 display. Using VMware Fusion, I can set the screen resolution inside a VM to 1900x1200 on my 13" MBP and it's still usable. Does anyone know of a way to force Mac OS X to scale its resolution? I tried ScreenResX and it said the scaled resolution was "invalid" or something like that. I know that there are only a certain number of pixels on the screen; I'm only asking how to scale, not how to set a legit resolution. My current hack solution is to run Snow Leopard Server in a VM with resolution scaling.

    Read the article

  • Linux: Forward users' mail using an exim4 server (newbie warning)

    - by llazzaro
    Hello, I have a network of Linux servers, and one of them is running exim4; that server can send mail as [email protected]. On the exim4 server, I had set up aliases for the user accounts, for example root: [email protected]. But on the other servers, if I use the alias, the email is marked as spam, since they aren't sending through the exim4 server. I want to configure the other servers to relay through that exim4 server so the mail doesn't get marked as spam. I know this is a really newbie question, but searching Google is difficult since the results confuse exim4 server configuration with the client configuration needed to use an exim4 server. Remember, I want root@server1, root@server2, etc. mail sent via the exim4 server!

    Read the article

  • The procedure entry php_mb_check_encoding_list could not be located

    - by BlackFire27
    I get this error: "The procedure entry php_mb_check_encoding_list could not be located in the dynamic link library php_mbstring.dll". The error is related to the php.ini settings, I think; I suspect PHP is having trouble locating the extension. I added an environment path to php.exe and its extensions folder, but when I run the command line and call php.exe, that error is thrown. I use Windows. I have this set in my php.ini:
    ; On windows:
    extension_dir = "C:\xampp\php\ext"
    I installed XAMPP; the PHP version is the latest. Another weird thing: when I try to print out phpinfo, nothing is printed out. <?php echo phpinfo(); ?> Nothing comes out?!?! The question is why, and how can I prevent the error from being thrown?

    Read the article

  • Mod_rewrite display subdomain.domain.com and call domain.com/subdomain/ for SSL

    - by Jeff H.
    I have a website secured by a standard SSL certificate, securing a few different shops under different subdirectories. Ex. domain.com/shop1/ The shops are also accessible via a subdomain e.g. shop1.domain.com. What I'm trying to accomplish: display shop1.domain.com to the user, while keeping all of the actual server calls as domain.com/shop1, so that the secure pages will continue to work properly. (Not sure if I'm using the proper language, exactly, I hope my point is clear.) To be clear: my SSL is working fine, and I don't need help with that, and I don't need or want to purchase a UCC cert. It can't be that difficult for anyone with experience with Apache. (I've spent 3 hours trying to learn about mod_rewrite. It's just not clicking.) I'm on a GoDaddy secure shared server, so please keep in mind that I'm not able to reset the server or anything.

    Read the article

  • Exchange not preserving the "To:" field

    - by Matt Simmons
    I've got a hosted Exchange solution through Apptix, which isn't the problem, I think, but it may be relevant. I have my main account, [email protected], and to that I have an alias, [email protected]. Whenever I send an email to [email protected], I examine the headers and I see the "To:" field being correct: "To: [email protected]". All is well. I recently set up another user, [email protected], to function as a multipurpose mailbox. I aliased "[email protected]" to the services account in the same way that I did "[email protected]"; however, nothing I have sent to "[email protected]" actually goes TO "[email protected]". All of the headers say "To: [email protected]". This makes it extremely difficult to filter based on headers alone. Does anyone have any feedback on what settings I would need to look at in order to fix that?

    Read the article

  • SpinRite and USB blues - does a solution exist?

    - by Peter Mortensen
    I use SpinRite to recover hard disks and their content, and to a lesser degree for preventive maintenance. However, if a USB drive (a USB thumb drive and/or an external hard disk with a USB interface) is connected when SpinRite scans for devices, then SpinRite never finishes (it hangs). The work-around is of course to disconnect the drive, but there is value in being able to use SpinRite on USB drives. Some external drives have no screws, and it is difficult to take out the hard disk without damaging the casing. And for those that do have screws, it would save the disassembly time. Is there a way to fix this problem (e.g. BIOS changes or a modified SpinRite boot CD) without resorting to floppy disks?

    Read the article

  • MacBook Pro 15in High-res hard to read. What setting should I change?

    - by orokusaki
    I just bought a new MacBook Pro with the high-res screen (1680x1050), but I noticed that all the text is so small that to read it my face has to be about 18 inches away. When I adjusted the resolution to the next sizes down (1440 x 852 and 1440 x 852 stretched), as well as all the other smaller sizes, everything looked blurry (similar to when you use Command + Scroll to zoom in, where the text is really soft on the edges and difficult to read). Is there a setting somewhere that I'm missing, or another resolution settings area that I can use? I feel like this $2,800 notebook may otherwise only be good for movie watching. Thanks in advance.

    Read the article

  • Where can I find a useful Unicode fallback font for Mac OS X?

    - by Stephen Jennings
    On every browser I've tried (Firefox, Safari, Chrome, and Omniweb), when I go to a web page containing somewhat less-common characters, I can't see the glyphs. For example, on the Wikipedia page for the Bengali Language, the very first line contains a string of squares; on Windows, I can see the Bengali writing. On Windows, as long as I have the Arial Unicode MS font installed, these characters fall back to that font and display properly. Mac OS X doesn't seem to ship with a font containing these Unicode characters (it has Arial Unicode MS, but it must be a subset of the Windows version because Bengali doesn't display in that font). I checked on my Snow Leopard DVD and I installed "Additional Fonts" from the Optional Installs package, but I'm still missing many languages. Is there any good, free font that contains a large collection of languages? I know creating fonts is difficult and time-consuming, but it seems like including at least one font like this with operating systems should be standard by now.

    Read the article
