Search Results

Search found 7529 results on 302 pages for 'replace'.


  • Lync Server 2010

    - by ManojDhobale
Microsoft Lync Server 2010 communications software and its client software, such as Microsoft Lync 2010, enable your users to connect in new ways and to stay connected, regardless of their physical location. Lync 2010 and Lync Server 2010 bring together the different ways that people communicate in a single client interface, are deployed as a unified platform, and are administered through a single management infrastructure. The main workloads are described below.

    IM and presence: Instant messaging (IM) and presence help your users find and communicate with one another efficiently and effectively. IM provides an instant messaging platform with conversation history, and supports public IM connectivity with users of public IM networks such as MSN/Windows Live, Yahoo!, and AOL. Presence establishes and displays a user’s personal availability and willingness to communicate through the use of common states such as Available or Busy. This rich presence information enables other users to immediately make effective communication choices.

    Conferencing: Lync Server includes support for IM conferencing, audio conferencing, web conferencing, video conferencing, and application sharing, for both scheduled and impromptu meetings. All these meeting types are supported with a single client. Lync Server also supports dial-in conferencing so that users of public switched telephone network (PSTN) phones can participate in the audio portion of conferences. Conferences can seamlessly change and grow in real time. For example, a single conference can start as just instant messages between a few users, and escalate to an audio conference with desktop sharing and a larger audience instantly, easily, and without interrupting the conversation flow.

    Enterprise Voice: Enterprise Voice is the Voice over Internet Protocol (VoIP) offering in Lync Server 2010. It delivers a voice option to enhance or replace traditional private branch exchange (PBX) systems. In addition to the complete telephony capabilities of an IP PBX, Enterprise Voice is integrated with rich presence, IM, collaboration, and meetings. Features such as call answer, hold, resume, transfer, forward and divert are supported directly, while personalized speed dialing keys are replaced by Contacts lists, and automatic intercom is replaced with IM. Enterprise Voice supports high availability through call admission control (CAC), branch office survivability, and extended options for data resiliency.

    Support for remote users: You can provide full Lync Server functionality for users who are currently outside your organization’s firewalls by deploying servers called Edge Servers to provide a connection for these remote users. These remote users can connect to conferences by using a personal computer with Lync 2010 installed, the phone, or a web interface. Deploying Edge Servers also enables you to federate with partner or vendor organizations. A federated relationship enables your users to put federated users on their Contacts lists, exchange presence information and instant messages with these users, and invite them to audio calls, video calls, and conferences.

    Integration with other products: Lync Server integrates with several other products to provide additional benefits to your users and administrators. Meeting tools are integrated into Outlook 2010 to enable organizers to schedule a meeting or start an impromptu conference with a single click and make it just as easy for attendees to join. Presence information is integrated into Outlook 2010 and SharePoint 2010. Exchange Unified Messaging (UM) provides several integration features. Users can see if they have new voice mail within Lync 2010. They can click a play button in the Outlook message to hear the audio voice mail, or view a transcription of the voice mail in the notification message.

    Simple deployment: To help you plan and deploy your servers and clients, Lync Server provides the Microsoft Lync Server 2010, Planning Tool and the Topology Builder. Lync Server 2010, Planning Tool is a wizard that interactively asks you a series of questions about your organization, the Lync Server features you want to enable, and your capacity planning needs. Then, it creates a recommended deployment topology based on your answers, and produces several forms of output to aid your planning and installation. Topology Builder is an installation component of Lync Server 2010. You use Topology Builder to create, adjust and publish your planned topology. It also validates your topology before you begin server installations. When you install Lync Server on individual servers, the installation program deploys the server as directed in the topology.

    Simple management: After you deploy Lync Server, it offers the following powerful and streamlined management tools:
    - Active Directory for its user information, which eliminates the need for separate user and policy databases.
    - Microsoft Lync Server 2010 Control Panel, a new web-based graphical user interface for administrators. With this web-based UI, Lync Server administrators can manage their systems from anywhere on the corporate network, without needing specialized management software installed on their computers.
    - Lync Server Management Shell command-line management tool, which is based on the Windows PowerShell command-line interface. It provides a rich command set for administration of all aspects of the product, and enables Lync Server administrators to automate repetitive tasks using a familiar tool.

    While the IM and presence features are automatically installed in every Lync Server deployment, you can choose whether to deploy conferencing, Enterprise Voice, and remote user access, to tailor your deployment to your organization’s needs.

    Read the article

  • SQL SERVER – Auditing and Profiling Database Made Easy with SQL Audit and Comply

    - by Pinal Dave
Do you like auditing your database, or can you think of about a million other things you’d rather do? Unfortunately, auditing is incredibly important. As with tax audits, it is important to audit databases to ensure they are following all the rules, but audits are also important for troubleshooting and security.

    There are several ways to audit SQL Server. There is manual auditing, which is going through your database “by hand,” and obviously takes a long time and is quite inefficient. SQL Server also provides programs to help you audit your systems. Different administrators will have different opinions about best practices and which tools to use, and each tool will be best suited to certain systems and certain users.

    Today, though, I would like to talk about Apex SQL Audit. It is an auditing tool that acts like “track changes” in a word processing document. It will log what has changed on the database, who made the changes, and what effects these changes have had (i.e. what objects were affected down the line). All this information is logged, and can be easily viewed or printed for easy access.

    One of the best features of Apex is that it is so customizable (and easy to use!). First, start Apex. Then you can connect to the database you would like to monitor. Once you select your database, you can select which table you want to audit. You can customize right down to the field you’d like to audit, and then select which types of actions you’d like tracked – insert, delete, or update. Repeat these steps for every database you want monitored.

    To create the logs, choose “Create triggers” in the menu. The script written here will be what logs each insert, delete, and update function. Press F5 to execute. All this tracking information will be stored in the AUDIT_LOG_DATA and AUDIT_LOG_TRANSACTIONS tables. View these tables using ApexSQL Audit reports.

    These transaction logs can be extremely detailed – especially on very busy servers, where every move is traced. Reading them can be overwhelming, to say the least. Apex has tried to make things easier for the average DBA, though. You can read these tracking logs in Apex, and it will display data and objects that affect your server – even things that were happening on your server before you installed Apex! To read these logs, open Apex, and connect to the database you want to audit. Go to the Transaction Logs tab, and add the logs you want to read. To narrow down what results you want to see, you can use the Filter tab to choose time, operation type, name, users, and more. Click Open, and you can see the results in a grid (as shown below). You can export these results to CSV, HTML, XML or SQL files and save them on the hard disk.

    One of the advantages is that since there are no triggers here, there are no other processes that will affect SQL Server performance. Using this method is also how to view history from your database that occurred before Apex was installed. This type of tracking does require storage space for the data sources, as the database must be fully running, and the transaction logs must exist (things not stored in the transaction logs will not be recoverable).

    Apex can also replace SQL Server Profiler and SQL Server Traces – which are much more complex and error-prone – with its ApexSQL Comply. It can do fault-tolerant auditing, centralized reporting, and “who saw what” information in an easy-to-use interface.
    The tracking settings can be altered by the user, or the default options will provide solutions to the most common auditing problems. To get started, open ApexSQL Comply and select Database Filter Settings to choose which database you’d like to audit. You can select which tracking you’d like in Operation Types – DML, DDL, queries executed, execute statements, and more. When you are ready, click Start Auditing. After this, every action will be stored in the central repository database (ApexSQLCrd). You can view the audit and create a report (or view the standard default report) using a wizard.

    You can see how easy it is to use ApexSQL Comply. You can easily set audits, including the type and time, and create customized reports. Remote users can easily access the reports through the user interface (available online, as well), and security concerns are all taken care of by the program.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Geocaching - World wide treasure hunt

I'm not quite sure how I came across this topic, but actually I find it absolutely interesting, challenging and most of all great fun for the family and friends. The interesting part is for sure that you can follow other people's treasures and their preferred locations where a cache might be hidden. Of course, it won't be easy to find a cache after all. Sometimes there are even 'mystery caches' which have either riddles, further instructions or little brain games for you in order to find the actual cache - that's the challenge. And last but not least, those caches are hidden outdoors. A great experience to explore nature, either on your own, with your family (especially with children), or as a treasure-hunting pack with a couple of friends.

    What is geocaching? It's a high-tech outdoor treasure hunting game that's a great way to explore the world with friends, family or on your own. Participants use GPS-enabled devices to locate hidden containers called geocaches. There are over one million geocaches hidden around the world today, waiting for you to find them. Visit Geocaching.com to search for geocaches near you. (Source: referral email of geocaching.com) Check out the Geocaching 101 guide for further details and information. They also provide a video channel on YouTube.

    Which equipment do I need? Any GPS-enabled device is sufficient to go on the hunt. I'm going to start our geocaching experience equipped with my Samsung Galaxy Tab. Additionally, I installed a geocaching.com client called c:geo that will hopefully assist me soon. Combined with a map app like Google Maps and a nice compass app, you should be fully equipped and ready to go. I guess that even a car navigation system is perfect for that task. Later on, with more experience and demand for technology (or precision), it might be interesting to opt in for a pure GPS device, like a Garmin or any other brand on the market.

    What is a geocache and what does it contain? In its simplest form, a cache always contains a logbook or logsheet for you to log your find. Larger caches may contain a logbook and any number of items. These items turn the adventure into a true treasure hunt. You never know what the cache owner or visitors to the cache may have left for you to enjoy. Remember, if you take something, leave something of equal or greater value in return. It is recommended that items in a cache be individually packaged in a clear, zipped plastic bag to protect them from the elements.

    Finding your first geocache: Well, first you have to be interested enough to pick up the challenge. Then you have to check out the Geocache directory on geocaching.com. They have recommendations for beginner's caches, but you are free to choose any. Actually, we have a Mystery Cache very close to our base, and I guess that we are going for that one on our first trip. Anyway, there is a very informative guide on the website which should answer all your questions about starting your new outdoor adventure. For sure, it's going to be rewarding.

    Team up with friends and family: Especially as a beginner, you might run into misunderstandings in handling the GPS coordinates, the compass, or the map, and even finding the container at the documented position isn't easy in the first place. Luckily, there are logbook reports online from other hunters, and most of the time there are even 'spoiler' images available. But also bear in mind that a geocache might have been removed or lost due to careless people or other reasons.
    Don't be disappointed if you can't find anything... the cache might simply not be there anymore. A general recommendation in this case would be to replace the missing container with a new one, and give feedback to the original owner about the state of that particular location. After all, it's about fun and active participation in a world-wide community.

    Geocaches in Mauritius? Yes, there are currently about 45 geocaches spread all over the island, and even a single one on Rodrigues - that's gonna be a tough one. Hopefully, we will see increasing numbers, as Geocaching.com allows, or better, even encourages you to hide new containers at your locations of choice. I think this is going to be real fun for us during the upcoming weeks and months. Especially when we are travelling to other countries and transfer so-called trackable items between geocaches.

    My first impression is that Geocaching.com is very mature, open and community-oriented. There are literally hundreds of thousands of geocache 'hunters' all over the world. And usually finding a container remote from your home is very rewarding. I'll keep you updated on these matters during the months to come...

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted.

    2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react.

    So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool.

    RESTORE DATABASE WidgetProduction_Virtual
    FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
    MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
    NORECOVERY, STATS=1, REPLACE
    GO

    RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly.

    For illustration, here is my 8 GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore.

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial.

    Technorati Tags: SQL Server
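    As an aside (my addition, not part of the original post): if your CI tool drives steps from code rather than a SQL batch, the same two RESTORE statements can be executed from a small C# helper using plain ADO.NET. The connection string, paths and database names below are just the placeholders from the example above.

    using System;
    using System.Data.SqlClient;

    class VirtualRestoreRunner
    {
        // Hypothetical connection string; point it at the SQL Server instance
        // where SQL Virtual Restore (HyperBac) is installed.
        const string ConnectionString = "Server=.;Integrated Security=true";

        // Same script as above; the .vmdf/.vldf extensions are what trigger the virtual restore.
        const string RestoreSql = @"
    RESTORE DATABASE WidgetProduction_Virtual
    FROM DISK = N'D:\VirtualDatabase\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
         MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
         NORECOVERY, STATS = 1, REPLACE;
    RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY;";

        static void Main()
        {
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(RestoreSql, connection))
            {
                command.CommandTimeout = 0; // restores can easily exceed the 30-second default
                connection.Open();
                command.ExecuteNonQuery(); // runs both RESTORE statements as a single batch
                Console.WriteLine("Virtual restore completed.");
            }
        }
    }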

    Read the article

  • Exitus Acta Probat: The Post-Processing Module

    - by Phil Factor
Sometimes, one has to make certain ethical compromises to ensure the success of a corporate IT project. Exitus Acta Probat (literally 'the result validates the deeds', meaning that the ends justify the means).

    It was a while back, whilst working as a Technical Architect for a well-known international company, that I was given the task of designing the architecture of a rather specialized accounting system. We'd tried an off-the-shelf (OTS) Windows-based solution which crashed with dispiriting regularity, and didn't quite do what the business required. After a great deal of research and planning, we commissioned a Unix-based system that used X terminals for the desktops of the participating staff. X terminals are now obsolete, but were then hot stuff; stripped-down Unix workstations that provided client GUIs for networked applications long before the days of AJAX, Flash, Air and DHTML.

    I've never known a project go so smoothly: I'd been initially rather nervous about going the Unix route, believing then that Unix programmers were excitable creatures who were prone to indulge in role-play enactments of elves and wizards at the weekend, but the programmers I met from the company that did the work seemed to be rather donnish, earnest people who quickly grasped our requirements and were faultlessly professional in their work. After thinking lofty thoughts for a while, there was considerable pummeling of keyboards by our suppliers, and a beautiful, robust application was delivered to us ahead of schedule.

    Soon, the department that had commissioned the work received shiny new X terminals to replace their rather depressing lavatory-beige PCs. I modestly hung around as the application was commissioned and deployed to the department in order to receive the plaudits. They didn't come. Something was very wrong with the project. I couldn't put my finger on the problem, and the users weren't doing any more than desperately and futilely searching the application to find a fault with it.

    Many times in my life, I've come up against a predicament like this: the roll-out of an application goes wrong and you are hearing nothing that helps you to discern the cause but nit-*** noise. There is a limit to the emotional heat you can pack into a complaint about text being in the wrong font, or an input form being slightly cramped, but they tried their best. The answer is, of course, one that every IT executive should have tattooed prominently where they can read it in emergencies: In Vino Veritas (literally, 'in wine the truth'; alcohol loosens the tongue. A Roman proverb).

    It was time to slap the wallet and get the department down the pub with the tab in my name. It was an eye-watering investment, but hedged with an over-confident IT director who relished my discomfort. To cut a long story short, the real reason gushed out with the third round. We had deprived them of their PCs, which had been good for very little from the pure business perspective, but had provided them with many hours of happiness playing computer-based minesweeper and solitaire. There is no more agreeable way of passing away the interminable hours of wage-slavery than minesweeper or solitaire, and the employees had applauded the munificence of their employer who had provided them with the means to play it. I had, unthinkingly, deprived them of it.

    I held an emergency meeting with our suppliers the following day. I came over big with the notion that it was in their interests to provide a solution.
    They played it cool, probably knowing that it was my head on the block, not theirs. In the end, they came up with a compromise: they would temporarily descend from their lofty, cerebral stamping grounds in order to write a server-based Minesweeper and Solitaire game for X terminals, and install it in a concealed place within the system. We'd have to pay for it, though. I groaned. How could we do that? "Could we call it a 'post-processing module'?" suggested their account executive. And so it came to pass.

    The application was a resounding success. Every now and then, the staff were able to indulge in some 'post-processing', with what turned out to be a very fine implementation of both minesweeper and solitaire. There were several refinements: a single click on a 'boss' button turned the games into what looked just like a financial spreadsheet. They even threw in a multi-user version of Battleships. The extra payment for the post-processing module went through the change-control process without anyone noticing anything untoward, and peace once more descended.

    Only one thing niggles. Those games were good. Do they still survive, somewhere in a Linux library? If so, I'd like to claim a small part in their production.

    Read the article

  • Slide Creation Checklist

    - by Daniel Moth
PowerPoint is a great tool for conference (large audience) presentations, which is the context for the advice below. The #1 thing to keep in mind when you create slides (at least for conference sessions) is that they are there to help you remember what you were going to say (the flow and key messages) and for the audience to get a visual reminder of the key points. Slides are not there for the audience to read what you are going to say anyway. If they were, what is the point of you being there? Slides are not holders for complete sentences (unless you are quoting) – use Microsoft Word for that purpose, either as a physical handout or as a URL link that you share with the audience. When you dry run your presentation, if you find yourself reading the bullets on your slide, you have missed the point. You have a message to deliver that can be done regardless of your slides – remember that. The focus of your audience should be on you, not the screen.

    Based on that premise, I have created a checklist that I go over before I start a new deck and also once I think my slides are ready.

    1. Turn AutoFit OFF. I cannot stress this enough.
    2. For each slide, explicitly pick a slide layout. In my presentations, I only use one Title Slide, Section Header per demo slide, and for the rest of my slides one of the three: Title and Content, Title Only, Blank. Most people that are newbies to PowerPoint get whatever default layout the New Slide creates for them and then start deleting and adding placeholders to that. You can do better than that (and you'll be glad you did if you also follow item #11 below).
    3. Every slide must have an image.
    4. Remove all punctuation (e.g. periods, commas) other than exclamation points and question marks (! ?).
    5. Don't use color or other formatting (e.g. italics, bold) for text on the slide.
    6. Check your animations. Avoid animations that hide elements that were on the slide (instead use a new slide and transition). Ensure that animations that bring new elements in, bring them into white space instead of over other existing elements. A good test is to print the slide and see that it still makes sense even without the animation.
    7. Print the deck in black and white choosing the "6 slides per page" option. Can I still read each slide without losing any information? If the answer is "no", go back and fix the slides so the answer becomes "yes".
    8. Don't have more than 3 bullet levels/indents. In other words: you type some text on the slide, hit 'Enter', hit 'Tab', type some more text, and repeat that sequence at most one final time. Ideally your outer bullets have only one level of sub-bullets (i.e. one level of indentation beneath them).
    9. Don't have more than 3-5 outer bullets per slide. Space them evenly vertically, e.g. with blank lines in between.
    10. Don't wrap. For each bullet on all slides check: does the text for that bullet wrap to a second line? If it does, change the wording so it doesn't. Or create a terser bullet and make the original long text a sub-bullet of that one (thus decreasing the font size, but still being consistent) and have no wrapping.
    11. Use the same consistent fonts (i.e. Font Face, Font Size etc.) throughout the deck for each level of bullet. In other words, don't deviate from the PowerPoint template you chose (or that was chosen for you).
    12. Go on each slide and hit 'Reset'. 'Reset' is a button on the 'Home' tab of the ribbon, or you can find the 'Reset Slide' menu when you right click on a slide on the left 'Slides' list.
    If your slides can survive doing that without you "fixing" things after the Reset action, you are golden!
    13. For each slide ask yourself: if I had to replace this slide with a single sentence that conveys the key message, what would that sentence be? This exercise leads you to merge slides (where the key message is split) or split a slide into many, if there were too many key messages on the slide in the first place. It can also lead you to redesign a slide so the text on it really is just explanation or evidence for the key message you are trying to convey.
    14. Get the length right. Is the length of this deck suitable for the time you have been given to present? If not, cut content! It is far better to deliver less in a relaxed, polished, engaging, memorable way than to deliver more content in great haste. As a rule of thumb, multiply 2 minutes by the number of slides you have, add the time you need for each demo, and check if that adds up to more than the time you have been allotted. If it does, start cutting content – we've all been there and it has to be done.

    As always, rules and guidelines are there to be bent and even broken sometimes. Start with the above and on a slide-by-slide basis decide which rules you want to bend. That is smarter than throwing all the rules out from the start, right?

    Comments about this post welcome at the original blog.

    Read the article

  • Event Logging in LINQ C# .NET

The first thing you'll want to do before using this code is to create a table in your database called TableHistory:

    CREATE TABLE [dbo].[TableHistory] (
        [TableHistoryID] [int] IDENTITY NOT NULL ,
        [TableName] [varchar] (50) NOT NULL ,
        [Key1] [varchar] (50) NOT NULL ,
        [Key2] [varchar] (50) NULL ,
        [Key3] [varchar] (50) NULL ,
        [Key4] [varchar] (50) NULL ,
        [Key5] [varchar] (50) NULL ,
        [Key6] [varchar] (50) NULL ,
        [ActionType] [varchar] (50) NULL ,
        [Property] [varchar] (50) NULL ,
        [OldValue] [varchar] (8000) NULL ,
        [NewValue] [varchar] (8000) NULL ,
        [ActionUserName] [varchar] (50) NOT NULL ,
        [ActionDateTime] [datetime] NOT NULL
    )

    Once you have created the table, you'll need to add it to your custom LINQ class (which I will refer to as DboDataContext), thus creating the TableHistory class. Then, you'll need to add the History.cs file to your project. You'll also want to add the following code to your project to get the system date:

    public partial class DboDataContext
    {
        [Function(Name = "GetDate", IsComposable = true)]
        public DateTime GetSystemDate()
        {
            MethodInfo mi = MethodBase.GetCurrentMethod() as MethodInfo;
            return (DateTime)this.ExecuteMethodCall(this, mi, new object[] { }).ReturnValue;
        }
    }

    private static Dictionary<Type, Delegate> _cachedIL = new Dictionary<Type, Delegate>();

    public static T CloneObjectWithIL<T>(T myObject)
    {
        Delegate myExec = null;
        if (!_cachedIL.TryGetValue(typeof(T), out myExec))
        {
            // Create ILGenerator
            DynamicMethod dymMethod = new DynamicMethod("DoClone", typeof(T), new Type[] { typeof(T) }, true);
            ConstructorInfo cInfo = myObject.GetType().GetConstructor(new Type[] { });
            ILGenerator generator = dymMethod.GetILGenerator();
            LocalBuilder lbf = generator.DeclareLocal(typeof(T));
            //lbf.SetLocalSymInfo("_temp");

            generator.Emit(OpCodes.Newobj, cInfo);
            generator.Emit(OpCodes.Stloc_0);
            foreach (FieldInfo field in myObject.GetType().GetFields(
                System.Reflection.BindingFlags.Instance |
                System.Reflection.BindingFlags.Public |
                System.Reflection.BindingFlags.NonPublic))
            {
                // Load the new object on the eval stack... (currently 1 item on eval stack)
                generator.Emit(OpCodes.Ldloc_0);
                // Load initial object (parameter) (currently 2 items on eval stack)
                generator.Emit(OpCodes.Ldarg_0);
                // Replace value by field value (still currently 2 items on eval stack)
                generator.Emit(OpCodes.Ldfld, field);
                // Store the value of the top on the eval stack into
                // the object underneath that value on the value stack.
                // (0 items on eval stack)
                generator.Emit(OpCodes.Stfld, field);
            }
            // Load new constructed obj on eval stack -> 1 item on stack
            generator.Emit(OpCodes.Ldloc_0);
            // Return constructed object. --> 0 items on stack
            generator.Emit(OpCodes.Ret);

            myExec = dymMethod.CreateDelegate(typeof(Func<T, T>));
            _cachedIL.Add(typeof(T), myExec);
        }
        return ((Func<T, T>)myExec)(myObject);
    }

    I got both of the above methods off of the net somewhere (maybe even from CodeProject), but it's been long enough that I can't recall where I got them.

    Explanation of the History Class

    The History class records changes by creating a TableHistory record, inserting the values for the primary key for the table being modified into the Key1, Key2, ..., Key6 columns (if you have more than 6 values that make up a primary key on any table, you'll want to modify this), setting the type of change being made in the ActionType column (INSERT, UPDATE, or DELETE), old value and new value if it happens to be an update action, and the date and Windows identity of the user who made the change.

    Let's examine what happens when a call is made to the RecordLinqInsert method:

    public static void RecordLinqInsert(DboDataContext dbo, IIdentity user, object obj)
    {
        TableHistory hist = NewHistoryRecord(obj);
        hist.ActionType = "INSERT";
        hist.ActionUserName = user.Name;
        hist.ActionDateTime = dbo.GetSystemDate();
        dbo.TableHistories.InsertOnSubmit(hist);
    }

    private static TableHistory NewHistoryRecord(object obj)
    {
        TableHistory hist = new TableHistory();
        Type type = obj.GetType();
        PropertyInfo[] keys;
        if (historyRecordExceptions.ContainsKey(type))
        {
            keys = historyRecordExceptions[type].ToArray();
        }
        else
        {
            keys = type.GetProperties().Where(o => AttrIsPrimaryKey(o)).ToArray();
        }
        if (keys.Length > KeyMax)
            throw new HistoryException("object has more than " + KeyMax.ToString() + " keys.");
        for (int i = 1; i <= keys.Length; i++)
        {
            typeof(TableHistory)
                .GetProperty("Key" + i.ToString())
                .SetValue(hist, keys[i - 1].GetValue(obj, null).ToString(), null);
        }
        hist.TableName = type.Name;
        return hist;
    }

    protected static bool AttrIsPrimaryKey(PropertyInfo pi)
    {
        var attrs = from attr in pi.GetCustomAttributes(typeof(ColumnAttribute), true)
                    where ((ColumnAttribute)attr).IsPrimaryKey
                    select attr;
        if (attrs != null && attrs.Count() > 0)
            return true;
        else
            return false;
    }

    RecordLinqInsert takes as input a data context which it will use to write to the database, the user, and the LINQ object to be recorded (a single object, for instance, a Customer or Order object if you're using AdventureWorks). It then calls the NewHistoryRecord method, which uses LINQ to Objects in conjunction with the AttrIsPrimaryKey method to pull all the primary key properties, set the Key1-KeyN properties of the TableHistory object, and return the new TableHistory object.

    The code would be called in an application, like so:
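    The usage snippet that followed in the original post is cut off above. Purely as an illustration of how the pieces fit together (my sketch, not the author's code), a call from application code might look something like this, assuming an AdventureWorks-style Customer table mapped in DboDataContext and the methods above living in a static History class:

    // Requires: using System.Security.Principal; (for WindowsIdentity)
    using (var dbo = new DboDataContext())
    {
        // Hypothetical entity; substitute any LINQ to SQL table class from your data context.
        var customer = new Customer { FirstName = "Jane", LastName = "Doe" };
        dbo.Customers.InsertOnSubmit(customer);
        dbo.SubmitChanges(); // assigns the identity primary key that will be copied into Key1

        // Log the insert; NewHistoryRecord copies the primary key values into Key1..KeyN.
        History.RecordLinqInsert(dbo, WindowsIdentity.GetCurrent(), customer);
        dbo.SubmitChanges(); // writes the TableHistory row
    }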

    Read the article

  • Playing with aspx page cycle using JustMock

    - by mehfuzh
In this post, I will cover test code that mocks the various elements needed to complete an HTTP page request and asserts the expected page cycle steps. To begin, I have a simple enumeration that holds my predefined page steps:

    public enum PageStep
    {
        PreInit,
        Load,
        PreRender,
        UnLoad
    }

    Once that is done, I first created the page object [not mocking]:

    Page page = new Page();

    Here, our target is to fire up the page processing through the ProcessRequest call. If we take a look inside the method with reflector.net, the call trace goes like this: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> and finally ProcessRequest. Inside SetIntrinsics, it requires calls from HttpRequest, HttpResponse and HttpBrowserCapabilities. Using this clue at hand, we can easily tell which classes/calls we need to mock in order to get through the expected call. Accordingly, for HttpBrowserCapabilities our required mock code will look like:

    var browser = Mock.Create<HttpBrowserCapabilities>();
    // Arrange
    Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html");
    Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8");
    Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8");

    Now, HttpBrowserCapabilities is obtained through [instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock:

    var request = Mock.Create<HttpRequest>();

    Then, we add the required get call:

    Mock.Arrange(() => request.Browser).Returns(browser);

    As [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also set on the request object, and to make sure they are set properly, we can add the following lines as well [not required though]:

    bool requestContentEncodingSet = false;
    Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => requestContentEncodingSet = true);

    Similarly, for the response we can write:

    var response = Mock.Create<HttpResponse>();

    bool responseContentEncodingSet = false;
    Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true);

    Finally, I created a mock of HttpContext and set up the Request and Response properties to return the mocked versions:
    var context = Mock.Create<HttpContext>();

    Mock.Arrange(() => context.Request).Returns(request);
    Mock.Arrange(() => context.Response).Returns(response);

    As Page internally calls the RenderControl method, we just need to replace that with our own, and optionally we can check that it is invoked properly:

    bool rendered = false;
    Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true);

    That’s it. The rest of the code is simple, where I assert the page cycle with the PageStep values that I defined earlier:

    var pageSteps = new Queue<PageStep>();

    page.PreInit += delegate { pageSteps.Enqueue(PageStep.PreInit); };
    page.Load += delegate { pageSteps.Enqueue(PageStep.Load); };
    page.PreRender += delegate { pageSteps.Enqueue(PageStep.PreRender); };
    page.Unload += delegate { pageSteps.Enqueue(PageStep.UnLoad); };

    page.ProcessRequest(context);

    Assert.True(requestContentEncodingSet);
    Assert.True(responseContentEncodingSet);
    Assert.True(rendered);

    Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit);
    Assert.Equal(pageSteps.Dequeue(), PageStep.Load);
    Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender);
    Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad);

    Mock.Assert(request);
    Mock.Assert(response);

    You can get the test class shown in this post here to try it yourself with, of course, JustMock :-). Enjoy!!

    Read the article

  • [Windows 8] Application bar popup button

    - by Benjamin Roux
Here is a small control to create an application bar button which will display content in a popup when the button is clicked. Visually it gives this:

    So how to create this? First you have to use the AppBarPopupButton control below.

    namespace Indeed.Controls
    {
        public class AppBarPopupButton : Button
        {
            public FrameworkElement PopupContent
            {
                get { return (FrameworkElement)GetValue(PopupContentProperty); }
                set { SetValue(PopupContentProperty, value); }
            }

            public static readonly DependencyProperty PopupContentProperty =
                DependencyProperty.Register("PopupContent", typeof(FrameworkElement), typeof(AppBarPopupButton),
                    new PropertyMetadata(null, (o, e) => (o as AppBarPopupButton).CreatePopup()));

            private Popup popup;
            private SerialDisposable sizeChanged = new SerialDisposable();

            protected override void OnTapped(Windows.UI.Xaml.Input.TappedRoutedEventArgs e)
            {
                base.OnTapped(e);

                if (popup != null)
                {
                    var transform = this.TransformToVisual(Window.Current.Content);
                    var offset = transform.TransformPoint(default(Point));
                    sizeChanged.Disposable = PopupContent.ObserveSizeChanged()
                        .Do(_ => popup.VerticalOffset = offset.Y - (PopupContent.ActualHeight + 20))
                        .Subscribe();
                    popup.HorizontalOffset = offset.X + 24;
                    popup.DataContext = this.DataContext;
                    popup.IsOpen = true;
                }
            }

            private void CreatePopup()
            {
                popup = new Popup { IsLightDismissEnabled = true };
                popup.Closed += (o, e) => this.GetParentOfType<AppBar>().IsOpen = false;
                popup.ChildTransitions = new Windows.UI.Xaml.Media.Animation.TransitionCollection();
                popup.ChildTransitions.Add(new Windows.UI.Xaml.Media.Animation.PopupThemeTransition());

                var container = new Grid();
                container.Children.Add(PopupContent);
                popup.Child = container;
            }
        }
    }

    The ObserveSizeChanged method is just an extension method which observes the SizeChanged event (using Reactive Extensions - the Rx-Metro package in NuGet). If you’re not familiar with Rx, you can replace this line (and the SerialDisposable stuff) with a simple subscription to the SizeChanged event (using +=), but don’t forget to unsubscribe from it!
    public static IObservable<Unit> ObserveSizeChanged(this FrameworkElement element)
    {
        return Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>(
                o => element.SizeChanged += o,
                o => element.SizeChanged -= o)
            .Select(_ => Unit.Default);
    }

    The GetParentOfType extension method just retrieves the first parent of the given type (it’s a common extension method that every Windows 8 developer should have created!). You can of course tweak the control (for example if you want to center the content relative to the button or anything else) to fit your needs.

    How to use this control? It’s very simple: in an AppBar control just add it and define the PopupContent property.

    <ic:AppBarPopupButton Style="{StaticResource RefreshAppBarButtonStyle}" HorizontalAlignment="Left">
        <ic:AppBarPopupButton.PopupContent>
            <Grid>
                [...]
            </Grid>
        </ic:AppBarPopupButton.PopupContent>
    </ic:AppBarPopupButton>

    When the button is clicked, the popup is displayed. When the popup is closed, the app bar is closed too. I hope this will help you!
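    The GetParentOfType helper itself is not shown in the post. A minimal sketch of what such an extension method might look like (my assumption, not the author's implementation) is a simple visual-tree walk:

    // Assumed helper, not from the original post.
    // Requires: using Windows.UI.Xaml; using Windows.UI.Xaml.Media;
    public static T GetParentOfType<T>(this DependencyObject element) where T : DependencyObject
    {
        // Walk up the visual tree until a parent of the requested type is found.
        DependencyObject parent = VisualTreeHelper.GetParent(element);
        while (parent != null && !(parent is T))
        {
            parent = VisualTreeHelper.GetParent(parent);
        }
        return parent as T;
    }

    With an implementation along those lines, popup.Closed can locate the containing AppBar exactly as the control above expects.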

    Read the article

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad...

    Monday, April 19

    9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service
    Location: Keynote Hall
    I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian, which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's sessions...don't miss it.

    10:45 a.m. - Oracle Fusion Applications: Functional Overview
    Location: South Seas F
    I met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusion Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (on Tuesday, April 20, 8:00 a.m. in room 3662), should give attendees the update they need about Oracle's next-generation applications.

    1:15 p.m. - E-Business Suite in the Amazon Cloud
    Location: South Seas H
    I did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation.

    2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance
    Location: South Seas H
    I'll be sitting tight in South Seas H for the next session on Monday, where Doug Pepka, a ten-year veteran of communications giant Comcast, will be walking attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations.

    Tuesday

    8:00 a.m. - Information Architecture for Men in Kilts
    Location: SURF A
    Getting to an 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise.
    Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how information architecture is critical to an enterprise 2.0 implementation.

    10:30 a.m. - Oracle Virtualization: From Desktop to Data Center
    Location: REEF F
    Data center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject. So I'm looking forward to Monica Kumar's presentation so I can get up to speed.

    Wednesday

    8:00 a.m. - The Art of the Steal
    Location: Mandalay Bay Ballroom J
    Many will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me if You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday.

    1:00 p.m. - Google Wave: Will it replace e-mail as we know it today?
    Location: SURF E
    By many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spread out and choose their own productivity tools.

    Read the article

  • Unable to sign in. How to debug?

    - by Dmitriy Budnik
    I had to reboot system with reset button. After reboot I can't sign in. When I enter my password It seems like X-server just restarts. I can sing in as guest and also I can sign in in text TTY. Here is first 150 lines of my lightdm.log: [+0.04s] DEBUG: Logging to /var/log/lightdm/lightdm.log [+0.04s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=1070 [+0.04s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf [+0.04s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.04s] DEBUG: Registered seat module xlocal [+0.04s] DEBUG: Registered seat module xremote [+0.04s] DEBUG: Adding default seat [+0.04s] DEBUG: Starting seat [+0.04s] DEBUG: Starting new display for automatic login as user dmytro [+0.04s] DEBUG: Starting local X display [+3.64s] DEBUG: X server :0 will replace Plymouth [+3.66s] DEBUG: Using VT 7 [+3.66s] DEBUG: Activating VT 7 [+3.66s] DEBUG: Logging to /var/log/lightdm/x-0.log [+3.66s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+3.66s] DEBUG: Launching X Server [+3.66s] DEBUG: Launching process 1154: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none [+3.66s] DEBUG: Waiting for ready signal from X server :0 [+3.66s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+3.66s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 [+10.78s] DEBUG: Got signal 10 from process 1154 [+10.78s] DEBUG: Got signal from X server :0 [+10.78s] DEBUG: Stopping Plymouth, X server is ready [+10.80s] DEBUG: Connecting to XServer :0 [+10.80s] DEBUG: Automatically logging in user dmytro [+10.80s] DEBUG: Started session 1303 with service 'lightdm-autologin', username 'dmytro' [+13.22s] DEBUG: Session 1303 authentication complete with return value 0: Success [+13.26s] DEBUG: Autologin user dmytro authorized [+13.27s] DEBUG: Autologin using session ubuntu [+14.44s] DEBUG: Dropping privileges to uid 1000 [+14.48s] DEBUG: Restoring privileges [+14.49s] DEBUG: Dropping privileges to uid 1000 [+14.49s] DEBUG: Writing /home/dmytro/.dmrc [+14.61s] DEBUG: Restoring privileges [+14.81s] DEBUG: Starting session ubuntu as user dmytro [+14.81s] DEBUG: Session 1303 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu [+15.76s] DEBUG: New display ready, switching to it [+15.76s] DEBUG: Activating VT 7 [+15.76s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0 [+16.63s] DEBUG: Session 1303 exited with return value 0 [+16.63s] DEBUG: User session quit [+16.63s] DEBUG: Stopping display [+16.63s] DEBUG: Sending signal 15 to process 1154 [+17.19s] DEBUG: Process 1154 exited with return value 0 [+17.19s] DEBUG: X server stopped [+17.19s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+17.19s] DEBUG: Releasing VT 7 [+17.19s] DEBUG: Display server stopped [+17.19s] DEBUG: Display stopped [+17.19s] DEBUG: Active display stopped, switching to greeter [+17.19s] DEBUG: Switching to greeter [+17.19s] DEBUG: Starting new display for greeter [+17.19s] DEBUG: Starting local X display [+17.19s] DEBUG: Using VT 7 [+17.19s] DEBUG: Logging to /var/log/lightdm/x-0.log [+17.19s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+17.19s] DEBUG: Launching X Server [+17.19s] DEBUG: Launching process 1563: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+17.19s] DEBUG: Waiting for ready signal from X server :0 [+17.48s] DEBUG: Got signal 10 from process 1563 [+17.48s] DEBUG: Got signal from X server :0 [+17.48s] 
DEBUG: Connecting to XServer :0 [+17.48s] DEBUG: Starting greeter [+17.48s] DEBUG: Started session 1575 with service 'lightdm', username 'lightdm' [+17.61s] DEBUG: Session 1575 authentication complete with return value 0: Success [+17.61s] DEBUG: Greeter authorized [+17.61s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+17.68s] DEBUG: Session 1575 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter [+20.86s] DEBUG: Greeter connected version=1.2.1 [+20.86s] DEBUG: Greeter connected, display is ready [+20.86s] DEBUG: New display ready, switching to it [+20.86s] DEBUG: Activating VT 7 [+20.86s] DEBUG: Stopping greeter display being switched from [+24.90s] DEBUG: Greeter start authentication for dmytro [+24.90s] DEBUG: Started session 1746 with service 'lightdm', username 'dmytro' [+25.10s] DEBUG: Session 1746 got 1 message(s) from PAM [+25.10s] DEBUG: Prompt greeter with 1 message(s) [+31.87s] DEBUG: Continue authentication [+33.75s] DEBUG: Session 1746 authentication complete with return value 7: Authentication failure [+33.75s] DEBUG: Authenticate result for user dmytro: Authentication failure [+33.75s] DEBUG: Greeter start authentication for dmytro [+33.75s] DEBUG: Session 1746: Sending SIGTERM [+33.75s] DEBUG: Started session 2264 with service 'lightdm', username 'dmytro' [+33.75s] DEBUG: Session 2264 got 1 message(s) from PAM [+33.75s] DEBUG: Prompt greeter with 1 message(s) [+36.41s] DEBUG: Continue authentication [+36.53s] DEBUG: Session 2264 authentication complete with return value 0: Success [+36.53s] DEBUG: Authenticate result for user dmytro: Success [+36.54s] DEBUG: User dmytro authorized [+36.54s] DEBUG: Greeter requests session ubuntu [+36.54s] DEBUG: Using session ubuntu [+36.54s] DEBUG: Stopping greeter [+36.54s] DEBUG: Session 1575: Sending SIGTERM [+37.41s] DEBUG: Greeter closed communication channel [+37.41s] DEBUG: Session 1575 exited with return value 0 [+37.41s] DEBUG: Greeter quit [+37.42s] DEBUG: Dropping privileges to uid 1000 [+37.42s] DEBUG: Restoring privileges [+37.43s] DEBUG: Dropping privileges to uid 1000 [+37.43s] DEBUG: Writing /home/dmytro/.dmrc [+38.35s] DEBUG: Restoring privileges [+40.37s] DEBUG: Starting session ubuntu as user dmytro [+40.37s] DEBUG: Session 2264 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu [+40.39s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session1 [+50.78s] DEBUG: Session 2264 exited with return value 0 [+50.78s] DEBUG: User session quit [+50.78s] DEBUG: Stopping display [+50.78s] DEBUG: Sending signal 15 to process 1563 [+51.53s] DEBUG: Process 1563 exited with return value 0 [+51.53s] DEBUG: X server stopped [+51.53s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+51.53s] DEBUG: Releasing VT 7 [+51.53s] DEBUG: Display server stopped [+51.53s] DEBUG: Display stopped [+51.53s] DEBUG: Active display stopped, switching to greeter [+51.53s] DEBUG: Switching to greeter [+51.53s] DEBUG: Starting new display for greeter [+51.53s] DEBUG: Starting local X display [+51.53s] DEBUG: Using VT 7 [+51.53s] DEBUG: Logging to /var/log/lightdm/x-0.log [+51.53s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+51.53s] DEBUG: Launching X Server [+51.53s] DEBUG: Launching process 2894: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+51.53s] DEBUG: Waiting for ready signal from X server :0 [+51.75s] DEBUG: Got signal 10 from process 2894 [+51.75s] DEBUG: Got signal from X server :0 [+51.75s] DEBUG: 
Connecting to XServer :0 [+51.75s] DEBUG: Starting greeter [+51.75s] DEBUG: Started session 2898 with service 'lightdm', username 'lightdm' [+51.76s] DEBUG: Session 2898 authentication complete with return value 0: Success [+51.76s] DEBUG: Greeter authorized [+51.76s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+51.76s] DEBUG: Session 2898 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter [+53.26s] DEBUG: Greeter connected version=1.2.1 [+53.26s] DEBUG: Greeter connected, display is ready [+53.26s] DEBUG: New display ready, switching to it [+53.26s] DEBUG: Activating VT 7 [+53.26s] DEBUG: Stopping greeter display being switched from [+54.17s] DEBUG: Greeter start authentication for dmytro [+54.17s] DEBUG: Started session 3152 with service 'lightdm', username 'dmytro' [+54.18s] DEBUG: Session 3152 got 1 message(s) from PAM [+54.18s] DEBUG: Prompt greeter with 1 message(s) [+58.61s] DEBUG: Continue authentication [+58.65s] DEBUG: Session 3152 authentication complete with return value 0: Success [+58.65s] DEBUG: Authenticate result for user dmytro: Success [+58.66s] DEBUG: User dmytro authorized [+58.66s] DEBUG: Greeter requests session ubuntu [+58.66s] DEBUG: Using session ubuntu [+58.66s] DEBUG: Stopping greeter [+58.66s] DEBUG: Session 2898: Sending SIGTERM How can I fix it? What other .log files could possibly give me a clue? Update: Possibly it's duplicate of Desktop login fails, terminal works
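    The log shows the user session exiting immediately after a successful authentication (Session 2264 exited with return value 0), which on Ubuntu usually points at the session startup rather than at LightDM itself. As a hedged diagnostic sketch (these are the standard Ubuntu/LightDM paths; the chown line is only needed if the ownership check actually shows root), the following commands run from the text TTY often reveal the culprit:

    # Per-user session log usually contains the real error message
    tail -n 50 ~/.xsession-errors

    # X server and greeter logs referenced by lightdm.log
    tail -n 50 /var/log/Xorg.0.log
    sudo tail -n 50 /var/log/lightdm/x-0-greeter.log

    # A root-owned ~/.Xauthority or ~/.ICEauthority makes the session quit right after login
    ls -l ~/.Xauthority ~/.ICEauthority
    # sudo chown dmytro:dmytro ~/.Xauthority ~/.ICEauthority   # only if either is owned by root

    # A full / or /home partition produces the same login loop
    df -h / /home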

    Read the article

  • An issue with tessellating a model with DirectX11

    - by Paul Ske
    I took the hardware tessellation tutorial from Rastertek and implemended texturing instead of color. This is great, so I wanted to implemended the same techique to a model inside my game editor and I noticed it doesn't draw anything. I compared the detailed tessellation from DirectX SDK sample. Inside the shader file - if I replace the HullInputType with PixelInputType it draws. So, I think because when I compiled the shaders inside the program it compiles VertexShader, PixelShader, HullShader then DomainShader. Isn't it suppose to be VertexShader, HullSHader, DomainShader then PixelShader or does it really not matter? I am just curious why wouldn't the model even be drawn when HullInputType but renders fine with PixelInputType. Shader Code: [code] cbuffer ConstantBuffer { float4x4 WVP; float4x4 World; // the rotation matrix float3 lightvec; // the light's vector float4 lightcol; // the light's color float4 ambientcol; // the ambient light's color bool isSelected; } cbuffer cameraBuffer { float3 cameraDirection; float padding; } cbuffer TessellationBuffer { float tessellationAmount; float3 padding2; } struct ConstantOutputType { float edges[3] : SV_TessFactor; float inside : SV_InsideTessFactor; }; Texture2D Texture; Texture2D NormalTexture; SamplerState ss { MinLOD = 5.0f; MipLODBias = 0.0f; }; struct HullOutputType { float3 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct HullInputType { float4 position : POSITION; float2 texcoord : TEXCOORD0; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct VertexInputType { float4 position : POSITION; float2 texcoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; uint uVertexID : SV_VERTEXID; }; struct PixelInputType { float4 position : SV_POSITION; float2 texcoord : TEXCOORD0; // texture coordinates float3 normal : NORMAL; float3 tangent : TANGENT; float4 color : COLOR; float3 viewDirection : TEXCOORD1; float4 depthBuffer : TEXTURE0; }; HullInputType VShader(VertexInputType input) { HullInputType output; output.position.w = 1.0f; output.position = mul(input.position,WVP); output.texcoord = input.texcoord; output.normal = input.normal; output.tangent = input.tangent; //output.normal = mul(normal,World); //output.tangent = mul(tangent,World); //output.color = output.color; //output.texcoord = texcoord; // set the texture coordinates, unmodified return output; } ConstantOutputType TexturePatchConstantFunction(InputPatch inputPatch,uint patchID : SV_PrimitiveID) { ConstantOutputType output; output.edges[0] = tessellationAmount; output.edges[1] = tessellationAmount; output.edges[2] = tessellationAmount; output.inside = tessellationAmount; return output; } [domain("tri")] [partitioning("integer")] [outputtopology("triangle_cw")] [outputcontrolpoints(3)] [patchconstantfunc("TexturePatchConstantFunction")] HullOutputType HShader(InputPatch patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID) { HullOutputType output; // Set the position for this control point as the output position. output.position = patch[pointId].position; // Set the input color as the output color. 
output.texcoord = patch[pointId].texcoord; output.normal = patch[pointId].normal; output.tangent = patch[pointId].tangent; return output; } [domain("tri")] PixelInputType DShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch patch) { float3 vertexPosition; float2 uvPosition; float4 worldposition; PixelInputType output; // Interpolate world space position with barycentric coordinates float3 vWorldPos = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; // Determine the position of the new vertex. vertexPosition = vWorldPos; // Calculate the position of the new vertex against the world, view, and projection matrices. output.position = mul(float4(vertexPosition, 1.0f),WVP); // Send the input color into the pixel shader. output.texcoord = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.normal = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; output.tangent = uvwCoord.x * patch[0].position + uvwCoord.y * patch[1].position + uvwCoord.z * patch[2].position; //output.depthBuffer = output.position; //output.depthBuffer.w = 1.0f; //worldposition = mul(output.position,WVP); //output.viewDirection = cameraDirection.xyz - worldposition.xyz; //output.viewDirection = normalize(output.viewDirection); return output; } [/code] Somethings are commented out but will be in place when fixed. I'm probably not connecting something correctly.
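    For what it's worth, the order in which the shaders are compiled or created does not matter to Direct3D 11; what does matter is that all four stages are bound and that the draw call uses a control-point patch-list topology, otherwise the hull and domain stages never run and nothing appears. The sketch below is not the poster's code, just a minimal illustration of that setup (the device context and shader objects are assumed to already exist; an indexed mesh would use DrawIndexed instead of Draw):

    #include <d3d11.h>

    // Bind a full tessellation pipeline and draw. The HLSL entry points correspond to
    // VShader (outputs HullInputType), HShader, DShader (outputs PixelInputType) and the pixel shader.
    void DrawTessellated(ID3D11DeviceContext* ctx,
                         ID3D11VertexShader* vs, ID3D11HullShader* hs,
                         ID3D11DomainShader* ds, ID3D11PixelShader* ps,
                         UINT vertexCount)
    {
        // Tessellation requires patch-list topology; a plain triangle list bypasses the HS/DS stages.
        ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);

        ctx->VSSetShader(vs, nullptr, 0);
        ctx->HSSetShader(hs, nullptr, 0);
        ctx->DSSetShader(ds, nullptr, 0);
        ctx->PSSetShader(ps, nullptr, 0);

        ctx->Draw(vertexCount, 0);

        // When drawing non-tessellated geometry afterwards, unbind the extra stages again.
        ctx->HSSetShader(nullptr, nullptr, 0);
        ctx->DSSetShader(nullptr, nullptr, 0);
        ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    }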

    Read the article

  • Creating a Reverse Proxy with URL Rewrite for IIS

    - by OWScott
    There are times when you need to reverse proxy through a server. The most common example is when you have an internal web server that isn’t exposed to the internet, and you have a public web server accessible to the internet. If you want to serve up traffic from the internal web server, you can do this through the public web server by creating a tunnel (aka reverse proxy). Essentially, you can front the internal web server with a friendly URL, even hiding custom ports. For example, consider an internal web server with a URL of http://10.10.0.50:8111. You can make that available through a public URL like http://tools.mysite.com/ as seen in the following image. The URL can be made public or it can be used for your internal staff and have it password protected and/or locked down by IP address. This is easy to do with URL Rewrite and IIS. You will also need Application Request Routing (ARR) installed even though for a simple reverse proxy you won’t use most of ARR’s functionality. If you don’t already have URL Rewrite and ARR installed you can do so easily with the Web Platform Installer. A lot can be said about reverse proxies and many different situations and ways to route the traffic and handle different URL patterns. However, my goal here is to get you up and going in the easiest way possible. Then you can dig in deeper after you get the base configuration in place. URL Rewrite makes a reverse proxy very easy to set up. Note that the URL Rewrite Add Rules template doesn’t include Reverse Proxy at the server level. That’s not to say that you can’t create a server-level reverse proxy, but the URL Rewrite rules template doesn’t help you with that. Getting Started First you must create a website on your public web server that has the public bindings that you need. Alternately, you can use an existing site and route using conditions for certain traffic. After you’ve created your site then open up URL Rewrite at the site level. Using the “Add Rule(s)…” template that is opened from the right-hand actions pane, create a new Reverse Proxy rule. If you receive a prompt (the first time) that the proxy functionality needs to be enabled, select OK. This is telling you that a proxy can route traffic outside of your web server, which happens to be our goal in this case. Be aware that reverse proxy rules can be dangerous if you open sites from inside you network to the world, so just be aware of what you’re doing and why. The next and final step of the template asks a few questions. The first textbox asks the name of the internal web server. In our example, it’s 10.10.0.50:8111. This can be any URL, including a subfolder like internal.mysite.com/blog. Don’t include the http or https here. The template assumes that it’s not entered. You can choose whether to perform SSL Offloading or not. If you leave this checked then all requests to the internal server will be over HTTP regardless of the original web request. This can help with performance and SSL bindings if all requests are within a trusted network. If the network path between the two web servers is not completely trusted and safe then uncheck this. Next, the template enables you to create an outbound rule. This is used to rewrite links in the page to look like your public domain name rather than the internal domain name. Outbound rules have a lot of CPU overhead because the entire web content needs to be parsed and updated. However, if you need it, then it’s well worth the extra CPU hit on the web server. 
If you check the “Rewrite the domain names of the links in HTTP responses” checkbox then the From textbox will be filled in with what you entered for the inbound rule. You can enter your friendly public URL for the outbound rule. This will essentially replace any reference to 10.10.0.50:8111 (or whatever you enter) with tools.mysite.com in all <a>, <form>, and <img> tags on your site. That’s it! Well, there is a lot more that you can do, but this will give you the base configuration. You can now visit tools.mysite.com on your public web server and it will serve up the site from your internal web server. You should see two rules show up; one inbound and one outbound. You can edit these, add conditions, and tweak them further as needed. One common issue that can occur without outbound rules has to do with compression. If you run into errors with the new proxied site, try turning off compression to confirm if that’s the issue. Here’s a link with details on how to deal with compression and outbound rules. I hope this was helpful to get started and to see how easy it is to create a simple reverse proxy using URL Rewrite for IIS.
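    For reference, the rules end up in the site's web.config looking roughly like the fragment below. This is a sketch of what the wizard typically generates; the rule names and patterns here are illustrative and the exact attributes can vary between URL Rewrite versions, so treat it as a starting point for hand-editing rather than a guaranteed output:

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="ReverseProxyInboundRule1" stopProcessing="true">
            <match url="(.*)" />
            <action type="Rewrite" url="http://10.10.0.50:8111/{R:1}" />
          </rule>
        </rules>
        <outboundRules>
          <rule name="ReverseProxyOutboundRule1" preCondition="ResponseIsHtml1">
            <match filterByTags="A, Form, Img" pattern="^http(s)?://10.10.0.50:8111/(.*)" />
            <action type="Rewrite" value="http{R:1}://tools.mysite.com/{R:2}" />
          </rule>
          <preConditions>
            <preCondition name="ResponseIsHtml1">
              <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
            </preCondition>
          </preConditions>
        </outboundRules>
      </rewrite>
    </system.webServer>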

    Read the article

  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built-in Orchard 1.0 but for the moment it’s not in there. Plus, it’s quite interesting to see how it’s done. We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will add the possibility to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you’ll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml. I decided to implement this sample as a theme with code. This way, the new overrides are only enabled as the theme is activated, which makes a lot of sense as this is going to be where you’ll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine with this command: codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command-line created a project file for us in the theme’s folder. This is only necessary if you want to run code outside of views from that theme. 
The code that we want to add is the following LayoutFilter.cs: using System.Linq; using System.Web.Mvc; using System.Web.Routing; using Orchard; using Orchard.Mvc.Filters; namespace CustomLayoutMachine.Filters { public class LayoutFilter : FilterProvider, IResultFilter { private readonly IWorkContextAccessor _wca; public LayoutFilter(IWorkContextAccessor wca) { _wca = wca; } public void OnResultExecuting(ResultExecutingContext filterContext) { var workContext = _wca.GetContext(); var routeValues = filterContext.RouteData.Values; workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller")); workContext.Layout.Metadata.Alternates.Add( BuildShapeName(routeValues, "area", "controller", "action")); } public void OnResultExecuted(ResultExecutedContext filterContext) { } private static string BuildShapeName( RouteValueDictionary values, params string[] names) { return "Layout__" + string.Join("__", names.Select(s => ((string)values[s] ?? "").Replace(".", "_"))); } } } This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names. For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filters will then add the following shape names to the default “Layout”: Layout__Orchard_Blogs Layout__Orchard_Blogs__BlogPost Layout__Orchard_Blogs__BlogPost__Item Those template names get mapped into the following file names by the system (assuming the Razor view engine): Layout-Orchard_Blogs.cshtml Layout-Orchard_Blogs-BlogPost.cshtml Layout-Orchard_Blogs-BlogPost-Item.cshtml This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts). Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter, we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice? The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg Many thanks to Louis, who showed me how to do this.

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_virtual FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE mydatabase WITH RECOVERY   Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
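    Because the backup file itself is never copied, you can mount several virtual databases from the same .bak, one per developer or per test run, at almost no extra storage cost. A second mount is just a repeat of the same command with different database and delta file names (the names below are illustrative):

    -- Second virtual copy from the same backup; only the small .vmdf/.vldf delta files are new.
    RESTORE DATABASE WidgetProduction_Dev2
    FROM DISK = N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vmdf',
         MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2_log.vldf',
         NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE WidgetProduction_Dev2 WITH RECOVERY
    GO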

    Read the article

  • Searching for the Perfect Developer’s Laptop.

    - by mbcrump
    I have been in the market for a new computer for several months. I set out with a budget of around $1200. I knew up front that the machine would be used for developing applications and maybe some light gaming. I kept switching between buying a laptop or a desktop but the laptop won because: With a Laptop, I can carry it everywhere and with a desktop I can’t. I searched for about 2 weeks and narrowed it down to a list of must-have’s : i7 Processor (I wasn’t going to settle for an i5 or AMD. I wanted a true Quad-core machine, not 2 dual-core fused together). 15.6” monitor SSD 128GB or Larger. – It’s almost 2011 and I don’t want an old standard HDD in this machine. 8GB of DDR3 Ram. – The more the better, right? 1GB Video Card (Prefer NVidia) – I might want to play games with this. HDMI Port – Almost a standard on new Machines. This would be used when I am on the road and want to stream Netflix to the HDTV in the Hotel room. Webcam Built-in – This would be to video chat with the wife and kids if I am on the road. 6-Cell Battery. – I’ve read that an i7 in a laptop really kills the battery. A 6-cell or 9-cell is even better. That is a pretty long list for a budget of around $1200. I searched around the internet and could not buy this machine prebuilt for under $1200. That was even with coupons and my company’s 10% Dell discount. The only way that I would get a machine like this was to buy a prebuilt and replace parts. I chose the  Lenovo Y560 on Newegg to start as my base. Below is a top-down picture of it.   Part 1: The Hardware The Specs for this machine: Color :  GrayOperating System : Windows 7 Home Premium 64-bitCPU Type : Intel Core i7-740QM(1.73GHz)Screen : 15.6" WXGAMemory Size : 4GB DDR3Hard Disk : 500GBOptical Drive : DVD±R/RWGraphics Card : ATI Mobility Radeon HD 5730Video Memory : 1GBCommunication : Gigabit LAN and WLANCard slot : 1 x Express Card/34Battery Life : Up to 3.5 hoursDimensions : 15.20" x 10.00" x 0.80" - 1.30"Weight : 5.95 lbs. This computer met most of the requirements above except that it didn’t come with an SSD or have 8GB of DDR3 Memory. So, I needed to start shopping except this time for an SSD. I asked around on twitter and other hardware forums and everyone pointed me to the Crucial C300 SSD. After checking prices of the drive, it was going to cost an extra $275 bucks and I was going from a spacious 500GB drive to 128GB. After watching some of the SSD videos on YouTube I started feeling better. Below is a pic of the Crucial C300 SSD. The second thing that I needed to upgrade was the RAM. It came with 4GB of DDR3 RAM, but it was slow. I decided to buy the Crucial 8GB (4GB x 2) Kit from Newegg. This RAM cost an extra $120 and had a CAS Latency of 7. In the end this machine delivered everything that I wanted and it cost around $1300. You are probably saying, well your budget was $1200. I have spare parts that I’m planning on selling on eBay or Anandtech.  =) If you are interested then shoot me an email and I will give you a great deal mbcrump[at]gmail[dot]com. 500GB Laptop 7200RPM HDD 4GB of DDR3 RAM (2GB x 2) faceVision HD 720p Camera – Unopened In the end my Windows Experience Rating of the SSD was 7.7 and the CPU 7.1. The max that you can get is a 7.9. Part 2: The Software I’m very lucky that I get a lot of software for free. When choosing a laptop, the OS really doesn’t matter because I would never keep the bloatware pre-installed or Windows 7 Home Premium on my main development machine. 
Matter of fact, as soon as I got the laptop, I immediately took out the old HDD without booting into it. After I got the SSD into the machine, I installed Windows 7 Ultimate 64-Bit. The BIOS was out of date, so I updated that to the latest version and started downloading drivers off of Lenovo’s site. I had to download the Wireless Networking Drivers to a USB-Key before I could get my machine on my wireless network. I also discovered that if the date on your computer is off then you cannot join the Windows 7 Homegroup until you fix it. I’m aware that most people like peeking into what programs other software developers use and I went ahead and listed my “essentials” of a fresh build. I am a big Silverlight guy, so naturally some of the software listed below is specific to Silverlight. You should also check out my master list of Tools and Utilities for the .NET Developer. See a killer app that I’m missing? Feel free to leave it in the comments below. My Software Essential List. CPU-Z Dropbox Everything Search Tool Expression Encoder Update Expression Studio 4 Ultimate Foxit Reader Google Chrome Infragistics NetAdvantage Ultimate Edition Keepass Microsoft Office Professional Plus 2010 Microsoft Security Essentials 2  Mindscape Silverlight Elements Notepad 2 (with shell extension) Precode Code Snippet Manager RealVNC Reflector ReSharper v5.1.1753.4 Silverlight 4 Toolkit Silverlight Spy Snagit 10 SyncFusion Reporting Controls for Silverlight Telerik Silverlight RadControls TweetDeck Virtual Clone Drive Visual Studio 2010 Feature Pack 2 Visual Studio 2010 Ultimate VS KB2403277 Update to get Feature Pack 2 to work. Windows 7 Ultimate 64-Bit Windows Live Essentials 2011 Windows Live Writer Backup. Windows Phone Development Tools That is pretty much it, I have a new laptop and am happy with the purchase. If you have any questions then feel free to leave a comment below.  Subscribe to my feed

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted on the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel’s functions and contains many functions that are not available in EXCEL. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let’s assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here’s what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features about XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn’t need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that’s one way to construct the SQL. Just copy the function call from the cell in the spreadsheet and paste it into SMS and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel. 
You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The  Functions folder expands to show 3 sub-folders: Table-valued Functions; Scalar-valued functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and and click the plus sign (+) to display the input parameters. The functions are also Intellisense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions including examples that can be copied directly into a query window and executed. There are also more one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics; math; strings; engineering; and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel
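    Since the original requirement was to persist the spreadsheet results in the database, the same expression can feed an ordinary INSERT or SELECT ... INTO. The sketch below simply reuses the wct.YIELD call from above against the BONDS table; the target table name is made up for illustration:

    -- Persist the calculated yields; BOND_YIELDS is just an example name.
    SELECT *,
           wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis) AS YIELD
    INTO BOND_YIELDS
    FROM BONDS;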

    Read the article

  • How do I prove or disprove "god" objects are wrong?

    - by honestduane
    Problem Summary: Long story short, I inherited a code base and an development team I am not allowed to replace and the use of God Objects is a big issue. Going forward, I want to have us re-factor things but I am getting push-back from the teams who want to do everything with God Objects "because its easier" and this means I would not be allowed to re-factor. I pushed back citing my years of dev experience, that I'm the new boss who was hired to know these things, etc, and so did the third party offshore companies account sales rep, and this is now at the executive level and my meeting is tomorrow and I want to go in with a lot of technical ammo to advocate best practices because I feel it will be cheaper in the long run (And I personally feel that is what the third party is worried about) for the company. My issue is from a technical level, I know its good long term but I'm having trouble with the ultra short term and 6 months term, and while its something I "know" I cant prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob), as that is what I am being asked to do as I have been told having data from one person and only one person (Robert C Martin) is not good enough of an argument. Question: What are some resources I can cite directly (Title, year published, page number, quote) by well known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)? Research I have already done: I have a number of books here and I have searched their indexes for the use of the words "god object" and "god class". I found that oddly its almost never used and the copy of the GoF book I have for example, never uses it (At least according to the index in front of me) but I have found it in 2 books per the below, but I want more I can use. I checked the Wikipedia page for "God Object" and its currently a stub with little reference links so although I personally agree with that it says, It doesn't have much I can use in an environment where personal experience is not considered valid. The book cited is also considered too old to be valid by the people I am debating these technical points with as the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says "god" objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is. In Robert C Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover) where on page 266 it states "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many function." -- And then goes on to say sometimes its better to use God Classes anyway sometimes (Citing micro-controllers as an example). In Robert C Martin's "Clean Code: A Handbook of Agile Software Craftsmanship" page 136 (And only this page) talks about the "God class" and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle" starting on on page 138. The problem I have is all my references and citations come from the same person (Robert C. Martin), and am from the same single person/source. 
I am being told that because he is just one guy, my desire to not use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teaching of Uncle Bob? God Objects and Object Oriented Programming and Design: The more I think of this the more I think this is more something you learn when you study OOP and its never explicitly called out; Its implicit to good design is my thinking (Feel free to correct me, please, as I want to learn), The problem is I "know" this, but but not everybody does, so in this case its not considered a valid argument because I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it since statistically most people are not programmers. Conclusion: I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.
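    If it helps the discussion, the point is sometimes easier to make with a small, deliberately hypothetical code contrast than with citations: the same behaviour written as a god class and then split along single responsibilities, so the cost of change is visible. A C# sketch (none of these types come from the code base in question):

    // God class: persistence, pricing and notification all live in one type,
    // so every change to any of those concerns forces a change (and a retest) here.
    public class OrderGod
    {
        public void Process(int orderId)
        {
            // load from the database, apply pricing rules, save, send email, write logs...
        }
    }

    // Split along responsibilities: each collaborator has exactly one reason to change,
    // and OrderProcessor only coordinates them.
    public interface IOrderRepository { Order Load(int orderId); void Save(Order order); }
    public interface IPricingService { decimal PriceFor(Order order); }
    public interface INotifier { void OrderProcessed(Order order); }

    public class Order { public int Id { get; set; } public decimal Total { get; set; } }

    public class OrderProcessor
    {
        private readonly IOrderRepository _repository;
        private readonly IPricingService _pricing;
        private readonly INotifier _notifier;

        public OrderProcessor(IOrderRepository repository, IPricingService pricing, INotifier notifier)
        {
            _repository = repository;
            _pricing = pricing;
            _notifier = notifier;
        }

        public void Process(int orderId)
        {
            var order = _repository.Load(orderId);
            order.Total = _pricing.PriceFor(order);
            _repository.Save(order);
            _notifier.OrderProcessed(order);
        }
    }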

    Read the article

  • Inside Red Gate - Be Reasonable!

    - by Simon Cooper
    As I discussed in my previous posts, divisions and project teams within Red Gate are allowed a lot of autonomy to manage themselves. It's not just the teams though, there's an awful lot of freedom given to individual employees within the company as well. Reasonableness How Red Gate treats it's employees is embodied in the phrase 'You will be reasonable with us, and we will be reasonable with you'. As an employee, you are trusted to do your job to the best of you ability. There's no one looking over your shoulder, no one clocking you in and out each day. Everyone is working at the company because they want to, and one of the core ideas of Red Gate is that the company exists to 'let people do the best work of their lives'. Everything is geared towards that. To help you do your job, office services and the IT department are there. If you need something to help you work better (a third or fourth monitor, footrests, or a new keyboard) then ask people in Information Systems (IS) or Office Services and you will be given it, no questions asked. Everyone has administrator access to their own machines, and you can install whatever you want on it. If there's a particular bit of software you need, then ask IS and they will buy it. As an example, last year I wanted to replace my main hard drive with an SSD; I had a summer job at school working in a computer repair shop, so knew what to do. I went to IS and asked for 'an SSD, a SATA cable, and a screwdriver'. And I got it there and then, even the screwdriver. Awesome. I screwed it in myself, copied all my main drive files across, and I was good to go. Of course, if you're not happy doing that yourself, then IS will sort it all out for you, no problems. If you need something that the company doesn't have (say, a book off Amazon, or you need some specifications printing off & bound), then everyone has a expense limit of £100 that you can use without any sign-off needed from your managers. If you need a company credit card for whatever reason, then you can get it. This freedom extends to working hours and holiday; you're expected to be in the office 11am-3pm each day, but outside those times you can work whenever you want. If you need a half-day holiday on a days notice, or even the same day, then you'll get it, unless there's a good reason you're needed that day. If you need to work from home for a day or so for whatever reason, then you can. If it's reasonable, then it's allowed. Trust issues? A lot of trust, and a lot of leeway, is given to all the people in Red Gate. Everyone is expected to work hard, do their jobs to the best of their ability, and there will be a minimum of bureaucratic obstacles that stop you doing your work. What happens if you abuse this trust? Well, an example is company trip expenses. You're free to expense what you like; food, drink, transport, etc, but if you expenses are not reasonable, then you will never travel with the company again. Simple as that. Everyone knows when they're abusing the system, so simply don't do it. Along with reasonableness, another phrase used is 'Don't be a ***'. If you act like a ***, and abuse any of the trust placed in you, even if you're the best tester, salesperson, dev, or manager in the company, then you won't be a part of the company any more. From what I know about other companies, employee trust is highly variable between companies, all the way up to CCTV trained on employee's monitors. As a dev, I want to produce well-written & useful code that solves people's problems. 
Being able to get whatever I need - install whatever tools I need, get time off when I need to, obtain reference books within a day - all let me do my job, and so let Red Gate help other people do their own jobs through the tools we produce. Plus, I don't think I would like working for a company that doesn't allow admin access to your own machine and blocks Facebook!

    Read the article

  • Refactoring an immediate drawing function into VBO, access violation error

    - by Alex
    I have a MD2 model loader, I am trying to substitute its immediate drawing function with a Vertex Buffer Object one.... I am getting a really annoying access violation reading error and I can't figure out why, but mostly I'd like an opinion as to whether this looks correct (never used VBOs before). This is the original function (that compiles ok) which calculates the keyframe and draws at the same time: glBegin(GL_TRIANGLES); for(int i = 0; i < numTriangles; i++) { MD2Triangle* triangle = triangles + i; for(int j = 0; j < 3; j++) { MD2Vertex* v1 = frame1->vertices + triangle->vertices[j]; MD2Vertex* v2 = frame2->vertices + triangle->vertices[j]; Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac; if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) { normal = Vec3f(0, 0, 1); } glNormal3f(normal[0], normal[1], normal[2]); MD2TexCoord* texCoord = texCoords + triangle->texCoords[j]; glTexCoord2f(texCoord->texCoordX, texCoord->texCoordY); glVertex3f(pos[0], pos[1], pos[2]); } } glEnd(); What I'd like to do is to calculate all positions before hand, store them in a Vertex array and then draw them. This is what I am trying to replace it with (in the exact same part of the program) int vCount = 0; for(int i = 0; i < numTriangles; i++) { MD2Triangle* triangle = triangles + i; for(int j = 0; j < 3; j++) { MD2Vertex* v1 = frame1->vertices + triangle->vertices[j]; MD2Vertex* v2 = frame2->vertices + triangle->vertices[j]; Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; Vec3f normal = v1->normal * (1 - frac) + v2->normal * frac; if (normal[0] == 0 && normal[1] == 0 && normal[2] == 0) { normal = Vec3f(0, 0, 1); } indices[vCount] = normal[0]; vCount++; indices[vCount] = normal[1]; vCount++; indices[vCount] = normal[2]; vCount++; MD2TexCoord* texCoord = texCoords + triangle->texCoords[j]; indices[vCount] = texCoord->texCoordX; vCount++; indices[vCount] = texCoord->texCoordY; vCount++; indices[vCount] = pos[0]; vCount++; indices[vCount] = pos[1]; vCount++; indices[vCount] = pos[2]; vCount++; } } totalVertices = vCount; glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); glEnableClientState(GL_VERTEX_ARRAY); glNormalPointer(GL_FLOAT, 0, indices); glTexCoordPointer(2, GL_FLOAT, sizeof(float)*3, indices); glVertexPointer(3, GL_FLOAT, sizeof(float)*5, indices); glDrawElements(GL_TRIANGLES, totalVertices, GL_UNSIGNED_BYTE, indices); glDisableClientState(GL_VERTEX_ARRAY); // disable vertex arrays glEnableClientState(GL_TEXTURE_COORD_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); First of all, does it look right? Second, I get access violation error "Unhandled exception at 0x01455626 in Graphics_template_1.exe: 0xC0000005: Access violation reading location 0xed5243c0" pointing at line 7 Vec3f pos = v1->pos * (1 - frac) + v2->pos * frac; where the two Vs seems to have no value in the debugger.... Till this point the function behaves in exactly the same way as the one above, I don't understand why this happens? Thanks for any help you may be able to provide!
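    For comparison, here is a hedged sketch (not a drop-in fix for the code above) of how an interleaved client-side array is normally wired up: every pointer uses the full per-vertex stride, each pointer starts at its own offset inside the same buffer, and glDrawArrays is the natural call when the data is already expanded per vertex; glDrawElements expects a separate index array, not the vertex data itself. Assuming the same normal/texcoord/position layout built in the loop:

    #include <GL/gl.h>
    #include <vector>

    // Per-vertex layout: nx, ny, nz, s, t, px, py, pz  -> 8 floats, 3 vertices per triangle.
    std::vector<GLfloat> verts;               // filled exactly like the loop above fills 'indices'
    const GLsizei stride = 8 * sizeof(GLfloat);

    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);

    // Same buffer, same stride, different starting offsets.
    glNormalPointer(GL_FLOAT, stride, &verts[0]);            // floats 0..2
    glTexCoordPointer(2, GL_FLOAT, stride, &verts[0] + 3);   // floats 3..4
    glVertexPointer(3, GL_FLOAT, stride, &verts[0] + 5);     // floats 5..7

    // One vertex per 8 floats; no index array involved.
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 8));

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);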

    Read the article

  • Lessons learned from Word 2007 automation with c# 2008

    - by robertphyatt
    My organization has an ongoing project to take documents produced for internal regulations and such, change some of the formatting and then export it as PDF. Our requirements were that only one person would be doing this, but it has been painfully tedious and sometimes error-prone to do by hand. Enter the fearless developer to automate the situation! Since I am one of those guys that just plain does not like VB, I wanted to do the automation in the ever-so-much-more-familiar C#. While Microsoft had made a dll that makes such a task easier, documentation on MSDN is pretty lame and most of the forumns and posts on the internet had little to do with my task. So, I feel like I can give back to the community and make a post here of the things I have learned so far. I hope this is helpful to whoever stumbles upon it. Steps to do this: 1) First of all, make some sort of a project and use some sort of a means to get the filename of the word document you are trying to open. I got the filename the user wanted with an openFileDialog tied to a button that I labeled 'Browse':        private void btnBrowse_Click(object sender, EventArgs e)        {            try            {                DialogResult myResult = openFileDialog1.ShowDialog();                if (myResult.Equals(DialogResult.OK))                {                    if (openFileDialog1.SafeFileName.EndsWith(".doc"))                    {                        txtFileName.Text = openFileDialog1.SafeFileName;                        paramSourceDocPath = openFileDialog1.FileName;                        paramExportFilePath = openFileDialog1.FileName.Replace(".doc", ".pdf");                    }                    else                    {                        txtFileName.Text = "only something that end with .doc, please";                    }                }            }            catch (Exception err)            {                lblError.Text = err.Message;            }        }   2) Add in "using Microsoft.Office.Interop.Word;" after setting your project to reference Microsoft.Office.Core and Microsoft.Office.Interop.Word so that you don't have to add "Microsoft.Office.Interop.Word" to the front of everything. 3) Now you are ready to play. You will need to have a copy of word open and a copy of your word document that you want to modify open to be able to make the changes that are needed. The word interop dll likes using ref on all the parameters passed in, and likes to have them as objects. If you don't want to specify the parameter, you have to give it a "Type.Missing". I suggest creating some objects that you reuse all over the place to maintain sanity. object paramMissing = Type.Missing; ApplicationClass wordApplication = new ApplicationClass(); Document wordDocument = wordApplication.Documents.Open(                ref paramSourceDocPath, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing, ref paramMissing, ref paramMissing,                ref paramMissing); 4) There are many ways to modify the text of the inside of the word document. One of the ways that was most effective for me was to break it down by paragraph and then do things on each paragraph by what style the particular paragraph had.            
foreach (Paragraph thisParagraph in wordDocument.Content.Paragraphs)            {                string strStyleName = ((Style)thisParagraph.get_Style()).NameLocal;                string strText = thisParagraph.Range.Text;                //Do whatever you need to do            } 5) Sometimes you want to insert a new line character somewhere in the text or insert text into the document, etc.  There are a few ways you can do this: you can either modify the text of a paragraph by doing something like this ('\r' makes a new paragraph, '\v' will make a newline without making a new paragraph. If you remove a '\r' from the text, it will eliminate the paragraph you removed it from): thisParagraph.Range.Text = "A\vNew Paragraph!\r" + thisParagraph.Range.Text; OR you could select where you want to insert it and have it act like you were typing in Word like any normal user (note: if you do not collapse the range first, you will overwrite the thing you got the range from) object oCollapseDirectionEnd = WdCollapseDirection.wdCollapseEnd; object oCollapseDirectionStart = WdCollapseDirection.wdCollapseStart; Range rangeInsertAtBeginning = thisParagraph.Range; Range rangeInsertAtEnd = thisParagraph.Range; rangeInsertAtBeginning.Collapse(ref oCollapseDirectionStart); rangeInsertAtEnd.Collapse(ref oCollapseDirectionEnd); rangeInsertAtBeginning.Select(); wordApplication.Selection.TypeText("Blah Blah Blah"); rangeInsertAtEnd.Select(); wordApplication.Selection.TypeParagraph(); 6) If you want to make text columns, like a newspaper or newsletter, you have to modify the page layout of the document or a section of the document to make it happen. In my case, I only wanted a particular section to have that, and I wanted to have a black line before and after the newspaper-like text columns. First you need to do a section break on either side of what you wanted, then you take the section and modify the page layout. Then you can modify the borders of the section (or another object in the word document). I also show here how to modify the alignment of a paragraph.            object oSectionBreak = WdBreakType.wdSectionBreakContinuous;            //These ranges were set while I was going through the paragraphs of my document, like I was showing earlier            rangeHeaderStart.InsertBreak(ref oSectionBreak);            rangeHeaderEnd.InsertBreak(ref oSectionBreak);            //change the alignment to justify            object oRangeHeaderStart = rangeStartJustifiedAlignment.Start;            object oRangeHeaderEnd = rangeHeaderEnd.End;            Range rangeHeader = wordDocument.Range(ref oRangeHeaderStart, ref oRangeHeaderEnd);            rangeHeader.Paragraphs.Alignment = WdParagraphAlignment.wdAlignParagraphJustify;            //find the section break and make it into triple text columns            foreach (Section mySection in wordDocument.Sections)            {                if (mySection.Range.Start == rangeHeaderStart.Start)                {                    mySection.PageSetup.TextColumns.Add(ref paramMissing, ref paramMissing, ref paramMissing);                    mySection.PageSetup.TextColumns.Add(ref paramMissing, ref paramMissing, ref paramMissing);                    //I didn't like the default spacing and column widths. This is how I adjusted them.                    
foreach (TextColumn txtc in mySection.PageSetup.TextColumns)                    {                        try                        {                            txtc.SpaceAfter = 151.6f;                            txtc.Width = 7;                        }                        catch (Exception)                        {                            txtc.Width = 151.6f;                        }                    }                }            } That is all  I have time for today! I hope this was helpful to someone!
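    Since the end goal mentioned at the top was PDF output, that last step can be automated too. The sketch below reuses wordDocument, wordApplication, paramExportFilePath and the ref paramMissing convention from above; it assumes Word 2007 SP2 (or the "Save as PDF or XPS" add-in), which is what exposes WdSaveFormat.wdFormatPDF, and without it the SaveAs call will throw:

    // Save the modified document as PDF next to the original .doc.
    object paramExportPath = paramExportFilePath;   // set in the Browse handler above
    object paramPdfFormat = WdSaveFormat.wdFormatPDF;

    wordDocument.SaveAs(ref paramExportPath, ref paramPdfFormat,
        ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
        ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
        ref paramMissing, ref paramMissing, ref paramMissing, ref paramMissing,
        ref paramMissing, ref paramMissing);

    // Close the document without saving changes back to the .doc, then quit Word.
    object paramFalse = false;
    wordDocument.Close(ref paramFalse, ref paramMissing, ref paramMissing);
    wordApplication.Quit(ref paramFalse, ref paramMissing, ref paramMissing);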

    Read the article

  • Mocking HttpContext with JustMock

    - by mehfuzh
    In this post, I will show test code that mocks the various elements needed to complete an HTTP page request and assert the expected page cycle steps. To begin, I have a simple enumeration that has my predefined page steps: public enum PageStep {     PreInit,     Load,     PreRender,     UnLoad } Having done so, I first created the page object [not mocking]. Page page = new Page(); Here, our target is to fire up the page processing through the ProcessRequest call. If we take a look inside the method with Reflector, we will find a call stack like: ProcessRequest –> ProcessRequestWithNoAssert –> SetIntrinsics –> finally ProcessRequest. SetIntrinsics in turn requires calls into HttpRequest, HttpResponse and HttpBrowserCapabilities. With this, we can easily see which classes / calls we need to mock in order to get through the expected call. Accordingly, for HttpBrowserCapabilities our required test code will look like: Mock.Arrange(() => browser.PreferredRenderingMime).Returns("text/html"); Mock.Arrange(() => browser.PreferredResponseEncoding).Returns("UTF-8"); Mock.Arrange(() => browser.PreferredRequestEncoding).Returns("UTF-8"); Now, HttpBrowserCapabilities is obtained through [instance]HttpRequest.Browser. Therefore, we create the HttpRequest mock: var request = Mock.Create<HttpRequest>(); Then add the required get call: Mock.Arrange(() => request.Browser).Returns(browser); As [instance]Browser.PreferredRequestEncoding and [instance]Browser.PreferredResponseEncoding are also set on the request object, we can add the following lines as well to make sure they are set properly [not required though]. bool requestContentEncodingSet = false; Mock.ArrangeSet(() => request.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() =>  requestContentEncodingSet = true); Similarly, for response we can write:  var response = Mock.Create<HttpResponse>();    bool responseContentEncodingSet = false;  Mock.ArrangeSet(() => response.ContentEncoding = Encoding.GetEncoding("UTF-8")).DoInstead(() => responseContentEncodingSet = true); Finally, I created a mock of HttpContext and set the Request and Response properties so that they return the mocked versions. 
var context = Mock.Create<HttpContext>();   Mock.Arrange(() => context.Request).Returns(request); Mock.Arrange(() => context.Response).Returns(response); As, Page internally calls RenderControl method , we just need to replace that with our one and optionally we can check if  invoked properly: bool rendered = false; Mock.Arrange(() => page.RenderControl(Arg.Any<HtmlTextWriter>())).DoInstead(() => rendered = true); That’s  it, the rest of the code is simple,  where  i asserted the page cycle with the PageSteps that i defined earlier: var pageSteps = new Queue<PageStep>();    page.PreInit +=      delegate      {          pageSteps.Enqueue(PageStep.PreInit);      };  page.Load +=      delegate      {          pageSteps.Enqueue(PageStep.Load);      };    page.PreRender +=      delegate      {          pageSteps.Enqueue(PageStep.PreRender);      };    page.Unload +=      delegate      {          pageSteps.Enqueue(PageStep.UnLoad);      };    page.ProcessRequest(context);    Assert.True(requestContentEncodingSet);  Assert.True(responseContentEncodingSet);  Assert.True(rendered);    Assert.Equal(pageSteps.Dequeue(), PageStep.PreInit);  Assert.Equal(pageSteps.Dequeue(), PageStep.Load);  Assert.Equal(pageSteps.Dequeue(), PageStep.PreRender);  Assert.Equal(pageSteps.Dequeue(), PageStep.UnLoad);    Mock.Assert(request);  Mock.Assert(response);   You can get the test class shown in this post here to give a try by yourself with of course JustMock. Enjoy!!
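    One detail the snippets above leave implicit is the creation of the browser mock that the Mock.Arrange calls use; presumably it is created the same way as the other mocks. A minimal sketch of that assumed line:

        //Assumed (not shown in the post): create the browser mock before arranging it.
        //Requires: using System.Web; using Telerik.JustMock;
        var browser = Mock.Create<HttpBrowserCapabilities>();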

    Read the article

  • JQuery + WCF + HTTP 404 Error

    - by hangar18
    Hi All, I've searched high and low and finally decided to post a query here. I'm writing a very basic HTML page from which I'm trying to call a WCF service using jQuery and parse the result as JSON.

    Service: IMyDemo.cs

        [ServiceContract]
        public interface IMyDemo
        {
            [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
            Employee DoWork();

            [OperationContract]
            [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
            Employee GetEmp(int age, string name);
        }

        [DataContract]
        public class Employee
        {
            [DataMember]
            public int EmpId { get; set; }

            [DataMember]
            public string EmpName { get; set; }

            [DataMember]
            public int EmpSalary { get; set; }
        }

    MyDemo.svc.cs

        public Employee DoWork()
        {
            // Add your operation implementation here
            Employee obj = new Employee() { EmpSalary = 12, EmpName = "SomeName" };
            return obj;
        }

        public Employee GetEmp(int age, string name)
        {
            Employee emp = new Employee();
            if (age > 0)
                emp.EmpSalary = 12 + age;
            if (!string.IsNullOrEmpty(name))
                emp.EmpName = "Server" + name;
            return emp;
        }

    Web.config

        <system.serviceModel>
          <services>
            <service behaviorConfiguration="EmployeesBehavior" name="MySample.MyDemo">
              <endpoint address="" binding="webHttpBinding" contract="MySample.IMyDemo" behaviorConfiguration="EmployeesBehavior"/>
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="EmployeesBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
              </behavior>
            </serviceBehaviors>
            <endpointBehaviors>
              <behavior name="EmployeesBehavior">
                <webHttp/>
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    MyDemo.htm

        <head>
          <title></title>
          <script type="text/javascript" language="javascript" src="Scripts/jquery-1.4.1.js"></script>
          <script type="text/javascript" language="javascript" src="Scripts/json.js"></script>
          <script type="text/javascript">
            //create a global javascript object for the AJAX defaults.
            debugger;
            var ajaxDefaults = {};
            ajaxDefaults.base = {
              type: "POST",
              timeout: 1000,
              dataFilter: function (data) {
                //see http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/
                data = JSON.parse(data); //use the JSON2 library if you aren't using FF3+, IE8, Safari 3/Google Chrome
                return data.hasOwnProperty("d") ? data.d : data;
              },
              error: function (xhr) {
                //see
                if (!xhr) return;
                if (xhr.responseText) {
                  var response = JSON.parse(xhr.responseText);
                  //console.log works in FF + Firebug only, replace this code
                  if (response) alert(response);
                  else alert("Unknown server error");
                }
              }
            };
            ajaxDefaults.json = $.extend(ajaxDefaults.base, {
              //see http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/
              contentType: "application/json; charset=utf-8",
              dataType: "json"
            });
            var ops = {
              baseUrl: "/MyService/MySample/MyDemo.svc/",
              doWork: function () {
                //see http://api.jquery.com/jQuery.extend/
                var ajaxOptions = $.extend(ajaxDefaults.json, {
                  url: ops.baseUrl + "DoWork",
                  data: "{}",
                  success: function (msg) {
                    console.log("success");
                    console.log(typeof msg);
                    if (typeof msg !== "undefined") {
                      console.log(msg);
                    }
                  }
                });
                $.ajax(ajaxOptions);
                return false;
              },
              getEmp: function () {
                var ajaxOpts = $.extend(ajaxDefaults.json, {
                  url: ops.baseUrl + "GetEmp",
                  data: JSON.stringify({ age: 12, name: "NameName" }),
                  success: function (msg) {
                    $("span#lbl").html("age: " + msg.Age + "name:" + msg.Name);
                  }
                });
                $.ajax(ajaxOpts);
                return false;
              }
            };
          </script>
        </head>
        <body>
          <span id="lbl">abc</span>
          <br /><br />
          <input type="button" value="GetEmployee" id="btnGetEmployee" onclick="javascript:ops.getEmp();" />
        </body>

    I'm just not able to get this running. When I debug, I see that the error being returned from the call is:

        Server Error in '/jQuerySample' Application.
        HTTP Error 404 - Not Found.

    Looks like I'm missing something basic here. My sample is based on this. I've been trying to fix the code for some time now, so I'd like you to take a look and see if you can figure out what it is that I'm doing wrong here. I'm able to see that the service is created when I browse to it in IE. I've also tried changing the setting as mentioned here. Appreciate your help. I'm gonna blog about this as soon as the issue is resolved, for the benefit of other devs. Thanks -Soni
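    A side note on the contract as posted (it may or may not explain the 404 on GetEmp): DoWork carries [WebInvoke] but not [OperationContract], and WCF only exposes methods marked with [OperationContract]. A sketch of the interface with both operations marked, everything else left exactly as posted:

        [ServiceContract]
        public interface IMyDemo
        {
            [OperationContract]
            [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
            Employee DoWork();

            [OperationContract]
            [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)]
            Employee GetEmp(int age, string name);
        }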

    Read the article

  • Trying to install Team viewer on Ubuntu 12.04

    - by Teknikk
    I recently got Ubuntu installed on my server. I wanted to install TeamViewer so I could easily manage the virtual machines. However, I get errors when installing it from the app store, and I also get more detailed errors in the terminal.

    Error output:

        tek@tek-G53SW:~/Download$ sudo dpkg -i ipts teamviewer_linux_x64.deb
        dpkg: error processing ipts (--install):
         cannot access archive: No such file or directory
        (Reading database ... 142115 files and directories currently installed.)
        Preparing to replace teamviewer7 7.0.9360 (using teamviewer_linux_x64.deb) ...
        Unpacking replacement teamviewer7 ...
        dpkg: dependency problems prevent configuration of teamviewer7:
         teamviewer7 depends on libc6-i386 (>= 2.7); however:
          Package libc6-i386 is not installed.
         teamviewer7 depends on lib32asound2; however:
          Package lib32asound2 is not installed.
         teamviewer7 depends on lib32z1; however:
          Package lib32z1 is not installed.
         teamviewer7 depends on ia32-libs; however:
          Package ia32-libs is not installed.
        dpkg: error processing teamviewer7 (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         ipts
         teamviewer7

    I tried to install it manually, but with no luck. I heard some others have this problem. I am running Ubuntu 12.04 x64.

    Error from sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs:

        tek@tek-G53SW:~/Download$ sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-multiarch
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
        tek@tek-G53SW:~/Download$

    More errors:

        tek@tek-G53SW:~/Download$ sudo apt-get -f install
        [sudo] password for tek:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages will be REMOVED:
          teamviewer7
        0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 81.9 MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 142441 files and directories currently installed.)
        Removing teamviewer7 ...
        tek@tek-G53SW:~/Download$ sudo apt-get install libc6-i386 lib32asound2 lib32z1 ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        lib32z1 is already the newest version.
        libc6-i386 is already the newest version.
        lib32asound2 is already the newest version.
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-multiarch
        E: Unable to correct problems, you have held broken packages.
        tek@tek-G53SW:~/Download$ sudo apt-get install ia32-libs-multiarch
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         ia32-libs-multiarch:i386 : Depends: gstreamer0.10-plugins-good:i386 but it is not going to be installed
                                    Depends: gtk2-engines:i386 but it is not going to be installed
                                    Depends: gtk2-engines-murrine:i386 but it is not going to be installed
                                    Depends: gtk2-engines-pixbuf:i386 but it is not going to be installed
                                    Depends: gtk2-engines-oxygen:i386 but it is not going to be installed
                                    Depends: ibus-gtk:i386 but it is not going to be installed
                                    Depends: libcanberra-gtk-module:i386 but it is not going to be installed
                                    Depends: libcups2:i386 but it is not going to be installed
                                    Depends: libcupsimage2:i386 but it is not going to be installed
                                    Depends: libfontconfig1:i386 but it is not going to be installed
                                    Depends: libgail-common:i386 but it is not going to be installed
                                    Depends: libgphoto2-2:i386 but it is not going to be installed
                                    Depends: libgtk2.0-0:i386 but it is not going to be installed
                                    Depends: libnss3:i386 but it is not going to be installed
                                    Depends: libqt4-opengl:i386 but it is not going to be installed
                                    Depends: libqt4-qt3support:i386 but it is not going to be installed
                                    Depends: libqt4-scripttools:i386 but it is not going to be installed
                                    Depends: libqt4-svg:i386 but it is not going to be installed
                                    Depends: libqtgui4:i386 but it is not going to be installed
                                    Depends: libqtwebkit4:i386 but it is not going to be installed
                                    Depends: librsvg2-common:i386 but it is not going to be installed
                                    Depends: libsane:i386 but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        tek@tek-G53SW:~/Download$

    Read the article
