Search Results

Search found 20631 results on 826 pages for 'release management'.


  • How to manage bookmarks?

    - by LNK2019
    Hi everyone, I have 981 bookmarks and about 30 to 40 folders in my Firefox browser, and they have become very difficult to manage. I searched for "bookmark management" and similar terms on Google, but I couldn't find a useful tutorial or guidelines to follow. I've been looking for answers for a long time. I tried Xmarks, Read It Later, and others, but they didn't help me organize my bookmarks. Do you have any tips or suggestions on how to manage your bookmarks? In what situations would you create a tag instead of a folder? Thanks

    Read the article

  • Sync local files to S3 similar to Robocopy

    - by Yuck
    I am looking for a way to synchronize an entire local folder structure to Amazon S3, similar to how one might synchronize two folders using Robocopy. Whatever solution I come up with needs to be scheduled to run periodically from the Windows Task Scheduler. So anything that requires a GUI to perform the synchronization is not a viable solution. Standalone Windows .EXE command line utility for Amazon S3 & EC2 looked promising, but seems to have been abandoned and would not work when I tried to use it. Possibly a difference in the way that security is handled now compared to that software's most recent release.
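    A possible approach today (not something from the original post) is the AWS Command Line Interface: its "aws s3 sync" command mirrors a folder much like Robocopy and runs fine without a GUI, so it can be driven by Task Scheduler. A minimal sketch, assuming the AWS CLI is installed and credentials have been set up with "aws configure"; the bucket name and paths below are placeholders:

      rem sync-to-s3.cmd -- mirror a local folder to S3, removing remote objects that no longer exist locally
      aws s3 sync "C:\Data\SiteFiles" s3://my-backup-bucket/sitefiles --delete

      rem register the script to run nightly at 02:00 via Task Scheduler
      schtasks /create /tn "S3Sync" /tr "C:\Scripts\sync-to-s3.cmd" /sc daily /st 02:00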

    Read the article

  • MSSQL 2012 Error 26 and remote connection

    - by Rayfloyd
    I'm trying to set up MSSQL 2012 for a school project, and I need to be able to connect to it remotely as my teammates will also be working on it. I did a clean install of SQL Server 2012 Express. Knowing I can't connect remotely straight out of the box, I tweaked the settings that needed tweaking according to the internet. What I did: 1. Made sure remote connections were allowed. 2. Enabled TCP/IP. 3. Removed the 0s from the dynamic ports and set the TCP port to 1433. 4. Enabled Named Pipes. 5. Created outbound and inbound firewall rules for TCP port 1433 and UDP port 1434. 6. Port-forwarded 1433 and 1434 to my "server". 7. Made sure the server was pingable. 8. Enabled SQL Server authentication. 9. Restarted the computer so the configuration changes were saved. So whenever I try to connect using Management Studio from another computer (not the server) using myusername.dyndns.org\SQLEXPRESS, I get error 26. I have been searching for different solutions for 3 hours with no luck.
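    Error 26 means the client could not locate the server/instance, which usually points at the SQL Browser handshake over UDP 1434 rather than the data port itself. Since TCP 1433 is already configured as a static port above, one thing worth trying (an assumption, not a confirmed fix) is to bypass instance-name resolution entirely and connect by host and port:

      In Management Studio's "Server name" box:   myusername.dyndns.org,1433
      Or from the command line:                   sqlcmd -S tcp:myusername.dyndns.org,1433 -U someSqlLogin -P somePassword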

    Read the article

  • Is LDAP typically used in conjunction with SAP?

    - by michael
    I have potential clients that all use SAP. To avoid silent push-back from IT, we want to make sure we don't add extra complexity to their job if the client (our contact inside the company, who is not in IT) wants to adopt our SaaS. One way to avoid headaches is to add support for LDAP to our SaaS ahead of time (a bit of work, but nothing major) and make sure to mention that to IT. I really know nothing about SAP. Does SAP usually use an internal or an external LDAP server? Updated question: a company relies on an asset management product from SAP. If I wanted to provide a SaaS that shares space in their workflow, is it advisable to use the SAP product's authentication service (if it exists), and if so, do they provide an API?

    Read the article

  • Repository Spec file

    - by ahmadfrompk
    I have the source for a set of web files and need to build an RPM for it. I placed the source in the SOURCES folder and used the following spec file, but it produces a noarch RPM of about 2 MB even though the source is larger than 2 MB, and the files themselves are not included in the package. I think there is a problem in my spec file. Summary: my_project rpm script package Name: my_project Version: 1 Release: 1 Source0: my_project-1.tar.gz License: GPL Group: MyJunk BuildArch: noarch BuildRoot: %{_tmppath}/%{name}-buildroot %description Make some relevant package description here %prep %setup -q %build %install install -m 0755 -d $RPM_BUILD_ROOT/opt/my_project %clean rm -rf $RPM_BUILD_ROOT %post echo " " echo "This will display after rpm installs the package!" %files %dir /opt/my_project
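    One likely cause (an assumption based on the spec as posted): %install only creates the empty directory and never copies the unpacked web files into the build root, and %files lists only the directory itself, so the payload ends up empty. A hedged sketch of the two relevant sections, assuming %setup has already unpacked the tarball into the build directory:

      %install
      rm -rf $RPM_BUILD_ROOT
      install -m 0755 -d $RPM_BUILD_ROOT/opt/my_project
      # copy everything %setup unpacked into the packaged directory
      cp -a * $RPM_BUILD_ROOT/opt/my_project/

      %files
      %defattr(-,root,root,-)
      /opt/my_project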

    Read the article

  • Setup SSH key per user for Git access

    - by ThatGuyJJ
    I'm setting up a site that will have multiple development instances running on the same server. Essentially, we'd have dev-a.whatever.com, dev-b.whatever.com, etc., all running off a single server. I want to give each user some level of SSH access in order to update and check in code from our Git repository and to manage files via SFTP. However, I want to restrict each user to their own site as well. So if you have access to dev-a.whatever.com, you don't also have access to dev-b.whatever.com, and so on. The restriction is already in place for FTP: if I log in via FTP as a certain user, I can't navigate outside my own site -- but if I grant SSH access to that account, I can immediately navigate to any file on the server over SFTP. Is rssh part of the solution? And how can I assign the correct SSH public key to the corresponding user? We're using Beanstalk for our Git repository management, if that makes any impact.
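    One route that avoids rssh altogether (assuming a reasonably recent OpenSSH, 4.9 or later, and one system account per dev site) is OpenSSH's built-in chrooted SFTP: each developer's public key goes into that account's ~/.ssh/authorized_keys as usual, and a Match block in sshd_config pins the account to its own document root. A sketch with made-up user and path names:

      Subsystem sftp internal-sftp

      Match User dev-a
          ChrootDirectory /var/www/dev-a.whatever.com
          ForceCommand internal-sftp
          AllowTcpForwarding no

    Note that ChrootDirectory must be owned by root and not group-writable, and this locks the account to SFTP only; pushes and pulls against the Beanstalk-hosted repository would keep using the keys registered with Beanstalk itself rather than keys on this server.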

    Read the article

  • Connect a linux server to network and access it from another computer browser

    - by user1732451
    I had a server at a hosting company and have brought it home. I need to connect it to my local network (wired, not Wi-Fi) and access the server from another computer on the network via a browser, as I did when it was at the hosting company. I don't have any knowledge of Linux; I just know how to type at the command line :) I think the server's IP configuration is one big mess, because it has passed from one hosting company to another... I have tried a lot of tutorials I found on the web, but nothing works - mainly because I don't know how to check whether I did something wrong. I just need to know how to connect it to the local network (D-Link router) and then access the server from another computer's browser. Thanks. Update: the server OS is CentOS release 4.8 (Final).
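    A minimal sketch of the wired setup, assuming the D-Link router hands out addresses via DHCP and the server's network card shows up as eth0 (both assumptions); on CentOS 4 the interface configuration lives under /etc/sysconfig/network-scripts:

      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      BOOTPROTO=dhcp
      ONBOOT=yes

      # apply the change and note the address the router assigned
      service network restart
      ifconfig eth0

    From another computer on the same network you would then browse to that address (e.g. http://192.168.0.105/), assuming the web server on the machine is actually running.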

    Read the article

  • Automate the process of looking for CVE (new vulnerabilities) related to our infrastructure

    - by skinp
    Is there any service where you simply list the services, programs and versions you use, and you automatically get alerted when a CVE comes out that affects them? Also, is there any other place to look for this kind of information - do some people publish security vulnerabilities somewhere other than CVE? In general, how do you keep up to date with what might be vulnerable in your infrastructure? Edit: Since I've been asked, we are a Unix shop with mostly Red Hat and some HP-UX. I would still prefer a high-level solution that is OS independent. What happens if we use software versions that are not in the official repositories of Red Hat/HP/... or are simply not supported by them?

    Read the article

  • Setting up a reverse proxy [on hold]

    - by mrwooster
    I am looking for the best solution for setting up a very low-maintenance reverse proxy for a production website (example.com). The setup is as follows: a blog hosted on Heroku will reside at example.com/blog; a static info page hosted on S3 will reside at example.com/signup; and a dynamic content management system, provided and hosted by an external vendor, will respond to requests for any other pages. The two solutions that come to mind are: use HAProxy, or ask the external vendor to reverse proxy requests for /blog and /signup. The obvious solution would be to use HAProxy but, if at all possible, I would like to avoid having to set up and maintain another server (especially such a critical one). I came across a company called Snapt which offers hosted HAProxy solutions, but it's more geared towards load balancing than reverse proxying. Option 2 is a possibility, but gives us very little control over changes and configuration. I see a lot of sites hosting blogs on /blog, so this must be fairly common practice - am I missing an obvious solution?
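    For reference, the HAProxy side of option 1 is only a handful of lines. A hedged sketch (backend host names are placeholders, a defaults section with "mode http" is assumed, and Heroku and S3 website endpoints generally also need the Host header rewritten to their own host names, which is omitted here):

      frontend www
          bind *:80
          acl is_blog   path_beg /blog
          acl is_signup path_beg /signup
          use_backend blog   if is_blog
          use_backend signup if is_signup
          default_backend cms

      backend blog
          server heroku myapp.herokuapp.com:80
      backend signup
          server s3site my-bucket.s3-website-us-east-1.amazonaws.com:80
      backend cms
          server vendor cms.vendor-host.example:80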

    Read the article

  • Laptop HDD not mounting.

    - by D3X
    I have a laptop with a broken (shorted-out) motherboard and a 640 GB HDD, and I want to recover my data from the hard disk. Every time I connect the drive using an external casing, it is detected by the Disk Management service but is not visible in My Computer, nor accessible through the command prompt. The drive itself is functioning: I can feel it spinning when it is connected via the casing, and its LED blinks. The data is there, but I am unable to access it. Please suggest ways of using this hard disk as a slave disk via the casing.

    Read the article

  • backup util for binary/media files. (to use with source control)

    - by acidzombie24
    I am using Git for my source control. I don't back up media such as GIFs, PNGs, etc. I am thinking that every time I tag a release it would be a good idea to back up the media files as well, but I don't want to make several copies of the same file each time I create a tag. I'd like an app that checks whether a file already exists and can restore everything to a version I choose. What utility might I use to do this? I'm using Windows 7.

    Read the article

  • How to force a drop of MSSQL Server database

    - by ng01
    I am trying to delete an MSSQL Server database, but I am having no luck. I have tried multiple things, such as: ALTER DATABASE my_database SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE my_database; GO I have also tried to right-click on it and delete it. This does not work; it tells me "Cannot drop database "ima_debts" because it is currently in use". The thing is, there is definitely no other user connected to it. In fact, I disabled TCP/IP for the instance and restarted it. Not even "Microsoft SQL Server Management Studio (Administrator)" is connected to it, and I have made sure to log in to "master". Why is it telling me it is currently in use? Is it possible to delete a directory or file from the file system to get rid of this database? Any help would be appreciated. Thanks.
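    For what it's worth, the usual pattern is SINGLE_USER rather than RESTRICTED_USER: RESTRICTED_USER still allows db_owner and sysadmin connections, and an Object Explorer pane or open query window in Management Studio is often the "user" holding the database open. A sketch (run from a query window connected to master; sp_who2 will show any session still using the database):

      USE master;
      GO
      ALTER DATABASE my_database SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
      GO
      DROP DATABASE my_database;
      GO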

    Read the article

  • xf86OpenConsole: Cannot find a free VT: Invalid argument

    - by Oliver Seeliger
    I'v set up an Ubuntu 12.04 from the precreated OpenVZ template. The host system is configured as follows: # $ cat /etc/issue Debian GNU/Linux 6.0 # $ uname -a Linux openvz-02 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64 GNU/Linux # $ apt-cache showpkg proxmox-ve-2.6.32 Package: proxmox-ve-2.6.32 # $ tail -n 3 /etc/apt/sources.list # PVE packages provided by proxmox.com deb http://download.proxmox.com/debian squeeze pve For a software project I need a minimal xserver and followed the instructions at https://help.ubuntu.com/community/ServerGUI. I simply installed the package xorg (xorg 1:7.6+7ubuntu7.1). Now when I 'startx' I get an error message Fatal server error: xf86OpenConsole: Cannot find a free VT: Invalid argument The complete output of startx # startx X.Org X Server 1.11.3 Release Date: 2011-12-16 X Protocol Version 11, Revision 0 Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu Current Operating System: Linux www 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64 Kernel command line: quiet Build Date: 29 August 2012 12:12:33AM xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) Current version of pixman: 0.24.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 20 08:46:04 2012 (==) Using system config directory "/usr/share/X11/xorg.conf.d" Fatal server error: xf86OpenConsole: Cannot find a free VT: Invalid argument Please consult the The X.Org Foundation support at http://wiki.x.org for help. Please also check the log file at "/var/log/Xorg.0.log" for additional information. ddxSigGiveUp: Closing log Server terminated with error (1). Closing log file.
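    One thing worth noting (an observation, not a confirmed fix): an OpenVZ container has no physical console and no virtual terminals of its own, so a full X server has nothing to grab when it asks for a free VT. If the goal is just an X display for the software rather than a desktop on a monitor, a common workaround is a virtual framebuffer, e.g.:

      apt-get install xvfb
      xvfb-run ./my-program        # my-program is a placeholder for whatever needs the display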

    Read the article

  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I’ve upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error: [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]    System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412    System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91    System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92 [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]    System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433    System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982    System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216 A search of the error message didn’t turn up anything helpful except that someone mentioned that the error messages was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid. This is a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn’t support EntityDataSource. According to the people in charge, support is planned but there’s no timeline for an EntityDataSource build that works with EF6.  I’m not sure what to do in the meantime. Should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web forms projects if they rely on that handy control. It might also help to spend a User voice vote here:  http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda

    Read the article

  • (Solved) ERROR: Packet source 'wlan0' failed to set channel 2: mac80211_setchannel() in Kismet and Ubuntu 12.10

    - by M. Cunille
    I have installed Ubuntu 12.10 in my computer with an Atheros AR5007 wireless card. I want to use Kismet but when I run it it starts displaying the message: ERROR: Packet source 'wlan0' failed to set channel X: mac80211_setchannel() It keeps displaying the same for every channel except channel 1. I have installed the compat-wireless-3.6.6-1 drivers and patched them with the following patch in order to use them with aircrack-ng. I have installed the latest version of Kismet in the git repository and I even tried with the svn but it keeps displaying the same error. I also have set the kismet.conf file with the nsource=wlan0 as it is the name of my wireless interface according to iwconfig : lo no wireless extensions. wlan0 IEEE 802.11bg ESSID:"XXXX" Mode:Managed Frequency:2.412 GHz Access Point: XX:XX:XX:XX:XX:XX Bit Rate=18 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=28/70 Signal level=-82 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:282 Missed beacon:0 I haven't found any answer since similar errors are supposed to be fixed with the latest Kismet release but this isn't my case. Any help will be appreciated. Thank you!

    Read the article

  • Desktop Fun: Steampunk Theme Wallpapers

    - by Asian Angel
    Do you enjoy imagining what it would be like to live in a world where technology and fantasy are mixed together? Then kick back and get ready to indulge in some great daydreaming with our Steampunk Theme wallpapers collection. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section.

    Read the article

  • ASP.NET MVC 3 - New Features

    - by imran_ku07
    Introduction:          ASP.NET MVC 3 just released by ASP.NET MVC team which includes some new features, some changes, some improvements and bug fixes. In this article, I will show you the new features of ASP.NET MVC 3. This will help you to get started using the new features of ASP.NET MVC 3. Full details of this announcement is available at Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix.   Description:       New Razor View Engine:              Razor view engine is one of the most coolest new feature in ASP.NET MVC 3. Razor is speeding things up just a little bit more. It is much smaller and lighter in size. Also it is very easy to learn. You can say ' write less, do more '. You can get start and learn more about Razor at Introducing “Razor” – a new view engine for ASP.NET.         Granular Request Validation:             Another biggest new feature in ASP.NET MVC 3 is Granular Request Validation. Default request validator will throw an exception when he see < followed by an exclamation(like <!) or < followed by the letters a through z(like <s) or & followed by a pound sign(like &#123) as a part of querystring, posted form, headers and cookie collection. In previous versions of ASP.NET MVC, you can control request validation using ValidateInputAttriubte. In ASP.NET MVC 3 you can control request validation at Model level by annotating your model properties with a new attribute called AllowHtmlAttribute. For details see Granular Request Validation in ASP.NET MVC 3.       Sessionless Controller Support:             Sessionless Controller is another great new feature in ASP.NET MVC 3. With Sessionless Controller you can easily control your session behavior for controllers. For example, you can make your HomeController's Session as Disabled or ReadOnly, allowing concurrent request execution for single user. For details see Concurrent Requests In ASP.NET MVC and HowTo: Sessionless Controller in MVC3 – what & and why?.       Unobtrusive Ajax and  Unobtrusive Client Side Validation is Supported:             Another cool new feature in ASP.NET MVC 3 is support for Unobtrusive Ajax and Unobtrusive Client Side Validation.  This feature allows separation of responsibilities within your web application by separating your html with your script. For details see Unobtrusive Ajax in ASP.NET MVC 3 and Unobtrusive Client Validation in ASP.NET MVC 3.       Dependency Resolver:             Dependency Resolver is another great feature of ASP.NET MVC 3. It allows you to register a dependency resolver that will be used by the framework. With this approach your application will not become tightly coupled and the dependency will be injected at run time. For details see ASP.NET MVC 3 Service Location.       New Helper Methods:             ASP.NET MVC 3 includes some helper methods of ASP.NET Web Pages technology that are used for common functionality. These helper methods includes: Chart, Crypto, WebGrid, WebImage and WebMail. For details of these helper methods, please see ASP.NET MVC 3 Release Notes. For using other helper methods of ASP.NET Web Pages see Using ASP.NET Web Pages Helpers in ASP.NET MVC.       Child Action Output Caching:             ASP.NET MVC 3 also includes another feature called Child Action Output Caching. This allows you to cache only a portion of the response when you are using Html.RenderAction or Html.Action. This cache can be varied by action name, action method signature and action method parameter values. For details see this.     
  RemoteAttribute:             ASP.NET MVC 3 allows you to validate a form field by making a remote server call through Ajax. This makes it very easy to perform remote validation at client side and quickly give the feedback to the user. For details see How to: Implement Remote Validation in ASP.NET MVC.       CompareAttribute:             ASP.NET MVC 3 includes a new validation attribute called CompareAttribute. CompareAttribute allows you to compare the values of two different properties of a model. For details see CompareAttribute in ASP.NET MVC 3.       Miscellaneous New Features:                    ASP.NET MVC 2 includes FormValueProvider, QueryStringValueProvider, RouteDataValueProvider and HttpFileCollectionValueProvider. ASP.NET MVC 3 adds two additional value providers, ChildActionValueProvider and JsonValueProvider(JsonValueProvider is not physically exist).  ChildActionValueProvider is used when you issue a child request using Html.Action and/or Html.RenderAction methods, so that your explicit parameter values in Html.Action and/or Html.RenderAction will always take precedence over other value providers. JsonValueProvider is used to model bind JSON data. For details see Sending JSON to an ASP.NET MVC Action Method Argument.           In ASP.NET MVC 3, a new property named FileExtensions added to the VirtualPathProviderViewEngine class. This property is used when looking up a view by path (and not by name), so that only views with a file extension contained in the list specified by this new property is considered. For details see VirtualPathProviderViewEngine.FileExtensions Property .           ASP.NET MVC 3 installation package also includes the NuGet Package Manager which will be automatically installed when you install ASP.NET MVC 3. NuGet makes it easy to install and update open source libraries and tools in Visual Studio. See this for details.           In ASP.NET MVC 2, client side validation will not trigger for overridden model properties. For example, if have you a Model that contains some overridden properties then client side validation will not trigger for overridden properties in ASP.NET MVC 2 but client side validation will work for overridden properties in ASP.NET MVC 3.           Client side validation is not supported for StringLengthAttribute.MinimumLength property in ASP.NET MVC 2. In ASP.NET MVC 3 client side validation will work for StringLengthAttribute.MinimumLength property.           ASP.NET MVC 3 includes new action results like HttpUnauthorizedResult, HttpNotFoundResult and HttpStatusCodeResult.           ASP.NET MVC 3 includes some new overloads of LabelFor and LabelForModel methods. For details see LabelExtensions.LabelForModel and LabelExtensions.LabelFor.           In ASP.NET MVC 3, IControllerFactory includes a new method GetControllerSessionBehavior. This method is used to get controller's session behavior. For details see IControllerFactory.GetControllerSessionBehavior Method.           In ASP.NET MVC 3, Controller class includes a new property ViewBag which is of type dynamic. This property allows you to access ViewData Dictionary using C # 4.0 dynamic features. For details see ControllerBase.ViewBag Property.           ModelMetadata includes a property AdditionalValues which is of type Dictionary. In ASP.NET MVC 3 you can populate this property using AdditionalMetadataAttribute. For details see AdditionalMetadataAttribute Class.           In ASP.NET MVC 3 you can also use MvcScaffolding to scaffold your Views and Controller. 
    For details see Scaffold your ASP.NET MVC 3 project with the MvcScaffolding package. If you want to convert your application from ASP.NET MVC 2 to ASP.NET MVC 3, there is an excellent tool that automatically converts an ASP.NET MVC 2 application to an ASP.NET MVC 3 application; for details see MVC 3 Project Upgrade Tool. In ASP.NET MVC 2 DisplayAttribute is not supported, but in ASP.NET MVC 3 DisplayAttribute works properly. ASP.NET MVC 3 also supports model-level validation via the new IValidatableObject interface. ASP.NET MVC 3 includes a new helper method, Html.Raw, which allows you to display unencoded HTML. Summary: In this article I showed you the new features of ASP.NET MVC 3. This should help you a lot when you start using ASP.NET MVC 3. I have also provided links where you can find further details. Hopefully you will enjoy this article too.
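    To make a couple of the items above concrete, here is a small hedged sketch (class, property and controller names are invented for illustration) combining granular request validation, CompareAttribute and RemoteAttribute on one view model:

      using System.ComponentModel.DataAnnotations;
      using System.Web.Mvc;

      public class RegisterModel
      {
          [Required]
          [Remote("IsUserNameAvailable", "Account")]  // validated via an Ajax call to AccountController
          public string UserName { get; set; }

          [Required]
          public string Password { get; set; }

          [Compare("Password")]                       // must match the Password property
          public string ConfirmPassword { get; set; }

          [AllowHtml]                                 // request validation is relaxed for this property only
          public string Biography { get; set; }
      }

      public class AccountController : Controller
      {
          // the action the Remote attribute calls back to
          public JsonResult IsUserNameAvailable(string userName)
          {
              bool available = userName != "admin";   // stand-in for a real lookup
              return Json(available, JsonRequestBehavior.AllowGet);
          }
      }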

    Read the article

  • Entity Framework Code-First to Provide Replacement for ASP.NET Profile Provider

    - by Ken Cox [MVP]
    A while back, I coordinated a project to add support for the SQL Table Profile Provider in ASP.NET 4 Web Applications.  We urged Microsoft to improve ASP.NET’s built-in Profile support so our workaround wouldn’t be necessary. Instead, Microsoft plans to provide a replacement for ASP.NET Profile in a forthcoming release. In response to my feature suggestion on Connect, Microsoft says we should look for something even better using Entity Framework: “When code-first is officially released the final piece of a full replacement of the ASP.NET Profile will have arrived. Once code-first for EF4 is released, developers will have a really easy and very approachable way to create any arbitrary class, and automatically have the .NET Framework create a table to provide storage for that class. Furthermore developer will also have full LINQ-query capabilities against code-first classes. “ The downside is that there won’t be a way to retrofit this Profile replacement to pre- ASP.NET 4 Web applications. At least there’ll still be the MVP workaround code. It looks like it’s time for me to dig into a CTP of EF Code-First to see what’s available.   Scott Guthrie has been blogging about Code-First Development with Entity Framework 4. It’s not clear when the EF Code-First is coming, but my guess is that it’ll be part of the VS 2010/.NET 4 service pack.
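    As a rough illustration of where this is heading (my own sketch against the EF code-first CTP/EF 4.1 bits, not Microsoft's final Profile replacement), a "profile" simply becomes an ordinary class plus a DbContext, and the framework creates the backing table:

      using System.Data.Entity;   // EntityFramework (code-first) package
      using System.Linq;

      public class UserProfile
      {
          public int Id { get; set; }
          public string UserName { get; set; }
          public string Theme { get; set; }
          public int ItemsPerPage { get; set; }
      }

      public class ProfileContext : DbContext
      {
          public DbSet<UserProfile> Profiles { get; set; }   // table generated automatically
      }

      // usage: LINQ queries instead of Profile.GetPropertyValue calls
      // var profile = new ProfileContext().Profiles.Single(p => p.UserName == userName);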

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post of building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based in a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid the duplicate attributes since they now leave in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
The new features that TFS 2010 gives you for making these types of customizations to your build process are quite easy to use once you get over the initial learning curve.

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I have written the following blog post that talks about the advantage and disadvantage of Shrinking and why one should not be Shrinking a file SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside the SQL Server, it is typical that SQl Server creates two physical files in the Operating System: one with .MDF Extension, and another with .LDF Extension. .MDF is called as Primary Data File. .LDF is called as Transactional Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have an extension of .NDF, which is called as Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called as a secondary data file?” and, “Why is .MDF file called as a primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with a .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. An example for Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s a system data about the data. Because Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data file because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (This file is present in the Primary File group), by mentioning that you want to create this object under the Primary File Group. 
Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called as the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File(s). There are many advantages of storing data in different files that are under different file groups. You can put your read only in the tables in one file (file group) and read-write tables in another file (file group) and take a backup of only the file group that has read the write data, so that you can avoid taking the backup of a read-only data that cannot be altered. Creating additional files in different physical hard disks also improves I/O performance. A real-time scenario where we use Files could be this one: Let’s say you have created a database called MYDB in the D-Drive which has a 50 GB space. You also have 1 Database File (.MDF) and 1 Log File on D-Drive and suppose that all of that 50 GB space has been used up and you do not have any free space left but you still want to add an additional space to the database. One easy option would be to add one more physical hard disk to the server, add new data file to MYDB database and create this new data file in a new hard disk then move some of the objects from one file to another, and put the file group under which you added new file as default File group, so that any new object that is created gets into the new files, unless specified. Now that we got a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature as “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove an empty space from database files and release the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has a free space of about 20 GB, which means 30GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by the SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and to release an empty space to Operating System, and MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (you don’t have an extra disk space to set Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using the DBCC SHRINKDATABASE (NorthWind, 10) ? 
A: According to Books Online (For SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on the Google once. The reasons for doing so have many advantages: 1. If someone else already has had this question before, chances that it is already answered are more than 50 %. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between Shrinking the Database using DBCC command like the one above & shrinking it from the Enterprise Manager Console by Right-Clicking the database, going to TASKS & then selecting SHRINK Option, on a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference, both will work the same way, one advantage of using this command from query analyzer is, your console won’t be freezed. You can do perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: .NDF File is a secondary data file. You never heard of it because when database is created, SQL Server creates database by default with only 1 data file (.MDF) and 1 log file (.LDF) or however your model database has been setup, because a model database is a template used every time you create a new database using the CREATE DATABASE Command. Unless you have added an extra data file, you will not see it. This file is used by the SQL Server to store data which are saved by the users. Hope this information helps. I would like to as the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
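    For readers who want to see the file-level variant discussed above, a minimal example follows (the database and logical file names are placeholders, and, as stressed above, shrinking should be a deliberate, exceptional operation rather than routine maintenance):

      USE MYDB;
      GO
      -- list logical file names and current sizes (size is reported in 8-KB pages)
      SELECT name, size/128 AS size_mb FROM sys.database_files;
      GO
      -- shrink one data file down to a 30 GB target
      DBCC SHRINKFILE (MYDB_Data, 30720);
      GO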

    Read the article

  • Comments syntax for Idoc Script

    - by kyle.hatlestad
    Maybe this is widely known and I'm late to the party, but I just ran across the syntax for making comments in Idoc Script. It's something I've been hoping to see for a long time, and it looks like it quietly snuck into the 10gR3 release. For comments in Idoc Script, you simply [[% surround your comments in these symbols. %]] They can be on the same line or span multiple lines. If you look in the documentation, it still describes making comments by assigning the comment text to a variable. Well, that's certainly not an ideal approach: you're stuffing your comment into an actual variable, it's taking up memory, and you have to watch double quotes in your comment. A somewhat better variation on the old method at least avoids assigning anything to a variable and worrying about quotes, but it's still not great. Unfortunately, this new syntax only works in places that use the Idoc format. It can't be used in Idoc files that get indexed (.hcsp & .hcsf) and use the <!--$...--> format. For those, you'll need to continue using the older methods. While on the topic, I thought I would highlight a great plug-in for Notepad++ that Arnoud Koot here at Oracle wrote for Idoc Script. It does script highlighting as well as type-ahead/auto-completion for common variables, functions, and services. For some reason, I can never seem to remember if it's DOC_INFO_LATESTRELEASE or DOC_INFO_LATEST_RELEASE, so this certainly comes in handy. I've updated his plug-in to use this new comments syntax. You can download a copy of the plug-in here, which includes installation instructions.
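    As a quick illustration, here is a small sketch of the new comment syntax inside an Idoc Script fragment. The variable name productName and the greeting are made up for the example; only the [[% ... %]] comment markers come from the feature described above.

    [[% A single-line comment before an assignment %]]
    <$productName = "Content Server"$>

    [[% This comment
        spans multiple lines and is ignored
        by the Idoc Script engine. %]]
    <$if productName$>
        Hello from <$productName$>.
    <$endif$>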

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why
    Goal 1 = Help others: have at most a 24-hour response turnaround for internal (from colleague) emails, typically achieving a same-day response. Goal 2 = Help projects: never implicitly pass on, or miss, an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 means colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone who cares, and perceive you as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.

    B. DON'T: Leave unread email lurking around
    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. you are new to the job, or not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes a large volume of incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.

    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). This is often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end with them saying things like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is your Deleted Items folder.

    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (email you have actually read, but haven't dealt with completely, so you marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not superhuman; you will forget. This is a key distinction: reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is, for example, writing an email of your own (e.g. to reply or forward), taking a non-email-related action (e.g. creating a calendar entry, doing something on some website), reading it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keeping it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes you can (and want to) do the thing the email requires immediately after reading it. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes).
    Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" it as to-be-processed-later (while still leaving it as 'read' in Outlook). See the later section for how.

    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved
    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message, no further rules get processed):

    "personal" – Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up in my work email account go to a dedicated folder; that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 "Conversation To Folder" Quick Step to let the slippage occur only once per conversation, and then update my rules.

    "External" and "ViaBlog" – The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft), and they are filed accordingly.

    "invites" – I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check), i.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: this folder is the only one that shows the total number of items in it, instead of the total unread.

    "Inbox" – The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast; all other emails cannot interrupt.

    "ToMe++" – Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).

    "CC" – Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me, so they are not as urgent (and if they are and someone follows up with me, they'll receive a link to this).

    "@ XYZ" – Emails to aliases that are about projects I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in the commitments I get measured on at the end of the year.

    "Z Mass", with subfolders under it per distribution list (DL) – Emails to aliases that are about topics I am interested in but do not formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.

    "Admin" folder, which resides under the "Z Mass" folder – Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.

    "BCC" folder, which resides under "Z Mass" – Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).

    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember, that doesn't mean each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders.
    Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

    D. DO: Use Outlook Search Folders in combination with categories
    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:

    ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
    Action – no email is required, but I need to do something.
    ReadLater – no email is required from the quick scan, but this is too long to read fully now, so it needs to be read later.
    WaitingFor – the email reports an intermediate status and 'promises' a future email update. Need to track.
    SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.

    For all these 'next steps' use Outlook categories (right-click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email that takes more than 2 minutes to deal with, process the search folders, starting with ToReply and Action.

    E. DO: Get into a Routine
    Now that you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing the most urgent items in the search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

    F. Other resources
    Official Outlook help on: creating custom action rules, managing e-mail messages with rules, and creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular Move Conversation to Folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post are welcome at the original blog.

    Read the article

  • In 12.04: Failed to load session 'ubuntu' [closed]

    - by Stéphane
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do?

    I'm using 12.04 beta. Today I was prompted to install some updates, which I did, followed by a reboot. On reboot, X starts, but all I see is a single dialog window in the middle of the screen with the text:

    Failed to load session 'ubuntu'

    I don't even see the mouse, or the login screen, just this 1 line of text. When I hit CTRL+ALT+F1 to run dist-upgrade from a command prompt, I get this:

    The following packages have been kept back: libgnome-desktop-3-2

    So to see why it was kept back, I tried the following:

    $ sudo apt-get install libgnome-desktop-3-2
    ...
    The following packages have unmet dependencies:
     libgnome-desktop-3-2 : Depends: gnome-desktop3-data (= 3.3.92-0ubuntu1) but 3.3.91-0ubuntu2 is to be installed
    E: Unable to correct problems, you have held broken packages.

    Anyone else seeing this, or have an idea how to fix it? If you're going to close it as a duplicate, can you please link to the duplicate question?

    Read the article

  • Oracle Announces Oracle Cloud Office and Oracle Open Office 3.3

    - by Paulo Folgado
    Oracle today introduced Oracle Cloud Office and Oracle Open Office 3.3, two complete, open standards-based office productivity suites for the desktop, web and mobile devices - helping users significantly improve productivity, reduce costs and achieve greater innovation across the enterprise.

    Oracle Cloud Office 1.0 is a web and mobile office suite that enables web 2.0-style collaboration and mobile document access. Compatibility with Microsoft Office and integration with Oracle Open Office enable rich and seamless offline editing of complex presentations, text and spreadsheet documents. Oracle Open Office 3.3 includes new enterprise connectors to Oracle Business Intelligence, Oracle E-Business Suite, other Oracle Applications and Microsoft SharePoint, to allow for fast, seamless integration into existing enterprise software stacks. In addition, it adds increased stability, compatibility and performance at up to five times lower license cost compared to Microsoft Office. Based on the Open Document Format (ODF) and open web standards, Oracle Office enables users to share files on any system, as it is compatible with both legacy Microsoft Office documents and modern web 2.0 publishing. The Oracle Office APIs and open standards-based approach provide IT users with flexibility, lower short- and long-term costs and freedom from vendor lock-in - enabling organizations to build a complete Open Standard Office Stack. If you're interested in learning more, read today's press release or visit oracle.com/office.

    Read the article
