Search Results

Search found 3443 results on 138 pages for 'official repositories'.

Page 79/138

  • What's the deal with rubygems on Debian? It's different and strange.

    - by JSW
    I've noticed at least the following oddities around rubygems on Debian (5.0 lenny in my case):
    - Packages go into a different installation location: /var/lib/gems vs /usr/lib/ruby/gems.
    - The Debian package is rubygems 1.3.6, and updating rubygems to the latest version (1.3.7) doesn't work:
          $ sudo gem update --system
          ERROR: While executing gem ... (RuntimeError)
              gem update --system is disabled on Debian. RubyGems can be updated using the official Debian repositories by aptitude or apt-get.
    - Not all gems appear to work like they do on other systems. For instance, when installing Phusion Passenger, it did not detect the "rack" gem even though it was definitely installed.
    Manually installing rubygems using the source tarball and reinstalling all my gems (to /usr/lib/ruby/gems) made my problems go away. What's the deal? Why is Debian's package different?

    Read the article

  • Better Version Control (Distributed) - Minimum impact on sources - always possible to update

    - by Olav
    I am f...fed up with Subversion. I need a version control system that:
    - Can be used without cluttering the sources with embedded files (like the Subversion .svn directories), or having to check in and then check out (if you want to version control live web-site files, for example).
    - Always makes it possible to bring the repository quickly up to date, whatever I have done (without resolving conflicts or adding files first, etc.).
    - Ideally makes it possible to merge repositories that started out as separate.
    I think it should be a distributed one. Git seems to be the lingua franca, but there are also Mercurial and Bazaar, which should have some advantages since they exist :-)

    Read the article

  • Where should I define Enums?

    - by Ciel
    Hi: I'm setting up a new app, with a Repository layer/assembly, a Services layer/assembly, and a UI assembly, so I end up with namespaces such as App.UI, App.Biz.Services, and App.Data.Repositories. I also have enums for arguments that are used by all three layers. The only place that makes sense is the cross-cutting (Common) assembly: defining them in the Data layer is too low (the UI should have no direct reference to it), and defining them in Services is too high (the Repository layer shouldn't reference upwards). But which namespace in Common? Namespaces should mostly be used to express concerns rather than types. I've always used something like namespace App.Common.Enums {...}, but it has always felt like a hack that works for me but not in a large org where everybody is generating enums; if we put them all in an Enums folder it's going to make the code harder to understand later. Any suggestions?
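
    One hedged alternative to a catch-all App.Common.Enums namespace is to keep the enums in the shared (Common) assembly but group them by the concern they describe. A minimal sketch only; the App.Common.Ordering and App.Common.Security namespaces and the enum members are illustrative, not taken from the question:

        // Shared assembly referenced by UI, Services and Repositories alike.
        // Enums sit next to the concern they describe rather than in a generic
        // "Enums" bucket, so the namespace still communicates intent.
        namespace App.Common.Ordering
        {
            public enum OrderStatus     // hypothetical example
            {
                Draft = 0,
                Submitted = 1,
                Shipped = 2,
                Cancelled = 3
            }
        }

        namespace App.Common.Security
        {
            public enum AccessLevel     // hypothetical example
            {
                Reader,
                Contributor,
                Administrator
            }
        }

    Each layer then references App.Common, and the folder layout mirrors the concern-based namespaces instead of a single Enums folder.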

    Read the article

  • Hudson + gitolite + virtual host on staging server

    - by takeshin
    I have an Ubuntu server which I want to be my continuous integration server (for the Zend Application based projects) and the staging server as well. The team is pushing source files to the repository: /home/git/repositories/testing.git Then Hudson does the build, and the master branch is exported (maybe cloned is a better word) by the Git Hudson plugin to: /var/lib/hudson/jobs/test/workspace/ The workspace contains a .git folder as well, which is not necessary on my staging website. How do you set up a virtual host to see the staging version of the content of the repository? Does the virtual host point to the workspace, or shall I export the files to another directory? What about permissions and security? Hudson is the owner of all the workspace files. Do I have to do some post-build actions to set up access? P.S. If this question is more appropriate for Server Fault, please migrate.

    Read the article

  • Proper way to build a data Repository

    - by rockinthesixstring
    I'm working on using the Repository methodology in my app and I have a very fundamental question. When I build my model, I have a Data.dbml file and I'm putting my repositories in the same folder with it, i.e.:
        Data.dbml
        IUserRepository.cs
        UserRepository.cs
    My question is simple. Is it better to build the folder structure like that above, or is it OK to simply put my interface in with UserRepository.cs?
        Data.dbml
        UserRepository.cs   (contains both the interface and the class)
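
    For comparison, here is a minimal sketch of the separate-file layout. It assumes Data.dbml generates a LINQ-to-SQL DataContext (called DataDataContext here) with a Users table, and the User entity and its UserID property are assumptions for illustration:

        using System.Linq;

        // IUserRepository.cs : the contract that the Services layer codes against.
        public interface IUserRepository
        {
            User GetById(int id);
            void Add(User user);
        }

        // UserRepository.cs : the LINQ-to-SQL backed implementation.
        public class UserRepository : IUserRepository
        {
            private readonly DataDataContext _context = new DataDataContext();

            public User GetById(int id)
            {
                return _context.Users.SingleOrDefault(u => u.UserID == id);
            }

            public void Add(User user)
            {
                _context.Users.InsertOnSubmit(user);
                _context.SubmitChanges();
            }
        }

    Keeping IUserRepository in its own file (or its own assembly) makes it easier to mock in tests and keeps consumers from depending on the LINQ-to-SQL details, which is usually the argument for the first layout.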

    Read the article

  • How to do rolling balances in Linq2SQL

    - by David Liddle
    Given an account with a list of transactions, I would like to output a query that shows each transaction with the rolling balance (just like you would see on an online banking account).
        TRANSACTIONS
        - ID
        - DATE
        - AMOUNT
    Here is what I created in T-SQL; however, I was wondering if this can be translated to Linq2SQL code?
        select T.ID,
               convert(char(10), T.DATE, 101) as 'DATE',
               T.AMOUNT,
               (select sum(O.AMOUNT)
                from TRANSACTIONS O
                where O.DATE < T.DATE or (O.DATE = T.DATE and O.ID <= T.ID)) 'BALANCE'
        from TRANSACTIONS as T
        where T.DATE between @pStartDate and @pEndDate
        order by T.DATE, T.ID
    Alternatively, I guess my other option is to just call a stored procedure for this kind of result. However, I have Services which call Repositories and didn't really want to put the sproc call in the Repository.
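
    A hedged LINQ-to-SQL translation of that query, as a sketch: it assumes a generated DataContext (called MyDataContext here) whose Transactions table maps ID, Date and Amount properties, with Amount as a decimal.

        using System;
        using System.Linq;

        public static class RollingBalanceQuery
        {
            // Mirrors the T-SQL: a correlated subquery sums every transaction dated
            // earlier, or dated the same with a lower-or-equal ID.
            public static void Print(MyDataContext db, DateTime startDate, DateTime endDate)
            {
                var rows =
                    from t in db.Transactions
                    where t.Date >= startDate && t.Date <= endDate
                    orderby t.Date, t.ID
                    select new
                    {
                        t.ID,
                        t.Date,
                        t.Amount,
                        Balance = db.Transactions
                            .Where(o => o.Date < t.Date ||
                                        (o.Date == t.Date && o.ID <= t.ID))
                            .Sum(o => (decimal?)o.Amount) ?? 0m
                    };

                foreach (var row in rows)
                {
                    Console.WriteLine("{0}  {1:MM/dd/yyyy}  {2}  {3}",
                                      row.ID, row.Date, row.Amount, row.Balance);
                }
            }
        }

    LINQ to SQL should translate the whole expression, including the nested Sum, into a single SQL statement, so no stored procedure is needed; if the generated SQL turns out to be too slow, the sproc-behind-a-repository-method route is still reasonable.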

    Read the article

  • Which entities should be Aggregate Roots?

    - by MylesRip
    If Book aggregates Chapter which in turn aggregates Page, then what should be the aggregate root? One possibility might be: Book is an aggregate root with Chapter as a leaf and Chapter is an aggregate with Page as a leaf. In this scenario, Chapter is a leaf in one aggregate and a root in another. Is this okay? Would it make sense in this scenario to have two repositories, one for Book and another for Chapter? If so, then couldn't the Chapter repository be used to circumvent the fact that access to Chapter should only happen via Book? What would be the best way to handle a situation like this?
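
    One way to read the situation (a hedged sketch, not the only valid aggregate design) is to make Book the sole aggregate root and let Chapter and Page be reachable only through it, so only a BookRepository exists and the access rule cannot be bypassed. Type and member names below are illustrative:

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative only: Book is the single aggregate root, so there is one
        // BookRepository and no ChapterRepository to sneak past the Book.
        public class Book
        {
            private readonly List<Chapter> _chapters = new List<Chapter>();

            public IEnumerable<Chapter> Chapters { get { return _chapters; } }

            public Chapter AddChapter(string title)
            {
                var chapter = new Chapter(title);
                _chapters.Add(chapter);
                return chapter;
            }

            public Chapter GetChapter(int number)
            {
                // Chapters are only handed out by their Book.
                return _chapters.ElementAt(number - 1);
            }
        }

        public class Chapter
        {
            private readonly List<Page> _pages = new List<Page>();

            internal Chapter(string title) { Title = title; }

            public string Title { get; private set; }
            public IEnumerable<Page> Pages { get { return _pages; } }

            public void AddPage(string text) { _pages.Add(new Page(text)); }
        }

        public class Page
        {
            internal Page(string text) { Text = text; }
            public string Text { get; private set; }
        }

    If chapters need to be loaded or edited independently of their book (for performance, say), that is usually the signal to promote Chapter to its own aggregate root with its own repository and accept the looser invariant.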

    Read the article

  • Set plugin’s version on the command line in maven 2

    - by larry cai
    I generated the default quickstart Maven example and typed mvn checkstyle:checkstyle; it always tries to use the latest SNAPSHOT version (probably something is wrong in my Nexus server). How can I set a plugin's version on the command line in Maven 2, e.g. 2.5 for checkstyle instead of 2.6-SNAPSHOT?
        C:\HelloWorld>mvn checkstyle:checkstyle
        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'checkstyle'.
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error building POM (may not be this project's POM).
        Project ID: org.apache.maven.plugins:maven-checkstyle-plugin
        Reason: Error getting POM for 'org.apache.maven.plugins:maven-checkstyle-plugin' from the repository: Failed to resolve artifact, possibly due to a repository list that is not appropriately equipped for this artifact's metadata.
          org.apache.maven.plugins:maven-checkstyle-plugin:pom:2.6-SNAPSHOT
        from the specified remote repositories:
          nexus (http://localhost:9081/nexus/content/groups/public)
        for project org.apache.maven.plugins:maven-checkstyle-plugin
    I guessed it could be "mvn checkstyle:2.5:checkstyle", but unfortunately it is not. Surely if I pin the plugin version in pom.xml it will work, but I want to see how the command line can do it.

    Read the article

  • Sharepoint Document Upload Page - Passing URL Variables?

    - by Corey O.
    Throughout my SharePoint site, I have several document repositories that are tied to primary keys from an external database. I have added custom columns in the document library metadata fields so that we will know which SharePoint documents correspond with which table entries. As a requirement, we need to have document uploads that have these fields automatically populated. For instance, I'd like to have the following url: ./Upload.aspx?ClassID=2&SystemID=63 So that when you upload any documents to this library, it automatically adds the ClassID and SystemID values to the corresponding ClassID and SystemID columns outlined in the SharePoint document library fields. Is there any quick or easy way to do this, or will I have to completely rewrite the Upload.aspx script from scratch?

    Read the article

  • How to force Maven to download maven-metadata.xml from the central repository?

    - by Alceu Costa
    What I want to do is to force Maven to download the 'maven-metadata.xml' for each artifact that I have in my local repository. The default Maven behaviour is to download only metadata from remote repositories (see this question). Why I want to do that: Currently I have a remote repository running in a build machine. By remote repository I mean a directory located in the build machine that contains all dependencies that I need to build my Maven projects. Note that I'm not using a repository manager like Nexus, the repository is just a copy of a local repository that I have uploaded to my build machine. However, since my local repository did not contain the 'maven-metadata.xml' files, these metadata files are also missing in the build machine repository. If I could retrieve the metadata files from the central repository, then it would be possible to upload a working remote repository to my build machine.

    Read the article

  • How to avoid Mercurial repo corruption when sharing a repository between Windows/Mac?

    - by Stabledog
    I have several projects which are shared between Windows and Mac. The dev machine is a Mac running Parallels: the files are stored on the Mac side, and the source is shared to the Windows side. This is very convenient, as I can switch back and forth between Windows and Mac tools rapidly without having to sync files. Recently I switched from Subversion to Mercurial, and now I'm having problems with the Mercurial repository becoming corrupt if I use the Windows tools to add/update, etc. I have to be very careful about which operations on the Windows side are safe (mainly the read-only stuff) and of course I forget rather regularly. Does anybody know why the corruption occurs? I thought Mercurial repositories were platform-agnostic. Any ideas how to prevent it without removing the Windows tools entirely?

    Read the article

  • Question About NerdDinner Controller Constructors

    - by Gavin Draper
    I've been looking at the Nerd Dinner app, more specifically how it handles its unit tests. The following constructors for the RSVPController are confusing me slightly:
        public RSVPController() : this(new DinnerRepository()) { }

        public RSVPController(IDinnerRepository repository)
        {
            dinnerRepository = repository;
        }
    From what I can tell, the second one is used by the unit tests so it can use fake repositories. What I can't work out is what the first constructor does. It doesn't seem to ever set the dinnerRepository variable; it seems to imply it's inheriting from something, but I really don't get it. Can anyone explain? Thanks
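
    For what it's worth, ": this(...)" is C# constructor chaining rather than inheritance: the parameterless constructor forwards a new DinnerRepository to the one-argument constructor, which is what sets the field. A minimal self-contained sketch of the same pattern (the types are stand-ins, not the actual NerdDinner classes):

        using System;

        public interface IDinnerRepository { }
        public class DinnerRepository : IDinnerRepository { }

        public class RsvpControllerSketch
        {
            private readonly IDinnerRepository dinnerRepository;

            // Production path: ": this(...)" chains to the constructor below,
            // passing it a real DinnerRepository, so the field does get set.
            public RsvpControllerSketch() : this(new DinnerRepository()) { }

            // Test path: unit tests call this directly with a fake repository.
            public RsvpControllerSketch(IDinnerRepository repository)
            {
                dinnerRepository = repository;
            }

            public static void Main()
            {
                var controller = new RsvpControllerSketch();
                Console.WriteLine(controller.dinnerRepository != null); // prints True
            }
        }

    So in production the first constructor supplies the real repository (the pattern is sometimes called poor man's dependency injection), while tests call the second constructor directly with a fake.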

    Read the article

  • Browsing to Subversion repository location indicated in Apache conf file

    - by sonrael
    I have Subversion 1.6.6 and Apache 2.2.14 installed and working. I have made the following changes to the Apache httpd.conf file:
        # Uncommented by me for Subversion installation
        LoadModule dav_module modules/mod_dav.so
        LoadModule dav_fs_module modules/mod_dav_fs.so
        # Added by me for Subversion installation
        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so
        <Location /svn>
            DAV svn
            SVNPath C:\Users\RED\Repositories
        </Location>
    When I navigate to localhost, Apache is working properly, but if I try to go to localhost/svn the browser just hangs waiting for a response from the server. What is supposed to happen here? Does it have to do with the fact that I'm behind a wireless router on a dynamic IP address (although I can access localhost no problem, so...)? As you can see, I'm on Windows (Vista).

    Read the article

  • 'which' not displaying location of the executable actually run

    - by Nick
    I have a version of SVN on my system at /usr/bin/svn. This is too old to use with some repositories, so I compiled a newer version at /home/user/built/bin/svn, which works fine. I added this to my PATH so it should be run first. Typing which svn produces /home/user/built/bin/svn; however, typing svn --version reveals that it is still using the old version. If I run /home/user/built/bin/svn --version then the correct version is displayed. Since the custom version is first in my $PATH, and which lists it first, why is the older version being invoked when I run svn? I thought which used your $PATH to find executables in the same fashion as the shell? [Edit] type gives:
        $ type svn
        svn is hashed (/usr/bin/svn)
    Is this the problem?

    Read the article

  • Find git branch that got pushed to a bare repository.

    - by Senthil A Kumar
    Let's say we have 2 repositories: one containing the actual data repo, and a bare repository which is loaded with deltas from the actual data repository by doing a git push from the data repo to the bare repo. I hope the model I'm using here is clear. I create clones by cloning the bare repo, and I will be pushing from the branches in my local clone to the branches in the bare repository. When I push data from my branch to the bare repo, the data is automatically synced to the data repo by a hook. The question I have: is there a way to find out from which branch code has come into the bare repo? I can see the source and target branch during a git push, but after pushing, can I see from the logs or in some other way which branch and repository the data was pushed from? If there are 5 developers pushing to the bare repo, can I find out in the bare repo which branch and clone the code was pushed from?

    Read the article

  • I can't use Archiva as a proxy for my remote repository

    - by user1553915
    I want to install Archiva to manage my Maven repositories. I added a new internal repository called "public" and a new remote repository, which is my company's repository. Then I configured the proxy connector to let the "public" repository act as a proxy of the remote one. But when I enter the address "http://localhost:8084/archiva/repository/public/.../....pom" in the web browser, the error "Unable to fetch artifact resource" occurs. However, if I replace the remote repository with another one such as "http://repository.codehaus.org/", the proxy works. I don't know why this happens; is there something wrong with my remote repository address?

    Read the article

  • Trouble with SVN and Filename 'changes'.

    - by Stacey
    I am programming in Visual Studio 2010, using TortoiseSVN and VisualSVN as my client to connect to SVN repositories. I am having a fairly frequent problem, though, with the whole SVN thing in general. One thing that keeps cropping up is that if I make changes to files - namely filenames, or move them to new folders, etc. - I end up getting all kinds of conflicts with the repository and it just causes all sorts of strange errors. I understand the importance of version control and check-in/check-out access like this, but what do most of you do to deal with this kind of thing? I mean, I've tried doing the whole 'Remove from Subversion', change my file, then 'Add to Subversion' thing, and it just doesn't seem to do the job very well. This is especially frustrating when working on web projects where filenames can change very frequently as a project evolves and becomes multifaceted. Are there any standard ways to deal with this kind of thing, or is it just one of the flaws of SVN in general?

    Read the article

  • Windsor + NHibernate + ISession + MVC

    - by dbones
    Hi, I am trying to get Windsor to give me an instance of ISession for each request, which should be injected into all the repositories. Here is my container setup:
        container.AddFacility<FactorySupportFacility>().Register(
            Component.For<ISessionFactory>()
                .Instance(NHibernateHelper.GetSessionFactory())
                .LifeStyle.Singleton,
            Component.For<ISession>()
                .LifeStyle.Transient
                .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession())
        );

        // add to the container
        container.Register(
            Component.For<IActionInvoker>().ImplementedBy<WindsorActionInvoker>(),
            Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHibernateRepository<>))
        );
    It's based upon a StructureMap post here: http://www.kevinwilliampang.com/2010/04/06/setting-up-asp-net-mvc-with-fluent-nhibernate-and-structuremap/ However, when this is run, a new Session is created for every object it is injected into. What am I missing? Thanks in advance. (FYI, the NHibernateHelper sets up the config for NHibernate.)
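
    With the registration above, ISession is transient, so Windsor hands every resolved repository its own session. A common fix is to scope the session to the web request instead; the following is a hedged sketch that assumes the Windsor version in use supports the per-web-request lifestyle (which also needs Windsor's PerWebRequestLifestyleModule HTTP module registered in web.config) and reuses the question's NHibernateHelper:

        using Castle.MicroKernel.Registration;
        using Castle.Windsor;
        using NHibernate;

        public static class WindsorSessionSetup
        {
            // Sketch only, not a verified drop-in replacement.
            public static void RegisterSession(IWindsorContainer container)
            {
                container.Register(
                    Component.For<ISessionFactory>()
                        .Instance(NHibernateHelper.GetSessionFactory())
                        .LifeStyle.Singleton,

                    // Same factory method as the question, but scoped to the web
                    // request so every repository resolved during one request
                    // shares a single ISession.
                    Component.For<ISession>()
                        .UsingFactoryMethod(k => k.Resolve<ISessionFactory>().OpenSession())
                        .LifeStyle.PerWebRequest);   // instead of Transient
            }
        }

    Each request then opens one ISession that all repositories resolved during that request share, which is effectively the session-per-request behaviour the linked StructureMap post is after.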

    Read the article

  • Distributed version control for HUGE projects - is it feasible?

    - by Vilx-
    We're pretty happy with SVN right now, but Joel's tutorial intrigued me. So I was wondering: would it be feasible in our situation too? The thing is, our SVN repository is HUGE. The software itself has a 15-year-old legacy and has survived several different source control systems already. There are over 68,000 revisions (changesets), the source itself takes up over 100MB, and I can't even begin to guess how many GB the whole repository consumes. The problem then is simple: a clone of the whole repository would probably take ages to make, and would consume far more space on the drive than is remotely sane. And since the very point of distributed version control is to have as many repositories as needed, I'm starting to have doubts. How does Mercurial (or any other distributed version control system) deal with this? Or are they unusable for such huge projects?

    Read the article

  • One SVN repository or many?

    - by nickf
    If you have multiple, unrelated projects, is it a good idea to put them in the same repository?
        myRepo/projectA/trunk
        myRepo/projectA/tags
        myRepo/projectA/branches
        myRepo/projectB/trunk
        myRepo/projectB/tags
        myRepo/projectB/branches
    Or would you create new repositories for each?
        myRepoA/trunk
        myRepoA/tags
        myRepoA/branches
        myRepoB/trunk
        myRepoB/tags
        myRepoB/branches
    What are the pros and cons of each? All I can currently think of is that you get mixed revision numbers (so what?), and that you can't use svn:externals unless the repository is actually external (I think?). The reason I ask is that I'm considering consolidating my multiple repos into one, since my SVN host has started charging per repo.

    Read the article

  • DDD: MailService.SendNotificationToUser() or User.Notify()?

    - by cfs
    I seem to stumble on problem after problem when giving my entities behavior. I have a system where a user gets a notification when someone comments on his article; right now it is via e-mail. I'm struggling with how to implement this the DDD way.
    Option 1: The User entity has a Notify method, User.Notify(), which uses C#'s built-in classes to send the notification via e-mail. The problem with having this in the domain is that it is technology specific, and how a user is notified might change in the future. I feel this belongs in infrastructure, but then how can a user have behavior?
    Option 2: I create a service, NotificationService.Notify(User), which uses C#'s built-in classes to send an e-mail. The pro is that the service could be an Application Service, and as far as I know an application service can use the infrastructure and call things like System.Net.Mail and repositories for that sake.
    How would you implement this?
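
    One common shape for this (a hedged sketch, not the only DDD answer): the domain owns a technology-agnostic notification abstraction, the infrastructure implements it with System.Net.Mail, and an application service invokes it when a comment is added. All type names and the sender address are illustrative:

        using System.Net.Mail;

        // Domain layer: entity plus a technology-agnostic abstraction.
        public class User
        {
            public string Name { get; set; }
            public string Email { get; set; }
        }

        public interface INotificationService
        {
            void NotifyCommentAdded(User author, string articleTitle);
        }

        // Infrastructure layer: the only code that knows notifications are e-mails today.
        public class EmailNotificationService : INotificationService
        {
            public void NotifyCommentAdded(User author, string articleTitle)
            {
                var message = new MailMessage(
                    "noreply@example.com",                      // hypothetical sender address
                    author.Email,
                    "New comment on " + articleTitle,
                    "Hi " + author.Name + ", someone commented on your article.");

                new SmtpClient().Send(message);                 // SMTP settings come from config
            }
        }

    An application service handling the "comment added" use case would take an INotificationService dependency, so User stays free of mail concerns and swapping e-mail for another channel later only touches infrastructure.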

    Read the article

  • How to give friend access to git repository without giving command line access?

    - by Jack Humphries
    I have some git repositories running on my server and I would like to give a friend read/write access to one. That's simple: I add him as a user, give him SSH access, and change the permissions to the repository folder. Everything works fine; I'm able to clone the git repository using Xcode and change things (ssh://www.example.com/repo.git). However, I do not want him to have command line access. If I recall correctly, Github does not give command line access to those who SSH in. I'm using Snow Leopard Server. Is this more of a server issue or a git issue? Do you have any idea where to begin? Setting the user's Login Shell to none (as opposed to /bin/bash) cuts off access to everything.

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why
    Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response.
    Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar.
    Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.
    B. DON'T: Leave unread email lurking around
    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.
    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.
    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create a calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved.
    Sometimes the thing that needs to be done based on receiving the email, you can (and want to) do immediately after reading the email. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.
    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to
    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):
    - "personal": Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" to let the slippage only occur once per conversation, and then update my rules.
    - "External" and "ViaBlog": The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.
    - "invites": I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check), i.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread.
    - "Inbox": The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.
    - "ToMe++": Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).
    - "CC": Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this).
    - "@ XYZ": Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.
    - "Z Mass" and subfolders under it per distribution list (DL): Emails to aliases that are about topics that I am interested in, but that I don't formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.
    - "Admin" folder, which resides under the "Z Mass" folder: Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.
    - "BCC" folder, which resides under "Z Mass": Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).
    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.
    D. DO: Use Outlook Search folders in combination with categories
    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether:
    - It can take 2 minutes to deal with for good, right now, or
    - It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:
      - ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this
      - Action – no email is required, but I need to do something
      - ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later
      - WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
      - SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.
    For all these 'next steps' use Outlook categories (right click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it.
    Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action.
    E. DO: Get into a Routine
    Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:
    - Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
    - Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
    - Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.
    F. Other resources
    - Official Outlook help on: creating custom action rules, managing e-mail messages with rules, creating a search folder.
    - Video on ignoring conversations (Ctrl+Del).
    - Official blog post on Quick Steps and in particular the Move Conversation to Folder step.
    If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post are welcome at the original blog.

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default sources.list for Ubuntu 10.04. Can anybody help me? Here is mine:
        # Ubuntu supported packages
        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
        deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
        # Canonical Commercial Repository
        deb http://archive.canonical.com/ubuntu lucid partner
        deb http://archive.canonical.com/ubuntu lucid-backports partner
        deb http://archive.canonical.com/ubuntu lucid-updates partner
        deb http://archive.canonical.com/ubuntu lucid-security partner
        deb http://archive.canonical.com/ubuntu lucid-proposed partner
        deb-src http://archive.canonical.com/ubuntu lucid partner
        deb-src http://archive.canonical.com/ubuntu lucid-backports partner
        deb-src http://archive.canonical.com/ubuntu lucid-updates partner
        deb-src http://archive.canonical.com/ubuntu lucid-security partner
        deb-src http://archive.canonical.com/ubuntu lucid-proposed partner
        # medibuntu
        deb http://packages.medibuntu.org/ lucid free non-free
        deb-src http://packages.medibuntu.org/ lucid free non-free
        # PlayOnLinux
        deb http://deb.playonlinux.com/ lucid main
        # opera
        deb http://deb.opera.com/opera/ lenny non-free
        # google
        deb http://dl.google.com/linux/deb/ stable non-free main
        # Dropbox Official Source
        deb http://linux.dropbox.com/ubuntu karmic main
        # Skype
        deb http://download.skype.com/linux/repos/debian/ stable non-free
    This is the error I am getting (sudo apt-get update):
        Get:9 http://dl.google.com stable/main Packages [1,076B]
        Err http://ppa.launchpad.net lucid/main Packages  404 Not Found
        Get:10 http://dl.google.com stable/main Packages [735B]
    and finally:
        Fetched 9,724B in 3s (2,645B/s)
        W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz  404 Not Found
        E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article
