Search Results

Search found 8389 results on 336 pages for 'shared calendar'.

Page 161/336

  • Trying to configure samba share with office server

    - by tomphelps
    Hi, I'm trying to set up fstab to automatically connect to my office shared server. I'm undoubtedly doing something silly here, as the username, password, and server name work fine in the first code snippet below, just not the second - any help would be appreciated! The following command works as expected...

        tom@tom-desktop: sudo /usr/bin/smbclient -L Server.local -Uguest
        Enter guest's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.10]

            Sharename       Type      Comment
            ---------       ----      -------
            Lacie Disk      Disk      macosx
            Server          Disk      macosx
            IPC$            IPC       IPC Service (Server)
            ADMIN$          IPC       IPC Service (Server)
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.10]

            Server               Comment
            ---------            -------
            ACER-9D60040D10
            SERVER               Server

            Workgroup            Master
            ---------            -------
            WORKGROUP            ACER-9D60040D10

    But when I add the following line to /etc/fstab, I get the error "cifs_mount failed w/return code = -22":

        //Server.local/Server /media/maguires cifs username=guest,password=password 0 0
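
    A minimal sketch of what the entry could look like once it works (the extra options are assumptions, not a confirmed fix; return code -22 is EINVAL and usually points at the share name or an unsupported option):

        # /etc/fstab - hypothetical CIFS entry for a guest-accessible share
        //Server.local/Server  /media/maguires  cifs  guest,uid=1000,iocharset=utf8,sec=ntlm  0  0

    Running sudo mount -a after editing the file shows whether the options are accepted without rebooting.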

    Read the article

  • How to access network folders from within windows 7 virtualbox

    - by musher
    At work we have fileshare storage accessible via the network. I'm not super familiar with this kind of networking, so I'm not 100% sure how to word this question or give enough details right away, so please ask for any clarifications. That being said: Host: Xubuntu 14.04, Guest: Windows 7 Enterprise. Via Thunar I can access these network drives fine by going to Browse Network > Windows Network > the server I want to use. The name given in the address bar is smb://SERVER-NAME. So my question now is: how do I go about accessing these folders from my Windows 7 machine in VirtualBox? I've searched, but all I get is people who can't figure out shared folders between host/guest (which I've got working).
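
    A hedged sketch of the Windows-guest side, assuming the VirtualBox adapter is set to Bridged (or NAT, which normally still allows outbound connections to LAN hosts) and that SERVER-NAME and ShareName are placeholders, not the real names:

        rem run in a Command Prompt inside the Windows 7 guest
        net use Z: \\SERVER-NAME\ShareName /user:guest
        rem or simply paste \\SERVER-NAME into the Explorer address bar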

    Read the article

  • Help with Kodak ESP 3250 printer driver on Lubuntu 12.10 SOLVED!

    - by user108608
    First, my system: Pentium 4 (don't remember the speed), 1 GB RAM, dual boot to separate physical drives, Fdos and Lubuntu 12.10. Second, my LAN: I have four computers using the same printer. 1. Intel quad core i5, 4 GB RAM, running Windoze 7 64-bit, printer (Kodak ESP 3250) connected and shared from here. 2. Gateway 17" laptop running Windoze 7 32-bit. 3. Asus tablet (small laptop) running Lubuntu 12.10. 4. My dual-boot system running Fdos and Lubuntu 12.10. The problem: I downloaded c2esp_25c-1_i386.deb and tried to install it using DEBI Package Installer; it loads the files, looks for the CUPS driver and ends with an error: "Dependancy is not satisfiable: libcupsdriver1 (=1.4.0)" What do I do now? Is there some place that I can get the correct CUPS driver? Further information: the Asus tablet was running Ubuntu 12.10 (very slowly and with a few crashes) and could print from the LAN printer with no problems. Is there something in Ubuntu that can be loaded into Lubuntu? noobee user hoping for answers, Paul

    Read the article

  • Is there a difference between "self-plagiarizing" in programming vs doing so as a writer?

    - by makerofthings7
    I read this Gawker article about how a writer reused some of his older material for new assignments for different magazines. Is there any similar ethical (societal?) dilemma when doing the same thing in the realm of programming? Does reusing a shared library you've accumulated over the years amount to self-plagiarism? What I'm getting at is that it seems that the creative world of software development isn't as stringent regarding self-plagiarism as, say, journalism or blogging. In fact, on one of my interviews at GS I was asked what kind of libraries I've developed over the years, implying that me getting the job would entail co-licensing helpful portions of code to that company. Are there any cases where, although it's legal to self-plagiarize, it would be frowned upon in the software world?

    Read the article

  • Q&A: Does it make sense to run a personal blog on the Windows Azure Platform?

    - by Eric Nelson
    I keep seeing people wanting to do this (or something very similar) and then being surprised at how much it might cost them if they went with Windows Azure. Time for a Q&A. Short answer: No, definitely not. Madness, sheer madness. (Hopefully that was clear enough) Longer answer: No, because it would cost you a heck of a lot more than just about any other approach to running a blog. A site that can easily be run on a shared hosting solution (as many blogs do today) does not require the rich capabilities of Windows Azure. Capabilities such as simplified deployment and management, dedicated resources, elastic resources, “unlimited” storage, etc. It is simply not the type of application the Windows Azure Platform has been designed for. Related Links: Q&A- How can I calculate the TCO and ROI when considering the Windows Azure Platform? Q&A- When do I get charged for compute hours on Windows Azure? Q&A- What are the UK prices for the Windows Azure Platform?

    Read the article

  • eBook Exchange Helps Kindle and Nook Owners Swap Books

    - by Jason Fitzpatrick
    If you have a Kindle or Nook and are looking to do a little free reading, eBook Exchange makes it easy to borrow books from others and to share your books in turn. The service is completely free; in order to use it you simply sign up for an account and begin listing books you have to share. Even if you have no books to share at the moment you can still use the service (although be aware that eBook Exchange ranks requests, and in the case of multiple users requesting the same book the system will favor a user who has shared the most). Hit up the link below to take eBook Exchange for a spin. eBook Exchange [via Gadgetopia]

    Read the article

  • Setup shortcut keys not working

    - by Tim
    In my Ubuntu 12.04, in keyboard settings, I didn't find a shortcut key for restarting X, so in "Custom Shortcut" I set up Ctrl+Alt+Backspace for the command sudo restart lightdm. But after that the shortcut doesn't work. Is it because it requires root privileges? Also I have a SysRq key on my keyboard, which I think is the "Magic SysRq Key". My SysRq key is shared with the PrtSc key (for screenshots), and is in blue, which means I have to press the Fn key at the same time to invoke SysRq instead of PrtSc. But every time I press Fn+SysRq, it always takes a screenshot, the same as just hitting PrtSc, i.e. without hitting Fn. I wonder how to use the Magic SysRq key? Does it mean the shortcut has not yet been linked to whatever command the Magic SysRq key is supposed to trigger? PS: My laptop is a Lenovo T400 and the OS is Ubuntu 12.04. Thanks!
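
    A custom shortcut that runs sudo fails silently because there is no terminal in which to type the password. One hedged workaround (the username and file name below are hypothetical) is to allow that single command without a password via a sudoers drop-in, then bind the shortcut to it:

        # create with: sudo visudo -f /etc/sudoers.d/restart-lightdm
        tim ALL = NOPASSWD: /sbin/restart lightdm
        # the shortcut command then becomes: sudo /sbin/restart lightdm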

    Read the article

  • Fusion CRM Data Integration and Migration from Conemis (D)

    - by Richard Lefebvre
    Conemis Data Integration Tools, edited for Oracle Fusion CRM, offer easy-to-use and pre-configured tools for data integration, data quality, and migration of data from Oracle CRM On Demand and third-party applications to Oracle Fusion CRM. The Conemis solution includes: Pressure Fueling of data for Fusion CRM, migration from legacy systems to Fusion CRM, data quality in migration and integration, intuitive data housekeeping for IT and Sales, and backups of Fusion CRM environments. The solution's benefits include Fusion CRM integrated out of the box, connection to other applications, ready-made data mapping, instant availability without installation, full configurability, shared use in integration expert groups, one GUI for several environments/pods, reduced costs & risks in migration projects, etc. Conemis AG, a German-based data integration company founded in 2009, offers software, services, and expertise for data migration and integration with Oracle CRM products. For more details, please contact Dr. Daniel Rolli ([email protected]) www.conemis.com.

    Read the article

  • Forward .html/.htm to .php with .config

    - by PhilipK
    I'm moving a site from my Linux-hosted server to a client's Windows-hosted server. The .htaccess file no longer works, and I'm told that Windows servers use .config. How can I forward all users accessing .html & .htm files to the equivalent .php file? Server info: OS/Hosting Type: Windows / Shared Hosting; .NET Runtime Version: ASP.NET 2.0/3.0/3.5; PHP Version: PHP 5.2; IIS Version: IIS 7.0; Data Center: US Regional. EDIT: Hosting is provided by GoDaddy. I was told by a friend that the following should work, but it has no effect on the site:

        <configuration>
          <system.webServer>
            <handlers>
              <add name="PHP-FastCGI" verb="*" path="*.html" modules="FastCgiModule" scriptProcessor="c:\php\php-cgi.exe" resourceType="Either" />
            </handlers>
          </system.webServer>
        </configuration>
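
    For a plain redirect (rather than handing .html files to the PHP engine, which is what the handler mapping above does), a sketch using the IIS URL Rewrite module could look like this, assuming the module is enabled on the shared host:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="html to php" stopProcessing="true">
                  <match url="^(.*)\.html?$" />
                  <action type="Redirect" url="{R:1}.php" redirectType="Permanent" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>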

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a complete path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions. Below are highlights of the most informative:

    DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out? You should have an OEM 12c environment for DBaaS administration and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust / real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center? This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility and reduced risk. The benefits to the (internal) consumers of services are much faster provisioning and faster response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job? The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template. He decides how much AMM he needs. Why should I set it up one by one? That doesn't seem to be the purpose of self service. Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS? There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a "Database in cloud infrastructure"? DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud. As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database? Storage/OS/DB all together to 'transparently' provide service to applications? Will there be cross-database access by applications/users? Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the different needs and patching downtime of the apps the databases might be serving? You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant DB in a different ORACLE_HOME and then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.
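
    As a rough illustration of the unplug/plug step described in that last answer (a sketch only; the PDB name and manifest path are hypothetical, not from the forum):

        -- on the old CDB
        ALTER PLUGGABLE DATABASE sales_pdb CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE sales_pdb UNPLUG INTO '/tmp/sales_pdb.xml';

        -- on the new/patched CDB
        CREATE PLUGGABLE DATABASE sales_pdb USING '/tmp/sales_pdb.xml' NOCOPY;
        ALTER PLUGGABLE DATABASE sales_pdb OPEN;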

    Read the article

  • Samba installation failed on ubuntu 12.10

    - by Ayyaz
    I was trying to install Samba to access a shared printer on a Windows PC connected via the office network; the following response came from the terminal. Please guide me on how to install Samba, or suggest an alternative.

        crm@crm-HP-G62-Notebook-PC:~$ sudo apt-get install samba
        [sudo] password for crm:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         samba : Depends: samba-common (= 2:3.6.6-3ubuntu4) but 2:3.6.6-3ubuntu5 is to be installed
                 Depends: libwbclient0 (= 2:3.6.6-3ubuntu4) but 2:3.6.6-3ubuntu5 is to be installed
                 Recommends: tdb-tools but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        crm@crm-HP-G62-Notebook-PC:~$
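
    One hedged way to resolve the 3ubuntu4/3ubuntu5 version skew (an assumption, not a guaranteed fix) is to refresh the package lists and ask apt to pull the matching versions of the related packages in one go:

        sudo apt-get update
        sudo apt-get install samba samba-common libwbclient0 tdb-tools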

    Read the article

  • What would you change about C# if you could?

    - by Casebash
    Assume that your objectives are the same as for C#. I don't want an answer like "I'd write Python instead". I'm only interested in changes in the language itself - not in the standard library. Perhaps this would be interesting to discuss, but as it is shared between many languages it would be better to discuss this in another question instead. Please post a single idea per answer; if you want to post several ideas, post several separate answers. Thanks!

    Read the article

  • How to organise projects with dependencies on BitBucket?

    - by Timwi
    Both Mercurial and BitBucket make one fundamental assumption: 1 repo = 1 project. If I have a project that has a dependency (a library) which is shared by many projects, this assumption gets in the way. Now it is no longer possible to have a separate BitBucket page for each project while still being able to commit atomic revisions to multiple projects. If I put all the projects into one repo, they all become one “project” on BitBucket. If I put them in separate repos, it is no longer possible to know which version of the library project was in use at revision X of a dependent project. How is this situation normally solved on BitBucket, or is there explicitly no support for this common scenario?
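
    The usual Mercurial answer to this is a subrepository: keep the library in its own BitBucket repo and reference it from each project; Mercurial then records the exact library changeset in .hgsubstate on every commit, which answers the "which version of the library was in use at revision X" question. A sketch (the path and repo URL are hypothetical):

        # .hgsub in the root of each dependent project
        lib/shared = https://bitbucket.org/yourteam/shared-library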

    Read the article

  • Parleys Testimonial at GlassFish Community Event, JavaOne 2012

    - by arungupta
    Parleys.com is an e-learning platform that provides a unique experience of viewing presentations online and offline, with integrated movies and chaptering, from top-notch developer conferences and about 40 JUGs all around the world. Stephan Janssen (the Devoxx man and Parleys webmaster) presented at the GlassFish Community Event at JavaOne 2012 and shared why they moved from Tomcat to GlassFish. The move paid off, as GlassFish was able to handle 2000 concurrent users very easily. Now they are also running Devoxx CFP and registration on this updated infrastructure. The GlassFish clustering, the asadmin CLI, application versioning, and the JMS implementation are some of the features that made them a happy user. Recently they migrated their application from Spring to Java EE 6. This allows them to avoid getting locked into proprietary frameworks and also avoid 40 MB WAR file deployments. Stateless application, JAX-RS, MongoDB, and Elastic Search are their magical formula for success there. Watch the video below showing him in full action: More details about their infrastructure are available here.

    Read the article

  • Subdomain redirects to different server, maintaining original URL

    - by Jason Baker
    I just took over the webmaster position in my organization, and I'm trying to set up a development environment separate from the production environment. The two environments are hosted with different companies, both on shared hosting plans. Right now, production is at domain.com and development is at domain2.com. What I want is to direct my development team to dev.domain.com. More importantly, I don't want the URL to change from dev.domain.com back to domain2.com, but I also want it to be responsive to page changes. For example, if my dev team navigates from dev.domain.com to a page (called "page"), I'd like the URL to show as dev.domain.com/page. Is this possible, or am I just dreaming?

    Read the article

  • Upgrade from 10.10 to 11.04

    - by hemanta pathak
    On doing an upgrade from 10.10 to 11.04 using Update Manager, everything works fine. But after installing software that bundles its own runtime environment (loader and system files) and installs the custom runtime (basically the loader) in the location where the native one resides, the above-mentioned upgrade fails. (The upgrade starts and after some time it encounters an error and aborts.) Basically, /usr/bin/dpkg throws an error on being unable to locate a system shared library in the aforesaid third-party runtime folder (/usr/bin/dpkg should not search the third-party runtime folder; instead it should look at the system default folder). But if we remove the installed third-party loader from the default system location /lib and place it in some other location, the upgrade problem goes away. This makes me believe /usr/bin/dpkg invokes (loads) the wrong loader and as such goes looking for the dependent libraries in the third-party folder. Can someone take a look at this? Is there some bug with dpkg?

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?
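
    One common layout that fits this is to split the library into its own repository and pull it into each consumer as a submodule, so every commit of a driver repo pins the exact library commit it was built against. A sketch only (the URLs and paths are hypothetical):

        # inside the drivers repository
        git submodule add https://example.com/team/shared-lib.git lib/shared-lib
        git commit -m "Track shared-lib as a submodule"

        # after changing the library, record the new pinned version here
        (cd lib/shared-lib && git pull origin master)
        git add lib/shared-lib
        git commit -m "Bump shared-lib"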

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction

    Single sign-on (SSO) is a way to control access to multiple related but independent systems; a user only needs to log in once and gains access to all the other systems. There are a lot of commercial systems that provide a single sign-on solution, and you can also choose open-source solutions like OpenSSO, CAS, etc. Both of these use centralized authentication and provide a more robust authentication mechanism, but if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you the case.

    How it Works

    The method we'll use is based on a secret key shared between the sites. The origin site has a method to build up a hashed authentication token with some other parameters and redirect the user to the target site.

        Variable      Status    Description
        ssoEncode     required  hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName)
        ssoTime       required  timestamp with format YYYYMMDDHHMMSS, used to prevent playback attacks
        ssoUserName   required  unique username; required when a user is logged in

    Note: the variables will be sent via POST for security reasons.

    Building a Single Sign-On Solution

    The origin site has functions to: 1. create the URL for your request; 2. generate the required authentication parameters; 3. redirect to the target site.

        using System;
        using System.Web.Security;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string postbackUrl = "http://www.targetsite.com/sso.aspx";
                string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss");
                string ssoUserName = User.Identity.Name;
                string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                    string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                string value = string.Format("{0}:{1},{2}", ssoHash, ssoTime, ssoUserName);

                Response.Clear();
                StringBuilder sb = new StringBuilder();
                sb.Append("<html>");
                sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>");
                sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
                sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value);
                sb.Append("</form>");
                sb.Append("</body>");
                sb.Append("</html>");
                Response.Write(sb.ToString());
                Response.End();
            }
        }

    The target site has functions to: 1. get the authentication parameters; 2. validate the parameters with the shared secret; 3. if the user is valid, authenticate and redirect to the target page; 4. if the user is invalid, show errors and return.

        using System;
        using System.Web.Security;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    if (User.Identity.IsAuthenticated)
                    {
                        Response.Redirect("~/Default.aspx");
                    }
                }
                if (Request.Params.Get("t") != null)
                {
                    string ticket = Request.Params.Get("t");
                    char[] delimiters = new char[] { ':', ',' };
                    string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None);
                    string ssoHash = ssoVariable[0];
                    string ssoTime = ssoVariable[1];
                    string ssoUserName = ssoVariable[2];
                    DateTime appTime = DateTime.MinValue;
                    int offsetTime = 60; // get this from config or similar
                    try
                    {
                        appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null);
                    }
                    catch
                    {
                        //show error
                        return;
                    }
                    if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                    {
                        //show error
                        return;
                    }
                    bool isValid = false;
                    string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                    string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                        string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                    if (string.Compare(ssoHash, hash, true) == 0)
                    {
                        if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                        {
                            //show error
                            return;
                        }
                        else
                        {
                            isValid = true;
                        }
                    }
                    if (isValid)
                    {
                        //Do authenticate;
                    }
                    else
                    {
                        //show error
                        return;
                    }
                }
                else
                {
                    //show error
                }
            }
        }

    Summary

    This is a very simple and basic SSO solution, and its main advantage is its simplicity: it only needs a single page added to perform the SSO authentication, and it does not require modifying the existing system infrastructure.

    Read the article

  • Terminator Skull Crafted from Dollar Store Parts [Video]

    - by Jason Fitzpatrick
    Earlier this year we shared an Iron Man prop build made from Dollar Store parts. The same Dollar Store tinkerer is at it again, this time building a Terminator endoskull. James Bruton has a sort of mad-tinkerer knack for finding odds and ends at the Dollar Store and mashing them together into novel creations. In the video below, he shows how he took a pile of random junk from the store (plastic bowls, cheap computer speakers, even the packaging the junk came in) and turned it into a surprisingly polished Terminator skull. Hit up the link below for the build in photo-tutorial format. Dollar Store Terminator Endoskull Build [via Make]

    Read the article

  • How to set Qwt path to the run-time linker in Xubuntu

    - by Rahul
    I've successfully installed Qwt in Xubuntu 12.04 (qmake, make, make install). But now I need to make the Qwt path known to the run-time linker of Xubuntu. The manual says: "If you have installed a shared library its path has to be known to the run-time linker of your operating system. On Linux systems read "man ldconfig" (or google for it). Another option is to use the LD_LIBRARY_PATH (on some systems LIBPATH is used instead, on MacOSX it is called DYLD_LIBRARY_PATH) environment variable." But being a newbie to the Linux environment, I'm not able to proceed further. Please help me with this.
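
    A sketch of both options from the manual, assuming Qwt installed itself under /usr/local/qwt-6.0.1 (adjust the path to wherever make install actually put libqwt):

        # one-time, system-wide: register the directory with the run-time linker
        echo /usr/local/qwt-6.0.1/lib | sudo tee /etc/ld.so.conf.d/qwt.conf
        sudo ldconfig

        # or per-session, via the environment variable
        export LD_LIBRARY_PATH=/usr/local/qwt-6.0.1/lib:$LD_LIBRARY_PATH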

    Read the article

  • Webhosting with custom database choice [closed]

    - by churchill614
    Possible Duplicate: How to find web hosting that meets my requirements? I am trying to find somewhere to host a website which uses OrientDB as its database. My budget doesn't stretch to a dedicated server where I can configure everything as I need it. Rather, I am hoping to find somewhere, ideally UK based, that will allow me to install OrientDB (or install it for me) on a server of the normal shared-hosting variety. Is anybody able to point me in a good direction for this, please (whilst UK is preferable, it is not essential)?

    Read the article

  • Where are the systray icons for Dropbox in Ubuntu desktop 13.04 (minimal)?

    - by samvv
    I reinstalled Ubuntu desktop using the minimal CD image and the following command: $ sudo apt-get install ubuntu-desktop --no-install-recommends After that I used Ubuntu Software Center to make sure Unity supports application indicators: http://i.imgur.com/bYF162w.png. Everything works great, except for Dropbox. For some reason the icon doesn't appear in the tray, even though the application is running. Steam on the other hand runs just fine, so it seems like there is nothing wrong with the tray itself. According to this post the tray icons should be in /usr/shared/icons/hicolor/22x22/status but it doesn't contain any Dropbox icons. Neither do any of the other resolutions. The answer is a bit outdated, so I'm not entirely sure it is still applicable to the current version of Dropbox. I did the usual thing of reinstalling dropbox with: sudo apt-get purge nautilus-dropbox sudo apt-get install nautilus-dropbox But that didn't solve anything either. Does somebody know how to fix this issue?

    Read the article

  • Extremely slow transfer speed ubuntu -> Windows

    - by Hailwood
    I have two laptops: one is running Ubuntu 12.04 (ext4), the other is running Windows 7 (NTFS). I am copying over 40gb of data (one file) from the Ubuntu laptop to the Windows laptop (browsing the shared folder on Ubuntu from Windows and using copy/paste). But I am getting transfer speeds topping out at ~700kb/s. Surely this is not right. I am transferring via wifi on both laptops. My download speeds can reach 7-8mb/s on both laptops, so I know it is not the wifi cards or the router topping out.

        wlan0     Link encap:Ethernet  HWaddr 84:4b:f5:db:b4:85
                  inet addr:192.168.1.66  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::864b:f5ff:fedb:b485/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11941185 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:11306693 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:10087111370 (10.0 GB)  TX bytes:7843524888 (7.8 GB)

    Read the article

  • Opinion for my recruitment portal idea [closed]

    - by user1498503
    I am creating a recruitment portal for IT professionals. In it, recruiters would be asked to create a skills requirement matrix while creating a job post. Essential Skills: ASP.NET MVC, Entity Framework. Desired Skills: SQL Server 2008, IIS 7.0. On the other hand, job seekers would also have their own skills matrix. Jobseeker #1 - Core Skills: ASP.NET MVC, Entity Framework, MongoDB; Secondary Skills: SQL Server 2008, IIS 7.0. Jobseeker #2 - Core Skills: ASP.NET Web Forms; Secondary Skills: SQL Server 2008, IIS 7.0. So when both job seekers apply for the same job, would it be a good idea for both of them to see each other's skills matrix for comparison? Also, no personal details or CVs are shared. I think comparisons would help job seekers to understand what their areas of improvement are and could motivate them to fill the skills gap. Your opinion would be appreciated. Regards

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I’ve been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It’s important to realize that “Agile” is not a black/white yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time. We’re probably all familiar with the concept of “Done Done” that represents what *actually* being done means for a feature. Not just when a developer says he’s done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it’s been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of “Done Done”, they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done. The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I’ll usually sit down with the team and go through an exercise of creating DoD’s for each of these concepts (Stories/Sprints/Releases). We’ll usually start by just brainstorming a bunch of activities that could end up in these various DoD’s. Here are some examples: Code Reviews, StyleCop, FxCop, User Manuals Updated, Architecture Docs Updated, Tested by QA, Tested by UAT, Installers Created, Support Knowledge Base Updated, Deployment Instructions (for Ops) Written, Automated Unit Tests Run, Automated Integration Tests Run. Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? every sprint? every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that). We don’t have to move them all down to User Story immediately, but as a team we figure out what we think we’re capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoD’s. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating User Manuals, creating installers, doing UAT testing (typical Release cycle activities) on every single User Story. They may never actually reach that point, but they should envision that, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.

    Sure there are other aspects to maturity outside of this, but it’s a great technique, that’s easy to visualize, to drive agility into the team. Just keep moving those activities (aka “gates”) down the board from Release->Sprint->Story. I’ll try to give an example of what a recent client of mine had for their DoD’s (this is from memory, so probably not 100% accurate):

    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn’t break anything outside our system)

    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress - a video version of the Sprint Demo)

    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment using the automated deployment process
    - Peer code review

    Code Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated tests
    - Run automated integration tests

    PS – One of my clients had a great question when we went through this activity. They said that if a Sprint is by definition done when the end-date rolls around (time-boxed), isn’t a DoD on a sprint meaningless – it’s done on the end-date regardless of whether those other activities are complete or not? My answer is that while that statement is true – the sprint is done regardless when the end date rolls around – if the DoD activities haven’t been completed I would consider the Sprint a failure (similar to not completing what was committed/planned – failure may be too strong a word but you get the idea). In the Retrospective that will become an agenda item to discuss and understand why we weren’t able to complete the activities we agreed would need to be completed each Sprint.

    Read the article
