Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.

Page 164/637 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers. I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it!

    One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?)

    My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server.

    So here's a standard template for a logon trigger to address those pesky sa users:

        CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER
        WITH ENCRYPTION, EXECUTE AS N'sa'
        AFTER LOGON
        AS
        IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN;
        -- interesting stuff goes here
        GO

    What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired). That's a good use for logon triggers, but really not tricky enough for this blog. Some of my suggestions are below:

        WAITFOR DELAY '23:59:59';

    Or:

        EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'

    Or:

        EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3;
        EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME;
        EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;';
        EXEC msdb.dbo.sp_start_job @job_name=N'`';

    Really, I don't want to spoil your own exploration, try it yourself! The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!". Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up.)

    Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW!
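    Since the post jumps straight from the trigger to Resource Governor, here is a minimal sketch of the "drop or disable" step it calls for, assuming the trigger name used in the template above:

        -- Keep the trigger around but stop it from firing
        DISABLE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;

        -- Or remove it entirely before testing Resource Governor
        DROP TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER;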
    Here's how RG can do that:

        USE master;
        GO
        CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname
        WITH SCHEMABINDING, ENCRYPTION
        AS
        BEGIN
        RETURN CASE
            WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%'
            THEN N'SA_LOGIN_PRIORITY'
            ELSE N'default' END
        END
        GO
        CREATE RESOURCE POOL SA_LOGIN_PRIORITY
        WITH (
             MIN_CPU_PERCENT = 0
            ,MAX_CPU_PERCENT = 1
            ,CAP_CPU_PERCENT = 1
            ,AFFINITY SCHEDULER = (0)
            ,MIN_MEMORY_PERCENT = 0
            ,MAX_MEMORY_PERCENT = 1
        --  ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1  -- uncomment for SQL Server 2014
        );
        CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY
        WITH (
             IMPORTANCE = LOW
            ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1
            ,REQUEST_MAX_CPU_TIME_SEC = 1
            ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1
            ,MAX_DOP = 1
            ,GROUP_MAX_REQUESTS = 1
        ) USING SA_LOGIN_PRIORITY;
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY);
        ALTER RESOURCE GOVERNOR RECONFIGURE;

    From top to bottom:
    1. Create a classifier function to determine which pool the session should go to. More info on classifier functions.
    2. Create the pool and provide a generous helping of resources for the sa login.
    3. Create the workload group and further prioritize those resources for the sa login.
    4. Apply the classifier function and reconfigure RG to use it.

    I have to say this one is a bit sneakier than the logon trigger, not least because you don't get any error messages. I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included! You'll notice I made allowances for SQL Agent jobs owned by sa; they'll go into the default workload group. You can add your own overrides to the classifier function if needed.

    Some interesting ideas I didn't have time for but expect you to get to before me:
    - Set up different pools/workgroups with different settings and randomize which one the classifier chooses
    - Do the same but base it on time of day (the Books Online example covers this)...
    - Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you.

    And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection:

        ALTER RESOURCE GOVERNOR DISABLE;

    That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit!

    Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here. Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server. These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly. SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only. And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications!
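    As a side note, once ALTER RESOURCE GOVERNOR DISABLE has been run, the fix-or-drop cleanup amounts to unbinding the classifier and dropping the demo objects; a rough sketch, assuming the object names used above:

        -- Unbind the classifier function so it can be dropped
        ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = NULL);
        ALTER RESOURCE GOVERNOR RECONFIGURE;

        -- Drop the demo objects; the workload group has to go before its resource pool
        DROP WORKLOAD GROUP SA_LOGIN_PRIORITY;
        DROP RESOURCE POOL SA_LOGIN_PRIORITY;
        DROP FUNCTION dbo.SA_LOGIN_PRIORITY;
        ALTER RESOURCE GOVERNOR RECONFIGURE;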
    Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized. They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things.

    So here's the setup:

        USE msdb;
        GO
        CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act
        WITH ENCRYPTION
        AS
        DECLARE @x XML, @message nvarchar(max);
        RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q;
        IF @x.value('(//LoginName)[1]','sysname')=N'sa'
           AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%'
        BEGIN
            -- interesting activation procedure stuff goes here
        END
        GO
        CREATE QUEUE SA_LOGIN_PRIORITY_q
        WITH STATUS=ON, RETENTION=OFF,
        ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER);
        CREATE SERVICE SA_LOGIN_PRIORITY_s
        ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
        CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en
        ON SERVER WITH FAN_IN
        FOR AUDIT_LOGIN
        TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database'
        GO

    From top to bottom:
    1. Create the activation procedure for the event notification queue.
    2. Create the queue to accept messages from the event notification, and activate the procedure to process those messages when received.
    3. Create the service to send messages to that queue.
    4. Create the event notification on AUDIT_LOGIN events that fires the service.

    I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped.

    So what to put in place for "interesting activation procedure code"? Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message:

        SET @message=@x.value('(//HostName)[1]','sysname')
            + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname')
            + N' as SA at ' + @x.value('(//StartTime)[1]','sysname')
            + N' using the ' + @x.value('(//ApplicationName)[1]','sysname')
            + N' program. That''s why you''re getting this message and the attached pornography which'
            + N' is bloating your inbox and violating company policy, among other things. If you know'
            + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL '
            + @x.value('(//SPID)[1]','sysname')
            + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.';

        EXEC msdb.dbo.sp_send_dbmail
             @recipients=N'[email protected]',
             @subject=N'SA Login Alert',
             @query_result_width=32767,
             @body=@message,
             @query=N'EXEC sp_readerrorlog;',
             @attach_query_result_as_file=1,
             @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg';

    I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it. The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned; I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly. I'm still working on those.
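    And since nothing from this post should be left lying around, a rough teardown sketch for the event notification plumbing, assuming the object names used above:

        USE msdb;
        GO
        -- Drop in dependency order: notification first, then service, queue, and activation procedure
        DROP EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER;
        DROP SERVICE SA_LOGIN_PRIORITY_s;
        DROP QUEUE SA_LOGIN_PRIORITY_q;
        DROP PROCEDURE dbo.SA_LOGIN_PRIORITY_act;
        GO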

    Read the article

  • With WPF and Silverlight against cancer

    - by Laurent Bugnion
    MVPs are well known for their good heart (like the GeekGive initiative shows) and Client App Dev MVP Gregor Biswanger is no exception. At the latest MVP summit (beginning of March 2011), he took over a DVD about WPF 4 and Silverlight 4 and asked a few Microsoft superstars to sign it. Right now, the DVD is auctioned on eBay and of course the proceeds will go to a charitable work: The German League against Cancer (Deutsche Krebshilfe). The post is in German and English (scroll down for the English text). This sounds like a great idea, and considering who signed it, it is going to be a real collectible:
    - Scott Hanselman (Principal Program Manager Lead in Server and Tools Online)
    - Tim Heuer (Program Manager for Microsoft Silverlight)
    - Rob Relyea (Principal Program Manager Lead - Client Platform WPF & Silverlight)
    - Pete Brown (Developer Division Community Program Manager - Windows Client)
    - Eric Fabricant (Program Manager WPF)
    - Jeff Wilcox (Silverlight Senior SDE)
    - Jeffrey R Ferman (SDET Visual Studio Client Dev Tools)
    - Chan Verbeck (Expression Blend Team)
    - Yaniv Feinberg (Expression Blend Team)
    - Douglas Olson (Director Dev Expression)
    - Samuel W. Bent (Principal Software Design Engineer WPF)
    - John Papa (Technical Evangelist for Silverlight)
    So if you feel that you could do a generous gesture, go ahead and take a look at the auction, and talk about it around you. Let's prove again that geeks rule, also when it comes to giving to a good cause! Cheers! Laurent
    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Pivotal Announces JSR-352 Compliance for Spring Batch

    - by reza_rahman
    Pivotal, the company currently funding development of the popular Spring Framework, recently announced JSR 352 (aka Batch Applications for the Java Platform) compliance for the Spring Batch project. More specifically, Spring Batch targets JSR-352 Java SE runtime compatibility rather than Java EE runtime compatibility. If you are surprised that APIs included in Java EE can pass TCKs targeted for Java SE, you should not be. Many other Java EE APIs target compatibility in Java SE environments such as JMS and JPA. You can read about Spring Batch's support for JSR-352 here as well as the Spring configuration to get JSR-352 working in Spring (typically a very low level implementation concern intended to be completely transparent to most JSR-352 users). JSR 352 is one of the few very encouraging cases of major active contribution to the Java EE standard from the Spring development team (the other major effort being Rod Johnson's co-leadership of JSR 330 along with Bob Lee). While IBM's Christopher Vignola led the spec and contributed IBM's years of highly mission critical batch processing experience from products like WebSphere Compute Grid and z/OS batch, the Spring team provided major influences to the API in particular for the chunk processing, listeners, splits and operational interfaces. The GlassFish team's own Mahesh Kannan also contributed, in particular by implementing much of the Java EE integration work for the reference implementation. This was an excellent example of multilateral engineering collaboration through the standards process. For many complex reasons it is not too hard to find evidence of less than amicable interaction between the Spring ecosystem and the Java EE standard over the years if one cares to dig deep enough. In reality most developers see Spring and Java EE as two sides of the same server-side Java coin. At the core Spring and Java EE ecosystems have always shared deep undercurrents of common user bases, bi-directional flows of ideas and perhaps genuine if not begrudging mutual respect. We can all hope for continued strength for both ecosystems and graceful high notes of collaboration via efforts like JSR 352.

    Read the article

  • Click Once Deployment Process and Issue Resolution

    - by Geordie
    Introduction

    We are adopting Click Once as a deployment standard for thick .NET application clients. The latest version of this tool has matured it to a point where it can be used in an enterprise environment. This guide will identify how to use Click Once deployment and promote code through the dev, test and production environments.

    Why Use Click Once over SCCM

    If we already use SCCM, why add Click Once to the deployment options? The advantage of Click Once is the ability to update the code in a single location and have the update flow automatically down to the user community. There have been challenges in the past with getting configuration updates to download, but these can now be achieved. With SCCM you can do the same thing, but it then needs to be packaged and pushed out to users. Each time a new user is added to an application, time needs to be spent by an administrator to push out any required application packages. With Click Once the user would go to a web link and the application and prerequisites will automatically get installed.

    New Deployment Steps Overview

    The deployment in an enterprise environment includes several steps as the solution moves through the development life cycle before being released into production. To mitigate risk during the release phase, it is important to ensure the solution is not deployed directly into production from the development tools. Although this is the easiest path, it can introduce untested code into production and result in unexpected results.

    1. Deploy the client application to a development web server using Visual Studio 2008 Click Once deployment tools. Once potential production versions of the solution are being generated, ensure the production install URL is specified when deploying code from Visual Studio. (For details see 'Deploying Click Once Code from Visual Studio'.)

    2. xCopy the code to the test server. Run the MageUI tool to update the URLs, signing and version numbers to match the test server. (For details see 'Moving Click Once Code to a new Server without using Visual Studio'.)

    3. xCopy the code to the production server. Run the MageUI tool to update the URLs, signing and version numbers to match the production server. The certificate used to sign the code should be provided by a certificate authority that will be trusted by the client machines. Finally, make sure the setup.exe contains the production install URL. If not, redeploy the solution from Visual Studio to the dev environment specifying the production install URL, then xcopy the setup.exe file from dev to production. (For details see 'Moving Click Once Code to a new Server without using Visual Studio'.)

    Detailed Deployment Steps

    Deploying Click Once Code From Visual Studio

    Open Visual Studio and create a new WinForms or WPF project. In the solution explorer, right click on the project and select 'Publish' in the context menu. The 'Publish Wizard' will start. Enter the development deployment path. This could be a local directory or web site. When first publishing the solution, set this to a development web site and Visual Studio will create a site with an install.htm page. Click Next. Select whether the application will be available both online and offline, then click Finish. Once the initial deployment is completed, republish the solution, this time mapping to the directory that holds the code that was just published. This time the Publish Wizard contains an additional option.
    The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should be pointing to the development site to avoid accidentally installing the production application.

    Visual Studio will publish the application to the desired location; in the process it will create an anonymous 'pfx' certificate to sign the deployment configuration files. A production certificate should be acquired in preparation for deployment to production.

    (Figures: directory structure created by Visual Studio; application files created by Visual Studio; development web site (install.htm) created by Visual Studio.)

    Migrating Click Once Code to a new Server without using Visual Studio

    To migrate the Click Once application code to a new server, a tool called MageUI is needed to modify the .application and .manifest files. The MageUI tool is usually located in the 'C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin' folder or can be downloaded from the web. When deploying to a new environment, copy all files in the project folder to the new server, in this case the 'ClickOnceSample' folder and contents. The old application versions can be deleted, in this case 'ClickOnceSample_1_0_0_0' and 'ClickOnceSample_1_0_0_1'. Open IIS Manager and create a virtual directory that points to the project folder. Also make the publish.htm the default web page.

    Run the MageUI tool and then open the .application file in the root project folder (in this case in the 'ClickOnceSample' folder). Click on the Deployment Options in the left hand list, update the URL to the new server URL and save the changes. When MageUI tries to save the file it will prompt for the file to be signed. This step cannot be bypassed if you want the Click Once deployment to work from a web site. The easiest solution to this for test is to use the auto generated certificate that Visual Studio created for the project. This certificate can be found with the project source code. To save time, go to File > Preferences and configure the 'Use default signing certificate' fields.

    Future deployments will only require application files to be transferred to the new server. The only difference is that when updating the .application file, the 'Version' must be updated to match the new version and the 'Application Reference' has to be updated to point to the new .manifest file.

    Updating the Configuration File of a Click Once Deployment Package without using Visual Studio

    When an update to the configuration file is required, modifying the ClickOnceSample.exe.config.deploy file will not result in current users getting the new configuration. We do not want to go back to Visual Studio and generate a new version, as this might introduce unexpected code changes. A new version of the application can be created by copying the folder (in this case ClickOnceSample_1_0_0_2) and pasting it into the Application Files directory. Rename the directory 'ClickOnceSample_1_0_0_3'. In the new folder, open the configuration file in Notepad and make the configuration changes.

    Run MageUI and open the .manifest file in the newly copied directory (ClickOnceSample_1_0_0_3). Edit the manifest version to reflect the newly copied files (in this case 1.0.0.3), then save the file.
    Open the .application file in the root folder. Again update the version to 1.0.0.3. Since the file has not changed, the Deployment Options/Start Location URL should still be correct. The Application Reference needs to be updated to point to the new version's .manifest file. Save the file. Next time a user runs the application, the new version of the configuration file will be downloaded.

    It is worth noting that there are two different types of configuration parameter: application and user. With Click Once deployment the difference is significant. When an application is downloaded, the configuration file is also brought down to the client machine. The developer may have written code to update the user parameters in the application. As a result, each time a new version of the application is downloaded, the user parameters are at risk of being overwritten. With Click Once deployment the system knows if the user parameters are still the default values. If they are, they will be overwritten with the new default values in the configuration file. If they have been updated by the user, they will not be overwritten.

    (Figure: settings configuration view in Visual Studio.)

    Production Deployment

    When deploying the code to production it is prudent to disable the development and test deployment sites. This will allow errors such as an incorrect URL to be quickly identified in the initial testing after deployment. If the sites are active, there is no way to know if the application was downloaded from the production deployment and not redirected to test or dev.

    Troubleshooting

    Clicking the install button on the install.htm page fails with:

        Error: URLDownloadToCacheFile failed with HRESULT '-2146697210'
        Error: An error occurred trying to download <file>

    This is due to the setup.exe file pointing to the wrong location. 'The setup.exe file that is created has the install URL hardcoded in it. It is this screen that allows you to specify the URL to use. At some point a setup.exe file must be generated for production. Enter the production URL and deploy the solution to the dev folder. This file can then be saved for later use in deployment to production. During development this URL should be pointing to the development site to avoid accidentally installing the production application.'

    Read the article

  • Using Microsoft benefits to kickstart your own development

    - by douglasscott
    Working for a big company, I enjoy all the Microsoft tools I can consume. I also have the infrastructure to support my development and team communication. I recently helped form a small consulting team that requires the same type of resources. That is when the realization of the true cost of Microsoft's professional development tools really hit me. Okay, I'll just bite the bullet and get what I'm used to working with to do high quality development projects. After just a few minutes of looking at street prices and doing some quick math I began to have a realization... doing this right isn't cheap!

    Luckily there is help. If you are willing to get your ducks in a row and do a little documentation, Microsoft will give you some developer manna. I went to the BizSpark site and completed the application, which describes your company profile and the services you offer. The approval process took about a week. Voila, a Visual Studio Ultimate with MSDN subscription!

    As a start-up, Office 365 can be a great solution for all your team communications. I also enrolled in the Microsoft Cloud Essentials program as part of a business track. Once you meet the Cloud Essentials requirements you will receive 250 Office 365 licenses! This includes Office and hosted Exchange, Lync, and SharePoint. Take advantage of what Microsoft has to offer for your start-up. It just may surprise you and save you a lot of your start-up budget.

    Read the article

  • ArchBeat Link-o-Rama for December 11, 2012

    - by Bob Rhubart
    Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis shares his thoughts—and a sample application—dealing with an "interesting ADF behavior" encountered over the weekend.

    Patching Oracle Exalogic - Updating Linux on the Compute Nodes - Part 1 | Jos Nijhoff
    Jos Nijhoff launches a series of posts that deal with "patching the operating system on the modified Sun Fire X4170 M2 servers...dubbed compute nodes in Exalogic terminology."

    Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad
    "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.

    Oracle Data Integrator Presentation from NYOUG Webinar | Gurcan Orhan
    Oracle ACE Director and award-winning data warehouse architect Gurcan Orhan shares his presentation from the recent NYOUG LI SIG.

    SOA 11g Technology Adapters – ECID Propagation | Greg Mally
    "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.

    Missing Duties for RUP3 upgrade in Fusion Applications
    Richard from the Oracle Fusion Middleware A-Team explains how to safely apply policy store changes in thirteen easy steps.

    Thought for the Day
    "Well over half of the time you spend working on a project (on the order of 70 percent) is spent thinking, and no tool, no matter how advanced, can think for you." — Frederick P. Brooks
    Source: SoftwareQuotes.com

    Read the article

  • Inside Red Gate - Introduction

    - by Simon Cooper
    I work for Red Gate Software, a software company based in Cambridge, UK. In this series of posts, I'll be discussing how we develop software at Red Gate, and what we get up to, all from a dev's perspective. Before I start the series proper, in this post I'll give you a brief background to what I have done and continue to do as part of my job. The initial few posts will be giving an overview of how the development sections of the company work. There is much more to a software company than writing the products, but as I'm a developer my experience is biased towards that, and so that is what this series will concentrate on. My background Red Gate was founded in 1999 by Neil Davidson & Simon Galbraith, who continue to be joint CEOs. I joined in September 2007, and immediately set to work writing a new Check for Updates client and server (CfU), as part of a team of 2. That was finished at the end of 2007. I then joined the SQL Compare team. The first large project I worked on was updating SQL Compare for SQL Server 2008, resulting in SQL Compare 7, followed by a UI redesign in SQL Compare 8. By the end of this project in early 2009 I had become the 'go-to' guy for the SQL Compare Engine (I'll explain what that means in a later post), which is used by most of the other tools in the SQL Tools division in one way or another. After that, we decided to expand into Oracle, and I wrote the prototype for what became the engine of Schema Compare for Oracle (SCO). In the latter half of 2009 a full project was started, resulting in the release of SCO v1 in early 2010. Near the end of 2010 I moved to the .NET division, where I joined the team working on SmartAssembly. That's what I continue to work on today. The posts in this series will cover my experience in software development at Red Gate, within the SQL Tools and .NET divisions. Hopefully, you'll find this series an interesting look at what exactly goes into producing the software at Red Gate.

    Read the article

  • It’s About You: Tell Microsoft How They’re Doing!

    - by juanlarios
    Every fall and spring, a survey goes out to a few hundred thousand IT folk in Canada asking what they think of Microsoft as a company. The information they get from this survey helps them understand what problems and issues you're facing and how they can do better. The team at Microsoft Canada takes the input they get from this survey very seriously.

    Now I don't know which of you will get the survey and who won't, but if you do find an email in your inbox from "Microsoft Feedback" with an email address of "[email protected]" and a subject line "Help Microsoft Focus on Customers and Partners" from now until April 13th — it's not a hoax or phishing email. Please open it and take a few minutes to tell them what you think. This is your chance to get your voice heard: if they're doing well, feel free to pile on the kudos (they love positive feedback!) and if you see areas they can improve, please point them out so they can make adjustments (they also love constructive criticism!).

    The Microsoft team would like to thank you for all your feedback in the past — to those of you who have filled out the survey and sent them emails. Thank you to all who engage with them in so many different ways through events, the blogs, online and in person. You are why they do what they do and they feel lucky to work with such a great community!

    One last thing - even if you don't get the survey you can always give the team feedback by emailing them directly through the Microsoft Canada IT Pro Feedback email address. They want to make sure they are serving you in the best possible way. Tell them what you want more of. What should they do less of or stop altogether? How can they help? Do you want more cowbell? Let them know through the survey or the email alias. They love hearing from you!

    Read the article

  • Feasibility to take over a JavaMe Project by Coders who have no experience in JavaMe

    - by Stephenmjm
    As the original JavaMe team will leave to do other items, the JavaMe project will be taken over by some guys knowing nothing about JavaMe.

    Transition period: one month

    About this JavaMe project:
    - about 3.5 million lines of code (more than 180 Java files, source code is 8.5KB in total)
    - using Polish and Proguard
    - documentation: the JavaMe project itself has no documentation, and no UML map

    Difficulties I guess:
    - getting familiar with JavaMe; this should be okay
    - in order to do further development we need to read the source code, and it's not easy to read 3.5 million lines of code that don't have enough comments
    - adaptation work for more than 100 phones

    These are the questions, thank you!
    1. In the case where our guys have no experience in JavaMe, is one month too hasty?
    2. In order to take over the job in time, what should we ask the original JavaMe team to do, considering we have no experience in JavaMe?
    3. What complications will there be in taking on the adaptation work without the original JavaMe team?
    Any other suggestions?

    Read the article

  • Worldwide Web Camps

    - by ScottGu
    Over the next few weeks Microsoft is sponsoring a number of free Web Camp events around the world.  These provide a great way to learn about ASP.NET 4, ASP.NET MVC 2, and Visual Studio 2010. The Web Camps are two day events.  The camps aren’t conferences where you sit quietly for hours and people talk at you – they are intended to be interactive.  The first day is focused on learning through presentations that are heavy on coding demos.  The second day is focused on you building real applications using what you’ve learned.  The second day includes hands-on labs, and you’ll join small development teams with other attendees and work on a project together. We’ve got some great speakers lined up for the events – including Scott Hanselman, James Senior, Jon Galloway, Rachel Appel, Dan Wahlin, Christian Wenz and more.  I’ll also be presenting at one of the camps. Below is the schedule of the remaining events (the sold-out Toronto camp was a few days ago): Moscow May 19-19 Beijing May 21-22 Shanghai May 24-25 Mountain View May 27-28 Sydney May 28-29 Singapore June 04-05 London June 04-05 Munich June 07-08 Chicago June 11-12 Redmond, WA June 18-19 New York June 25-26 Many locations are sold out already but we still have some seats left in a few of them.  Registration and attendance to all of the events is completely free.  You can register to attend at www.webcamps.ms. Hope this helps, Scott

    Read the article

  • Files are piling up in /usr/src/. How can I stop this?

    - by Bogdanovist
    I have been having many serious system issues over the past few weeks and have been scratching my head as to why. I've now worked out that the problem is having no inodes left on the root partition:

        $ df -i
        Filesystem     Inodes  IUsed   IFree IUse% Mounted on
        /dev/sda6      732960 724565    8395   99% /
        udev           125179    518  124661    1% /dev
        tmpfs          127001    464  126537    1% /run
        none           127001      4  126997    1% /run/lock
        none           127001      8  126993    1% /run/shm
        /dev/sda7     5234688 144639 5090049    3% /home

    What is the cause? I've found that 400K of those are in use in /usr/src:

        $ ls /usr/src
        linux-headers-3.2.0-25-generic      linux-headers-3.2.0-33
        linux-headers-3.2.0-25-generic-pae  linux-headers-3.2.0-33-generic
        linux-headers-3.2.0-26              linux-headers-3.2.0-33-generic-pae
        linux-headers-3.2.0-26-generic      linux-headers-3.2.0-35
        linux-headers-3.2.0-26-generic-pae  linux-headers-3.2.0-35-generic
        linux-headers-3.2.0-27              linux-headers-3.2.0-35-generic-pae
        linux-headers-3.2.0-27-generic      linux-headers-3.2.0-36
        linux-headers-3.2.0-27-generic-pae  linux-headers-3.2.0-36-generic
        linux-headers-3.2.0-29              linux-headers-3.2.0-36-generic-pae
        linux-headers-3.2.0-29-generic      linux-headers-3.2.0-39
        linux-headers-3.2.0-29-generic-pae  linux-headers-3.2.0-39-generic
        linux-headers-3.2.0-30              linux-headers-3.2.0-39-generic-pae
        linux-headers-3.2.0-30-generic      linux-headers-3.2.0-40
        linux-headers-3.2.0-30-generic-pae  linux-headers-3.2.0-40-generic
        linux-headers-3.2.0-31              linux-headers-3.2.0-40-generic-pae
        linux-headers-3.2.0-31-generic      linux-headers-3.2.0-41
        linux-headers-3.2.0-31-generic-pae  linux-headers-3.2.0-41-generic
        linux-headers-3.2.0-32              linux-headers-3.2.0-41-generic-pae
        linux-headers-3.2.0-32-generic      linux-headers-3.2.0-43
        linux-headers-3.2.0-32-generic-pae

    Surely not all of these are actually needed? I've tried apt-get autoremove but it leaves them all be. I don't want to remove them manually, but this is crippling my machine. They also take up almost 2G of the 11G system partition that is getting full (80%), aside from the inode issue. How can I safely remove the headers that are not needed?

    Read the article

  • DevDays ‘00 The Netherlands day #2

    - by erwin21
    Day 2 of DevDays 2010 and again 5 interesting sessions at the World Forum in The Hague. The first session of the day in the big World Forum theater was from Scott Hanselman; he gave a lap around .NET 4.0. In his way of presenting he talked about all kinds of new features of .NET 4.0 like MEF, threading, parallel processing, changes and additions to the CLR and DLR, WPF, and all new language features of .NET 4.0. After a small break it was time for session 2 from Scott Allen about Tips, Tricks and Optimizations of LINQ. He talked about lazy and deferred execution, the difference between IQueryable and IEnumerable, and the two flavors of LINQ syntax. The lunch was again very well prepared and delicious, but after that it was time for session 3, Web Vulnerabilities and Exploits, from Alex Thissen. This was no normal session but more like a workshop; we decided what kind of subjects we discussed, and the subjects were OWASP, XSS and other injections, validation, and encoding. He gave some handy tips and tricks on how to prevent such attacks. Session 4 was about the new features of C# 4.0 from Alex van Beek. He talked about Optional and Named Parameters, Generic Co- and Contra-Variance, the Dynamic keyword and COM Interop features. He showed how to use them but also when not to use them. The last session of today, and also the last session of DevDays 2010, was about WCF Best Practices from Gerben van Loon. He talked about 7 best practices that you must know when you are going to use WCF. With some quick demos he showed the problem and the solution for some common issues. They were two interesting days and next year I will surely be attending again.

    Read the article

  • Adventures in Scrum: Lesson 2 - For the record

    - by Martin Hinshelwood
    At SSW we have always done Agile. Recently we have started doing Scrum and we have nearly completed our first Sprint ever using Scrum. As you probably guessed from my previous post, it looks like it is going to be a "Failed Sprint", but the Scrum Team (this includes the ScrumMaster and the Product Owner) has learned a huge amount about working in the Scrum Framework.

    We have been running with a "Proxy Product Owner" for the last two weeks, but a simple mistake occurred either during the "Product Planning Meeting" or the "Sprint Planning Meeting" that could have prevented this Sprint from failing. We had a heated discussion on the vision of someone not in the room which ended with the assertion that the Product Owner would be quizzed again on their vision. This did not happen and we ran with the Proxy Product Owner's vision for two weeks.

    Product Owner vision: Update Component A of Product A to Silverlight
    Proxy Product Owner vision: Update Product A to Silverlight

    Do you see the problem? Worse than that, as we had a lot of junior members on the Scrum Team and we are just feeling our way around how Scrum will work at SSW, I missed implementing a fundamental rule. That's right, it was me. It does not matter that I did not know about this rule; it's on the site and I should have read it. Would a police officer let you off if you did not know that a red light meant stop? I think not... But what is this amazing rule, I hear you shout? It's simple; as per our rule I should have sent the following email:

    "Dear Proxy Product Owner,
    For the record, I disagree that the Product Owner wants us to 'Update Product A to Silverlight' as I still think that he wants us to 'Update Component A of Product A to Silverlight' and not the entire application.
    Regards, Martin"
    - 'For the record' - Rules to being Software Consultants - Dealing with Clients

    This email should have been copied to the entire Scrum Team, which would have included the Product Owner, who would have nipped this misunderstanding in the bud and we would have had one less impediment.

    Technorati Tags: SSW, SSW Rules, SSW Standards, Scrum, Product Owner, ScrumMaster, Sprint, Sprint Planning Meeting, Product Planning Meeting

    Read the article

  • Data Quality Through Data Governance

    Data Quality Governance

    Data quality is very important to every organization; bad data costs an organization time, money, and resources that could be saved if the proper governance were put in place.

    Data Governance Program Criteria:
    - Support from executive management and all business units
    - Data Stewardship Program
    - Cross-functional team of data stewards
    - Data Governance Committee
    - Quality structured data

    It should go without saying, but any successful project in today's business world must get buy-in from executive management and all stakeholders involved with the project. If management does not fully support a project because they do not see that it is in their and the company's best interest, then they will remove or eliminate funding, resources and the time allocated to work on the project. In essence they can render a project dead until it is officially killed by the business. In addition, buy-in from stakeholders is also very important, because they can cause delays and increased spending of time, money and resources if they do not support a project.

    Data Stewardship programs are administered by a data steward manager whose primary focus is to support, train and manage a cross-functional team of data stewards. A cross-functional team of data stewards, pulled from various departments, acts to ensure that all systems work to achieve the organization's goals. Typically, data stewards are subject matter experts that act as mediators between their respective departments and IT.

    Data Quality Procedures

    Data Governance Committees are composed of data stewards, upper management, IT leadership and various subject matter experts, depending on the company. The primary goal of this committee is to define strategic goals, coordinate activities, set data standards and offer data guidelines for the business.

    Data Quality Policies

    In 1997, Claudia Imhoff defined a Data Steward's responsibility as to approve business naming standards, develop consistent data definitions, determine data aliases, develop standard calculations and derivations, document the business rules of the corporation, monitor the quality of the data in the data warehouse, define security requirements, and so forth. She further explains that data stewards are responsible for creating and enforcing policies on the following (but not limited to) issues:
    - Resolving data integration issues
    - Determining data security
    - Documenting data definitions, calculations, summarizations, etc.
    - Maintaining/updating business rules
    - Analyzing and improving data quality

    Read the article

  • Is it reasonable to expect knowing the whole stack bottom up?

    - by Vaibhav Garg
    I am a Sr. developer/architect/Product Manager for embedded systems. The systems that I have had experience with have typically been small to medium size codebases - typically close to 25-30K LOC in C, using 8-16 and 32 bit low end microcontrollers. The systems have been entirely bootstrapped by our team - meaning that everything from the start-up code to the end application code has either been written by the team, or at the very least, is thoroughly understood and maintained by us. Now, if we were to start developing more complex systems with complex peripherals, such as USB OTG et al. (think low end cell phones), there are libraries and stacks available commercially and from chip vendors that reduce the task to just calling the right APIs and being able to use those peripherals. Now, from a habit point of view, this does not give me and the team a comfortable feeling, not being able to comprehend the entire code tree, with virtual black boxes at the lower layers. Is it reasonable to devote, and reserve, time getting into the details of how the APIs are implemented, assuming that the same would also entail getting into details of relevant standards (again, with USB as an example)? Or, alternatively, should a thorough understanding of the top level usage of the APIs be sufficient? This of course assumes that the source code for all libraries is available, which it is, in almost all cases. Edit: In partial response to @Abhi Beckert, the documentation is refreshingly comprehensive and meticulously maintained, AFAIK and as far as I have been able to judge; I have not had long experience with it.

    Read the article

  • links for 2010-05-06

    - by Bob Rhubart
    Podcast: Collaborate 10 Wrap-Up - Conclusion #c10
    More Collaborate 2010 Las Vegas highlights and hijinks from this ten-member panel, including OAUG and ODTUG board members, members of the Oracle ACE program, and OAUG President Dave Ferguson. (tags: otn oracle collaborate2010)

    Peter Scott: Realtime Data Warehouse Loading
    Rittman-Mead's Peter Scott looks at putting data into a data warehouse in real time. (tags: oracle datawarehousing businessintelligence)

    Live Webcast: Social BPM - Integrating Enterprise 2.0 with Business Applications - May 12, 2010 at 11:00 a.m. PT
    Business Process Management with integrated Enterprise 2.0 collaboration can improve business responsiveness and enhance overall enterprise productivity. Learn how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. (tags: oracle otn bpm enterprise2.0 webcast)

    Management Pack for Identity Management Viewlet
    A screencast produced by the Grid Control team showing the features of the Identity Management Pack for Grid Control 11g. Grid Control 11g now works with Oracle Virtual Directory 11g. (tags: oracle otn security identitymanagement)

    @pevansgreenwood: Having too much SOA is a bad thing (and what we might do about it)
    "The problem is usually too much flexibility, as flexibility creates complexity, and complexity exponentially increases the effort required to manage and deliver the software." -- Peter Evans-Greenwood (tags: soa complexity flexibility)

    @vampbenepe: Integration patterns for social data: the Open Social Data Bus
    "The main point is about defining the right integration pattern for social data: is it a 'message bus' pattern or a 'shared database' pattern?" -- William Vampbenepe (tags: oracle otn enterprise2.0 enterprisearchitecture)

    Read the article

  • Announcing the MOS WCI "Community"

    - by brian.harrison
    The WCI Technical Support team is pleased to announce the launch of the long-awaited WCI Support Community on My Oracle Support (MOS) "Community". Users can navigate to this "first stop" for WebCenter Interaction information by logging on and following this link: WCI Community (note that this requires a valid login credential for the My Oracle Support tool).

    In this community you'll find a product related discussion forum moderated by Oracle WebCenter Interaction support engineers, recommended tips and tricks, links to knowledge base articles, and best practices for setting up and administering your environment. We hope you'll take a minute to have a look through the community. If you have a question about WebCenter Interaction, a comment or a suggestion regarding the content, please feel free to post it to the forum and someone will respond to your request. Think of the forum here as another method to communicate directly with the WCI Technical Support team for questions and answers to simple WCI support topics. The forum is moderated by WCI Technical Support engineers directly and we hope it will help you avoid the need to log support incidents for less complex support related questions.

    We encourage all of our customers, both internal and external, to participate in the forum's discussions, sharing information, knowledge and best practices, in the effort to help us build a vital and vibrant "home base" for WCI users on the My Oracle Support tool. Thank you for visiting!

    The WebCenter Interaction Support Community Moderator Team

    Read the article

  • Unable to format USB on 12.04 [daemon inhibited]

    - by santosamaru
    I tried to format my USB drive. The first time it worked and all the data was gone, but I can't save any files to the USB drive. Then I tried to check whether it is working or broken; here is the report:

        santos@santos:~$ sudo badblocks -v /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        Checking blocks 0 to 7824383
        Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone
        Pass completed, 0 bad blocks found. (0/0/0 errors)
        santos@santos:~$ sudo badblocks -v -w /dev/sdb
        [sudo] password for santos:
        Sorry, try again.
        [sudo] password for santos:
        /dev/sdb is apparently in use by the system; it's not safe to run badblocks!
        santos@santos:~$

    How do I format the drive and fix this issue? I have read the link "Formatting Pen Drive causes 'Daemon Is Inhibited' Error", and like it says, when I try to move any items from the desktop I get "the destination is read only". In this case I also used Google and found the article http://ubuntuforums.org/showthread.php?t=1955353, which is the same issue, but following user13509's suggestion has not helped.

    Read the article

  • configure: error: Could not find libavformat - part of ffmpeg after installing libav from source

    - by Patryk
    I tried to install minidlna on my machine but it appears that it's not in the repositories anymore. So I decided to compile it myself. After downloading version 1.1.3 I tried to compile, but I needed the libav headers, which I couldn't install via apt - no idea why, there have been a lot of broken packages:

        $ sudo apt-get install libavcodec-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         libavcodec-dev : Depends: libavutil-dev (= 6:9.13-0ubuntu0.14.04.1) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Anyway, I installed libav 9.13 from source and now I have come to this point:

        $ ./configure
        ...
        checking for av_open_input_file in -lavformat... no
        checking for avformat_open_input in -lavformat... no
        checking for av_open_input_file in -lavformat... no
        checking for avformat_open_input in -lavformat... no
        configure: error: Could not find libavformat - part of ffmpeg

    But I have installed that! Even in the install log I can see:

        ...
        INSTALL libavdevice/libavdevice.a
        INSTALL libavfilter/libavfilter.a
        INSTALL libavformat/libavformat.a
        INSTALL libavresample/libavresample.a
        INSTALL libavcodec/libavcodec.a
        INSTALL libswscale/libswscale.a
        INSTALL libavutil/libavutil.a
        INSTALL libavdevice/avdevice.h
        INSTALL libavdevice/version.h
        INSTALL libavdevice/libavdevice.pc
        INSTALL libavfilter/avfilter.h
        INSTALL libavfilter/avfiltergraph.h
        INSTALL libavfilter/buffersink.h
        INSTALL libavfilter/buffersrc.h
        INSTALL libavfilter/version.h
        INSTALL libavfilter/libavfilter.pc
        INSTALL libavformat/avformat.h
        ....

    Read the article

  • Naming conventions: camelCase versus underscore_case ? what are your thoughts about it? [closed]

    - by poelinca
    I've been using underscore_case for about 2 years and I recently switched to camelCase because of the new job (I've been using the latter for about 2 months and I still think underscore_case is better suited for large projects where there are a lot of programmers involved, mainly because the code is easier to read). Now everybody at work uses camelCase because (so they say) the code looks more elegant. What are your thoughts about camelCase or underscore_case? P.S. Please excuse my bad English.

    Edit

    Some updates first:
    - the platform used is PHP (but I'm not expecting strictly PHP-related answers; anybody can share their thoughts on which would be the best to use, that's why I came here in the first place)
    - I use camelCase just as everybody else in the team (just as most of you recommend)
    - we use Zend Framework, which also recommends camelCase

    Some examples (related to PHP):
    - the CodeIgniter framework recommends underscore_case, and honestly the code is easier to read
    - ZF recommends camelCase, and I'm not the only one who thinks ZF code is a tad harder to follow through

    So my question would be rephrased: let's take a case where you have the platform Foo, which doesn't recommend any naming conventions, and it's the team leader's choice to pick one. You are that team leader; why would you pick camelCase, or why underscore_case?

    P.S. Thanks everybody for the prompt answers so far.

    Read the article

  • Out of space despite lots of free space remaining

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the "Disk Usage Analyser" is opened. At the top it says:

        Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)

        Folder   Usage    Size
        /        100%     12.5 GB
        usr      44.8%    5.6 GB
        home     30.3%    3.8 GB
        lib      13.0%    1.6 GB
        var      9.1%     1.1 GB
        boot     2.5%     309.5 MB

    and a lot of small contributors like etc, opt, sbin, bin, etc. I do not really understand this problem, since the analyser says at the top that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:

        Filesystem     Inodes  IUsed   IFree IUse% Mounted on
        /dev/sda7      610800 576874   33926   95% /
        udev           213451    563  212888    1% /dev
        tmpfs          218524    486  218038    1% /run
        none           218524      3  218521    1% /run/lock
        none           218524      7  218517    1% /run/shm
        /dev/sda8     2264752  16371 2248381    1% /home

    What does this mean?

    Read the article

  • Using Scrum on small projects where Owner doesn't want to be involved

    - by Andrej Mohar
    Recently I've been reading and learning quite a lot about Scrum and I like it a lot. However, I do have a couple of likely scenarios in my head to which I don't know the solution.

    So let's say that I might want to organize an agile team of (for instance) four web developers (one of them a UI/UX designer). This team would operate on Scrum principles. Initially we would probably be working on projects like landing pages for ordinary people's small businesses, like renting apartments, selling cookies... Such customers simply can't be given the Product Owner role (IMHO), because they usually expect to hire a company, give them the overall project goal with some details, and then expect the job to be done (including a lot of decision making) with as little of their involvement as possible (in their opinion, they have more important things to do). Let's say I'd like to engage myself in a developer/scrum master role (I know that even that is debatable, being a team member and scrum master at once), so I simply shouldn't take the role of the product owner as well.

    So as for my questions:
    1. If I'm my company's business owner, do I simply need to be a product owner as well (do these roles include each other)?
    2. Can I employ a sales person who might take the product owner role? Would it be better if it were an experienced developer instead of a sales person? Is this even a smart move?
    3. Lastly, is there another agile approach that might better suit my position?

    EDIT: Thank you everyone for the good input. I added some comments; any additional info will be greatly appreciated.

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning that beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be more important to account for in planning a large scale Scrum grouping?

    ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research which oddly appears to show that bigger teams (15+ ppl), not smaller teams (7 ppl), are better.

    UPDATE, "Re: Scrum doesn't scale": I have made huge amounts of progress in personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn:

    Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project.

    SOURCE: (ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • Running CRON job on Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced with Linux, so be descriptive in your answer.

    My environment: a local Linux server running Ubuntu 12.04, hosting SugarCRM 6.5.2. There is an area in SugarCRM called Scheduler where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a CRON job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM:

        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders for these contracts to be sent to the person concerned. Now, my SugarCRM email is configured correctly and I can send test emails using it. But CRON + scheduler is not giving any result; I can't receive any emails. Then I tried to read /var/log/syslog, and it is showing an entry for the following line each minute:

        Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1)

    I've a few questions:

    1. What does the CRON job line I've added in crontab mean? "cd /var/www/crm; php -f cron.php > /dev/null 2>&1" is not making any sense to me.
    2. How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.

    Read the article

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We're roughly midway through our transition from waterfall to agile using Scrum; we've changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn't suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level.

    The basic issue is this: some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that.

    This is not an issue with people being open to change. Certainly we've seen a few people that don't like change, but in the cases I'm concerned about, the individuals are good performers, genuinely open to change; they make an effort, they see how the rest of the team is changing and they want to fit in. It's not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don't find joy in work like they used to.

    I'm sure we can't be the only place that has bumped up against this. How have others approached it? If you're a developer that is motivated by personally owning a big chunk of work from end to end, and you've adjusted to a different way of working, what did it for you?

    Read the article
