Search Results

Search found 21662 results on 867 pages for 'may kadoodi'.


  • Prioritizing Product Features

    - by Robert May
    A very common task in Agile environments is prioritization.  Teams that are functioning well will prioritize new features, old features, the backlog, and any other source of stories for the team, and they’ll do it regularly. Not all teams are good at prioritizing according to the real return on investment that building stories will yield to the company.  This is unfortunate.  Too often, teams end up building features that are less valuable, and everyone seems to know it except perhaps the product owner!  Most features built into software are never even used.  Clearly, there’s not much return for features that go unused. So how does a company avoid building features that add little value to the company?  This is a tough question to answer, but usually, this prioritization starts at the top with the executives of the company.  After all, they’re responsible for the overall vision of the company. Here’s what I recommend:
    - Know your market.
    - Know your customers and users.
    - Know where you’re going and what you want to achieve.
    - Implement the vision.
    Know Your Market
    We often see companies that don’t know their market.  Personally, I’m surprised by this.  These companies don’t know who their competitors are, don’t know what features make their product desirable in the market, and in many cases, get by with saying, “I’ve been doing this for XX years.  I know what the market wants!”  This is almost never true.  In many cases, they equate “marketing” with “advertising” and don’t understand the difference.  Good companies will spend significant amounts of time and money finding out who they’re competing against and what makes their competitors successful in the marketplace.  Good companies understand that marketing involves more than just advertising.  Often, marketing is mostly research and analysis, not sales.  Until you understand your market, you cannot know what features will give you the best return on your investment dollar. Good companies have a marketing department and can move on to the next important step, which is to know your customers and your users.
    Know Your Customers and Users
    First, note that I included both customers and users.  They’re often not the same thing.  Users use the product that you build.  Customers buy the product that you build.  It’s a subtle difference, but too often, I’ve seen companies that focus exclusively on one or the other and are not successful simply because they ignore an important part of the group. If your company is doing appropriate marketing, you know that these are two different aspects of your product and that both deserve attention to have a product that is successful in your target market.  Your marketing department should be spending a lot of time understanding these personas and then conveying that information to the company. I’m always surprised when development teams think that they can build a product that people want to use without understanding the users of that product.  Developers think differently than most people in the world.  They know what the computer is doing.  The computer isn’t magic to them.  So when they assume that they know how to build something, they bring with them quite a bit of baggage.  Never assume that you know your customer unless you’re regularly having interaction with them.  Also, don’t just leave this to Marketing or Product Management.  Take the time to get your developers out with the customers as well. Developers are very smart people, and often, seeing how someone uses their software inspires them to make a much better product. Very often, because the users and customers aren’t known, teams will spend a significant amount of time building apps that are super flexible and configurable so that any possible combination of features can be used.  This demonstrates a clear lack of understanding of the customer.  Most configuration questions can quickly be answered by talking to the customer.  In most cases, if your software requires significant setup and configuration before it’s usable, you probably don’t know your customers and users very well. Until you know your customers, you cannot know what features will be most valuable to your customers and you cannot build those features in a way that your customers can use.
    Know Where You’re Going and What You Want to Achieve
    Many companies suffer from not having a plan.  Executives will tell the team to make them a plan.  The team, not knowing their market and customers and users, will come up with a plan that doesn’t reflect reality and doesn’t consider ROI.  Management then wonders why the product is doing poorly in the marketplace. Instead of leaving this up to the teams, as executives, work with Marketing to understand what broad categories of features will sell the most product in the marketplace.  Then, once you’ve determined that, give this vision to the team and let them run with it.  Revise the vision as needed, but avoid changing direction frequently.  Sure, sometimes you need to, but often, executives will change priorities many times a month, leading to nothing more than confusion.  If the team has a vision, they’ll be able to execute that vision far better than they could otherwise. By knowing what products are most important, you can set budgetary goals and guidelines that will help you achieve the vision that was created.
    Implement the Vision
    Creating the vision is often where executives stop participating in the plan.  The team is responsible for implementing that vision.  Executives should attend showcases and should remain aware of the progress that the team is making towards meeting the vision, however. Once a broad vision has been created, the team should break that vision down into minimal marketable features (MMFs).  These MMFs should be sized using story points so that, using the team’s velocity, an estimated cost can be determined for each feature.  The product management team should then try to quantify the relative value of the MMFs based on customer feedback and interviews.  Once the value and cost of creating a feature are understood, a return on investment can be calculated.  The features should then be prioritized, with the MMFs that have the highest value and lowest cost rising to the top of the list of features to implement.  Don’t let politics get in the way! Once the MMFs have been prioritized, they should go through release planning to schedule them for implementation.
    Conclusion
    By having a good grasp on the strategy of the company, your Agile teams can be much more effective.  Each and every story the team is implementing will roll up into features that matter to the company and provide ROI to them.  The steps outlined in this post should be repeated on a regular basis.  I recommend reviewing them at least once per quarter to make sure that the vision hasn’t shifted and that the teams are still working on what matters most to the company. Technorati Tags: Agile,Product Owner,ROI
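    As a rough illustration of the ROI-ranking step described above, here is a minimal C# sketch that orders MMFs by relative value per unit of estimated cost. The Mmf type, the cost-per-point figure, and the sample numbers are all hypothetical and are not taken from the post itself:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical minimal marketable feature: sized in story points by the team,
        // valued by product management from customer feedback and interviews.
        public class Mmf
        {
            public string Name { get; set; }
            public int StoryPoints { get; set; }
            public decimal RelativeValue { get; set; }
        }

        public static class Prioritizer
        {
            // costPerPoint is an assumed figure derived from the team's velocity and run rate.
            public static IEnumerable<Mmf> RankByRoi(IEnumerable<Mmf> backlog, decimal costPerPoint)
            {
                // Highest value for the lowest cost rises to the top of the list to implement.
                return backlog.OrderByDescending(f => f.RelativeValue / (f.StoryPoints * costPerPoint));
            }
        }

        public class Program
        {
            public static void Main()
            {
                var backlog = new List<Mmf>
                {
                    new Mmf { Name = "Single sign-on",  StoryPoints = 20, RelativeValue = 50m },
                    new Mmf { Name = "Export to Excel", StoryPoints = 5,  RelativeValue = 30m },
                    new Mmf { Name = "Custom theming",  StoryPoints = 40, RelativeValue = 10m },
                };

                foreach (var mmf in Prioritizer.RankByRoi(backlog, costPerPoint: 500m))
                    Console.WriteLine(mmf.Name);
            }
        }

    The point is only that once each MMF has a value and an estimated cost, the ordering falls out of a single comparison.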

    Read the article

  • According to MSDN, the ReadFile() Win32 function may incorrectly report read operation completion. When?

    - by Martin Dobšík
    The MSDN states in its description of the ReadFile() function (http://msdn.microsoft.com/en-us/library/aa365467%28VS.85%29.aspx): “If hFile is opened with FILE_FLAG_OVERLAPPED, the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that the read operation is complete.” I have some applications that are violating the above recommendation and I would like to know the severity of the problem. I mean the program uses a named pipe that has been created with FILE_FLAG_OVERLAPPED, but it reads from it using the following call: ReadFile(handle, &buf, n, &n_read, NULL); That means it passes NULL as the lpOverlapped parameter. That call should not work correctly in some circumstances according to the documentation. I have spent a lot of time trying to reproduce the problem, but I was unable to! I always got all the data in the right place at the right time. I was only testing named pipes, though. Would anybody know when I can expect ReadFile() to incorrectly return and report successful completion even though the data is not yet in the buffer? What would have to happen in order to reproduce the problem? Does it happen with files, pipes, sockets, consoles, or other devices? Do I have to use a particular version of the OS? Or a particular way of reading (like registering the handle with an I/O completion port)? Or a particular synchronization of reading and writing processes/threads? When would that fail? It works for me :/ Please help! With regards, Martin

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work.  I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI. Today, I developed a mechanism for starting from scratch with your database.  By scratch, I mean blowing away the existing database and creating it again from a single command line call.  I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command and that they should be operating off of their own isolated database to improve productivity.  These scripts will help that. Here’s how I did it.  First, we have to disconnect users.  I did so using the help of a script from sql server central.  Note that I’m using sqlcmd variable replacement. -- kills all the users in a particular database -- dlhatheway/3M, 11-Jun-2000 declare @arg_dbname sysname declare @a_spid smallint declare @msg varchar(255) declare @a_dbid int set @arg_dbname = '$(DatabaseName)' select @a_dbid = sdb.dbid from master..sysdatabases sdb where sdb.name = @arg_dbname declare db_users insensitive cursor for select sp.spid from master..sysprocesses sp where sp.dbid = @a_dbid open db_users fetch next from db_users into @a_spid while @@fetch_status = 0 begin select @msg = 'kill '+convert(char(5),@a_spid) print @msg execute (@msg) fetch next from db_users into @a_spid end close db_users deallocate db_users GO Once all users are booted from the database, we can commence with recreating the database.  I generated the script that is used to create a database from SQL Server management studio, so I’m only going to show the bits that weren’t generated that are important.  There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations.  I used Method 1 from a technet blog and then modified it a bit to do what I needed to do.  I ended up using dynamic SQL because for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string.  I’m dropping the database first, if it exists.  Here’s the code:   IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)') BEGIN drop database $(DatabaseName) END; go IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; -- Create temp database. Because no options are given, the default data and --- log path locations are used CREATE DATABASE zzTempDBForDefaultPath; DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512); --Get the default data path SELECT @Default_Data_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0); --Get the default Log path SELECT @Default_Log_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1); --Clean up. 
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; DECLARE @SQL nvarchar(max) SET @SQL= 'CREATE DATABASE $(DatabaseName) ON PRIMARY ( NAME = N''$(DatabaseName)'', FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''', SIZE = 2048KB , FILEGROWTH = 1024KB ) LOG ON ( NAME = N''$(DatabaseName)Log'', FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''', SIZE = 1024KB , FILEGROWTH = 10%) ' exec (@SQL) GO And with that, your database is created.  You can run these scripts on any server and on any database name.  To do that, I created an MSBuild script that looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <DatabaseName>MyDatabase</DatabaseName> <Server>localhost</Server> <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd> <ScriptDirectory>.\Scripts</ScriptDirectory> </PropertyGroup> <Target Name ="Rebuild"> <ItemGroup> <ScriptFiles Include="$(ScriptDirectory)\*.sql"/> </ItemGroup> <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/> </Target> </Project> Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory.  Note also that the target is using batching to run each script in the scripts subdirectory, one after the other.  Each script is passed to the sqlcmd command line execution using the .Identity property on the itemgroup that is created.  This target file is saved in the file “Database.target”. To make this work, you’ll need msbuild in your path, and then run the following command: msbuild database.target /target:Rebuild Once you’ve got your virgin database setup, you’d then need to use a tool like dbdeploy.net to determine that it was a virgin database, build a change script based on the change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts.  I’m doing that next, so I’ll post a blog update when I’ve got it working. Technorati Tags: MSBuild,Agile,CI,Database

    Read the article

  • Easy Scaling in XAML (WPF)

    - by Robert May
    Ran into a problem that needed solving that was kind of fun.  I’m not a XAML guru, and I’m sure there are better solutions, but I thought I’d share mine. The problem was this:  Our designer had, appropriately, designed the system for a 1920 x 1080 screen resolution.  This is for a full screen, touch screen device (think Kiosk), which has that resolution, but we also wanted to demo the device on a tablet (currently using the AWESOME Samsung tablet given out at Microsoft Build).  When you’d run it on that tablet, things were ugly because it was at a lower resolution than the target device. Enter scaling.  I did some research and found out that I probably just need to monkey with the LayoutTransform of some grid somewhere.  This project is using MVVM and has a navigation container that we built that lives on a single root view.  User controls are then loaded into that view as navigation occurs. In the parent grid of the root view, I added the following XAML: <Grid.LayoutTransform> <ScaleTransform ScaleX="{Binding ScaleWidth}" ScaleY="{Binding ScaleHeight}" /> </Grid.LayoutTransform> And then in the root View Model, I added the following code: /// <summary> /// The required design width /// </summary> private const double RequiredWidth = 1920; /// <summary> /// The required design height /// </summary> private const double RequiredHeight = 1080; /// <summary>Gets the ActualHeight</summary> public double ActualHeight { get { return this.View.ActualHeight; } } /// <summary>Gets the ActualWidth</summary> public double ActualWidth { get { return this.View.ActualWidth; } } /// <summary> /// Gets the scale for the height. /// </summary> public double ScaleHeight { get { return this.ActualHeight / RequiredHeight; } } /// <summary> /// Gets the scale for the width. /// </summary> public double ScaleWidth { get { return this.ActualWidth / RequiredWidth; } } Note that View.ActualWidth and View.ActualHeight are just pointing directly at FrameworkElement.ActualWidth and FrameworkElement.ActualHeight. That’s it.  Just calculate the ratio and bind the scale transform to it. Hopefully you’ll find this useful. Technorati Tags: WPF,XAML
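    A detail the post does not show is how the ScaleWidth and ScaleHeight bindings get refreshed when the window is resized, since the view model's computed properties will not raise change notifications on their own. Here is a minimal sketch of one way to wire that up, assuming the root view model implements INotifyPropertyChanged and is handed the view as a FrameworkElement; the RootViewModel name and the constructor wiring are hypothetical, not from the original post:

        using System.ComponentModel;
        using System.Windows;

        // Hypothetical root view model; the original post does not show this wiring.
        public class RootViewModel : INotifyPropertyChanged
        {
            private const double RequiredWidth = 1920;
            private const double RequiredHeight = 1080;
            private readonly FrameworkElement view;

            public RootViewModel(FrameworkElement view)
            {
                this.view = view;
                // Re-evaluate the scale bindings whenever the view is resized.
                this.view.SizeChanged += (s, e) =>
                {
                    OnPropertyChanged("ScaleWidth");
                    OnPropertyChanged("ScaleHeight");
                };
            }

            public double ScaleWidth { get { return view.ActualWidth / RequiredWidth; } }
            public double ScaleHeight { get { return view.ActualHeight / RequiredHeight; } }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }

    With something like that in place, the LayoutTransform bindings shown above re-evaluate whenever the root view changes size.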

    Read the article

  • MIA

    - by Robert May
    So, I’ve been missing in action on this blog for quite some time.  I need to rectify that. Part of the reason I’ve been absent is that I haven’t been able to talk about what I’m working on.  A former client watches my blog rather closely, and although we accomplished many good things together, their culture is such that they really don’t like people to freely express their thoughts (you’ll note my blog posts stopped rather abruptly).  I learned some really important lessons about Agile in the last 3 years, and I think it’s worthwhile to talk about them.  Sometimes things worked really well; sometimes they failed.  Sometimes that failure was me, sometimes it wasn’t. I understand Agile better now, and hopefully, what I have to say will guide others through this process and help others understand Agile better. One thing that I’ve learned is that MANY companies that say they are doing Agile are NOT really doing Agile.  Too often, they pick the things they like and don’t follow the process long enough to know which rules they can break, and which ones they shouldn’t.  This is probably the primary reason why Agile fails. So, expect more posts, especially as I’m flying coast to coast. :)

    Read the article

  • RegEx for a date format

    - by Ivan
    Say I have a string like this: 07-MAY-07 Hello World 07-MAY-07 Hello Again So the pattern is, DD-MMM-YY, where MMM is the three letter format for a month. What Regular Expression will break up this string into: 07-MAY-07 Hello World 07-MAY-07 Hello Again Using Jason's code below modified for C#, string input = @"07-MAY-07 Hello World 07-MAY-07 Hello Again"; string pattern = @"(\d{2}-[A-Z]{3}-\d{2}\s)(\D*|\s)"; string[] results = Regex.Split(input, pattern); results.Dump(); Console.WriteLine("Length = {0}", results.Count()); foreach (string split in results) { Console.WriteLine("'{0}'", split); Console.WriteLine(); } I get embedded blank lines? Length = 7 '' '07-MAY-07 ' 'Hello World ' '' '07-MAY-07 ' 'Hello Again' '' I don't even understand why I am getting the blank lines...?
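    For what it's worth, the blank entries are expected from Regex.Split here: when the pattern contains capturing groups, the captured text is included in the result array, and the (zero-length) substrings before the first match, between matches, and after the last match come back as empty strings. Below is a small C# sketch of one way to keep only the non-empty pieces, using a single-line version of the same input and pattern; filtering like this is just one approach, not the only one:

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class SplitDemo
        {
            static void Main()
            {
                string input = "07-MAY-07 Hello World 07-MAY-07 Hello Again";
                string pattern = @"(\d{2}-[A-Z]{3}-\d{2}\s)(\D*|\s)";

                // Drop the zero-length pieces that Split reports around and between matches.
                string[] results = Regex.Split(input, pattern)
                                        .Where(s => !string.IsNullOrEmpty(s))
                                        .ToArray();

                foreach (string split in results)
                    Console.WriteLine("'{0}'", split);
                // '07-MAY-07 '
                // 'Hello World '
                // '07-MAY-07 '
                // 'Hello Again'
            }
        }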

    Read the article

  • May I open my own device driver twice simultaneously from a user program under Linux?

    - by Viktor Gyuris
    Somewhere I read that opening the same file twice has undefined semantics and should be avoided. In my situation, I would like to open my own device multiple times, associating multiple file descriptors with it. The file operations of my device are all safe. Is there some part of Linux between the open() syscall and the point where it calls the registered file operation .open() that is unsafe?

    Read the article

  • Database design problem: intermediate table between 2 tables may end up with too many results.

    - by SK.
    I have to design a database to handle forms. Basically, a form needs to go through (exactly) 7 people, one by one. Each person can either agree or decline a form. If one declines, the chain stops and the following people don't even get notified that there is a form. Right now I have thought of these 3 tables: FORM, PERSON, and RESPONSE in between. However, my first solution sounds too heavy, because each form could have up to 7 responses. The first option is with the RESPONSE table in between: each successful form ends up with 7 rows in that table. The second option is with the responding information directly inside the form: it looks ugly, but at least it keeps everything as singular as possible. On the bad side, I can't track the response dates, but I don't think that is crucial here. What is your opinion on this? I feel like both of them are wrong and I don't know how to fix that. If it matters, I'll be using Oracle 9.

    Read the article

  • Quick Script for Adding Skype Groups

    - by Robert May
    So, I needed to add about 30 people to several different Skype groups today, and I didn’t want to repeat the /add [skypename] thing over and over and over.  Building the list was a pain . . . I couldn’t find a good way to extract all of the users in an existing group.  There’s probably an API or something, but I just did that part by hand. Adding them to the groups was pretty easy with Windows Scripting Host.  Basically, I just ran this: <package> <job id="vbs"> <script language="VBScript"> set WshShell = WScript.CreateObject("WScript.Shell") WshShell.AppActivate 4484 WScript.Sleep 100 WshShell.SendKeys "/add user1~" WScript.Sleep 100 … WshShell.SendKeys "/add usern~" WScript.Sleep 100 </script> </job> </package> Add as many users as you need by copying the SendKeys and Sleep lines.  Then, save the script to a .wsf file.  The AppActivate line needs to be changed to have the process id of Skype instead of the number there.  To get that, open up Task Manager, click on Processes, then find skype.exe and find its PID. Before you double click on the file in Windows Explorer, you’ll need to have created the groups in Skype.  For each group, open the group, and click in the chat window of the group.  Then double click on the WSF file.  If you don’t click in the chat window, you will likely get the add user dialog box instead of just adding the users. Technorati Tags: Skype,Script

    Read the article

  • Purple/Pink lines on screen after login on Ubuntu Desktop 13.04

    - by Thomas May
    I downloaded Ubuntu 13.04 and I loaded up the live system. It loaded up fine, but when I clicked on the Ubuntu logo, purple and pink lines appeared on screen and they didn't go away. So I thought that if I installed the OS it would be fine, so I installed the OS, logged in, and lo and behold, the purple and pink lines were back. My video card is nVidia nForce (I think). Anyone having the same problem???

    Read the article

  • Agile Executives

    - by Robert May
    Over the years, I have experienced many different styles of software development. In the early days, most of the development was Waterfall development. In the last few years, I’ve become an advocate of Scrum. As I talked about last month, many people have misconceptions about what Scrum really is. The reason why we do Scrum at Veracity is because of the difference it makes in the life of the team doing Scrum. Software is for people, and happy, motivated people will build better software. However, not all executives understand Scrum and how to get the information they need from development teams that use Scrum. I think that these executives need a support system for managing Agile teams.
    Historical Software Management
    When Henry Ford pioneered the assembly line, I doubt he realized the impact he’d have on management through the ages. Historically, management was about managing the process of building things. The people were just cogs in that process. Like all cogs, they were replaceable. Unfortunately, most of the software industry followed this same style of management. Many of today’s senior managers learned how to manage companies before software was a significant influence on how the company did business. Software development is a very creative process, but too many managers have treated it like an assembly line. Ideas go in, working software comes out, and we just have to make sure that the ideas going in are perfect, and then the software will be perfect.
    Lean Manufacturing
    In the manufacturing industry, Lean manufacturing has revolutionized Henry Ford’s assembly line. Derived from the Toyota process, Lean places emphasis on always providing value for the customer. Anything the customer wouldn’t be willing to pay for is wasteful. Agile is based on similar principles. We’re building software for people, and anything that isn’t useful to them doesn’t add value. Waterfall development would have teams build reams and reams of documentation about how the software should work. Agile development dispenses with this work because excessive documentation doesn’t add value. Instead, teams focus on building documentation only when it truly adds value to the customer. Many other Agile principles are similar.
    Playing Catch-up
    Just like in the manufacturing industry, many managers in the software industry have yet to understand the value of the principles of Lean and Agile. They think they can wrap the uncertainties of software development up in a nice little package and then just execute, usually followed by failure. They spend a great deal of time and money trying to exactly predict the future. That expenditure of time and money doesn’t add value to the customer. Managers that understand Agile know that there is a better way. They will instead focus on the priorities of the near term in detail, and leave the future to take care of itself. They have very detailed two-week plans with less detailed quarterly plans. These plans are guided by a general corporate strategy that doesn’t focus on the exact implementation details. These managers also think in smaller features rather than large functionality. This adds a great deal of value to customers, since the features that matter most are the ones that the team focuses on in the near term and is then able to deliver to the customers that are paying for them. Agile managers also realize that stale software is very costly. They know that keeping the technology in their software current is much less expensive and risky than large, infrequent rewrites, and they schedule time in each release for refactoring the existing software.
    Agile Executives
    Even though Agile is a better way, I’ve still seen failures using the Agile process. While some of these failures can be attributed to the team, most of them are caused by managers, not the team. Managers fail to understand what Agile is, how it works, and how to get the information that they need to make good business decisions. I think this is a shame. I’m very pleased that Veracity understands this problem and is trying to do something about it. Veracity is a key sponsor of Agile Executives. In fact, Galen is this year’s acting president for Agile Executives. The purpose of Agile Executives is to help managers better manage Agile teams and see better success. Agile Executives is trying to build a community of executives that range from managers interested in Agile to managers that have successfully adopted Agile. Together, these managers can form a community of support and ideas that will help make Agile teams more successful.
    Helping Your Team
    You can help too! Talk with your manager and get them involved in Agile Executives. Help Veracity build the community. If your manager understands Agile better, he’ll understand how to help his teams, which will result in software that adds more value for customers. If you have any questions about how you can be involved, please let me know. Technorati Tags: Agile,Agile Executives

    Read the article

  • Reflection and the params Keyword

    - by Robert May
    I’ve had to look this up a couple of times, and there’s not much out there, so I end up guessing the same answer over and over. When using MethodBase.GetParameters() to get an array of ParameterInfo objects, I often want to get a count of the number of parameters that are out, optional, params, etc.  For out and optional, you can simply check ParameterInfo.IsOut or ParameterInfo.IsOptional or any number of other “Attributes”. However, for params, there isn’t a property on ParameterInfo.  Instead, you have to do this: info.GetCustomAttributes(typeof(ParamArrayAttribute), true) This will get you the set of attributes that are the ParamArrayAttribute, which you can then turn into a LINQ statement that looks like this: methodParameters.Count(info => info.GetCustomAttributes(typeof(ParamArrayAttribute), true).Count() > 0); Which, assuming that methodParameters is the result of MethodBase.GetParameters, will give you a count of the number of parameters that have the params keyword.  Of course, there can be only one, but who’s counting! Now, hopefully, the next time I try to look this up, my own blog will come up with the answer. Technorati Tags: Reflection
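    As a self-contained sketch of the same technique, here is a small program that counts the params (and out) parameters of a sample method; the Demo class and its Example method are made up purely for illustration:

        using System;
        using System.Linq;
        using System.Reflection;

        class Demo
        {
            // Hypothetical method with an out parameter and a params parameter.
            static void Example(int id, out string name, params object[] extras) { name = ""; }

            static void Main()
            {
                ParameterInfo[] methodParameters = typeof(Demo)
                    .GetMethod("Example", BindingFlags.NonPublic | BindingFlags.Static)
                    .GetParameters();

                // Same check as in the post: a parameter is "params" if it carries ParamArrayAttribute.
                int paramsCount = methodParameters.Count(info =>
                    info.GetCustomAttributes(typeof(ParamArrayAttribute), true).Count() > 0);

                Console.WriteLine("out parameters:    {0}", methodParameters.Count(info => info.IsOut));
                Console.WriteLine("params parameters: {0}", paramsCount);
            }
        }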

    Read the article

  • Business Choices and Evony

    - by Robert May
    Recently, I’ve been playing a game called Evony, and I finally decided to quit the game and thought I should warn others who might be tempted.  I also find a lot of insight with this game as an example.  A few of the companies that I’ve worked with or worked for have been like this and they are NOT good places to be. Evony is a joke designed to milk as much money out of people as possible.  As a professional software developer who mentors teams on how to build better software, here's what I see:
    - They obviously offshore all development and have little oversight over that offshore development, and they probably have a small team at that.  Evidenced by the poor grammar throughout the game.
    - They're seeking to maximize revenue and pushing to do as little development as possible, which would mean a small team.
    - They're horribly understaffed in the customer support department, as evidenced by never replying to this forum and never responding to bug reports or help requests (I've had one open with no response AT ALL for over a month . . .)
    - They have way inadequate testing, no CI, and probably no automated unit tests.  You can see this by the poor grammar throughout the game and the type of bugs that show up.
    - They aren't following a formal development process (no Agile, Waterfall, or anything else), as evidenced by their lack of a predictable release cycle and lack of visibility.
    - I'm guessing that the internal code base is terrible, otherwise, there wouldn't be an "Age II" that had nothing more than a new visual interface and a few rule tweaks.  This is also evidenced by the itty bitty scope of bug fixes and their inability to really fix bugs.
    - Their Architect sucks.  Really, 42k users is all you can handle on a single server?  Could you REALLY not come up with a better way to scale to handle users?  They've built isolated worlds, instead of a single continuous world.
    - Back to milking people for money--to really progress, you have to spend money.
    All of this adds up to knowing, deliberate actions on the part of management.  They CHOOSE to do this (like AOL choosing to send more discs instead of improving quality). So, what can we learn?
    - This game will never really improve, since the bosses don't care; they're only in it for the money.
    - The game will never have good support.  Again, the owners don't care.
    - Giving them money only perpetuates this scam (and yes, I've given them money, way too much money. :()
    - They don't care if you quit.  There's a new sucker born every day.
    - Don't EVER go to work for them.  I've worked both with and for people like this and the culture is NEVER good.
    Ah well. Technorati Tags: Evony

    Read the article

  • WCF error: The client and service bindings may be mismatched?

    - by Rev
    Hi, let's compare the server config and the client config, and help me find the difference between them! Client config <system.serviceModel> <client> <endpoint address="http://localhost/admin2/AdminCentralService.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config" contract="TIR.ThreeTier.ICommandInvoker" name="AdminCentralServiceConfig" /> <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config" contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" /> </client> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" /> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true" /> </security> </binding> </wsHttpBinding> </bindings> Server config <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="AdminCentral.Business.Web.Service1Behavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/> <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" establishSecurityContext="true"/> </security> </binding> </wsHttpBinding> </bindings> <services> <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior" name="AdminCentral.Business.Web.AdminCentralService"> <endpoint address="" binding="wsHttpBinding" contract="AdminCentral.Business.Web.ICommandInvoker"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services>

    Read the article

  • Database Rebuild

    - by Robert May
    I promised I’d have a simpler mechanism for rebuilding the database.  Below is a complete MSBuild targets file for rebuilding the database from scratch.  I don’t know if I’ve explained the rationale for this.  The reason why you’d WANT to do this is so that each developer has a clean version of the database on their local machine.  This also includes the continuous integration environment.  Basically, you can do whatever you want to the database without fear, and in a minute or two, have a completely rebuilt database structure. DBDeploy (including the KTSC build task for dbdeploy) is used in this script to do change tracking on the database itself.  The MSBuild ExtensionPack is used in this target file.  You can get an MSBuild DBDeploy task here. There are two database scripts that you’ll see below.  First is the task for creating an admin (dbo) user in the system.  This script looks like the following: USE [master] GO If not Exists (select Name from sys.sql_logins where name = '$(User)') BEGIN CREATE LOGIN [$(User)] WITH PASSWORD=N'$(Password)', DEFAULT_DATABASE=[$(DatabaseName)], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF END GO EXEC master..sp_addsrvrolemember @loginame = N'$(User)', @rolename = N'sysadmin' GO USE [$(DatabaseName)] GO CREATE USER [$(User)] FOR LOGIN [$(User)] GO ALTER USER [$(User)] WITH DEFAULT_SCHEMA=[dbo] GO EXEC sp_addrolemember N'db_owner', N'$(User)' GO The second creates the changelog table.  This script can also be found in the dbdeploy.net install\scripts directory. CREATE TABLE changelog ( change_number INTEGER NOT NULL, delta_set VARCHAR(10) NOT NULL, start_dt DATETIME NOT NULL, complete_dt DATETIME NULL, applied_by VARCHAR(100) NOT NULL, description VARCHAR(500) NOT NULL ) GO ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set) GO Finally, here’s the targets file. <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0" DefaultTargets="Update"> <PropertyGroup> <DatabaseName>TestDatabase</DatabaseName> <Server>localhost</Server> <ScriptDirectory>.\Scripts</ScriptDirectory> <RebuildDirectory>.\Rebuild</RebuildDirectory> <TestDataDirectory>.\TestData</TestDataDirectory> <DbDeploy>.\DBDeploy</DbDeploy> <User>TestUser</User> <Password>TestPassword</Password> <BCP>bcp</BCP> <BCPOptions>-S$(Server) -U$(User) -P$(Password) -N -E -k</BCPOptions> <OutputFileName>dbDeploy-output.sql</OutputFileName> <UndoFileName>dbDeploy-output-undo.sql</UndoFileName> <LastChangeToApply>99999</LastChangeToApply> </PropertyGroup> <Import Project="$(MSBuildExtensionsPath)\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks"/> <UsingTask TaskName="Ktsc.Build.DBDeploy" AssemblyFile="$(DbDeploy)\Ktsc.Build.dll"/> <ItemGroup> <Variable Include="DatabaseName"> <Value>$(DatabaseName)</Value> </Variable> <Variable Include="Server"> <Value>$(Server)</Value> </Variable> <Variable Include="User"> <Value>$(User)</Value> </Variable> <Variable Include="Password"> <Value>$(Password)</Value> </Variable> </ItemGroup> <Target Name="Rebuild"> <!--Take the database offline to disconnect any users. Requires that the current user is an admin of the sql server machine.--> <MSBuild.ExtensionPack.SqlServer.SqlCmd Variables="@(Variable)" Database="$(DatabaseName)" TaskAction="Execute" CommandLineQuery="ALTER DATABASE $(DatabaseName) SET OFFLINE WITH ROLLBACK IMMEDIATE"/> <!--Bring it back online. If you don't, the database files won't be deleted.--> <MSBuild.ExtensionPack.Sql2008.Database TaskAction="SetOnline" DatabaseItem="$(DatabaseName)"/> <!--Delete the database, removing the existing files.--> <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Delete" DatabaseItem="$(DatabaseName)"/> <!--Create the new database in the default database path location.--> <MSBuild.ExtensionPack.Sql2008.Database TaskAction="Create" DatabaseItem="$(DatabaseName)" Force="True"/> <!--Create admin user--> <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" InputFiles="$(RebuildDirectory)\0002 Create Admin User.sql" Variables="@(Variable)" /> <!--Create the dbdeploy changelog.--> <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(RebuildDirectory)\0003 Create Changelog.sql" Variables="@(Variable)" /> <CallTarget Targets="Update;ImportData"/> </Target> <Target Name="Update" DependsOnTargets="CreateUpdateScript"> <MSBuild.ExtensionPack.SqlServer.SqlCmd TaskAction="Execute" Server="(local)" Database="$(DatabaseName)" LogOn="$(User)" Password="$(Password)" InputFiles="$(OutputFileName)" Variables="@(Variable)" /> </Target> <Target Name="CreateUpdateScript"> <Ktsc.Build.DBDeploy DbType="mssql" DbConnection="User=$(User);Password=$(Password);Data Source=$(Server);Initial Catalog=$(DatabaseName);" Dir="$(ScriptDirectory)" OutputFile="..\$(OutputFileName)" UndoOutputFile="..\$(UndoFileName)" LastChangeToApply="$(LastChangeToApply)"/> </Target> <Target Name="ImportData"> <ItemGroup> <TestData Include="$(TestDataDirectory)\*.dat"/> </ItemGroup> <Exec Command="$(BCP) $(DatabaseName).dbo.%(TestData.Filename) in &quot;%(TestData.Identity)&quot; $(BCPOptions)"/> </Target> </Project> Technorati Tags: MSBuild

    Read the article

  • Collabnet Subversion and Self Signed Certificates

    - by Robert May
    We installed Collabnet as our Subversion server recently.  This is the first time that we’ve used it.  In general, it seems pretty good, but we ran into a problem with it.  People were getting the following error in Tortoise: OPTIONS of ’https://xxxx.xxxxxxxx.xxxx/svn/xxxxx’: SSL handshake failed: SSL error code – 1/1/336032856 (https://xxxx.xxxxxxxx.xxxx) The odd thing is that for some people, it worked, and for others, it didn’t!  I also couldn’t find anything useful out on the internet. We had checked the “Subversion Server should serve via https” option in the settings, and all of the ports were open, etc. This option causes a self-signed certificate to be used. What we discovered: Tortoise must use the same URL as is in the Hostname field on the General settings for Collabnet, or you’ll get this error.  Basically, some people were using https://svn.xxxxxxx.xxxxx and others were using https://computername.xxxxxxxx.xxxx.  Because the Hostname field used the computer name version, the whole thing broke.  By changing the host name to the svn version, which is what they should be using, the problem went away.  The users do get the “Accept Certificate” prompt, but we can live with that! Technorati Tags: Subversion,Collabnet

    Read the article

  • New Responsibilities

    - by Robert May
    With the start of the new year, I’m starting new responsibilities at Veracity. One responsibility that is staying constant is my love and evangelism of Agile.  In fact, I’ll be spending more time ensuring that all Veracity teams are performing agile, Scrum specifically, in a consistent manner so that all of our clients and consultants have a similar experience. Imagine, if you will, working for a consulting company on a project.  On that project, the project management style is Waterfall in iterations.  Now you move to another project and in that project, you’re doing real Scrum, but in both cases, you were told that what you were doing was Scrum.  Rather confusing.  I’ve found, however, that this happens on many teams and many projects.  Most companies simply aren’t disciplined enough to do Scrum.  Some think that being Agile means not being disciplined.  The opposite is true! So, my goals for Veracity are to make sure that all of our consultants have a consistent feel for Scrum and what it is and how it works and then to make sure that on the projects they’re assigned to, Scrum is appropriately applied for their situation.  This will help keep them happier, but also make switching to other projects easier and more consistent.  If we aren’t doing the project management on the project, we’ll help them know what good Agile practices should look like so that they can give good advice to the client, and so that if they move to another project, they have a consistent feel. I’m really looking forward to these new duties. Technorati Tags: Agile,Scrum

    Read the article

  • Who are good suppliers of .NET 4 Hosted Virtual Private Servers? (May 2010)

    - by Nick Haslam
    I'm looking for a supplier for hosting a Virtual server, running Windows Server 2008 (R2 ideally) and .NET 4 to run an internet facing ASP.NET web application. I'd also like to be able to remote desktop onto it, and install other apps as necessary, including other websites as and when. I'm based in the UK, so a UK based supplier would be great. I was looking at Fasthosts, but having researched them a bit more, they look like a bad idea.

    Read the article

  • May we have Ruby and Rails performance statistics? We're persuading the business to use Rails!

    - by thekingoftruth
    We're convincing our Products officer that we want to use JRuby on Rails, and we're having a hard time coming up with some statistics which show that: Coding time is less using Rails vs. say Struts or Zend Framework or what have you. Ruby (and JRuby in particular) performance isn't horrible (anymore). Rails performance isn't bad either. If you can get us some good stats quickly, we might have a chance!

    Read the article

  • Why might a C++ virtual function defined in a header not be compiled and linked into the vtable?

    - by 0xDEAD BEEF
    The situation is the following. I have a shared library which contains a class definition: QueueClass : IClassInterface { virtual void LOL() { do some magic } } My shared library initializes a class member: QueueClass *globalMember = new QueueClass(); My shared library exports a C function which returns a pointer to globalMember: void * getGlobalMember(void) { return globalMember;} My application uses globalMember like this: ((IClassInterface*)getGlobalMember())->LOL(); Now the interesting part: if I do not reference LOL() from the shared library, then LOL() is not linked in, and calling it from the application raises an exception. The reason is that the vtable contains null in place of the pointer to the LOL() function. When I move the LOL() definition from the .h file to the .cpp file, it suddenly appears in the vtable and everything works just great. What explains this behavior?! (gcc compiler + ARM architecture)

    Read the article

  • Using IE 9 as my primary browser

    - by Robert May
    With the release of the Internet Explorer 9 RC, the browser looks to be in a usable state.  So far, my experience has been positive. However, one area where I am having problems is when people are using the jQueryUI library.  Versions older than 1.8 cause IE 9.0 to be unable to drag and drop.  This is a real pain, especially at sites like Agile Zen, where dragging and dropping is a primary bit of functionality. Now that IE 9 is a release candidate, we’ll see how quickly these things improve.  I expect things to be rough, but so far, I’m really liking IE 9.  There’s more real estate than Chrome (it’s the tabs inline with the address bar) and it’s faster than Chrome 9.0 and FF 3.6.8 (as tested on my own machine). The biggest drawback so far is that because IE has been so badly behaved in the past, sites expect it to be badly behaved now, which breaks things. Technorati Tags: Internet Explorer

    Read the article
