Search Results

Search found 55276 results on 2212 pages for 'eicar test string'.


  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Config Wizard (we don’t want to run this SP farm in stand-alone mode!):
    1. New-SPConfigurationDatabase
    SYNOPSIS: Creates a new configuration database.
    SYNTAX: New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String> [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>] [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>] [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString> [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]
    DESCRIPTION: The New-SPConfigurationDatabase cmdlet creates a new configuration database on the specified database server. This is the central database for a new SharePoint farm. For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation (http://go.microsoft.com/fwlink/?LinkId=163185).
    RELATED LINKS: Backup-SPConfigurationDatabase, Disconnect-SPConfigurationDatabase, Connect-SPConfigurationDatabase, Remove-SPConfigurationDatabase
    REMARKS: To see the examples, type: "get-help New-SPConfigurationDatabase -examples". For more information, type: "get-help New-SPConfigurationDatabase -detailed". For technical information, type: "get-help New-SPConfigurationDatabase -full".
    NOTE: Use the –AdministrationContentDatabaseName switch to pass the name of the Admin database you want instead of the GUID-based name it automatically creates. Hence, you can easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision any automatically created service databases from the UI, to decide what database names you want.
    2. Run the SharePoint Configuration Wizard, and you’ll find the server already added to the farm. Select "do not disconnect from farm" and proceed… Select the port and authentication (NTLM in my case). Click next, and the wizard will complete the remaining steps of provisioning, including creation of the Central Admin Web App on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Config Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm - like creating the web app for site collections, creating the very first site collection, and any other service applications. I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn’t have AD. Now I am going to set up FBA, and possibly Live ID Auth as well. I will also be setting up RBS and multi-tenancy on this farm, and will post any notes and findings here… --Sharad

    Read the article

  • C# Multiple Property Sort

    - by Ben Griswold
    As you can see in the snippet below, sorting is easy with Linq.  Simply provide your OrderBy criteria and you’re done.  If you want a secondary sort field, add a ThenBy expression to the chain.  Want a third level sort?  Just add ThenBy along with another sort expression. var projects = new List<Project>     {         new Project {Description = "A", ProjectStatusTypeId = 1},         new Project {Description = "B", ProjectStatusTypeId = 3},         new Project {Description = "C", ProjectStatusTypeId = 3},         new Project {Description = "C", ProjectStatusTypeId = 2},         new Project {Description = "E", ProjectStatusTypeId = 1},         new Project {Description = "A", ProjectStatusTypeId = 2},         new Project {Description = "C", ProjectStatusTypeId = 4},         new Project {Description = "A", ProjectStatusTypeId = 3}     };   projects = projects     .OrderBy(x => x.Description)     .ThenBy(x => x.ProjectStatusTypeId)     .ToList();   foreach (var project in projects) {     Console.Out.WriteLine("{0} {1}", project.Description,         project.ProjectStatusTypeId); } Linq offers a great sort solution most of the time, but what if you want or need to do it the old fashioned way? projects.Sort ((x, y) =>         Comparer<String>.Default             .Compare(x.Description, y.Description) != 0 ?         Comparer<String>.Default             .Compare(x.Description, y.Description) :         Comparer<Int32>.Default             .Compare(x.ProjectStatusTypeId, y.ProjectStatusTypeId));   foreach (var project in projects) {     Console.Out.WriteLine("{0} {1}", project.Description,         project.ProjectStatusTypeId); } It’s not that bad, right? Just for fun, let add some additional logic to our sort.  Let’s say we wanted our secondary sort to be based on the name associated with the ProjectStatusTypeId.  projects.Sort((x, y) =>        Comparer<String>.Default             .Compare(x.Description, y.Description) != 0 ?        Comparer<String>.Default             .Compare(x.Description, y.Description) :        Comparer<String>.Default             .Compare(GetProjectStatusTypeName(x.ProjectStatusTypeId),                 GetProjectStatusTypeName(y.ProjectStatusTypeId)));   foreach (var project in projects) {     Console.Out.WriteLine("{0} {1}", project.Description,         GetProjectStatusTypeName(project.ProjectStatusTypeId)); } The comparer will now consider the result of the GetProjectStatusTypeName and order the list accordingly.  Of course, you can take this same approach with Linq as well.
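    To make that last point concrete, here is a minimal, self-contained sketch of the LINQ version of the name-based secondary sort. The GetProjectStatusTypeName lookup is stubbed with a hypothetical dictionary, since the post does not show its implementation.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Project
        {
            public string Description { get; set; }
            public int ProjectStatusTypeId { get; set; }
        }

        public static class ProjectSortDemo
        {
            // Hypothetical stub - the real lookup would resolve the name from your status table.
            private static readonly Dictionary<int, string> StatusNames = new Dictionary<int, string>
            {
                { 1, "Proposed" }, { 2, "Active" }, { 3, "On Hold" }, { 4, "Closed" }
            };

            private static string GetProjectStatusTypeName(int id)
            {
                return StatusNames[id];
            }

            public static void Main()
            {
                var projects = new List<Project>
                {
                    new Project { Description = "B", ProjectStatusTypeId = 3 },
                    new Project { Description = "A", ProjectStatusTypeId = 2 },
                    new Project { Description = "A", ProjectStatusTypeId = 1 }
                };

                // Primary sort on Description, secondary sort on the resolved status name.
                var sorted = projects
                    .OrderBy(x => x.Description)
                    .ThenBy(x => GetProjectStatusTypeName(x.ProjectStatusTypeId))
                    .ToList();

                foreach (var project in sorted)
                {
                    Console.Out.WriteLine("{0} {1}", project.Description,
                        GetProjectStatusTypeName(project.ProjectStatusTypeId));
                }
            }
        }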

    Read the article

  • Enhanced Dynamic Filtering

    - by Ricardo Peres
    Remember my last post on dynamic filtering? Well, this time I'm extending the code in order to allow two levels of querying: Match type, represented by the following options: public enum MatchType { StartsWith = 0, Contains = 1 } And word match: public enum WordMatch { AnyWord = 0, AllWords = 1, ExactPhrase = 2 } You can combine the two levels in order to achieve the following combinations:
    MatchType.StartsWith + WordMatch.AnyWord: Matches any record that starts with any of the words specified
    MatchType.StartsWith + WordMatch.AllWords: Not available: does not make sense, throws an exception
    MatchType.StartsWith + WordMatch.ExactPhrase: Matches any record that starts with the exact specified phrase
    MatchType.Contains + WordMatch.AnyWord: Matches any record that contains any of the specified words
    MatchType.Contains + WordMatch.AllWords: Matches any record that contains all of the specified words
    MatchType.Contains + WordMatch.ExactPhrase: Matches any record that contains the exact specified phrase
    Here is the code: public static IList Search(IQueryable query, Type entityType, String dataTextField, String phrase, MatchType matchType, WordMatch wordMatch, Int32 maxCount) { String [] terms = phrase.Split(' ').Distinct().ToArray(); StringBuilder result = new StringBuilder(); PropertyInfo displayProperty = entityType.GetProperty(dataTextField); IList searchList = null; MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "OrderBy").ToArray() [ 0 ].MakeGenericMethod(entityType, displayProperty.PropertyType); MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(entityType); MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Where").ToArray() [ 0 ].MakeGenericMethod(entityType); MethodInfo distinctMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Distinct" && m.GetParameters().Length == 1).Single().MakeGenericMethod(entityType); MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(entityType); MethodInfo matchMethod = typeof(String).GetMethod ( (matchType == MatchType.StartsWith) ? "StartsWith" : "Contains", new Type [] { typeof(String) } ); MemberExpression member = Expression.MakeMemberAccess ( Expression.Parameter(entityType, "n"), displayProperty ); MethodCallExpression call = null; LambdaExpression where = null; LambdaExpression orderBy = Expression.Lambda ( member, member.Expression as ParameterExpression ); switch (matchType) { case MatchType.StartsWith: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i < terms.Length; ++i) { MethodCallExpression exp = Expression.Call ( member, matchMethod, Expression.Constant(terms [ i ]) ); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: throw (new Exception("The match type StartsWith is not supported with word match AllWords")); } break; case MatchType.Contains: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i < terms.Length; ++i) { MethodCallExpression exp = Expression.Call ( member, matchMethod, Expression.Constant(terms [ i ]) ); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i < terms.Length; ++i) { MethodCallExpression exp = Expression.Call ( member, matchMethod, Expression.Constant(terms [ i ]) ); where = Expression.Lambda ( Expression.AndAlso ( where.Body, exp ), where.Parameters.ToArray() ); } break; } break; } query = orderByMethod.Invoke(null, new Object [] { query, orderBy }) as IQueryable; query = whereMethod.Invoke(null, new Object [] { query, where }) as IQueryable; if (maxCount != 0) { query = takeMethod.Invoke(null, new Object [] { query, maxCount }) as IQueryable; } searchList = toListMethod.Invoke(null, new Object [] { query }) as IList; return (searchList); } And this is how you'd use it: IQueryable query = ctx.MyEntities; IList list = Search(query, typeof(MyEntity), "Name", "Ricardo Peres", MatchType.Contains, WordMatch.ExactPhrase, 10 /*0 for all*/);

    Read the article

  • Oracle Text query parser

    - by Roger Ford
    Oracle Text provides a rich query syntax which enables powerful text searches.However, this syntax isn't intended for use by inexperienced end-users.  If you provide a simple search box in your application, you probably want users to be able to type "Google-like" searches into the box, and have your application convert that into something that Oracle Text understands.For example if your user types "windows nt networking" then you probably want to convert this into something like"windows ACCUM nt ACCUM networking".  But beware - "NT" is a reserved word, and needs to be escaped.  So let's escape all words:"{windows} ACCUM {nt} ACCUM {networking}".  That's fine - until you start introducing wild cards. Then you must escape only non-wildcarded searches:"win% ACCUM {nt} ACCUM {networking}".  There are quite a few other "gotchas" that you might encounter along the way.Then there's the issue of scoring.  Given a query for "oracle text query syntax", it would be nice if we could score a full phrase match higher than a hit where all four words are present but not in a phrase.  And then perhaps lower than that would be a document where three of the four terms are present.  Progressive relaxation helps you with this, but you need to code the "progression" yourself in most cases.To help with this, I've developed a query parser which will take queries in Google-like syntax, and convert them into Oracle Text queries. It's designed to be as flexible as possible, and will generate either simple queries or progressive relaxation queries. The input string will typically just be a string of words, such as "oracle text query syntax" but the grammar does allow for more complex expressions:  word : score will be improved if word exists  +word : word must exist  -word : word CANNOT exist  "phrase words" : words treated as phrase (may be preceded by + or -)  field:(expression) : find expression (which allows +,- and phrase as above) within "field". So for example if I searched for   +"oracle text" query +syntax -ctxcatThen the results would have to contain the phrase "oracle text" and the word syntax. Any documents mentioning ctxcat would be excluded from the results. All the instructions are in the top of the file (see "Downloads" at the bottom of this blog entry).  Please download the file, read the instructions, then try it out by running "parser.pls" in either SQL*Plus or SQL Developer.I am also uploading a test file "test.sql". You can run this and/or modify it to run your own tests or run against your own text index. test.sql is designed to be run from SQL*Plus and may not produce useful output in SQL Developer (or it may, I haven't tried it).I'm putting the code up here for testing and comments. I don't consider it "production ready" at this point, but would welcome feedback.  I'm particularly interested in comments such as "The instructions are unclear - I couldn't figure out how to do XXX" "It didn't work in my environment" (please provide as many details as possible) "We can't use it in our application" (why not?) "It needs to support XXX feature" "It produced an invalid query output when I fed in XXXX" Downloads: parser.pls test.sql
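    As a rough illustration of the escaping rules described above (not the author's PL/SQL parser), a naive converter would brace-escape every term that has no wildcard and join the terms with ACCUM; the sketch below, in C#, only handles the % wildcard and single-level word lists.
        using System;
        using System.Linq;

        class SimpleOracleTextQueryBuilder
        {
            // Naive conversion: escape non-wildcarded words in braces, join with ACCUM.
            static string Build(string userInput)
            {
                var terms = userInput.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                var escaped = terms.Select(t => t.Contains("%") ? t : "{" + t + "}");
                return string.Join(" ACCUM ", escaped);
            }

            static void Main()
            {
                Console.WriteLine(Build("windows nt networking")); // {windows} ACCUM {nt} ACCUM {networking}
                Console.WriteLine(Build("win% nt networking"));    // win% ACCUM {nt} ACCUM {networking}
            }
        }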

    Read the article

  • glsl shader to allow color change of skydome ogre3d

    - by Tim
    I'm still very new to all this but learning a lot. I'm putting together an application using Ogre3d as the rendering engine. So far I've got it running, with a simple scene, a day/night cycle system which is working okay. I'm now moving on to looking at changing the color of the skydome material based on the time of day. What I've done so far is to create a struct to hold the ColourValues for the different aspects of the scene. struct todColors { Ogre::ColourValue sky; Ogre::ColourValue ambient; Ogre::ColourValue sun; }; I created an array to store all the colours todColors sceneColours [4]; I populated the array with the colours I want to use for the various times of the day. For instance DayTime (when the sun is high in the sky) sceneColours[2].sky = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].ambient = Ogre::ColourValue(135/255, 206/255, 235/255, 255); sceneColours[2].sun = Ogre::ColourValue(135/255, 206/255, 235/255, 255); I've got code to work out the time of the day using a float currentHours to store the current hour of the day 10.5 = 10:30 am. This updates constantly and updates the sun as required. I am then calculating the appropriate colours for the time of day when relevant using else if( currentHour >= 4 && currentHour < 7) { // Lerp from night to morning Ogre::ColourValue lerp = Ogre::Math::lerp<Ogre::ColourValue, float>(sceneColours[GT_TOD_NIGHT].sky , sceneColours[GT_TOD_MORNING].sky, (currentHour - 4) / (7 - 4)); } My original attempt to get this to work was to dynamically generate a material with the new colour and apply that material to the skydome. This, as you can probably guess... didn't go well. I know it's possible to use shaders where you can pass information such as colour to the shader from the code but I am unsure if there is an existing simple shader to change a colour like this or if I need to create one. What is involved in creating a shader and material definition that would allow me to change the colour of a material without the overheads of dynamically generating materials all the time? EDIT : I've created a glsl vertex and fragment shaders as follows. Vertex uniform vec4 newColor; void main() { gl_FrontColor = newColor; gl_Position = ftransform(); } Fragment void main() { gl_FragColor = gl_Color; } I can pass a colour to it using ShaderDesigner and it seems to work. I now need to investigate how to use it within Ogre as a material. EDIT : I created a material file like this : vertex_program colour_vs_test glsl { source test.vert default_params { param_named newColor float4 0.0 0.0 0.0 1 } } fragment_program colour_fs_glsl glsl { source test.frag } material Test/SkyColor { technique { pass { lighting off fragment_program_ref colour_fs_glsl { } vertex_program_ref colour_vs_test { } } } } In the code I have tried : Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().getByName("Test/SkyColor"); Ogre::GpuProgramParametersSharedPtr params = material->getTechnique(0)->getPass(0)->getVertexProgramParameters(); params->setNamedConstant("newcolor", Ogre::Vector4(0.7, 0.5, 0.3, 1)); I've set that as the Skydome material which seems to work initially. I am doing the same with the code that is attempting to lerp between colours, but when I include it there, it all goes black. Seems like there is now a problem with my colour lerping.

    Read the article

  • F# in ASP.NET, mathematics and testing

    - by DigiMortal
    Starting from Visual Studio 2010 F# is full member of .NET Framework languages family. It is functional language with syntax specific to functional languages but I think it is time for us also notice and study functional languages. In this posting I will show you some examples about cool things other people have done using F#. F# and ASP.NET As I am ASP/ASP.NET MVP I am – of course – interested in how people use different languages and technologies with ASP.NET. C# MVP Tomáš Petrícek writes about developing ASP.NET MVC applications using F#. He also shows how to use LINQ To SQL in F# (using F# PowerPack) and provides sample solution and Visual Studio 2010 template for F# MVC web applications. You may also find interesting how you can create controllers in F#. Excellent work, Tomáš! Vladimir Matveev has interesting example about how to use F# and ApplicationHost class to process ASP.NET requests ouside of IIS. This is simple and very straight-forward example and I strongly suggest you to take a look at it. Very cool example is project Strom in Codeplex. Storm is web services testing tool that is fully written on F#. Take a look at this site because Codeplex offers also source code besides binaries. Math Functional languages are strong in fields like mathematics and physics. When I wrote my C# example about BigInteger class I found out that recursive version of Fibonacci algorithm in C# is not performing well. In same time I made same experiment on F# and in F# there were no performance problems with recursive version. You can find F# version of Fibonacci algorithm from Bob Palmer’s blog posting Fibonacci numbers in F#. Although golden spiral is useful for solving many problems I looked for some practical code example and found one. Kean Walmsley published in his Through the Interface blog very interesting posting Creating Fibonacci spirals in AutoCAD using F#. There are also other cool examples you may be interested in. Using numerical components by Extreme Optimization  it is possible to make some numerical integration (quadrature method) using F# (also C# example is available). fsharp.it introduces factorials calculation on F#. Robert Pickering has made very good work on programming The Game of Life in Silverlight and F# – I definitely suggest you to try out this example as it is very illustrative too. Who wants something more complex may take a look at Newton basin fractal example in F# by Jonathan Birge. Testing After some searching and surfing I found out that there is almost everything available for F# to write tests and test your F# code. FsCheck - FsCheck is a port of Haskell's QuickCheck. Important parts of the manual for using FsCheck is almost literally "adapted" from the QuickCheck manual and paper. Any errors and omissions are entirely my responsibility. FsTest - This project is designed to Language Oriented Programming constructs around unit testing and behavior testing in F#. The goal of this project is to create a Domain Specific Language for testing F# code in a way that makes sense for functional programming. FsUnit - FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework. xUnit.NET - xUnit.net is a developer testing framework, built to support Test Driven Development, with a design goal of extreme simplicity and alignment with framework features. 
It is compatible with .NET Framework 2.0 and later, and offers several runners: console, GUI, MSBuild, and Visual Studio integration via TestDriven.net, CodeRush Test Runner and Resharper. It also offers test project integration for ASP.NET MVC. Getting started Well, as a first thing you need Visual Studio 2010. Then take a look at these resources: F# samples @ MSDN Microsoft F# Developer Center @ MSDN F# Language Reference @ MSDN F# blog F# forums Real World Functional Programming: With Examples in F# and C# (Amazon) Happy F#-ing! :)

    Read the article

  • Experience the iPad UI On Your PC

    - by Matthew Guay
    Want to test drive iPad without heading over to an Apple store?  Here’s a way you can experience some of the iPad UI straight from your browser! The iPad is the latest gadget from Apple to wow the tech world, and people even waited in line all night to be one of the first to get their hands on one.  Thanks to a simple JavaScript trick, however, you can get a feel for some of its new features without leaving your computer.  This won’t let you try out everything on the iPad, but it will let you see how the new lists and pop-over menus work just like they do in the new apps. Test drive the iPad’s UI from your browser Normally, the Apple iPhone developer library online looks like a standard webpage. But, on the iPad, it looks and feels like a full-blown native iPad app.  With a nifty JavaScript trick from boredzo.org you can use this same interface on your PC.  Since the iPad uses the Safari browser, we ran this test in Safari for Windows.  If you don’t already have it installed, you can download it from Apple (link below) and setup as normal. Now, open Safari and browse to Apple’s developer page at: http://www.developer.apple.com   Now, enter the following in the address bar, and press Enter. javascript:localStorage.setItem('debugSawtooth', 'true')   Finally, click this link to go to the iPhone OS documentation. http://developer.apple.com/iphone/library/iPad/ After a short delay, it should open in full iPad style! The left menu works just like the menus on the iPad, complete with transitions.  It feels entirely like a native application, instead of a webpage.  To scroll through text, click and pull up or down similar to the way you would use it on a touch screen. Some pages even include a pop-over menu like many of the new iPad apps use. Note that the page will be rendered for the size of your browser, and if you resize your window the page will not resize with it.  Simply press F5 to reload the page, and it will resize to fit the new window size.  If you resize your window to be tall and narrow, like the iPad in horizontal mode, the webpage will change and the left menu will disappear in lieu of a drop-down menu just like it would if you rotated the iPad. This works in Chrome as well, since it, like Safari, is based on Webkit.  However, it didn’t seem to work in our test on Firefox or other browsers. We’ve previously covered how you can experience some of the iPhone’s UI with the online iPhone user guide.  Check it out if you haven’t yet: View Mobile Websites in Windows with Safari 4 Developer Tools Conclusion Although this doesn’t let you really try out all of the iPad’s interface, it at least gives you a taste of how it works.  It’s exciting to see how much functionality can be packed into webapps today.  And don’t forget, How-to Geek is giving away an iPad to a random fan!  Head over to our Facebook page and fan How-to Geek if you haven’t already done so. Win an iPad on the How-To Geek Facebook Fan Page

    Read the article

  • Customize Entity Framework SSDL &amp; SQL Generation

    - by Dane Morgridge
    In almost every talk I have done on Entity Framework I get questions on how to do custom SSDL or SQL when using model first development.  Quite a few of these questions have required custom changes to the SSDL, which of course can be a problem if it is getting auto generated.  Luckily, there is a tool that can help.  In the Visual Studio Gallery on MSDN, there is the Entity Designer Database Generation Power Pack. You have the ability to select different generation strategies and it also allows you to inject custom T4 Templates into the generation workflow so that you can customize the SSDL and SQL generation.  When you select to generate a database from a model the dialog is replaced by one with more options:   You can clone the individual workflow for either the current project or current machine.  The templates are installed at “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen” on my local machine and you can make a copy of any template there.  If you clone the strategy and open it up, you will get the following workflow: Each item in the sequence is defining the execution of a T4 template.  The XAML for the workflow is listed below so you can see where the T4 files are defined.  You can simply make a copy of an existing template and make what ever changes you need.   1: <Activity x:Class="GenerateDatabaseScriptWorkflow" ... > 2: <x:Members> 3: <x:Property Name="Csdl" Type="InArgument(sde:EdmItemCollection)" /> 4: <x:Property Name="ExistingSsdl" Type="InArgument(s:String)" /> 5: <x:Property Name="ExistingMsl" Type="InArgument(s:String)" /> 6: <x:Property Name="Ssdl" Type="OutArgument(s:String)" /> 7: <x:Property Name="Msl" Type="OutArgument(s:String)" /> 8: <x:Property Name="Ddl" Type="OutArgument(s:String)" /> 9: <x:Property Name="SmoSsdl" Type="OutArgument(ss:SsdlServer)" /> 10: </x:Members> 11: <Sequence> 12: <dbtk:ProgressBarStartActivity /> 13: <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToSSDL_TPT.tt" /> 14: <dbtk:CsdlToMslTemplateActivity MslOutput="[Msl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToMSL_TPT.tt" /> 15: <ded:SsdlToDdlActivity ExistingSsdlInput="[ExistingSsdl]" SsdlInput="[Ssdl]" DdlOutput="[Ddl]" /> 16: <dbtk:GenerateAlterSqlActivity DdlInputOutput="[Ddl]" DeployToScript="True" DeployToDatabase="False" /> 17: <dbtk:ProgressBarEndActivity ClosePopup="true" /> 18: </Sequence> 19: </Activity>   So as you can see, this tool enables you to make some pretty heavy customizations to how the SSDL and SQL get generated.  You can get more info and the tool can be downloaded from: http://visualstudiogallery.msdn.microsoft.com/en-us/df3541c3-d833-4b65-b942-989e7ec74c87.  There is a comments section on the site so make sure you let the team know what you like and what you don’t like.  Enjoy!

    Read the article

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
    So, you've followed the previous steps to install Java Embedded on your Raspberry Pi ?, you went to Fry's and picked up some jumper wires, LEDs, and resistors ?, you hooked up the wires, LED, and resistor the the correct pins ?, and now you want to start programming in Java on your RPi? Yes? ???????! OK, then... Here we go. You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated gpio libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And, I'm not using python (hell no!). This is Java programming, so we keep it simple (and more readable) than those other programming languages. See: Write Java code to do this In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference). Next, I write a "1" then "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh? import java.io.* /* * Java Embedded Raspberry Pi GPIO app */ package jerpigpio; import java.io.FileWriter; /** * * @author hinkmond */ public class JerpiGPIO { static final String GPIO_OUT = "out"; static final String GPIO_ON = "1"; static final String GPIO_OFF = "0"; static final String GPIO_CH00="0"; /** * @param args the command line arguments */ public static void main(String[] args) { FileWriter commandFile; try { /*** Init GPIO port for output ***/ // Open file handles to GPIO port unexport and export controls FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport"); FileWriter exportFile = new FileWriter("/sys/class/gpio/export"); // Reset the port unexportFile.write(GPIO_CH00); unexportFile.flush(); // Set the port for use exportFile.write(GPIO_CH00); exportFile.flush(); // Open file handle to port input/output control FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/direction"); // Set port for output directionFile.write(GPIO_OUT); directionFile.flush(); /*--- Send commands to GPIO port ---*/ // Opne file handle to issue commands to GPIO port commandFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/value"); // Loop forever while (true) { // Set GPIO port ON commandFile.write(GPIO_ON); commandFile.flush(); // Wait for a while java.lang.Thread.sleep(200); // Set GPIO port OFF commandFile.write(GPIO_OFF); commandFile.flush(); // Wait for a while java.lang.Thread.sleep(200); } } catch (Exception exception) { exception.printStackTrace(); } } } Hinkmond

    Read the article

  • Detecting Process Shutdown/Startup Events through ActivationAgents

    - by Ramkumar Menon
    @10g - This post is motivated by one of my close friends and colleague - who wanted to proactively know when a BPEL process shuts down/re-activates. This typically happens when you have a BPEL Process that has an inbound polling adapter, when the adapter loses connectivity to the source system. Or whatever causes it. One valuable suggestion came in from one of my colleagues - he suggested I write my own ActivationAgent to do the job. Well, it really worked. Here is a sample ActivationAgent that you can use. There are few methods you need to override from BaseActivationAgent, and you are on your way to receiving notifications/what not, whenever the shutdown/startup events occur. In the example below, I am retrieving the emailAddress property [that is specified in your bpel.xml activationAgent section] and use that to send out an email notification on the activation agent initialization. You could choose to do different things. But bottomline is that you can use the below-mentioned API to access the very same properties that you specify in the bpel.xml. package com.adapter.custom.activation; import com.collaxa.cube.activation.BaseActivationAgent; import com.collaxa.cube.engine.ICubeContext; import com.oracle.bpel.client.BPELProcessId; import java.util.Date; import java.util.Properties; public class LifecycleManagerActivationAgent extends BaseActivationAgent { public BPELProcessId getBPELProcessId() { return super.getBPELProcessId(); } private void handleInit() throws Exception { //Write initialization code here System.err.println("Entered initialization code...."); //e.g. String emailAddress = getActivationAgentDescriptor().getPropertyValue(emailAddress); //send an email sendEmail(emailAddress); } private void handleLoad() throws Exception { //Write load code here System.err.println("Entered load code...."); } private void handleUnload() throws Exception { //Write unload code here System.err.println("Entered unload code...."); } private void handleUninit() throws Exception { //Write uninitialization code here System.err.println("Entered uninitialization code...."); } public void init(ICubeContext icubecontext) throws Exception { super.init(icubecontext); System.err.println("Initializing LifecycleManager Activation Agent ....."); handleInit(); } public void unload(ICubeContext icubecontext) throws Exception { super.unload(icubecontext); System.err.println("Unloading LifecycleManager Activation Agent ....."); handleUnload(); } public void uninit(ICubeContext icubecontext) throws Exception{ super.uninit(icubecontext); System.err.println("Uninitializing LifecycleManager Activation Agent ....."); handleUninit(); } public String getName() { return "Lifecyclemanageractivationagent"; } public void onStateChanged(int i, ICubeContext iCubeContext) { } public void onLifeCycleChanged(int i, ICubeContext iCubeContext) { } public void onUndeployed(ICubeContext iCubeContext) { } public void onServerShutdown() { } } Once you compile this code, generate a jar file and ensure you add it to the server startup classpath. The library is ready for use after the server restarts. To use this activationAgent, add an additional activationAgent entry in the bpel.xml for the BPEL Process that you wish to monitor. After you deploy the process, the ActivationAgent object will be called back whenever the events mentioned in the overridden methods are raised. [init(), load(), unload(), uninit()]. Subsequently, your custom code is executed. Sample bpel.xml illustrating activationAgent definition and property definition. 
<?xml version="1.0" encoding="UTF-8"? <BPELSuitcase timestamp="1291943469921" revision="1.0" <BPELProcess wsdlPort="{http://xmlns.oracle.com/BPELTest}BPELTestPort" src="BPELTest.bpel" wsdlService="{http://xmlns.oracle.com/BPELTest}BPELTest" id="BPELTest" <partnerLinkBindings <partnerLinkBinding name="client" <property name="wsdlLocation"BPELTest.wsdl</property </partnerLinkBinding <partnerLinkBinding name="test" <property name="wsdlLocation"test.wsdl</property </partnerLinkBinding </partnerLinkBindings <activationAgents <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="test" <property name="portType"Read_ptt</property </activationAgent <activationAgent className="com.oracle.bpel.activation.LifecycleManagerActivationAgent" partnerLink="test" <property name="emailAddress"[email protected]</property </activationAgent </activationAgents </BPELProcess </BPELSuitcase em

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this: var options = { host: acs.namespace() + '-sb.accesscontrol.windows.net', path: '/WRAPv0.9/', method: 'POST' }; var values = { wrap_name: acs.issuerName(), wrap_password: acs.issuerSecret(), wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/' }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); res.on('data', function (d) { var token = qs.parse(d.toString('utf8')); callback(token.wrap_access_token); }); }); req.write(qs.stringify(values)); req.end(); Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call: token = 'WRAP access_token=\"' + swt + '\"'; //... var reqHeaders = { Authorization: token }; var options = { host: acs.namespace() + '.servicebus.windows.net', path: '/rest/reverse?string=' + requestUrl.query.string, headers: reqHeaders }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); response.writeHead(res.statusCode, res.headers); res.on('data', function (d) { var reversed = d.toString('utf8') console.log('svc returned: ' + d.toString('utf8')); response.end(reversed); }); }); req.end(); Running the sample Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:

    Read the article

  • Generic Sorting using C# and Lambda Expression

    - by Haitham Khedre
    Download: GenericSortTester.zip. I have been working on this class for a long time and I think it is a nice piece of code worth sharing; it might help other people searching for the same concept. It lets you sort any collection easily without writing special code for each data type. If you need special ordering you can still do it - leave a comment and I will see whether I need to write another article to cover the other cases. I have also attached a fully working example so that you can see how to use it.     public static class GenericSorter { public static IOrderedEnumerable<T> Sort<T>(IEnumerable<T> toSort, Dictionary<string, SortingOrder> sortOptions) { IOrderedEnumerable<T> orderedList = null; foreach (KeyValuePair<string, SortingOrder> entry in sortOptions) { if (orderedList != null) { if (entry.Value == SortingOrder.Ascending) { orderedList = orderedList.ApplyOrder<T>(entry.Key, "ThenBy"); } else { orderedList = orderedList.ApplyOrder<T>(entry.Key, "ThenByDescending"); } } else { if (entry.Value == SortingOrder.Ascending) { orderedList = toSort.ApplyOrder<T>(entry.Key, "OrderBy"); } else { orderedList = toSort.ApplyOrder<T>(entry.Key, "OrderByDescending"); } } } return orderedList; } private static IOrderedEnumerable<T> ApplyOrder<T> (this IEnumerable<T> source, string property, string methodName) { ParameterExpression param = Expression.Parameter(typeof(T), "x"); Expression expr = param; foreach (string prop in property.Split('.')) { expr = Expression.PropertyOrField(expr, prop); } Type delegateType = typeof(Func<,>).MakeGenericType(typeof(T), expr.Type); LambdaExpression lambda = Expression.Lambda(delegateType, expr, param); MethodInfo mi = typeof(Enumerable).GetMethods().Single( method => method.Name == methodName && method.IsGenericMethodDefinition && method.GetGenericArguments().Length == 2 && method.GetParameters().Length == 2) .MakeGenericMethod(typeof(T), expr.Type); return (IOrderedEnumerable<T>)mi.Invoke (null, new object[] { source, lambda.Compile() }); } }
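    As a usage sketch: the SortingOrder enum is referenced but not shown in the excerpt, so it is assumed here to expose Ascending and Descending members, and Employee is a purely hypothetical type used for illustration.
        using System;
        using System.Collections.Generic;

        public class Employee
        {
            public string Department { get; set; }
            public string Name { get; set; }
        }

        class GenericSorterDemo
        {
            static void Main()
            {
                var people = new List<Employee>
                {
                    new Employee { Department = "IT", Name = "Zoe" },
                    new Employee { Department = "HR", Name = "Adam" },
                    new Employee { Department = "IT", Name = "Adam" }
                };

                // Sort by Department ascending, then by Name descending.
                var options = new Dictionary<string, SortingOrder>
                {
                    { "Department", SortingOrder.Ascending },
                    { "Name", SortingOrder.Descending }
                };

                foreach (var p in GenericSorter.Sort(people, options))
                    Console.WriteLine("{0} - {1}", p.Department, p.Name);
            }
        }
    Because the sort levels are taken from dictionary enumeration order, a list of key/value pairs may be a safer container if you need many levels.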

    Read the article

  • and the winner is Google Chrome

    - by anirudha
    Browser war really still uncompleted but here i tell that Why Google chrome better. 1. Easy to install:- as IE 9 Google chrome not force user to purchase a new OS. the chrome have a facelity that they install in minutes then less then other just like Firefox a another competitor or bloody fool  IE 9. 2. Easy to test: if you want to test their beta that’s no problem as well as Firefox. if user use Firefox 4 beta that they found that they can’t use many good plugin such as a big list the Web Developer tool and many other are one of them. in Chrome beta they provide you more then the last official release of chrome. 3. Google chrome Sync:-  i myself used  sync inside Firefox but nothing i found good and from a long time i feel nothing good and any feature in Firefox sync. but in google chrome their sync system is much better. When user login for sync in chrome they install everything and get back the user every settings they set the last time such as apps, autofill, bookmark ,extensions preference and theme. if you want to check bookmark from other browser that you can use google docs because google provided their bookmark backup in their docs account they have. performance:- after testing a website i found that a website open in 36 seconds in Firefox that Google chrome open them in 10 seconds. i found a interesting thing that when i test offline in IE 8 they show me in one or two seconds. i wonder how it’s possible after a long puzzle i found that IE was integrated software from Microsoft that the both software Visual studio and IE was integrated with windows. if user  test javascript in IE that the error they find show in visual studio not in IE as well as other software like chrome and IE. chrome not have a vast range of plugin as well as firefox so developer spent less time on chrome that would be a problem of future of chrome. interface comparison : the chrome have a common but user friendly interface then the user easily can use them. are you watching menu in Firefox 4. they make them complex as well as whole software IE 9. IE developer team thing that they can make everything fool by making a slogan HTML 5 inside IE. if anyone want to open a page in IE 9 that they show after some second. some time they show page not found even site is not gone wrong. when anyone want to use IE 9 developer tool that they thing that “ are this really  a developer tool ? ”. yeah they not make them for human as well as Firebug working team make firebug inside Firefox. they thing that how they can make public fool. Are you see that if you want to install Visual studio they force you to install sql server even you use other database system. a big stupidity of their tool can be found here today we hear that they Microsoft launched silverlight 5. are you know how Microsoft make silverlight yeah he copycat the idea of Adobe and their product Adobe Flash. that’s a other matter we can use .Net language instead of actionscript , lingo or shockwave.

    Read the article

  • Thread safe double buffering

    - by kdavis8
    I am trying to implement a draw map method that will draw the tiled image across the surface of the component. I'm having issue with this code. The double buffering does not seem to be working, because the sprite flickers like crazy; my source code: package myPackage; import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Image; import java.awt.Toolkit; import java.awt.image.BufferStrategy; import java.awt.image.BufferedImage; import javax.swing.JFrame; public class GameView extends JFrame implements Runnable { public BufferedImage backbuffer; public Graphics2D g2d; public Image img; Thread gameloop; Scene scene; public GameView() { super("Game View"); setSize(600, 600); setVisible(true); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); backbuffer = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB); g2d = backbuffer.createGraphics(); Toolkit tk = Toolkit.getDefaultToolkit(); img = tk.getImage(this.getClass().getResource("cage.png")); scene = new Scene(g2d, this); gameloop = new Thread(this); gameloop.start(); } public static void main(String args[]) { new GameView(); } public void paint(Graphics g) { g.drawImage(backbuffer, 0, 0, this); repaint(); } @Override public void run() { // TODO Auto-generated method stub Thread t = Thread.currentThread(); while (t == gameloop) { scene.getScene("dirtmap"); g2d.drawImage(img, 80, 80,this![enter image description here][1]); } } private void drawScene(String string) { // TODO Auto-generated method stub // g2d.setColor(Color.white); // g2d.fillRect(0, 0, getWidth(), getHeight()); scene.getScene(string); } } package myPackage; import java.awt.Color; import java.awt.Component; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.Image; import java.awt.Toolkit; public class Scene { Graphics g2d; Component c; boolean loaded = false; public Scene(Graphics2D gr, Component co) { g2d = gr; c = co; } public void getScene(String mapName) { Toolkit tk = Toolkit.getDefaultToolkit(); Image tile = tk.getImage(this.getClass().getResource("dirt.png")); // g2d.setColor(Color.red); for (int y = 0; y <= 18; y++) { for (int x = 0; x <= 18; x += 1) { g2d.drawImage(tile, x * 32, y * 32, c); } } loaded = true; } }

    Read the article

  • Override ToString() in your Classes

    - by psheriff
    One of the reasons I love teaching is because of the questions that I get from attendees. I was giving a presentation at DevConnections and was showing a collection of Product objects. When I hovered over the variable that contained the collection, it looked like Figure 2. As you can see in the collection, I have actual product names of my videos from www.pdsa.com/videos being displayed. To get your data to appear in the data tips you must override the ToString() method in your class. To illustrate this, take the following simple Product class shown below: public class Product{  public string ProductName { get; set; }  public int ProductId { get; set; }} This class does not have an override of the ToString() method so if you create a collection of Product objects you will end up with data tips that look like Figure 1. Below is the code I used to create a collection of Product objects. I have shortened the code in this blog, but you can get the full source code for this sample by following the instructions at the bottom of this blog entry. List<Product> coll = new List<Product>();Product prod; prod = new Product()  { ProductName = "From Zero to HTML 5 in 60 Minutes",     ProductId = 1 };coll.Add(prod);prod = new Product()   { ProductName = "Architecting Applications …",     ProductId = 2 };coll.Add(prod);prod = new Product()  { ProductName = "Introduction to Windows Phone Development",    ProductId = 3 };coll.Add(prod);prod = new Product()   { ProductName = "Architecting a Business  …",     ProductId = 4 };coll.Add(prod);......   Figure 1: Class without overriding ToString() Now, go back to the Product class and add an override of the ToString() method as shown in the code listed below: public class Product{  public string ProductName { get; set; }  public int ProductId { get; set; }   public override string ToString()  {    return ProductName;  }} In this simple sample, I am just returning the ProductName property. However, you can create a whole string of information if you wish to display more data in your data tips. Just concatenate any properties you want from your class and return that string. When you now run the application and hover over the collection object you will now see something that looks like Figure 2. Figure 2: Overriding ToString() in your Class Another place the ToString() override comes in handy is if you forget to use a DisplayMemberPath in your ListBox or ComboBox. The ToString() method is called automatically when a class is bound to a list control. Summary You should always override the ToString() method in your classes as this will help you when debugging your application. Seeing relevant data immediately in the data tip without having to drill down one more layer and maybe scroll through a complete list of properties should help speed up your development process. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “Override ToString” from the drop down list.  
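    If you want the data tip to carry more than just the name, the override can return any string you build up. For example (the extra Price property and the exact format below are just an illustration, not part of the article's sample):
        public class Product
        {
            public string ProductName { get; set; }
            public int ProductId { get; set; }
            public decimal Price { get; set; }   // extra property added for illustration only

            public override string ToString()
            {
                // Concatenate whatever properties are most useful to see while debugging.
                return string.Format("[{0}] {1} - {2:C}", ProductId, ProductName, Price);
            }
        }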

    Read the article

  • A tiny Utility to recycle an IIS Application Pool

    - by Rick Strahl
    In the last few weeks I've annoyingly been having problems with an area on my Web site. It's basically ancient articles that are using ASP classic pages and for reasons unknown ASP classic locks up on these pages frequently. It's not an individual page, but ALL ASP classic pages lock up. Ah yes, gotta old tech gone bad. It's not super critical since the content is really old, but still a hassle since it's linked content that still gets quite a bit of traffic. When it happens all ASP classic in that AppPool dies. I've been having a hard time tracking this one down - I suspect an errant COM object I have a Web Monitor running on the server that's checking for failures and while the monitor can detect the failures when the timeouts occur, I didn't have a good way to just restart that particular application pool. I started putzing around with PowerShell, but - as so often seems the case - I can never get the PowerShell syntax right - I just don't use it enough and have to dig out cheat sheets etc. In any case, after about 20 minutes of that I decided to just create a small .NET Console Application that does the trick instead, and in a few minutes I had this:using System; using System.Collections.Generic; using System.Text; using System.DirectoryServices; namespace RecycleApplicationPool { class Program { static void Main(string[] args) { string appPoolName = "DefaultAppPool"; string machineName = "LOCALHOST"; if (args.Length > 0) appPoolName = args[0]; if (args.Length > 1) machineName = args[1]; string error = null; DirectoryEntry root = null; try { Console.WriteLine("Restarting Application Pool " + appPoolName + " on " + machineName + "..."); root = new DirectoryEntry("IIS://" + machineName + "/W3SVC/AppPools/" +appPoolName); Console.WriteLine(root.InvokeGet("Name")); root.Invoke("Recycle"); Console.WriteLine("Application Pool recycling complete..."); } catch(Exception ex) { error = "Error: Unable to access AppPool: " + ex.Message; } if ( !string.IsNullOrEmpty(error) ) { Console.WriteLine(error); return; } } } } To run in you basically provide the name of the ApplicationPool and optionally a machine name if it's not on the local box. RecyleApplicationPool.exe "WestWindArticles" And off it goes. What's nice about AppPool recycling versus doing a full IISRESET is that it only affects the AppPool, and more importantly AppPool recycles happen in a staggered fashion - the existing instance isn't shut down immediately until requests finish while a new instance is fired up to handle new requests. So, now I can easily plug this Executable into my West Wind Web Monitor as an action to take when the site is not responding or timing out which is a big improvement than hanging for an unspecified amount of time. I'm posting this fairly trivial bit of code just in case somebody (maybe myself a few months down the road) is searching for ApplicationPool recyling code. It's clearly trivial, but I've written batch files for this a bunch of times before and actually having a small utility around without having to worry whether Powershell is installed and configured right is actually an improvement. Next time I think about using PowerShell remind me that it's just easier to just build a small .NET Console app, 'k? 
    :-) Resources: Download Executable and VS Project. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, .NET, Windows.
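    If you'd rather avoid the System.DirectoryServices metabase path, a roughly equivalent sketch using the IIS 7+ managed API (Microsoft.Web.Administration, referenced from %windir%\System32\inetsrv\Microsoft.Web.Administration.dll) looks like this; it is an alternative approach, not the code the article ships.
        using System;
        using Microsoft.Web.Administration;

        class RecycleWithServerManager
        {
            static void Main(string[] args)
            {
                string appPoolName = args.Length > 0 ? args[0] : "DefaultAppPool";

                // ServerManager.OpenRemote("server") can target another machine,
                // mirroring the optional machine-name argument above.
                using (var manager = new ServerManager())
                {
                    ApplicationPool pool = manager.ApplicationPools[appPoolName];
                    if (pool == null)
                    {
                        Console.WriteLine("Error: Unable to find AppPool: " + appPoolName);
                        return;
                    }

                    pool.Recycle();
                    Console.WriteLine("Application Pool recycling complete...");
                }
            }
        }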

    Read the article

  • How can I thoroughly evaluate a prospective employer?

    - by glenviewjeff
    We hear much about code smells, test smells, and even project smells, but I have heard no discussion about employer "smells" outside of the Joel Test. After much frustration working for employers with a bouquet of unpleasant corporate-culture odors, I believe it's time for me to actively seek a more mature development environment. I've started assembling a list of questions to help vet employers by identifying issues during a job interview, and am looking for additional ideas. I suppose this list could easily be modified by an employer to vet an employee as well, but please answer from the interviewee's perspective. I think it would be important to ask many of these questions of multiple people to find out if consistent answers are given. For the most part, I tried to put the questions in each section in the order they could be asked. An undesired answer to an early question will often make follow-ups moot. Values What constitutes "well-written" software? What attributes does a good developer have? Same question for manager. Process Do you have a development process? How rigorously do you follow it? How do you decide how much process to apply to each project? Describe a typical project lifecycle. Ask the following if they don't come up otherwise: Waterfall/iterative: How much time is spent in upfront requirements gathering? upfront design? Testing Who develops tests (developers or separate test engineers?) When are they developed? When are the tests executed? How long do they take to execute? What makes a good test? How do you know you've tested enough? What percentage of code is tested? Review What is the review process like? What percentage of code is reviewed? Design? How frequently can I expect to participate as code/design reviewer/reviewee? What are the criteria applied to review and where do the criteria come from? Improvement What new tools and techniques have you evaluated or deployed in the past year? What training courses have your employees been given in the past year? What will I be doing for the first six months in your company (hinting at what kind of organized mentorship/training has been thought through, if any) What changes to your development process have been made in the past year? How do you improve and learn from your mistakes as an organization? What was your organizations biggest mistake in the past year, and how was it addressed? What feedback have you given to management lately? Was it implemented? If not, why? How does your company use "best practices?" How do you seek them out from the outside or within, and how do you share them with each other? Ethics Tell me about an ethical problem you or your employees experienced recently and how was it resolved? Do you use open-source software? What open-source contributions have you made? Follow-Ups I liked what @jim-leonardo said on this Stack Overflow question: Really a thing to ask yourself: "Does this person seem like they are trying to recruit me and make me interested?" I think this is one of the most important bits. If they seem to be taking the attitude that the only one being interviewed is you, then they probably will treat you poorly. Good interviewers understand they have to sell the position as much as the candidate needs to sell themselves. @SethP added: Glassdoor.com is a good web site for researching potential employers. It contains information about how specific companies conduct interviews...

    Read the article

  • virtual host not working in windows7 xampp

    - by K.B Panamaldeniya-littletipz
    hi i am using windows7 and xampp , i want to create a virtual host . so i added 127.0.0.1 myawesomeproject to my C:\Windows\System32\drivers\etc\hosts like this # Copyright (c) 1993-2009 Microsoft Corp. # # This is a sample HOSTS file used by Microsoft TCP/IP for Windows. # # This file contains the mappings of IP addresses to host names. Each # entry should be kept on an individual line. The IP address should # be placed in the first column followed by the corresponding host name. # The IP address and the host name should be separated by at least one # space. # # Additionally, comments (such as these) may be inserted on individual # lines or following the machine name denoted by a '#' symbol. # # For example: # # 102.54.94.97 rhino.acme.com # source server # 38.25.63.10 x.acme.com # x client host # localhost name resolution is handled within DNS itself. 127.0.0.1 localhost 127.0.0.1 myawesomeproject ::1 localhost and i added some lines to C:\xampp\apache\conf\extra\httpd-vhosts.conf like this # # Virtual Hosts # # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.2/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # Use name-based virtual hosting. # NameVirtualHost *:80 # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. # ##<VirtualHost *:80> ##ServerAdmin [email protected] ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost" ##ServerName dummy-host.localhost ##ServerAlias www.dummy-host.localhost ##ErrorLog "logs/dummy-host.localhost-error.log" ##CustomLog "logs/dummy-host.localhost-access.log" combined ##</VirtualHost> ##<VirtualHost *:80> ##ServerAdmin [email protected] ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost" ##ServerName dummy-host2.localhost ##ServerAlias www.dummy-host2.localhost ##ErrorLog "logs/dummy-host2.localhost-error.log" ##CustomLog "logs/dummy-host2.localhost-access.log" combined ##</VirtualHost> <VirtualHost *> DocumentRoot "C:\xampp\htdocs" ServerName localhost </VirtualHost> <VirtualHost *> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot c:\myawesomeproject ServerName localhost <Directory "c:\myawesomeproject"> Order allow,deny Allow from all </Directory> </VirtualHost> i created a folder called myawesomeproject in my c drive . when i type http://myawesomeproject it is rederecting to http://myawesomeproject/xampp i added another folder 'test' inside myawesomeproject . so the path to 'test' is C:/myawesomeproject/test . the problem is when i type http://myawesomeproject/test it gives an error. it says Object not found! The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again. If you think this is a server error, please contact the webmaster. Error 404 myawesomeproject 8/22/2011 4:30:29 PM Apache/2.2.17 (Win32) mod_ssl/2.2.17 OpenSSL/0.9.8o PHP/5.3.4 mod_perl/2.0.4 Perl/v5.10.1 why is this . how can i create a virtual host........................ :(
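
    For comparison, a cleaned-up httpd-vhosts.conf along the lines of what the asker appears to be after might look like the sketch below. This is only a guess at the intended setup (the host name myawesomeproject and the C:/myawesomeproject document root are taken from the question); the main differences are that the stray unclosed <VirtualHost *> opening tag is removed and the second vhost's ServerName is the new host name rather than localhost.

    # Sketch of C:\xampp\apache\conf\extra\httpd-vhosts.conf (Apache 2.2 as bundled with XAMPP)
    NameVirtualHost *:80

    # Keep the default XAMPP site reachable via http://localhost
    <VirtualHost *:80>
        DocumentRoot "C:/xampp/htdocs"
        ServerName localhost
    </VirtualHost>

    # The new project, reachable via http://myawesomeproject
    <VirtualHost *:80>
        DocumentRoot "C:/myawesomeproject"
        ServerName myawesomeproject
        <Directory "C:/myawesomeproject">
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

    After saving, Apache needs to be restarted from the XAMPP control panel; running httpd.exe -S (the -S switch mentioned in the file's own comments) is a quick way to confirm both hosts are picked up, after which http://myawesomeproject/test should resolve to C:/myawesomeproject/test.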

    Read the article

  • Why you shouldn't add methods to interfaces in APIs

    - by Simon Cooper
    It is an oft-repeated maxim that you shouldn't add methods to a publicly-released interface in an API. Recently, I was hit hard when this wasn't followed. As part of the work on ApplicationMetrics, I've been implementing auto-reporting of MVC action methods; whenever an action was called on a controller, ApplicationMetrics would automatically report it without the developer needing to add manual ReportEvent calls. Fortunately, MVC provides an easy hook for when a controller is created, letting me log when it happens - the IControllerFactory interface. Now, the dll we provide to instrument an MVC webapp has to be compiled against .NET 3.5 and MVC 1, as the lowest common denominator. This MVC 1 dll will still work when used in an MVC 2, 3 or 4 webapp because all MVC 2+ webapps have a binding redirect redirecting all references to previous versions of System.Web.Mvc to the correct version, and type forwards taking care of any moved types in the new assemblies. Or at least, it should.

    IControllerFactory

    In MVC 1 and 2, IControllerFactory was defined as follows:

    public interface IControllerFactory
    {
        IController CreateController(RequestContext requestContext, string controllerName);
        void ReleaseController(IController controller);
    }

    So, to implement the logging controller factory, we simply wrap the existing controller factory:

    internal sealed class LoggingControllerFactory : IControllerFactory
    {
        private readonly IControllerFactory m_CurrentController;

        public LoggingControllerFactory(IControllerFactory currentController)
        {
            m_CurrentController = currentController;
        }

        public IController CreateController(
            RequestContext requestContext, string controllerName)
        {
            // log the controller being used
            FeatureSessionData.ReportEvent("Controller used:", controllerName);
            return m_CurrentController.CreateController(requestContext, controllerName);
        }

        public void ReleaseController(IController controller)
        {
            m_CurrentController.ReleaseController(controller);
        }
    }

    Easy. This works as expected in MVC 1 and 2. However, in MVC 3 this type was throwing a TypeLoadException, saying a method wasn't implemented. It turns out that, in MVC 3, the definition of IControllerFactory was changed to this:

    public interface IControllerFactory
    {
        IController CreateController(RequestContext requestContext, string controllerName);
        SessionStateBehavior GetControllerSessionBehavior(
            RequestContext requestContext, string controllerName);
        void ReleaseController(IController controller);
    }

    There's a new method in the interface. So when our MVC 1 dll was redirected to reference System.Web.Mvc v3, LoggingControllerFactory tried to implement version 3 of IControllerFactory, was missing the GetControllerSessionBehavior method, and so couldn't be loaded by the CLR.

    Implementing the new method

    Fortunately, there was a workaround. Because interface methods are normally implemented implicitly in the CLR, if we simply declare a virtual method matching the signature of the new method in MVC 3, then it will be ignored in MVC 1 and 2 and implement the extra method in MVC 3:

    internal sealed class LoggingControllerFactory : IControllerFactory
    {
        ...
        public virtual SessionStateBehavior GetControllerSessionBehavior(
            RequestContext requestContext, string controllerName)
        {
            return SessionStateBehavior.Default;
        }
        ...
    }

    However, this also has problems - the SessionStateBehavior type only exists in .NET 4, and we're limited to .NET 3.5 by support for MVC 1 and 2. This means that the only solutions to support all MVC versions are: construct the LoggingControllerFactory type at runtime using reflection, or produce entirely separate dlls for MVC 1&2 and MVC 3. Ugh. And all because of that blasted extra method!

    Another solution?

    Fortunately, in this case, there is a third option - System.Web.Mvc also provides a DefaultControllerFactory type that can provide the implementation of GetControllerSessionBehavior for us in MVC 3, while still allowing us to override CreateController and ReleaseController. However, this does mean that LoggingControllerFactory won't be able to wrap any calls to GetControllerSessionBehavior. This is an acceptable bug, given the other options, as very few developers will be overriding GetControllerSessionBehavior in their own custom controller factory. So, if you're providing an interface as part of an API, then please please please don't add methods to it. Especially if you don't provide a 'default' implementing type. Any code compiled against the previous version that can't be updated will have some very tough decisions to make to support both versions.
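
    Purely as an illustration of that third option (not code from the original post), a wrapper that derives from DefaultControllerFactory instead of implementing IControllerFactory directly might look like the sketch below. FeatureSessionData.ReportEvent is the same internal reporting hook used in the snippets above; the trade-off, as noted, is that you can no longer wrap an arbitrary existing factory, and GetControllerSessionBehavior calls go straight to the base class.

    using System.Web.Mvc;
    using System.Web.Routing;

    internal sealed class LoggingControllerFactory : DefaultControllerFactory
    {
        public override IController CreateController(
            RequestContext requestContext, string controllerName)
        {
            // Report the controller being used, then defer to the default factory.
            // In MVC 3 the base class supplies GetControllerSessionBehavior,
            // so this one type loads under MVC 1, 2 and 3.
            FeatureSessionData.ReportEvent("Controller used:", controllerName);
            return base.CreateController(requestContext, controllerName);
        }
    }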

    Read the article

  • Ajax Autocomplete Extender

    - by Jason Ulloa
    The goal of this post is to put together an example on a topic that comes up very frequently in the MSDN forums: how to implement an Autocomplete against a database. What do we need? Before we can build an Autocomplete we must keep in mind the main elements required to make it work, described as follows: 1. Textbox: our great friend the Textbox, which is where the user will type the text to search for. 2. A web service: it will contain the method that connects to the database and returns a list with the information found. 3. Ajax AutoCompleteExtender: this is, so to speak, the most important element. It acts as the link between the web service that exposes the method and the textbox, retrieving the data and showing it as a drop-down list.

    The implementation. Although it may look complicated, creating an autocomplete extender is quite simple. We start by creating a new ASP.NET site; in it we add a textbox and two very important Ajax controls: the ToolkitScriptManager, which controls the rendering of the Ajax scripts, and the AutoCompleteExtender which, as mentioned above, acts as the link. Before showing how the code for the above ends up, I will explain some properties of the AutoCompleteExtender so it is easier to understand: 1. ServicePath: contains the relative path to the web service we will use. 2. MinimumPrefixLength: the number of characters that must be typed before the search starts. 3. ServiceMethod: the name of the web service method in charge of returning the data. 4. EnableCaching: keeps the queried data in cache for better speed. 5. TargetControlID: one of the most important properties; this is where you set the ID of the textbox the Autocomplete will be attached to. 6. CompletionInterval: the time that must elapse before the data starts being processed. With the basic properties explained, let's see how the first part of our autocomplete is implemented:

    <form id="form1" runat="server">
      <div>
        <asp:ToolkitScriptManager ID="manager" runat="server" />
        <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox>
        <asp:AutoCompleteExtender ID="AutoCompleteExtender1" runat="server"
            ServicePath="WebService.asmx" MinimumPrefixLength="1"
            ServiceMethod="PersonasInfo" EnableCaching="true"
            TargetControlID="TextBox1" UseContextKey="True"
            CompletionSetCount="10" CompletionInterval="0">
        </asp:AutoCompleteExtender>
      </div>
    </form>

    Now that our HTML code is complete, it is time to work directly on our web service. It must contain a method that returns a list or array of data which, of course, will be retrieved from the database.
    Before implementing this method, we must make sure our web service class is decorated so that it can be called from script:

    [System.Web.Script.Services.ScriptService()]
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class WebService : System.Web.Services.WebService {}

    Now, our main method:

    [WebMethod()]
    [System.Web.Script.Services.ScriptMethod()]
    public string[] PersonasInfo(string prefixText, int count)
    {
        string connstring = ConfigurationManager.ConnectionStrings["LocalSqlServer"].ConnectionString;

        using (SqlConnection conn = new SqlConnection(connstring))
        {
            SqlCommand comando = new SqlCommand("select nombre from personas where nombre LIKE '%' + @param + '%' ", conn);
            comando.Parameters.AddWithValue("@param", prefixText);
            SqlDataReader dr = default(SqlDataReader);
            comando.Connection.Open();
            dr = comando.ExecuteReader();
            List<string> items = new List<string>();

            while (dr.Read())
            {
                items.Add(dr["nombre"].ToString());
            }
            comando.Connection.Close();
            return items.ToArray();
        }
    }

    I won't explain the method above in depth, since it is quite simple: a query against the database using a data reader, returning the data in a list as an array. The most important part is the first two lines, [WebMethod()] and [ScriptMethod()], which enable our method to be accessed and used. Finally, the sample code in C# (VB Autocomplete):
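
    One small refinement worth considering, separate from the original sample: the extender passes a count parameter that the query above ignores, so every matching row comes back even though only a handful are shown. Assuming SQL Server 2005 or later, the same method could honour it with a parameterised TOP clause, roughly like this (a sketch, not the author's code):

    [WebMethod()]
    [System.Web.Script.Services.ScriptMethod()]
    public string[] PersonasInfo(string prefixText, int count)
    {
        string connstring = ConfigurationManager.ConnectionStrings["LocalSqlServer"].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connstring))
        using (SqlCommand comando = new SqlCommand(
            "SELECT TOP (@count) nombre FROM personas WHERE nombre LIKE '%' + @param + '%'", conn))
        {
            // Only return as many suggestions as the extender asked for.
            comando.Parameters.AddWithValue("@count", count);
            comando.Parameters.AddWithValue("@param", prefixText);
            conn.Open();
            List<string> items = new List<string>();
            using (SqlDataReader dr = comando.ExecuteReader())
            {
                while (dr.Read())
                {
                    items.Add(dr["nombre"].ToString());
                }
            }
            return items.ToArray();
        }
    }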

    Read the article

  • An Unusual UpdatePanel

    - by João Angelo
    The code you are about to see was mostly to prove a point, to myself, and probably has limited applicability. Nonetheless, in the remote possibility this is useful to someone here it goes… So this is a control that acts like a normal UpdatePanel where all child controls are registered as postback triggers except for a single control specified by the TriggerControlID property. You could basically achieve the same thing by registering all controls as postback triggers in the regular UpdatePanel. However with this, that process is performed automatically. Finally, here is the code: public sealed class SingleAsyncTriggerUpdatePanel : WebControl, INamingContainer { public string TriggerControlID { get; set; } [TemplateInstance(TemplateInstance.Single)] [PersistenceMode(PersistenceMode.InnerProperty)] public ITemplate ContentTemplate { get; set; } public override ControlCollection Controls { get { this.EnsureChildControls(); return base.Controls; } } protected override void CreateChildControls() { if (string.IsNullOrWhiteSpace(this.TriggerControlID)) throw new InvalidOperationException( "The TriggerControlId property must be set."); this.Controls.Clear(); var updatePanel = new UpdatePanel() { ID = string.Concat(this.ID, "InnerUpdatePanel"), ChildrenAsTriggers = false, UpdateMode = UpdatePanelUpdateMode.Conditional, ContentTemplate = this.ContentTemplate }; updatePanel.Triggers.Add(new SingleControlAsyncUpdatePanelTrigger { ControlID = this.TriggerControlID }); this.Controls.Add(updatePanel); } } internal sealed class SingleControlAsyncUpdatePanelTrigger : UpdatePanelControlTrigger { private Control target; private ScriptManager scriptManager; public Control Target { get { if (this.target == null) { this.target = this.FindTargetControl(true); } return this.target; } } public ScriptManager ScriptManager { get { if (this.scriptManager == null) { var page = base.Owner.Page; if (page != null) { this.scriptManager = ScriptManager.GetCurrent(page); } } return this.scriptManager; } } protected override bool HasTriggered() { string asyncPostBackSourceElementID = this.ScriptManager.AsyncPostBackSourceElementID; if (asyncPostBackSourceElementID == this.Target.UniqueID) return true; return asyncPostBackSourceElementID.StartsWith( string.Concat(this.target.UniqueID, "$"), StringComparison.Ordinal); } protected override void Initialize() { base.Initialize(); foreach (Control control in FlattenControlHierarchy(this.Owner.Controls)) { if (control == this.Target) continue; bool isApplicableControl = false; isApplicableControl |= control is INamingContainer; isApplicableControl |= control is IPostBackDataHandler; isApplicableControl |= control is IPostBackEventHandler; if (isApplicableControl) { this.ScriptManager.RegisterPostBackControl(control); } } } private static IEnumerable<Control> FlattenControlHierarchy( ControlCollection collection) { foreach (Control control in collection) { yield return control; if (control.Controls.Count > 0) { foreach (Control child in FlattenControlHierarchy(control.Controls)) { yield return child; } } } } } You can use it like this, meaning that only the B2 button will trigger an async postback: <cc:SingleAsyncTriggerUpdatePanel ID="Test" runat="server" TriggerControlID="B2"> <ContentTemplate> <asp:Button ID="B1" Text="B1" runat="server" OnClick="Button_Click" /> <asp:Button ID="B2" Text="B2" runat="server" OnClick="Button_Click" /> <asp:Button ID="B3" Text="B3" runat="server" OnClick="Button_Click" /> <asp:Label ID="LInner" Text="LInner" runat="server" /> </ContentTemplate> 
</cc:SingleAsyncTriggerUpdatePanel>
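
    For the markup above to compile, the cc: tag prefix has to be registered somewhere. A typical page-level registration might look like the line below; the namespace and assembly names are placeholders, since the post doesn't say where SingleAsyncTriggerUpdatePanel actually lives, and the page still needs a ScriptManager as usual for any UpdatePanel.

    <%@ Register TagPrefix="cc" Namespace="MyWebApp.Controls" Assembly="MyWebApp" %>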

    Read the article

  • How the number of indexes built on a table can impact performances?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking of non-clustered indexes only, of course) on a table may produce performance problems due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree - I think - that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table due to the number of indexes it also has to update, this also means that locks will be held longer, slowing down the perceived performance of all queries involved. I was quite curious to measure this, also because when teaching it is far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!

    To do the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting the 1000 rows with a single insert, instead of issuing 1000 inserts of one row each, in order to minimize the transaction-handling overhead that would otherwise be incurred), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in a round-robin fashion. Tests are done against different row sizes, so that it is possible to check whether performance changes depending on row size.

    The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to get an average value. The values have been cleaned of outliers caused by unpredictable performance fluctuations due to machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That's *huge*!

    Now, what happens if we have a bigger row size? Say we have a table with a row size of 1520 bytes. Here's the data, from 0 to 16 indexes on that table: In this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we're talking about a 2410% impact on performance!

    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how the size of a row impacts performance. That's why the golden rule of OLTP databases, "few indexes, but good", is so true! (And in fact last week I saw a database with tables with a 1700-byte row size and 23 (!!!) indexes on them!) This also means that too-heavy denormalization is really not a good idea (we're always talking about OLTP systems, keep that in mind), since performance gets worse as the row size increases. So, be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes.

    PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0 GHz and 16 GB of RAM.
    The database is stored on an Intel X25-E SSD drive, uses the Simple recovery model, and runs on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: Open TestIndexPerformance.sql
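
    The actual script is only available via the download link above, but a stripped-down sketch of the kind of harness being described (one table with a clustered primary key, a single 1000-row insert timed after each extra non-clustered index is added) could look roughly like this; table and column names are invented for illustration:

    -- Illustrative sketch only, not the author's script (SQL Server 2008+).
    CREATE TABLE dbo.IndexTest
    (
        Id    INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
        Col01 CHAR(20) NOT NULL, Col02 CHAR(20) NOT NULL,
        Col03 CHAR(20) NOT NULL, Col04 CHAR(20) NOT NULL
        -- ...more columns, sized to reach the row size under test
    );
    GO
    -- One more of these per iteration, round-robin over the columns.
    CREATE NONCLUSTERED INDEX IX_IndexTest_Col01 ON dbo.IndexTest (Col01);
    GO
    DECLARE @start DATETIME2 = SYSDATETIME();
    INSERT INTO dbo.IndexTest (Col01, Col02, Col03, Col04)
    SELECT TOP (1000) 'aaaa', 'bbbb', 'cccc', 'dddd'
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b;   -- single 1000-row insert
    SELECT DATEDIFF(MICROSECOND, @start, SYSDATETIME()) / 1000.0 AS ElapsedMs;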

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes that have been done to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface to review code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one-by-one. What is worse your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control which means the DBA can easily compare one version with another.

    The Trick

    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML I saw the things I was interested in were called TableType, DbTableName, FriendlyName and QueryDefinition. Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously to make the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

    using System;
    using System.Data;
    using System.IO;
    using Microsoft.AnalysisServices;

    ... class code removed for clarity

    // connect to the OLAP server
    Server olapServer = new Server();
    olapServer.Connect(config.olapServerName);
    if (olapServer != null)
    {
        // connected to server ok, so obtain reference to the OLAP database
        Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
        if (olapDatabase != null)
        {
            Console.WriteLine(string.Format("Succesfully connected to '{0}' on '{1}'",
                config.olapDatabaseName, config.olapServerName));

            // export SQL from each data source view (usually only one, but can be many!)
            foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
            {
                Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                // for each table in the DSV, export the SQL in a file
                foreach (DataTable dt in dsv.Schema.Tables)
                {
                    Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                    // get name of the table in the DSV
                    // use the FriendlyName as the user inputs this and therefore has control of it
                    string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                    string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                    // delete the sql file if it exists... file deletion code removed for clarity

                    // write out the SQL to a file
                    if (dt.ExtendedProperties["TableType"].ToString() == "View")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                    }
                    if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                    }
                }
            }
            Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
        }
    }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility will be of limited practical value unless of course you are inheriting a project and want to check if someone did the implementation correctly.
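
    A note on the fragment above: config and targetDir come from the class code that was removed for clarity, so their exact shape is a guess. Based on how they are used, something along these lines is presumably what the utility assumes (names and values here are hypothetical):

    // Hypothetical stand-ins for the members removed from the listing above.
    internal sealed class ExportConfig
    {
        public string olapServerName;     // SSAS instance to connect to
        public string olapDatabaseName;   // OLAP database whose DSVs are exported
    }

    // e.g. set up before the fragment above runs:
    ExportConfig config = new ExportConfig { olapServerName = "localhost", olapDatabaseName = "MyOlapDatabase" };
    DirectoryInfo targetDir = Directory.CreateDirectory(@"C:\DsvSql");   // where the .sql files are written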

    Read the article

  • Using rel=next and rel=prev with multiple sets of paginated content on the same page

    - by jakejgordon
    We are running into issues with trying to figure out how to implement rel="next" and rel="prev" -- coupled with rel="canonical" -- with multiple sets of paginated content on the same page, with pages in multiple cultures. In other words, how do we implement these when we have a pager for both Product Reviews and Questions and Answers (aka "Q&A") on the same page, with duplicate content across culture-specific URLs (e.g. /us/en/my-product vs. /ca/en/my-product)? Our current implementation will actually do a full postback when you click Page 2, and will add something to the query string (e.g. website.com/ca/en/my-product?reviewpage=2 or website.com/ca/en/my-product?questionpage=2). If we only had one set of paginated content then the implementation would certainly be more straightforward. Adding a second set of paginated content (i.e. Q&A) complicates things. Let's assume that we want the United States English page to be the canonical target (i.e. /us/en/my-product) based on culture. If you go to the /ca/en/my-product page you'll have a rel="canonical" href="/us/en/my-product". So far so good. Let's also assume that we are not implementing a page that lists ALL Product Reviews and Q&A. This would likely solve a number of our problems by using rel="canonical" to this page, but is not an option for reasons that are out of scope for this discussion. Now if you click on page 2 of Product Reviews, it will reload the page with /ca/en/my-product?reviewpage=2 as the URL.

    Given this scenario, here are my questions: On page 2 of the my-product page on the Canadian site, should there be a rel="canonical" to /us/en/my-product?reviewpage=2 (assuming the content is identical in the United States and Canada)? Should the rel="prev" go to /ca/en/my-product?reviewpage=1 or should it go to /ca/en/my-product ? The query-string version would really only be accessible if using the pager and shows the exact same content as the base page. The following two questions are closely related to this one. Should the /ca/en/my-product?reviewpage=1 have a rel="canonical" directly to /us/en/my-product (United States page with nothing in the query string) since the content is identical? Given that Q&A content is also paginated, should there be a rel="next" on the base page without a query string? In other words, should the /ca/en/my-product page have a rel="next" to /ca/en/my-product?reviewpage=2 AND a rel="next" to /ca/en/my-product?questionpage=2 ? So far as I can tell it doesn't make sense to have multiple rel="next" implementations on the same page. I suspect that the pages with query string values should have rel="next" and rel="prev" that only point to other pages with query strings and not to the base page. The ?reviewpage=1 and ?questionpage=1 pages would then just have a rel="canonical" to /us/en/my-product . Thoughts? I know this is a tough one -- that's why I brought it to this community. Thanks so much for your help in advance!
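
    To make the markup being debated concrete, here is one possible head section for /ca/en/my-product?reviewpage=2 under the reading where the canonical points at the equivalent US URL and prev/next stay within the same query-string series. This is only an illustration of the options discussed above, not a recommendation (website.com is the placeholder domain from the question):

    <!-- /ca/en/my-product?reviewpage=2, illustration only -->
    <link rel="canonical" href="http://website.com/us/en/my-product?reviewpage=2" />
    <link rel="prev" href="http://website.com/ca/en/my-product?reviewpage=1" />
    <link rel="next" href="http://website.com/ca/en/my-product?reviewpage=3" />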

    Read the article

  • What are some good questions (and good/bad answers) to ask at an interview to gauge the competency of the company/team?

    - by Wayne M
    I'm already familiar with the Joel Test, but it's been my experience that some of the questions there have the answers "massaged" to make the company seem better than it is. I've had several jobs in the past that, for instance, claimed they had a QA process and did unit testing, and what they really meant is "The programmers test the app, and test with the debugger and via trial-and-error."; they said they used SVN but they just lumped everything into one giant repository and had no concept of branching/merging or anything more complicated than updating and committing; said they can build in one step and what they really mean is it's "one step" to copy dozens of files by hand from the programmer's PC to the live server. How do you go about properly gauging a company's environment to make sure that it's a well-evolved company and not stuck on doing things a certain way because they've done it for years and they're ignorant of change? You can almost never ask to see their source code, so you're stuck trying to figure out if the interviewer's answer is accurate or BS to make the company seem good. Besides the Joel Test, what are some other good questions to get the proper feel for a company, and more importantly what are some good and bad answers that could indicate a good or bad company? I mean something like (take at face value, please, it's all I could think of at short notice):

    Question: How does the software team apply the SOLID principles and Inversion of Control to their code?

    Good Answer: We adhere to SOLID wherever possible; we use TDD so it kind of forces us to write abstract, testable code. We use Ninject for our IoC container because it's fairly easy to configure - it was that or StructureMap but I find Ninject a bit more intuitive, and who doesn't like ninjas? You're not a pirate, are you?

    Bad Answer: Our code is pretty secure, yeah. And what's this Inversion of Control thing? I've never heard of it before.

    You see what I did there. The "good" answer uses facts to back it up and has a bit of "in crowd" humor; the bad answer shows complete ignorance of the question - not necessarily a bad thing if you are interviewing for a manager/director position, but a terrible answer and a huge red flag if you're interviewing as a developer and talking to a senior developer or manager! My biggest problem at the moment is being able to take a generic response and gauge whether it's the good or bad answer; more often than not it's the bad kind and I find myself frustrated almost from day one at the new job. I suppose I could name drop if I ask about specific things (e.g. "Do you write unit tests?" and if the answer is yes, ask if they use NUnit, MbUnit or something else; if they mention data access ask if they use a clean ORM like NHibernate or something more coupled like EF or Linq) but is there another way short of being resolute to actually call the interview on things (which will almost certainly result in not getting the job, but if they are skirting the question it's probably not a job I want)?

    Read the article

< Previous Page | 541 542 543 544 545 546 547 548 549 550 551 552  | Next Page >