Search Results

Search found 6186 results on 248 pages for 'syntax'.

Page 140 of 248

  • A Reusable Builder Class for Javascript Testing

    - by Liam McLennan
    Continuing on my series of builders for C# and Ruby, here is the solution in JavaScript. This is probably the implementation with which I am least happy; there are several parts that did not seem to fit the language. This time around I didn't bother with a testing framework, I just append some values to the page with jQuery. Here is the test code:

        var initialiseBuilder = function() {
            var builder = builderConstructor();
            builder.configure({
                'Person': function() { return {name: 'Liam', age: 26}; },
                'Property': function() { return {street: '127 Creek St', manager: builder.a('Person') }; }
            });
            return builder;
        };

        var print = function(s) {
            $('body').append(s + '<br/>');
        };

        var build = initialiseBuilder();

        // get an object
        liam = build.a('Person');
        print(liam.name + ' is ' + liam.age);

        // get a modified object
        liam = build.a('Person', function(person) { person.age = 999; });
        print(liam.name + ' is ' + liam.age);

        home = build.a('Property');
        print(home.street + ' manager: ' + home.manager.name);

    and the implementation:

        var builderConstructor = function() {
            var that = {};
            var defaults = {};
            that.configure = function(d) { defaults = d; };
            that.a = function(type, modifier) {
                var o = defaults[type]();
                if (modifier) { modifier(o); }
                return o;
            };
            return that;
        };

    I still like JavaScript's syntax for anonymous functions: defaults[type]() is much clearer than the Ruby equivalent @defaults[klass].call(). You can see the striking similarity between Ruby hashes and JavaScript objects. I also prefer modifier(o) to the equivalent Ruby, yield o.

    Read the article

  • Add a Graphical User Interface (GUI) to the Microsoft Robocopy Command Line Tool

    - by Lori Kaufman
    Robocopy, or “Robust File Copy,” is a command line directory replication tool from Microsoft. It is available as a standard feature of Windows 7 and Vista, and was available as part of the Windows Server 2003 Resource Kit. NOTE: For Windows XP, you can obtain Robocopy by downloading the resource kit. Robocopy allows you to set up simple or advanced backup strategies. It provides such features as multi-threaded copying, mirroring or synchronization mode, automatic retry, and the ability to resume the copying process. If you are comfortable using command line tools, you can run Robocopy directly on the command line using the command syntax and options. You can also download the command line reference and usage notes for Robocopy as a PDF file. If you are more comfortable using a graphical user interface, or GUI, rather than the command line, there are a couple of options for adding a GUI to the Robocopy command line tool, making it easier to use. Both tools, RoboMirror and RichCopy, are discussed below, and links to download each tool are provided.
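    As a rough illustration of that command line usage (the source and destination paths here are placeholders, not taken from the article), a basic mirror-style backup might look something like this:

        rem Mirror a source folder to a backup drive: /MIR mirrors the directory tree,
        rem /Z copies files in restartable mode, /R and /W control retry count and wait time,
        rem and /MT enables multi-threaded copying (available from Windows 7 onwards).
        robocopy C:\Data D:\Backup\Data /MIR /Z /R:3 /W:5 /MT:8

    Note that /MIR deletes files in the destination that no longer exist in the source, which is exactly the kind of behavior a GUI front end such as RoboMirror wraps up behind a few checkboxes.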

    Read the article

  • Picking a code review tool

    - by marcog
    We are a startup looking to migrate from Fogbugz/Kiln to a new issue tracker/code review system. We are very happy with Jira, especially the configurability, but we are undecided on a code review tool. We have been trialing Bitbucket, but it doesn't fit our workflow well. Here are the problems we have identified with BB:

    - Comments can be hard to find: when commenting on code not visible in the diff, when code that is commented on is later changed, and when viewing the full file (which doesn't include comments and doesn't show changes)
    - Viewing comments on individual commits can be a pain
    - We have the implementer merge the diff and close the issue, whereas pull requests are more suited to the open source model where someone with commit rights merges
    - We would like to automate creation of the code review (either from Jira or a command line tool)
    - No syntax highlighting
    - Once the pull request exceeds a certain size, BB won't show the whole thing and you have to view individual commits
    - Linking BB pull requests to Jira issues is a bit janky: we have a pull request URL field in Jira, but this doesn't work when there are changes in multiple repositories

    Does anyone have a good suggestion given the above? We are tight on budget, and Jira integration is a big plus. We also have multiple commits per issue, and would like the option of viewing individual commits in the review. It might also be worth noting that we have a separate reviewer and tester for each issue.

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Download Microsoft SQL Server Compact 4.0 SP1

    - by pinaldave
    Microsoft SQL Server Compact 4.0 is a free, embedded database that software developers can use for building ASP.NET websites and Windows desktop applications. SQL Server Compact 4.0 is the default database for Microsoft WebMatrix. For enhanced development and debugging capabilities, including designer support, Visual Studio can be used to develop ASP.NET web applications and websites using SQL Server Compact 4.0. It can run in medium or partial trust environments on web servers, and can easily be deployed along with the website to third-party hosting providers. SQL Server CE 4.0 also provides stronger data security through the use of SHA2 encryption algorithms for encrypting databases. The latest version also adds a T-SQL syntax enhancement: support for OFFSET and FETCH, which can be used to write paging queries (a quick sketch follows at the end of this entry). Used with the ADO.NET Entity Framework, SQL Server Compact now supports columns with server-generated keys such as identity and rowguid, as well as the code-first programming model. SQL Server Compact 4.0 is freely redistributable under a redistribution license agreement. SQL Server Compact 3.5 and SQL Server Compact 4.0 can be installed and work side by side on a desktop.

    Download Microsoft SQL Server Compact 4.0 SP1

    Here are my earlier articles on SQL Server CE:

    - Difference Between SQL Server Compact Edition (CE) and SQL Server Express Edition
    - SQL SERVER – CE – 3 Links to Performance Tuning Compact Edition
    - SQL SERVER – CE – List of Information_Schema System Tables
    - SQL SERVER – Server Side Paging in SQL Server CE (Compact Edition)
    - SQL SERVER – CE – Samples Database for SQL CE 4.0

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
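    As a rough sketch of the OFFSET/FETCH paging syntax mentioned above (the table and column names are invented for illustration, not taken from the article):

        -- Page 3 of the results, 20 rows per page (rows 41-60)
        SELECT OrderID, OrderDate, CustomerName
        FROM Orders
        ORDER BY OrderDate DESC
        OFFSET 40 ROWS
        FETCH NEXT 20 ROWS ONLY;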

    Read the article

  • Achieving forward compatibility with C++11

    - by mcmcc
    I work on a large software application that must run on several platforms. Some of these platforms support some features of C++11 (e.g. MSVS 2010) and some don't support any (e.g. GCC 4.3.x). I see this situation continuing for several years (my best guess: 3-5 years). Given that, I would like to set up a compatibility interface such that (to whatever degree possible) people can write C++11 code that will still compile with older compilers with a minimum of maintenance. Overall, the goal is to minimize #ifdef's as much as reasonably possible while still enabling basic C++11 syntax/features on the platforms that support them, and provide emulation on the platforms that don't. Let's start with std::move(). The most obvious way to achieve compatibility would be to put something like this in a common header file:

        #if !defined(HAS_STD_MOVE)
        namespace std { // C++11 emulation
            template <typename T> inline T& move(T& v) { return v; }
            template <typename T> inline const T& move(const T& v) { return v; }
        }
        #endif // !defined(HAS_STD_MOVE)

    This allows people to write things like

        std::vector<Thing> x = std::move(y);

    ... with impunity. It does what they want in C++11 and it does the best it can in C++03. When we finally drop the last of the C++03 compilers, this code can remain as is. However, according to the standard, it is illegal to inject new symbols into the std namespace. That's the theory. My question is, practically speaking, is there any harm in doing this as a way of achieving forward compatibility?

    Read the article

  • Silverlight Cream for April 24, 2010 -- #846

    - by Dave Campbell
    In this Issue: Michael Washington, Timmy Kokke, Pete Brown, Paul Yanez, Emil Stoychev, Jeremy Likness, and Pavan Podila. Shoutouts: If you've got some time to spend, the User Experience Kit is packed with info: User Experience Kit, and just plain fun to navigate ... thanks Scott Barnes for reminding me about it! Jesse Liberty is looking for some help organizing and cataloging posts for a new project he's got going: Help Wanted Emil Stoychev posted Slides and demos from my talk on Silverlight 4 From SilverlightCream.com: Silverlight 4 Drag and Drop File Manager Michael Washington has a post up about a Silverlight Drag and Drop File Manager in MVVM, but a secondary important point about the post is that he and Alan Beasley followed strict Designer/Developer rules on this... you recognized Alan's ListBox didn't you? Changing CSS with jQuery syntax in Silverlight using jLight Timmy Kokke is using jLight as introduced in a prior post to interact with the DOM from Silverlight. Essential Silverlight and WPF Skills: The UI Thread, Dispatchers, Background Workers and Async Network Programming Pete Brown has a great backrounder up for WPF and Silverlight devs on threading and networking, good comments too so far. Fluid layout and Fullscreen in Silverlight Paul Yanez has a quick post and demo up on forcing full-screen with a fluid layout, all code included -- and it doesn't take much Data Binding in Silverlight Emil Stoychev has a great long tutorial up on DataBinding in Silverlight ... he hits all the major points with text, samples, and code... definitely one to read! Yet Another MVVM Locator Pattern Another not-necessarily Silverlight post from Jeremy Likness -- but definitely a good one on MVVM and locator patterns. The SpiderWebControl for Silverlight Pavan Podila has a 'SpiderWebControl' for Silverlight 4 up... this is a great network graph control with any sort of feature I can think of... check out the demo, then grab the code... or the other way around, your choice :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the most frequent requests I receive from developers is whether there is a way to insert multiple rows into a single table with a single statement. Currently, when developers have to insert values into a table, they write multiple insert statements. Not only is this boring, it is also very time consuming, and one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

        -- Insert Multiple Values into SQL Server
        CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

    Method 1: Traditional Method of INSERT…VALUES

        -- Method 1 - Traditional Insert
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');

        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 2: INSERT…SELECT

        -- Method 2 - Select Union Insert
        INSERT INTO #SQLAuthority (ID, Value)
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third';

        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 3: SQL Server 2008+ Row Construction

        -- Method 3 - SQL Server 2008+ Row Construction
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');

        -- Clean up
        DROP TABLE #SQLAuthority;

    Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL; SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software, but make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small, tools that handle builds not for software but for data? Something that regenerates targets not only on modification times but on certain other properties (e.g. completeness), or alternatively some paper that describes such a tool. As illustration, I'd like to automate the following process:

    - get data (e.g. a tarball) from some regularly updated source
    - copy it somewhere if it's not there already (based e.g. on some filename scheme)
    - convert the files to a different format (but only if there aren't successfully converted ones there already, e.g. from a previous attempt - custom comparison routine)
    - for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide based on existence of the file and file "freshness")
    - finally compute something (e.g. a word count for something identifiable) and store it in the database, but only if the DB does not have an entry for that exact ID yet

    Observations: there are different stages; each stage is usually simple to compute or implement in isolation; each stage may be simple, but the data volume may be large; each stage may produce a few errors; each stage may have different signals on when (re)processing is needed.

    Requirements: builds should be interruptible and idempotent (== robust); when interrupted, already processed objects should be reused to speed up the next run; data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be OK); some form of dependency graph that describes the process would be nice for later visualizations; it should leverage existing programs if possible.

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and are OK glue for a variety of other programs written in other languages), so I wonder if worse is better?
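    To make the idempotency requirement above concrete, here is a minimal shell sketch of two such stages, in the spirit of the shell scripts the question already uses; the URL, file paths, and the some_converter command are placeholders, not from the question:

        #!/bin/sh
        # Stage 1: fetch the tarball only if a non-empty copy is not already present.
        fetch_stage() {
            url="$1"; dest="$2"
            [ -s "$dest" ] && return 0                                   # already fetched: skip
            curl -fSL "$url" -o "$dest.tmp" && mv "$dest.tmp" "$dest"    # write via temp file, rename on success
        }

        # Stage 2: convert only if a successfully converted output does not already exist.
        convert_stage() {
            src="$1"; out="$2"
            [ -s "$out" ] && return 0                                    # previous successful conversion: skip
            some_converter "$src" > "$out.tmp" && mv "$out.tmp" "$out"
        }

        fetch_stage "http://example.org/data.tar.gz" "data/raw.tar.gz" &&
        convert_stage "data/raw.tar.gz" "data/converted.dat"

    Because each stage checks for its own completed output before doing any work, an interrupted run can simply be restarted and will skip what is already done - which is essentially the behavior the question asks a data build tool to formalize.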

    Read the article

  • Developers are strange

    - by DavidWimbush
    Why do developers always use the GUI tools in SQL Server? I've always found this irritating and just vaguely assumed it's because they aren't familiar with SQL syntax. But when you think about it, it's a genuine puzzle. Developers type code all day - really heavy code too, like generics, lambda functions and extension methods. They (thankfully) scorn the Visual Studio stuff where you drag a table onto the class and it pastes in lots of code to query the table into a DataSet or something. But when they want to add a column to a table, without fail they dive into the graphical table designer. And half the time the script it generates does horrible things like copy the table to another one with the new column, delete the old table, and rename the new table. Which is fine if your users don't care about uptime. Is ALTER TABLE ADD <column definition> really that hard? I just don't get it.
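    For comparison, a minimal sketch of the hand-written alternative (table and column names invented here for illustration) - an in-place change with no copy, drop or rename of the table:

        -- Add a new nullable column directly to the existing table
        ALTER TABLE dbo.Orders ADD LastReviewedDate datetime NULL;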

    Read the article

  • A Simple Approach For Presenting With Code Samples

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/07/31/a-simple-approach-for-presenting-with-code-samples.aspxI’ve been getting ready for a presentation and have been struggling a bit with the best way to show and execute code samples. I don’t present often (hardly ever), but when I do I like the presentation to have a lot of succinct and executable code snippets to help illustrate the points that I’m making. Depending on what the presentation is about, I might just want to build an entire sample application that I would run during the presentation. In other cases, however, building a full-blown application might not really be the best way to present the code. The presentation I’m working on now is for an open source utility library for dealing with dates and times. I could have probably cooked up a sample app for accepting date and time input and then contrived ways in which it could put the library through its paces, but I had trouble coming up with one app that would illustrate all of the various features of the library that I wanted to highlight. I finally decided that what I really needed was an approach that met the following criteria: Simple: I didn’t want the user interface or overall architecture of a sample application to serve as a distraction from the demonstration of the syntax of the library that the presentation is about. I want to be able to present small bits of code that are focused on accomplishing a single task. Several of these examples will look similar, and that’s OK. I want each sample to “stand on its own” and not rely much on external classes or methods (other than the library that is being presented, of course). “Debuggable” (not really a word, I know): I want to be able to easily run the sample with the debugger attached in Visual Studio should I want to step through any bits of code and show what certain values might be at run time. As far as I know this rules out something like LinqPad, though using LinqPad to present code samples like this is actually a very interesting idea that I might explore another time. Flexible and Selectable: I’m going to have lots of code samples to show, and I want to be able to just package them all up into a single project or module and have an easy way to just run the sample that I want on-demand. Since I’m presenting on a .NET framework library, one of the simplest ways in which I could execute some code samples would be to just create a Console application and use Console.WriteLine to output the pertinent info at run time. This gives me a “no frills” harness from which to run my code samples, and I just hit ‘F5’ to run it with the debugger. This satisfies numbers 1 and 2 from my list of criteria above, but item 3 is a little harder. By default, just running a console application is going to execute the ‘main’ method, and then terminate the program after all code is executed. If I want to have several different code samples and run them one at a time, it would be cumbersome to keep swapping the code I want in and out of the ‘main’ method of the console application. What I really want is an easy way to keep the console app running throughout the whole presentation and just have it run the samples I want when I want. I could setup a simple Windows Forms or WPF desktop application with buttons for the different samples, but then I’m getting away from my first criteria of keeping things as simple as possible. 
Infinite Loops To The Rescue I found a way to have a simple console application satisfy all three of my requirements above, and it involves using an infinite loop and some Console.ReadLine calls that will give the user an opportunity to break out and exit the program. (All programs that need to run until they are closed explicitly (or crash!) likely use similar constructs behind the scenes. Create a new Windows Forms project, look in the ‘Program.cs’ that gets generated, and then check out the docs for the Application.Run method that it calls.). Here’s how the main method might look: 1: static void Main(string[] args) 2: { 3: do 4: { 5: Console.Write("Enter command or 'exit' to quit: > "); 6: var command = Console.ReadLine(); 7: if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase)) 8: { 9: Console.WriteLine("Quitting."); 10: break; 11: } 12: 13: } while (true); 14: } The idea here is the app prompts me for the command I want to run, or I can type in ‘exit’ to break out of the loop and let the application close. The only trick now is to create a set of commands that map to each of the code samples that I’m going to want to run. Each sample is already encapsulated in a single public method in a separate class, so I could just write a big switch statement or create a hashtable/dictionary that maps command text to an Action that will invoke the proper method, but why re-invent the wheel? CLAP For Your Own Presentation I’ve blogged about the CLAP library before, and it turns out that it’s a great fit for satisfying criteria #3 from my list above. CLAP lets you decorate methods in a class with an attribute and then easily invoke those methods from within a console application. CLAP was designed to take the arguments passed into the console app from the command line and parse them to determine which method to run and what arguments to pass to that method, but there’s no reason you can’t re-purpose it to accept command input from within the infinite loop defined above and invoke the corresponding method. Here’s how you might define a couple of different methods to contain two different code samples that you want to run during your presentation: 1: public static class CodeSamples 2: { 3: [Verb(Aliases="one")] 4: public static void SampleOne() 5: { 6: Console.WriteLine("This is sample 1"); 7: } 8:   9: [Verb(Aliases="two")] 10: public static void SampleTwo() 11: { 12: Console.WriteLine("This is sample 2"); 13: } 14: } A couple of things to note about the sample above: I’m using static methods. You don’t actually need to use static methods with CLAP, but the syntax ends up being a bit simpler and static methods happen to lend themselves well to the “one self-contained method per code sample” approach that I want to use. The methods are decorated with a ‘Verb’ attribute. This tells CLAP that they are eligible targets for commands. The “Aliases” argument lets me give them short and easy-to-remember aliases that can be used to invoke them. By default, CLAP just uses the full method name as the command name, but with aliases you can simply the usage a bit. I’m not using any parameters. CLAP’s main feature is its ability to parse out arguments from a command line invocation of a console application and automatically pass them in as parameters to the target methods. My code samples don’t need parameters ,and honestly having them would complicate giving the presentation, so this is a good thing. 
You could use this same approach to invoke methods with parameters, but you’d have a couple of things to figure out. When you invoke a .NET application from the command line, Windows will parse the arguments and pass them in as a string array (called ‘args’ in the boilerplate console project Program.cs). The parsing that gets done here is smart enough to deal with things like treating strings in double quotes as one argument, and you’d have to re-create that within your infinite loop if you wanted to use parameters. I plan on either submitting a pull request to CLAP to add this capability or maybe just making a small utility class/extension method to do it and posting that here in the future. So I now have a simple class with static methods to contain my code samples, and an infinite loop in my ‘main’ method that can accept text commands. Wiring this all up together is pretty easy: 1: static void Main(string[] args) 2: { 3: do 4: { 5: try 6: { 7: Console.Write("Enter command or 'exit' to quit: > "); 8: var command = Console.ReadLine(); 9: if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase)) 10: { 11: Console.WriteLine("Quitting."); 12: break; 13: } 14:   15: Parser.Run<CodeSamples>(new[] { command }); 16: Console.WriteLine("---------------------------------------------------------"); 17: } 18: catch (Exception ex) 19: { 20: Console.Error.WriteLine("Error: " + ex.Message); 21: } 22:   23: } while (true); 24: } Note that I’m now passing the ‘CodeSamples’ class into the CLAP ‘Parser.Run’ as a type argument. This tells CLAP to inspect that class for methods that might be able to handle the commands passed in. I’m also throwing in a little “----“ style line separator and some basic error handling (because I happen to know that some of the samples are going to throw exceptions for demonstration purposes) and I’m good to go. Now during my presentation I can just have the console application running the whole time with the debugger attached and just type in the alias of the code sample method that I want to run when I want to run it.

    Read the article

  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ). If you are like me, you never read the BOL notes, which say clearly that it supports dates beginning with March 1, 1900. In fact, I wrongly assumed that it supported any date that can be represented in a Date data type in Data Models, that is, all dates beginning with January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which seems not to include that date, but this is a detail…). But we should not digress. The real issue is that if you try to call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result.

        evaluate row ( "x", DATE( 1900, 1, 1 ) )
        -- returns WRONG result -- [x] 12/31/1899 12:00:00 AM

        evaluate row ( "x", DATE( 1901, 2, 29 ) )
        -- returns WRONG result -- [x] 2/28/1900 12:00:00 AM

        evaluate row ( "x", DATE( 1900, 3, 1 ) )
        -- returns CORRECT result -- [x] 3/1/1900 12:00:00 AM

    As usual, this is not a bug. It is “by design”. The DATE function works this way in Excel, and in Excel, too, it was “by design”. In this case the design is having the same bug as Lotus 1-2-3, which handled 1900 as a leap year even though it isn't. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I tried to open a bug in Connect. Please vote for it. I would like Microsoft to change this type of item from “by design” (as we can expect) to “by genetic disease”. Or to “by historical respect”, in order to be more politically correct.

    Read the article

  • C#/.NET Little Wonders: The Predicate, Comparison, and Converter Generic Delegates

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the last three weeks, we examined the Action family of delegates (and delegates in general), the Func family of delegates, and the EventHandler family of delegates and how they can be used to support generic, reusable algorithms and classes. This week I will be completing my series on the generic delegates in the .NET Framework with a discussion of three more, somewhat less used, generic delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>. These are older generic delegates that were introduced in .NET 2.0, mostly for use in the Array and List<T> classes.  Though older, it’s good to have an understanding of them and their intended purpose.  In addition, you can feel free to use them yourself, though obviously you can also use the equivalents from the Func family of delegates instead. Predicate<T> – delegate for determining matches The Predicate<T> delegate was a very early delegate developed in the .NET 2.0 Framework to determine if an item was a match for some condition in a List<T> or T[].  The methods that tend to use the Predicate<T> include: Find(), FindAll(), FindLast() Uses the Predicate<T> delegate to finds items, in a list/array of type T, that matches the given predicate. FindIndex(), FindLastIndex() Uses the Predicate<T> delegate to find the index of an item, of in a list/array of type T, that matches the given predicate. The signature of the Predicate<T> delegate (ignoring variance for the moment) is: 1: public delegate bool Predicate<T>(T obj); So, this is a delegate type that supports any method taking an item of type T and returning bool.  In addition, there is a semantic understanding that this predicate is supposed to be examining the item supplied to see if it matches a given criteria. 1: // finds first even number (2) 2: var firstEven = Array.Find(numbers, n => (n % 2) == 0); 3:  4: // finds all odd numbers (1, 3, 5, 7, 9) 5: var allEvens = Array.FindAll(numbers, n => (n % 2) == 1); 6:  7: // find index of first multiple of 5 (4) 8: var firstFiveMultiplePos = Array.FindIndex(numbers, n => (n % 5) == 0); This delegate has typically been succeeded in LINQ by the more general Func family, so that Predicate<T> and Func<T, bool> are logically identical.  Strictly speaking, though, they are different types, so a delegate reference of type Predicate<T> cannot be directly assigned to a delegate reference of type Func<T, bool>, though the same method can be assigned to both. 1: // SUCCESS: the same lambda can be assigned to either 2: Predicate<DateTime> isSameDayPred = dt => dt.Date == DateTime.Today; 3: Func<DateTime, bool> isSameDayFunc = dt => dt.Date == DateTime.Today; 4:  5: // ERROR: once they are assigned to a delegate type, they are strongly 6: // typed and cannot be directly assigned to other delegate types. 7: isSameDayPred = isSameDayFunc; When you assign a method to a delegate, all that is required is that the signature matches.  This is why the same method can be assigned to either delegate type since their signatures are the same.  However, once the method has been assigned to a delegate type, it is now a strongly-typed reference to that delegate type, and it cannot be assigned to a different delegate type (beyond the bounds of variance depending on Framework version, of course). 
Comparison<T> – delegate for determining order Just as the Predicate<T> generic delegate was birthed to give Array and List<T> the ability to perform type-safe matching, the Comparison<T> was birthed to give them the ability to perform type-safe ordering. The Comparison<T> is used in Array and List<T> for: Sort() A form of the Sort() method that takes a comparison delegate; this is an alternate way to custom sort a list/array from having to define custom IComparer<T> classes. The signature for the Comparison<T> delegate looks like (without variance): 1: public delegate int Comparison<T>(T lhs, T rhs); The goal of this delegate is to compare the left-hand-side to the right-hand-side and return a negative number if the lhs < rhs, zero if they are equal, and a positive number if the lhs > rhs.  Generally speaking, null is considered to be the smallest value of any reference type, so null should always be less than non-null, and two null values should be considered equal. In most sort/ordering methods, you must specify an IComparer<T> if you want to do custom sorting/ordering.  The Array and List<T> types, however, also allow for an alternative Comparison<T> delegate to be used instead, essentially, this lets you perform the custom sort without having to have the custom IComparer<T> class defined. It should be noted, however, that the LINQ OrderBy(), and ThenBy() family of methods do not support the Comparison<T> delegate (though one could easily add their own extension methods to create one, or create an IComparer() factory class that generates one from a Comparison<T>). So, given this delegate, we could use it to perform easy sorts on an Array or List<T> based on custom fields.  Say for example we have a data class called Employee with some basic employee information: 1: public sealed class Employee 2: { 3: public string Name { get; set; } 4: public int Id { get; set; } 5: public double Salary { get; set; } 6: } And say we had a List<Employee> that contained data, such as: 1: var employees = new List<Employee> 2: { 3: new Employee { Name = "John Smith", Id = 2, Salary = 37000.0 }, 4: new Employee { Name = "Jane Doe", Id = 1, Salary = 57000.0 }, 5: new Employee { Name = "John Doe", Id = 5, Salary = 60000.0 }, 6: new Employee { Name = "Jane Smith", Id = 3, Salary = 59000.0 } 7: }; Now, using the Comparison<T> delegate form of Sort() on the List<Employee>, we can sort our list many ways: 1: // sort based on employee ID 2: employees.Sort((lhs, rhs) => Comparer<int>.Default.Compare(lhs.Id, rhs.Id)); 3:  4: // sort based on employee name 5: employees.Sort((lhs, rhs) => string.Compare(lhs.Name, rhs.Name)); 6:  7: // sort based on salary, descending (note switched lhs/rhs order for descending) 8: employees.Sort((lhs, rhs) => Comparer<double>.Default.Compare(rhs.Salary, lhs.Salary)); So again, you could use this older delegate, which has a lot of logical meaning to it’s name, or use a generic delegate such as Func<T, T, int> to implement the same sort of behavior.  All this said, one of the reasons, in my opinion, that Comparison<T> isn’t used too often is that it tends to need complex lambdas, and the LINQ ability to order based on projections is much easier to use, though the Array and List<T> sorts tend to be more efficient if you want to perform in-place ordering. 
Converter<TInput, TOutput> – delegate to convert elements The Converter<TInput, TOutput> delegate is used by the Array and List<T> delegate to specify how to convert elements from an array/list of one type (TInput) to another type (TOutput).  It is used in an array/list for: ConvertAll() Converts all elements from a List<TInput> / TInput[] to a new List<TOutput> / TOutput[]. The delegate signature for Converter<TInput, TOutput> is very straightforward (ignoring variance): 1: public delegate TOutput Converter<TInput, TOutput>(TInput input); So, this delegate’s job is to taken an input item (of type TInput) and convert it to a return result (of type TOutput).  Again, this is logically equivalent to a newer Func delegate with a signature of Func<TInput, TOutput>.  In fact, the latter is how the LINQ conversion methods are defined. So, we could use the ConvertAll() syntax to convert a List<T> or T[] to different types, such as: 1: // get a list of just employee IDs 2: var empIds = employees.ConvertAll(emp => emp.Id); 3:  4: // get a list of all emp salaries, as int instead of double: 5: var empSalaries = employees.ConvertAll(emp => (int)emp.Salary); Note that the expressions above are logically equivalent to using LINQ’s Select() method, which gives you a lot more power: 1: // get a list of just employee IDs 2: var empIds = employees.Select(emp => emp.Id).ToList(); 3:  4: // get a list of all emp salaries, as int instead of double: 5: var empSalaries = employees.Select(emp => (int)emp.Salary).ToList(); The only difference with using LINQ is that many of the methods (including Select()) are deferred execution, which means that often times they will not perform the conversion for an item until it is requested.  This has both pros and cons in that you gain the benefit of not performing work until it is actually needed, but on the flip side if you want the results now, there is overhead in the behind-the-scenes work that support deferred execution (it’s supported by the yield return / yield break keywords in C# which define iterators that maintain current state information). In general, the new LINQ syntax is preferred, but the older Array and List<T> ConvertAll() methods are still around, as is the Converter<TInput, TOutput> delegate. Sidebar: Variance support update in .NET 4.0 Just like our descriptions of Func and Action, these three early generic delegates also support more variance in assignment as of .NET 4.0.  Their new signatures are: 1: // comparison is contravariant on type being compared 2: public delegate int Comparison<in T>(T lhs, T rhs); 3:  4: // converter is contravariant on input and covariant on output 5: public delegate TOutput Contravariant<in TInput, out TOutput>(TInput input); 6:  7: // predicate is contravariant on input 8: public delegate bool Predicate<in T>(T obj); Thus these delegates can now be assigned to delegates allowing for contravariance (going to a more derived type) or covariance (going to a less derived type) based on whether the parameters are input or output, respectively. Summary Today, we wrapped up our generic delegates discussion by looking at three lesser-used delegates: Predicate<T>, Comparison<T>, and Converter<TInput, TOutput>.  All three of these tend to be replaced by their more generic Func equivalents in LINQ, but that doesn’t mean you shouldn’t understand what they do or can’t use them for your own code, as they do contain semantic meanings in their names that sometimes get lost in the more generic Func name.   

    Read the article

  • DevDays ‘00 The Netherlands day #2

    - by erwin21
    Day 2 of DevDays 2010 and again five interesting sessions at the World Forum in The Hague. The first session of the day, in the big World Forum theater, was from Scott Hanselman, who gave a lap around .NET 4.0. In his usual style of presenting he talked about all kinds of new features of .NET 4.0, like MEF, threading, parallel processing, changes and additions to the CLR and DLR, WPF, and all the new language features of .NET 4.0. After a short break it was time for session 2 from Scott Allen about Tips, Tricks and Optimizations of LINQ. He talked about lazy and deferred execution, the difference between IQueryable and IEnumerable, and the two flavors of LINQ syntax. The lunch was again well prepared and delicious, but after that it was time for session 3, Web Vulnerabilities and Exploits, from Alex Thissen. This was not a normal session but more like a workshop: we decided which subjects to discuss, and they turned out to be OWASP, XSS and other injections, validation, and encoding. He gave some handy tips and tricks on how to prevent such attacks. Session 4 was about the new features of C# 4.0, from Alex van Beek. He talked about optional and named parameters, generic co- and contravariance, the dynamic keyword, and COM interop features. He showed how to use them but also when not to use them. The last session of the day, and also the last session of DevDays 2010, was about WCF Best Practices from Gerben van Loon. He talked about seven best practices that you must know when you are going to use WCF. With some quick demos he showed the problem and the solution for some common issues. They were two interesting days and next year I will surely be attending again.

    Read the article

  • Assembly load and execute issue

    - by Jean Carlos Suárez Marranzini
    I'm trying to develop assembly code allowing me to load and execute (by user input) two other assembly .EXE programs. I'm having two problems:

    - I don't seem to be able to assign the pathname to a valid register (or maybe the syntax is incorrect)
    - I need to be able to execute the other program after the first one (could be either) has started its execution.

    This is what I have so far:

        mov ax,cs            ; moving code segment to data segment
        mov ds,ax
        mov ah,1h            ; here I read from keyboard
        int 21h
        mov dl,al
        cmp al,'1'           ; if 1 jump to LOADRUN1
        JE LOADRUN1
        popf
        cmp al,'2'           ; if 2 jump to LOADRUN2
        JE LOADRUN2
        popf
        LOADRUN1:
        MOV AH,4BH
        MOV AL,00
        LEA DX,[PROGNAME1]   ; Not sure if it works
        INT 21H
        LOADRUN2:
        MOV AH,4BH
        MOV AL,00
        LEA DX,[PROGNAME2]   ; Not sure if it works
        INT 21H
        ; Here I define the bytes containing the pathnames
        PROGNAME1 db 'C:\Users\Usuario\NASM\Adding.exe',0
        PROGNAME2 db 'C:\Users\Usuario\NASM\Substracting.exe',0

    I just don't know how to start another program by input in the 'parent' program after one is already executing. Thanks in advance for your help! I'll be more than happy to provide any additional information. I'm using NASM 16-bit on Windows 7 32-bit.

    Read the article

  • 11gr2 DataGuard: Restarting DUPLICATE After a Failure

    - by rene.kundersma
    One of the great new features that comes in very handy as databases get larger and larger these days is RMAN's capability to duplicate from an active database and even restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate from active database feature and had to wait for an hour or six before all datafiles were transferred. At the end of the process an error occurred because of the syntax. While this error was easy to solve, I was afraid I would have to redo the complete procedure and transfer the 2.5 TB again. Well, 11gR2 RMAN surprised me when I re-ran my command with the following output:

        Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
        Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823

    Above I only show a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this: RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command. Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. By the way, be very careful with the duplicate command: a small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting! Rene Kundersma, Technical Architect, Oracle Technology Services
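    For context, a generic sketch of the kind of command being restarted here (this is not the command from the post; the database name and file name conversions are placeholders):

        RMAN> DUPLICATE TARGET DATABASE TO fin2prod
              FROM ACTIVE DATABASE
              DB_FILE_NAME_CONVERT ('+DATA/fin1prod/', '+DATA/fin2prod/');

    Re-running the very same DUPLICATE command after a failure is what triggers the restart optimization described above.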

    Read the article

  • Web Essentials extension now available for Visual Studio Express

    - by ihaynes
    Originally posted on: http://geekswithblogs.net/ihaynes/archive/2014/08/12/web-essentials-extension-now-available-for-visual-studio-express.aspx

    Visual Studio Express has always had one big issue for me: the inability to install all but a very few of the normal Visual Studio extensions. One of the best extensions for front-end development and design is the Web Essentials extension by Mads Kristensen on the VS team. It was a hugely welcome surprise to find that this extension is now available, in full, for VS Express, and had been since May, darn it! It has a huge number of useful features, not least the ability to minify HTML, CSS and JavaScript files, which is almost a prerequisite for responsive sites. It can also bundle them together. There are lots of other features for HTML editing; IntelliSense and syntax highlighting for robots.txt files; 'surround with tag'; IntelliSense for meta tags (including iOS, Twitter and Facebook); generating vendor-specific CSS; and so on. The full details can be found at the dedicated site: http://vswebessentials.com. This extension alone makes VS Express far more useful and worth having.

    Read the article

  • New Regular Expression Features in Java 8

    - by Jan Goyvaerts
    Java 8 brings a few changes to Java's regular expression syntax to make it more consistent with Perl 5.14 and later in matching horizontal and vertical whitespace. \h is a new feature. It is a shorthand character class that matches any horizontal whitespace character as defined in the Unicode standard. In Java 4 to 7, \v is a character escape that matches only the vertical tab character. In Java 8, \v is a shorthand character class that matches any vertical whitespace, including the vertical tab. When upgrading to Java 8, make sure that any regexes that use \v still do what you want. Use \x0B or \cK to match just the vertical tab in any version of Java. \R is also a new feature. It matches any line break as defined by the Unicode standard. Windows-style CRLF pairs are always matched as a whole, so \R matches \r\n while \R\R fails to match \r\n. \R is equivalent to (?>\r\n|[\n\cK\f\r\u0085\u2028\u2029]), with an atomic group that prevents it from matching only the CR in a CRLF pair. Oracle's documentation for the Pattern class omits the atomic group when explaining \R, which is incorrect. You cannot use \R inside a character class. RegexBuddy and RegexMagic have been updated to support Java 8. Java 4, 5, 6, and 7 are still supported. When you upgrade to Java 8 you can compare or convert your regular expressions between Java 8 and the Java version you were using previously.
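    A small Java sketch of the upgrade-sensitive behavior described above (the expected outputs in the comments follow the Java 8 semantics the post describes):

        import java.util.regex.Pattern;

        public class Java8WhitespaceRegex {
            public static void main(String[] args) {
                // \R consumes a CRLF pair as a single line break, so one \R matches "\r\n"...
                System.out.println(Pattern.compile("a\\Rb").matcher("a\r\nb").matches());    // true
                // ...while \R\R cannot split the pair into a separate CR and LF.
                System.out.println(Pattern.compile("a\\R\\Rb").matcher("a\r\nb").matches()); // false

                // In Java 8, \v matches any vertical whitespace, not just the vertical tab.
                System.out.println(Pattern.compile("\\v").matcher("\n").find());             // true on Java 8
                // \x0B matches only the vertical tab character in any Java version.
                System.out.println(Pattern.compile("\\x0B").matcher("\u000B").find());       // true
            }
        }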

    Read the article

  • CodePlex Daily Summary for Monday, October 01, 2012

    CodePlex Daily Summary for Monday, October 01, 2012Popular ReleasesD3 Loot Tracker: 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.2.1001.1): Visual Ribbon Editor 1.2.1001.1 What's New: Ability to hide currently selected tab Password in the connection file is now being encrypted Esc and Enter keyboard keys are now supported in all dialogs Extended width of Prefix and Id fields in New Button and New Group dialogs Minor UI enchancements for Hide Button button Minor UI enchancements for View XML button and dialog File/assembly versioning for executable file Notes: Existing plain-text password will be automatically encrypt...Untangler: Untangler: First version of Untangler application available now.SubExtractor: Release 1029: Feature: Added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options) Feature: Allow switch to Word Spacing dialog directly from Spell Check dialog Fix: Added more default word spacings for accented characters Fix: Changed Word Spacing dialog to show all OCR'd characters in current sub Fix: Removed application focus grab during OCR Fix: Tightened HD subs fuzzy logic to reduce false matches in small characters Fix: Improved Arrow k...MCEBuddy 2.x: MCEBuddy 2.2.18: Reccomended download Changelog for 2.2.18 (32bit and 64bit) 1. Added support for checking if Showanalyzer has hung and cancelling it 2. New version of comskip, 0.81.48 3. Speeding up comskip 4. Fixed a build bug in 64bit 2.2.17 5. Added a new comkip.ini, better commercial detection for international channels and less aggressive. Old one has been retained as comskip_old.ini 6. Added support for Audio Offset on Conversion Task page in GUI (this overrides the profiles AudioDelay when specified)Mugen Injection: Mugen Injection 3.0: Added a generic version of the fluent syntax. Big changes in fluent-syntax. Added support to resolve a parameters: - Factory: AnyCustomFunc{T}, AnyCustomFunc{IEnumerable<IInjectionParameters>,T}, AnyCustomFunc{IDictionary<string, object>,T}, AnyCustomFunc{IEnumerable<IInjectionParameters>,IDictionary<string, object>,T} - Lazy: AnyCustomLazy<T> - Binding metadata: ISetting Change binding builders, now you can configure them through the string settings. Increase performance. F...Aggravation: Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Experssion Blend 4 installed.Tube++: Tube++ 4.0.0.43: Fix parser for downloading and add new default slate gray styleReadable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. 
Changes Built against KeePass 2.20Windows 8 Toolkit - Charts and More: Beta 1.0: The First Compiled Version of my LibraryCatchThatException: Release 1.0: This is the first relase for CatchThatException library at 29-9-2012CardPlay: a Solitaire Framework for .Net: 0.3.0: Added 5 games: Blockade, Chessboard, Colorado, Sly Fox, TwentyUltraFluid Modeling Suite: Beta 1: This release is experimental but contains a lot of features like: - xml serialization - multiple selection - clipboard copy/cut/paste on context menu There is the debug and the release (recommanded default) versions.PDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 ????,????Web??????。 PDF.NET Ver4.5 Open Source Code,include a sample Web application project.Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1 Contains the following improvements: Better support for detecting the installed languages The extract & inject commands won’t run if Visual Studio is running You may now run in extract or inject mode The p/invoke code was cleaned up based on Code Analysis recommendations When a p/invoke method fails the Win32 error message is now displayed Error messages use red text Status messages use green textZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2393 of the java version improved api better Unity support Windows RT binaries Windows CE binaries new Windows Service demo new WPF demo WindowsCE Hotfix: Fixes an error with ISO8859-1 encoding and scannning of QR-Codes. The hotfix is only needed for the WindowsCE platform.C.B.R. : Comic Book Reader: CBR 0.7: Synthesis since 0.6 : ePUB : Complete refactoring Add a new dedicated feed viewer for opds stream PDF conversion : improved with image merge Make all backstage panel scrollable Integrate the new AvalonDock 2 library. Support multi-document. Library explorer and Table of content are now toolboxes Designer for dynamic books is now mvvm and much better New BrowserForControl Customized xps viewer to suppress toolbars and bind it to cbr commands Add quick start manual and button ...sosoft: sosoft source: sosoft source include alarm clockRawr: Rawr 5.0.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. 
We also add a project management module.New ProjectsAmazonGlacierGUI - GUI Client for Amazon Glacier | WPF: a GUI client for amazon glacier , Written in .NET 4, C#,WPF,MVVM & RavenDB for storageAthene: UML editor.Basic Helpdesk Application: Develop a simple application to be used in a network environment to aid support staff while working remotely with users to resolve problems in their local PC.CAPTCHA Solver: CSolver, is a CAPTCHA Solver for Simple CAPTCHAs.ComicExt: ComicExt Works with CBZ files decompress them into a folder of the same name and convert them into zip files and vice versa from a single folder.Copy Your Table Storage: Tool to permit copy table storage dotCMS: it is a content management system base on BlogEngine.NET and Ext.NETihongma??: ?????????????????????????,????????????????、??????,?????????????????????????????,??????????????????????。 ??????????????,??????;???????????????,???????Liam F: Basic mathematical functions implemented using AVX/AVX2MVC Solution: MVC Solution, including many mvc best practicesopengltest: opengl testsample project: Testing...Sériethèque: A small tvshow organizer with viewed option, link to addict7ed.com, rename file etc ...SHOIC (SUPER HIGH ORBIT ION CANION): SHOIC (Super Hight Orbit Ion Canion) es una aplicación derivada de LOIC (Low Orbit Ion Canion)SignalRServerTry: signal r trySilverlight Datagrid Plus: Silverlight Datagrid plus adds basic features to the Silverlight toolkit datagrid such as grouping, filtering, new record and formatting of the datagrid.Sitecore Modules: This is a home for all the Sitecore Shared Source modules distributed by 5 Limes. Over time we expect this solution to contain several projects.Spiritual Chanting Utility: A quick media loop utilitySQL Web Studio Pad: A Simple Web Based SQL Management IDE for managing Database objectsStockmarket Simulation: Stockmarket Simulationtest_proj: ok testTISSEAN.NET: This project is based on: TISEAN 3.0.1 Nonlinear Time Series Analysis Rainer Hegger,Holger Kantz, Thomas Schreiber With major contributions by Eckehard OlbrichUntangler: Class hierarchy viewer for .Net assemblyValarmathi Hospital: Patient management.Windows 8 App For SharePoint Online: Windows 8 app to SharePoint 2013 OnlineWPF Anti-DataBinding Anti-DataTemplate Extensions: WPF Anti-DataBinding Anti-DataTemplate Extensions offers a pure C# code-behind (no XAML) approach to working with WPF controls, starting with the WPF ListBox.Yandex Positions Parser: ?????? ??????? ????? ? ?????? ??????? ?? ???????? ???????? ??????.YunCMS: A simple CMS for build protal site.

    Read the article

  • Is there any functional difference between immutable value types and immutable reference types?

    - by Kendall Frey
    Value types are types which do not have an identity. When one variable is modified, other instances are not. Using Javascript syntax as an example, here is how a value type works.
    var foo = { a: 42 }; var bar = foo; bar.a = 0; // foo.a is still 42
    Reference types are types which do have an identity. When one variable is modified, other instances are as well. Here is how a reference type works.
    var foo = { a: 42 }; var bar = foo; bar.a = 0; // foo.a is now 0
    Note how the example uses mutable objects to show the difference. If the objects were immutable, you couldn't do that, so that kind of testing for value/reference types doesn't work. Is there any functional difference between immutable value types and immutable reference types? Is there any algorithm that can tell the difference between a reference type and a value type if they are immutable? Reflection is cheating. I'm wondering this mostly out of curiosity.
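    As a rough sketch of what the question is probing (my own illustration, not part of the original question): with JavaScript's Object.freeze the mutation test above is no longer available, and the only observable difference left between two objects is identity comparison.
    var frozenFoo = Object.freeze({ a: 42 });
    var frozenBar = frozenFoo;
    frozenBar.a = 0; // silently ignored (throws in strict mode), so the mutation test tells us nothing
    console.log(frozenFoo === frozenBar); // true, same identity
    var frozenBaz = Object.freeze({ a: 42 });
    console.log(frozenFoo === frozenBaz); // false, equal value but different identity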

    Read the article

  • Is there an alternative to Google Code Search?

    - by blunders
    Per the Official Google Blog: Code Search, which was designed to help people search for open source code all over the web, will be shut down along with the Code Search API on January 15, 2012. Google Code Search is now gone, and since that makes it much harder to understand the features it presented, here's my attempt to render them via information I gathered from a cache of the page for the Search Options. The "In Search Box" note gives the syntax for typing the command directly in the main search box instead of using the advanced search interface.
    Package (In Search Box: "package:linux-2.6")
    Language (In Search Box: "lang:c++") (OPTIONS: any language, actionscript, ada, applescript, asp, assembly, autoconf, automake, awk, basic, bat, c, c#, c++, caja, cobol, coldfusion, configure, css, d, eiffel, erlang, fortran, go, haskell, inform, java, java, javascript, jsp, lex, limbo, lisp, lolcode, lua, m4, makefile, maple, mathematica, matlab, messagecatalog, modula2, modula3, objectivec, ocaml, pascal, perl, php, pod, prolog, proto, python, python, r, rebol, ruby, sas, scheme, scilab, sgml, shell, smalltalk, sml, sql, svg, tcl, tex, texinfo, troff, verilog, vhdl, vim, xslt, xul, yacc)
    File (In Search Box: "file:^.*.java$")
    Class (In Search Box: "class:HashMap")
    Function (In Search Box: "function:toString")
    License (In Search Box: "license:mozilla") (OPTIONS: null/any-license, aladdin/Aladdin-Public-License, artistic/Artistic-License, apache/Apache-License, apple/Apple-Public-Source-License, bsd/BSD-License, cpl/Common-Public-License, epl/Eclipse-Public-License, agpl/GNU-Affero-General-Public-License, gpl/GNU-General-Public-License, lgpl/GNU-Lesser-General-Public-License, disclaimer/Historical-Permission-Notice-and-Disclaimer, ibm/IBM-Public-License, lucent/Lucent-Public-License, mit/MIT-License, mozilla/Mozilla-Public-License, nasa/NASA-Open-Source-Agreement, python/Python-Software-Foundation-License, qpl/Q-Public-License, sleepycat/Sleepycat-License, zope/Zope-Public-License)
    Case Sensitive (In Search Box: "case:no") (OPTIONS: yes, no)
    Also of use in understanding the search tool would be the still-live FAQs page for Google Code Search. Is there any code search engine that would fully replace Google Code Search's features?

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx   For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions. Also it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):
    vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>
    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):
    C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>
    And from within a PowerShell script:
    & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>
    The command will consume a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio. If not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx

    Read the article

  • Regular-Expressions.info Thoroughly Updated

    - by Jan Goyvaerts
    RegexBuddy 4 was released earlier this month. This is a major upgrade that significantly improves RegexBuddy's ability to emulate the features and deficiencies of the latest versions of all the popular regex flavors as well as many past versions of these flavors. Along with that, the Regular-Expressions.info website has been thoroughly updated with new content. Both the tutorial and reference sections have been significantly expanded to cover all the features of the latest regular expression flavors. There are also new tutorial and reference subsections that explain the syntax used by replacement strings when searching and replacing with regular expressions. I'm also reviving this blog. In the coming weeks you can expect blog posts that highlight the new topics on the Regular-Expressions.info website. Later on I'll blog about more intricate regex-related issues that RegexBuddy 4 emulates but that the website doesn't talk about or only mentions in passing. RegexBuddy 4.0.0 is aware of 574 different aspects (syntactic and behavioral differences) of 94 regular expression flavors. These numbers are sure to grow with future 4.x.x releases. While RegexBuddy juggles it all with ease, that's far too much detail to cover in a tutorial or reference that any person would want to read. So the tutorial and reference cover the important features and behaviors, while the blog will serve the corner cases as tidbits. Subscribe to the Regex Guru RSS Feed if you don't want to miss any articles.

    Read the article

  • How much Ruby should I learn before moving to Rails?

    - by Kevin
    Just a quick question... I can never get a definitive answer when googling this, either. Some people say you can learn Rails without knowing any Ruby, but at some point you'll run into a brick wall and wish you knew Ruby and will have to go back to learn it... and some say to learn the "basics" of Ruby before learning Rails and it will make your life that much easier. My current knowledge is low. I'm not a beginner, but I'm not a pro, either. I went through the Learn Python The Hard Way online book in about a month, but I stopped once I got to the OOP side of Python (I know booleans, if/elif/else statements, for loops, while loops, functions). I agree with learning the "basics" of Ruby before learning Rails, but what exactly are the "basics" of Ruby? Would I need to learn the whole OOP side of Ruby before I went on to Rails? Or would I just need to learn the Ruby syntax up to where I learned Python (booleans, if/elif/else statements, for loops, while loops, functions) before I went on to Rails? Thanks!

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. WAWS would then trigger a new deployment to host my Node.js application. Someone may notice that an NPM package may contain many files and can be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.
    Let's Start with a Demo
    A demo is the most straightforward way to show this. Let's create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then I created a new Node.js file named "server.js" and pasted in the code below.
    var express = require("express");
    var app = express();
    
    app.get("/", function(req, res) {
        res.send("Hello Node.js and Express.");
    });
    
    console.log("Web application opened.");
    app.listen(process.env.PORT);
    If we switch to Git for Windows right now we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files would be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the package on the cloud. First we need to add a special file named ".gitignore". It seems this cannot be done directly from the file explorer since the file name consists only of an extension, so we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore". If the command window asks for input, just press Enter.
    echo > .gitignore
    Now open this file, copy in the content below and save.
    node_modules
    Now if we switch to Git for Windows we will find that the packages under "node_modules" are no longer in the change list. So if we commit and push, the "express" packages will not be uploaded to Windows Azure. Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into that file and save.
    {
        "name": "npmdemo",
        "version": "1.0.0",
        "dependencies": {
            "express": "*"
        }
    }
    Now back in Git for Windows, commit our changes and push them to WAWS. Then let's open the WAWS in the developer portal; we will see that a new deployment has finished. Clicking the arrow on the right side of this deployment we can see how WAWS handled it. In particular, we can see that WAWS executed NPM. And if we open the log we can review which command WAWS executed to install the packages, along with the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole package to Azure.
    Open this website and we can see the result, which proves that "express" was installed successfully.
    What's Happened Under the Hood
    Now let's explain a bit about what ".gitignore" and "package.json" mean. The ".gitignore" file is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" will be skipped by git push. In the example below I copied "node_modules" into this file in my local repository. This means: do not track and upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package. The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author and other information in it in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. And when a deployment happens, WAWS will look into this file, find the dependent packages, and execute the NPM command to install them one by one. So in the demo above I copied "express" into this file so that WAWS would install it for me automatically. I updated the dependencies section of the "package.json" file manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when we are going to install a package we can specify the "--save" parameter on the "npm install" command, so that NPM will update the dependencies part for us. For example, when I wanted to install the "azure" package I would execute the command below. Note that I added "--save" to the command.
    npm install azure --save
    Once it finishes, my "package.json" will be updated automatically. Each dependent package will be listed there. The JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about "package.json" please refer here.
    version - Must match the version exactly. Example: "azure": "0.6.7"
    >=version - Must be equal to or greater than the version. Example: "azure": ">0.6.0"
    1.2.x - The version number must start with the supplied digits, but any digit may be used in place of the x. Example: "azure": "0.6.x"
    ~version - The version must be at least as high as the range, and it must be less than the next major revision above the range. Example: "azure": "~0.6.7"
    * - Matches any version. Example: "azure": "*"
    And WAWS will install the proper version of each package based on what you define here. The process of WAWS git deployment and NPM installation would be like this.
    But Some Packages…
    As we know, when we specify the dependencies in "package.json", WAWS will download and install them on the cloud side. For most packages this works very well, but some special packages may not work. That is, if the package installation requires a special environment, it might fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail since "node-gyp", Python and C++ 2010 are not present in the WAWS virtual machine. For example, the "server.js" file:
    var express = require("express");
    var app = express();
    
    app.get("/", function(req, res) {
        res.send("Hello Node.js and Express.");
    });
    
    var sql = require("msnodesql");
    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
    app.get("/sql", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });
    
    console.log("Web application opened.");
    app.listen(process.env.PORT);
    The "package.json" file:
    {
        "name": "npmdemo",
        "version": "1.0.0",
        "dependencies": {
            "express": "*",
            "msnodesql": "*"
        }
    }
    And it failed to deploy to WAWS. From the NPM log we can see this is because "msnodesql" cannot be installed on WAWS. The solution is to ignore all packages in the ".gitignore" file except "msnodesql", and upload that package ourselves. This can be done with the content below. We first un-ignore the "node_modules" folder. Then we ignore all of its sub-folders, while still letting git check each sub-folder. Finally we un-ignore the sub-folder named "msnodesql", which is the SQL Server Node.js Driver.
    !node_modules/
    
    node_modules/*
    !node_modules/msnodesql
    For more information about the syntax of ".gitignore" please refer to this thread. Now if we go to Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the dependency on "msnodesql" from "package.json". Commit and push to WAWS. Now we can see the deployment completed successfully, and we can use the Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.
    Summary
    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can exclude the dependent packages from our Node.js repository and let Windows Azure Web Site download and install them during deployment. For some special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put them into the publish payload ourselves. With the combination of Windows Azure Web Site, Node.js and NPM it becomes even easier and quicker to develop and deploy our Node.js applications to the cloud.
    Hope this helps, Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
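    As a small follow-up sketch of my own (not part of the original post): once such a deployment has finished, you can confirm from the running application which version of a dependency WAWS actually installed, since NPM places each package's own package.json under node_modules.
    // Hypothetical check, assuming the app from this post with "express" declared as a dependency.
    var installedExpress = require("express/package.json");
    console.log("express version installed by WAWS: " + installedExpress.version);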

    Read the article

< Previous Page | 136 137 138 139 140 141 142 143 144 145 146 147  | Next Page >