Search Results

Search found 3471 results on 139 pages for 'automated builds'.


  • Complete list of tools and technologies that make up a solid ASP.NET MVC 2 development environment for beginners

    - by Dr Dork
    This question is related to another wiki I found on SO, but I'd like to develop a more comprehensive example of an automated ASP.NET MVC 2 development environment that beginners can use to develop and deploy a wide range of small-scale websites. As far as characteristics of the dev environment go, I'd like to focus on beginner-friendly over powerful, since the other wiki focuses more on advanced, powerful setups. This information is targeted at beginners (who already know C# and understand web dev concepts) that have selected:

    - ASP.NET MVC 2 as their dev framework
    - Visual Studio 2010 Pro (or 2008 Pro SP1) as their IDE
    - Windows 7 as their OS

    and are looking for a quick and easy-to-set-up environment that covers managing, building, testing, tracking, and deploying their website with as much automation as possible; a system that can be used for becoming familiar with the whole process, as well as a launching point for exploring other, more custom and powerful systems. Since we've already selected the compiler, framework, and OS, I'd like to develop ideas for:

    - Code editor (unless you feel VS will suffice for all areas of code)
    - Database and related tools
    - Unit testing (VS?)
    - Continuous integration build system (VS?)
    - Project planning
    - Issue tracking
    - Deployment (VS?)
    - Source management (VS?)
    - ASP, C#, VS, and related blogs that beginners can follow
    - Any other categories I'm probably missing

    Since we're already using Visual Studio, I'd like to focus on the out-of-the-box solutions and features built into Visual Studio, unless you feel there are better solutions that work well with VS and are easier to use than the features built directly into it. Thanks so much in advance for your wisdom!

    Read the article

  • Any way to loop through FPDF code with proper XY coordinates?

    - by JM4
    At the end of a form collection, I provide the consumer a printable PDF with the information they just entered. I already run through a loop to store the variables themselves, but am wondering if it is at all possible to build a loop that builds on itself for FPDF. The catch is this: each new variable (#1, #2, #3) will change location by a determined amount of space. For example: I print Member #1's first name at coordinate (95, 101), I print Member #2's first name at coordinate (95, 110)... and so on. Each known variable will be 9.5mm greater than its previous entry (therefore Member #9 will be 28.5mm further down than Member #6). My sample code for the FPDF itself is:

        $pdf->SetFont('Arial','', 7);
        $pdf->SetXY(8,76.5);
        $pdf->Cell(20,0,$f1name);

        $pdf->SetFont('Arial','', 5);
        $pdf->SetXY(50.5,76.5);
        $pdf->Cell(20,0,$f1address);

        $pdf->SetFont('Arial','', 7);
        $pdf->SetXY(95.7,76.5);
        $pdf->Cell(20,0,$f1city);

        $pdf->SetXY(129.5,76.5);
        $pdf->Cell(20,0,$f1state);

        $pdf->SetXY(139.1,76.5);
        $pdf->Cell(20,0,$f1zip);

        $pdf->SetXY(151,76.5);
        $pdf->Cell(20,0,$f1dob);

        $pdf->SetXY(168,76.5);
        $pdf->Cell(20,0,$f1ssn);

        $pdf->SetXY(186,76.5);
        $pdf->Cell(20,0,$f1phone);

        $pdf->SetXY(55,81.1);
        $pdf->Cell(20,0,$f1email);

        $pdf->SetXY(129,81.1);
        $pdf->Cell(20,0,$f1fednum);

    Ideally, all Y values for $f2 would be 9.5mm greater than $f1's Y values.
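
    A sketch of one way to structure the loop, assuming the per-member values are gathered into an array (e.g. $members[0]['name'], $members[0]['address'], ...) rather than the numbered variables $f1name, $f2name, etc.; with the original variable names, PHP's variable variables (${"f{$n}name"}) would work the same way. The 9.5mm row offset and the column X positions are taken from the code above:

        $baseY     = 76.5; // Y coordinate of member #1's row
        $rowHeight = 9.5;  // each member sits 9.5mm below the previous one

        foreach ($members as $i => $m) {  // $i = 0 for member #1
            $y = $baseY + $i * $rowHeight;

            $pdf->SetFont('Arial', '', 7);
            $pdf->SetXY(8, $y);
            $pdf->Cell(20, 0, $m['name']);

            $pdf->SetFont('Arial', '', 5);
            $pdf->SetXY(50.5, $y);
            $pdf->Cell(20, 0, $m['address']);

            // ...the remaining first-row fields follow the same pattern,
            // each keeping its own X position...

            // fields on the block's second line keep their 4.6mm offset (81.1 - 76.5)
            $pdf->SetXY(55, $y + 4.6);
            $pdf->Cell(20, 0, $m['email']);
        }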

    Read the article

  • A layout for a Maven project with a patched dependency

    - by zamza
    Suppose I have an open-source project that depends on some library that must be patched in order to fix some issues. How do I do that? My ideas are:

    1. Have the library sources set up as a module, and keep them in my VCS. Pros: simple. Cons: third-party sources in my repo, might slow down the build process, hard to find a patched place (though that can be fixed in a README).
    2. Have a module, like in 1, but keep only the patched source files, compile them with the original library jar on the classpath, and somehow replace the *.class files in the library jar on build. Pros: builds faster, easy to find patched places. Cons: hard to configure, and that jar hackery is non-obvious (the library jar in the repository and in my project assembly would be different).
    3. Keep the patched *.class files in main/resources, and replace them on packaging like in 2. Pros: almost none. Cons: binaries in VCS, hard to recompile a patched class as patch compilation is not automated.

    One nice solution is to create a distinct project with the patched library sources, and deploy it to a local/enterprise repository with a -patched qualifier. But that would not fit an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?
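
    One way to reconcile idea 1 with the -patched approach, sketched here with hypothetical names and a standard multi-module layout: make the patched library a module of the parent reactor, so a plain mvn install from a fresh checkout builds it before the modules that depend on it, and give it a version that advertises the patch:

        <!-- parent pom.xml: the patched third-party sources build first -->
        <modules>
          <module>somelib-patched</module>
          <module>myproject-core</module>
        </modules>

        <!-- myproject-core/pom.xml: depend on the patched coordinates -->
        <dependency>
          <groupId>org.example</groupId>
          <artifactId>somelib</artifactId>
          <version>1.2.3-patched</version>
        </dependency>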

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete it is built and only the code is committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again.

    This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up with files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN, and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications).

    My question is on how to manage the releases of multiple changes where some will be signed off and moved on, and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release Branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?
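
    One pattern that fits this kind of per-ticket sign-off, sketched below with plain SVN commands and hypothetical URLs (and assuming Subversion 1.5+ merge tracking): give each ticket its own branch, build UAT from the ticket branch, and merge a branch into the release branch only once it is approved, so rejected work never has to be backed out:

        # branch per ticket, copied server-side from trunk
        svn copy http://svnserver/repo/trunk \
                 http://svnserver/repo/branches/ticket-1234 \
                 -m "Branch for ticket 1234"

        # developers commit fixes to the ticket branch; UAT builds from it.
        # On approval, merge only that ticket into the release branch;
        # rejected tickets simply never get merged:
        svn checkout http://svnserver/repo/branches/release-2010-07 wc
        cd wc
        svn merge http://svnserver/repo/branches/ticket-1234
        svn commit -m "Merge approved ticket 1234 into the release branch"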

    Read the article

  • How to install FFMpeg in WampServer 2.0 (Windows XP)

    - by Richard Knop
    I need to install the ffmpeg PHP extension on my localhost so I can test a few of my scripts, but I am having trouble figuring out how to do that. I have WampServer 2.0 with PHP 5.2.9-2; my OS is Windows XP. Could somebody please give me step-by-step instructions? I have found some Windows builds here: http://sourceforge.net/projects/ffmpeg-php/files/ but I don't know which one to download or what to do with the files.

    EDITED: What I have done so far:

    1. Downloaded ffmpeg_new
    2. Copied php_ffmpeg.dll from the php5 folder to C:\wamp\bin\php\php5.2.9-2\ext
    3. Copied the files from common to the windows/system32 folder
    4. Added extension=php_ffmpeg.dll to the php.ini file
    5. Restarted all services (Apache, PHP...)

    I am getting an error after using this code:

        $extension = 'ffmpeg';
        $extension_soname = 'php_ffmpeg.dll';
        $extension_fullname = PHP_EXTENSION_DIR . "/" . $extension_soname;

        // load extension
        if(false === extension_loaded($extension)) {
            if (false === dl($extension_soname))
                throw new Exception("Can't load extension $extension_fullname\n");
        }

    The error:

        Warning: dl() [function.dl]: Not supported in multithreaded Web servers - use extension=ffmpeg.dll in your php.ini in C:\wamp\www\hunnyhive\application\modules\default\controllers\MyAccountController.php on line 314

    Plus I also get the exception from above.
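
    For what it's worth, a minimal sketch of how the check can be written once the php.ini route works, since the warning itself says dl() is unsupported under multithreaded Apache; the sample file path is hypothetical, and ffmpeg_movie is the class the ffmpeg-php extension provides:

        // verify the extension that php.ini should have loaded; don't call dl()
        if (!extension_loaded('ffmpeg')) {
            throw new Exception('ffmpeg extension not loaded; check the ' .
                'extension=php_ffmpeg.dll line in php.ini and restart Apache');
        }

        $movie = new ffmpeg_movie('C:\\wamp\\www\\videos\\test.mp4');
        echo $movie->getDuration(); // length in seconds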

    Read the article

  • Convert ISO-8859-1 to UTF-8

    - by tau
    I have several documents I need to convert from ISO-8859-1 to UTF-8 (without the BOM, of course). This is the issue, though: I have so many of these documents (it is actually a mix of documents, some UTF-8 and some ISO-8859-1) that I need an automated way of converting them. Unfortunately I only have ActivePerl installed and don't know much about encoding in that language. I may be able to install PHP, but I am not sure, as this is not my personal computer. Just so you know, I use SciTE or Notepad++, but neither converts correctly. For example, if I open a document in Czech that contains the character "ž" and go to the "Convert to UTF-8" option in Notepad++, it incorrectly converts it to an unreadable character. There is a way I CAN convert them, but it is tedious: if I open a document with the special characters, copy it to the Windows clipboard, then paste it into a UTF-8 document and save it, it is okay. But opening every file and copying/pasting into a new document is too tedious for the number of documents I have. Any ideas? Thanks!!!
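
    Since ActivePerl is already available, here is a minimal sketch using only the core Encode module. It assumes the right behaviour for the mixed set is "leave files that already decode cleanly as UTF-8 alone, treat everything else as ISO-8859-1", and it writes UTF-8 without a BOM:

        use strict;
        use warnings;
        use Encode ();

        # usage: perl to_utf8.pl file1.txt file2.txt ...
        for my $file (@ARGV) {
            open my $in, '<:raw', $file or die "$file: $!";
            my $bytes = do { local $/; <$in> };
            close $in;

            # decode a copy so the original buffer is untouched;
            # FB_CROAK makes invalid UTF-8 die instead of being mangled
            my $ok = eval { Encode::decode('UTF-8', my $copy = $bytes, Encode::FB_CROAK); 1 };
            next if $ok;    # already valid UTF-8 -- leave the file alone

            my $text = Encode::decode('ISO-8859-1', $bytes);
            open my $out, '>:raw', $file or die "$file: $!";
            print {$out} Encode::encode('UTF-8', $text);  # no BOM is written
            close $out;
        }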

    Read the article

  • How do I convert an NSMutableString to NSString when using Frameworks?

    - by BWHazel
    I have written an Objective-C framework which builds some HTML code with an NSMutableString and returns the value as an NSString. I have declared an NSString and an NSMutableString in the interface (.h) file:

        NSString *_outputLanguage; // Tests language output
        NSMutableString *outputBuilder;
        NSString *output;

    This is a sample from the framework implementation (.m) code (I cannot print the actual code as it is proprietary):

        -(NSString*)doThis:(NSString*)aString num:(int)aNumber
        {
            if ([outputBuilder length] != 0) {
                [outputBuilder setString:@""];
            }

            if ([_outputLanguage isEqualToString:@"html"]) {
                [outputBuilder appendString:@"Some Text..."];
                [outputBuilder appendString:aString];
                [outputBuilder appendString:[NSString stringWithFormat:@"%d", aNumber]];
            } else if ([_outputLanguage isEqualToString:@"xml"]) {
                [outputBuilder appendString:@"Etc..."];
            } else {
                [outputBuilder appendString:@""];
            }

            output = outputBuilder;
            return output;
        }

    When I wrote a test program, NSLog simply printed out "(null)". The code I wrote there was:

        TheClass *instance = [[TheClass alloc] init];
        NSString *testString = [instance doThis:@"This String" num:20];
        NSLog(@"%@", testString);
        [instance release];

    I hope this is enough information!
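
    A guess at the cause, offered as an assumption since the full source isn't shown: nothing in the excerpt ever initializes outputBuilder. Messages to nil are silently ignored in Objective-C, so if it is still nil, [outputBuilder length] returns 0, every appendString: is a no-op, and the method returns nil, which NSLog prints as (null). A minimal sketch of the fix, keeping the pre-ARC memory management the test code implies:

        -(NSString*)doThis:(NSString*)aString num:(int)aNumber
        {
            if (outputBuilder == nil) {
                outputBuilder = [[NSMutableString alloc] init]; // release in dealloc
            }
            [outputBuilder setString:@""];

            // ...build the string exactly as before...

            // hand back an immutable snapshot instead of the live builder
            return [[outputBuilder copy] autorelease];
        }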

    Read the article

  • (Fluent) NHibernate Security Exception - ReflectionPermission

    - by PeterEysermans
    I've upgraded an ASP.NET web application to the latest build of Fluent NHibernate (1.0.0.636) and the newest version of NHibernate (v2.1.2.4000). I've checked a couple of times that the application is running in full trust. But I keep getting the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.

        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack trace:

        [SecurityException: Request for the permission of type 'System.Security.Permissions.ReflectionPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.]
        System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0
        System.Security.CodeAccessPermission.Demand() +54
        System.Reflection.Emit.DynamicMethod.PerformSecurityCheck(Type owner, StackCrawlMark& stackMark, Boolean skipVisibility) +269
        System.Reflection.Emit.DynamicMethod..ctor(String name, Type returnType, Type[] parameterTypes, Type owner, Boolean skipVisibility) +81
        NHibernate.Bytecode.Lightweight.ReflectionOptimizer.CreateDynamicMethod(Type returnType, Type[] argumentTypes) +165
        NHibernate.Bytecode.Lightweight.ReflectionOptimizer.GenerateGetPropertyValuesMethod(IGetter[] getters) +383
        NHibernate.Bytecode.Lightweight.ReflectionOptimizer..ctor(Type mappedType, IGetter[] getters, ISetter[] setters) +108
        NHibernate.Bytecode.Lightweight.BytecodeProviderImpl.GetReflectionOptimizer(Type mappedClass, IGetter[] getters, ISetter[] setters) +52
        NHibernate.Tuple.Component.PocoComponentTuplizer..ctor(Component component) +231
        NHibernate.Tuple.Component.ComponentEntityModeToTuplizerMapping..ctor(Component component) +420
        NHibernate.Tuple.Component.ComponentMetamodel..ctor(Component component) +402
        NHibernate.Mapping.Component.BuildType() +38
        NHibernate.Mapping.Component.get_Type() +32
        NHibernate.Mapping.SimpleValue.IsValid(IMapping mapping) +39
        NHibernate.Mapping.RootClass.Validate(IMapping mapping) +61
        NHibernate.Cfg.Configuration.ValidateEntities() +220
        NHibernate.Cfg.Configuration.Validate() +16
        NHibernate.Cfg.Configuration.BuildSessionFactory() +39
        FluentNHibernate.Cfg.FluentConfiguration.BuildSessionFactory() in d:\Builds\FluentNH\src\FluentNHibernate\Cfg\FluentConfiguration.cs:93

    Has anyone had a similar error? I've searched the web / Stack Overflow / the NHibernate forums, but only found people who had a problem when running in medium trust mode, not full trust. I've been developing this application on this machine for several months with previous versions of Fluent NHibernate and NHibernate. The machine I'm running this on is 64-bit, in case that is relevant.
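
    One hedged workaround, assuming the trigger really is NHibernate's lightweight bytecode optimizer (which the DynamicMethod frames in the stack trace point at): switch the reflection optimizer off before the configuration is built. This trades some entity-hydration speed for skipping the DynamicMethod permission demand; the entity and database choices below are placeholders:

        // must run before the session factory is built
        NHibernate.Cfg.Environment.UseReflectionOptimizer = false;

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2005.ConnectionString("...")) // your database config
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SomeEntity>()) // SomeEntity is hypothetical
            .BuildSessionFactory();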

    Read the article

  • LINQ-to-SQL: Could not find key member 'x' of key 'x' on type 'y'

    - by Austin Hyde
    I am trying to connect my application to a SQLite database with LINQ-to-SQL, and so far everything has worked fine. The only hitch was that the SQLite provider I am using does not support code generation (unless I was doing something wrong), so I manually coded the 4 tables in the DB. The solution builds properly, but will not run, giving me the error message:

        Could not find key member 'ItemType_Id' of key 'ItemType_Id' on type 'Item'. The key may be wrong or the field or property on 'Item' has changed names.

    I have checked and double-checked spellings and field names in the database and in the attribute mappings, but could not find any problems. The SQL for the table looks like this:

        CREATE TABLE [Items] (
            [Id] integer PRIMARY KEY AUTOINCREMENT NOT NULL,
            [Name] text NOT NULL,
            [ItemType_Id] integer NOT NULL
        );

    And my mapping code:

        [Table(Name="Items")]
        class Item
        {
            // [snip]

            [Column(Name = "Id", IsPrimaryKey=true, IsDbGenerated=true)]
            public int Id { get; set; }

            // [snip]

            [Column(Name="ItemType_Id")]
            public int ItemTypeId { get; set; }

            [Association(Storage = "_itemType", ThisKey = "ItemType_Id")]
            public ItemType ItemType
            {
                get { return _itemType.Entity; }
                set { _itemType.Entity = value; }
            }
            private EntityRef<ItemType> _itemType;

            // [snip]
        }

    This is really my first excursion into LINQ-to-SQL, and I am learning as I go, but I cannot seem to get past this seemingly simple problem. Why can't LINQ see my association?
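
    A likely fix, based on how LINQ-to-SQL resolves association keys: ThisKey and OtherKey take CLR property names on the entity classes, not database column names, so the association should reference ItemTypeId (the property) rather than ItemType_Id (the column):

        // ThisKey names the property on Item; OtherKey (assumed here to
        // be Id) names the key property on ItemType
        [Association(Storage = "_itemType", ThisKey = "ItemTypeId", OtherKey = "Id", IsForeignKey = true)]
        public ItemType ItemType
        {
            get { return _itemType.Entity; }
            set { _itemType.Entity = value; }
        }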

    Read the article

  • What do I do about recurring billing?

    - by phidah
    This might be a subjective question, but I'll give it a go. There are already a number of questions on SO that revolve around subscription billing management. I am currently working on a SaaS solution that will require a fully automated billing system. To be clear, what I am not looking for when asking this question is advice on implementing towards a specific payment gateway or stuff like that. Instead, I'd like advice on what kind of approach to take.

    The functionality that I need is a system that can handle upgrades, downgrades, recurring billing, cancellations, etc. Initially for one product only, but it might over time be a requirement that the system can handle multiple products (by products I mean fundamentally different products, not different variations of the same product). As I see it, there are a number of possible approaches when you need a solution like this:

    1. Code a billing server yourself that supports this and is decoupled from each product, so that it can handle multiple independent products.
    2. Use a hosted solution like Recurly, Chargify, Spreedly or CheddarGetter. The advantage of using a hosted solution is obviously that you don't need PCI certification, the concern is outsourced, and it is a lot faster to get up and running. These advantages come at a cost, however: the most important support function for your product, i.e. the billing, is not in your control, and additionally you have less control and flexibility.

    What would you do? If we look beyond the PCI requirements, I would definitely prefer to have a system coded in-house that could do this kind of job. On the other hand, I've heard from numerous sources that coding a system like this is a pain. Any advice is highly appreciated. Also, if you advise coding it yourself, any experiences on how to do it, or any open-source projects (no matter the language; what I'm after is not the code but the structure) that I can benefit from, would really mean a lot. Thanks in advance for your inputs! :-)

    Read the article

  • Flex, Ant, mxmlc and conditional compilation

    - by Rezmason
    My ActionScript project builds with several compile-time specified constants. They are all Booleans, and only one of them is true at any given time; the rest must be false. When I represented my build process as a bash script, I could loop through the names of all these constants, set one of them to true and the rest to false, then concatenate them onto a string to be inserted as a set of arguments passed to mxmlc. In Ant, looping is more complicated. I've tried the ant-contrib for task:

        <mxmlc file='blah' output='blah'>
            <!-- ... -->
            <for list='${commaSeparatedListOfConstNames}' param='constName'>
                <sequential>
                    <define>
                        <name>${constName}</name>
                        <value>${constName} == ${theTrueConst}</value>
                        <!-- (mxmlc's define arguments can be strings that are evaluated at compile time) -->
                    </define>
                </sequential>
            </for>
        </mxmlc>

    Long story short, ant-contrib tasks like for can't go inside the mxmlc task. So now I'm using Ant's propertyregex to take my list of constants and format it into a set of define args, like my old bash script did:

        <propertyregex property='defLine.first' override='false'
                       input='${commaSeparatedListOfConstNames}'
                       regexp='([^\|]+)\,' replace='\1,false ' />
        <propertyregex property='defLine.final' input='${defLine.first}'
                       regexp='(@{theTrueConst}\,)false' replace='\1true' />
        <!-- result: -define+=CONST_ONE,false -define+=CONST_TWO,false -define+=TRUE_CONST,true -->

    Now my problem is: what can I do with this argument string and the mxmlc task? Apparently arg tags can go inside the mxmlc task without it throwing an error, but they don't seem to have any effect. What am I supposed to do? How do I make this work with the mxmlc task?
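
    A sketch of one direction to try, with the caveat that it rests on two assumptions not confirmed in the question: that the Flex Ant task accepts nested <define> elements with name/value attributes, and that the constants are declared in a config namespace (the command-line compiler expects NAMESPACE::name for -define). If both hold, the per-constant true/false values can be computed into ordinary Ant properties and fed to the task directly, instead of being concatenated into one argument string:

        <!-- property names here are hypothetical; set each one to
             "true"/"false" before invoking mxmlc -->
        <mxmlc file='${srcFile}' output='${outFile}'>
            <define name='CONFIG::CONST_ONE' value='${constOneValue}'/>
            <define name='CONFIG::CONST_TWO' value='${constTwoValue}'/>
        </mxmlc>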

    Read the article

  • Ruby on Rails generates tests for you. Do those give a false sense of a safety net?

    - by Hamish Grubijan
    Disclaimer: I have not used RoR, and I have not generated tests. But, I will still dare to post this question. Quality Assurance is theoretically impossible to get 100% right in general (Undecidable problem ;), and it is hard in practice. So many developers do not understand that writing good automated tests is an art, and it is hard. When I hear that RoR generates the tests for you, I get very skeptical. It cannot be that easy. Testing is a general concept; it applies across languages. So does the concept of code contracts, it is similar for languages that support it. Code contracts do not generate themselves. The programmer must add the requirements and the promises manually, after doing some thinking about the algorithm / function. If a human gets it wrong, then the tools will propagate the error. Similarly with testing - it takes human judgement about what should happen. Tests do not write themselves, and we are far from the day when a business analyst can just have a conversation with a computer and tell it informally what the requirements are and have the computer do all the work. There is no magic ... how can RoR generate good tests for you? Please shed some light on this. Opinions are ok, for this is a community wiki. Thanks!

    Read the article

  • PayPal IPN validation

    - by denis_n
    The following is from the PayPal Order Management Integration Guide, under "Processing the PayPal Response to Your Postback":

    PayPal responds to your postbacks with a single word in the body of the response: VERIFIED or INVALID. When you receive a VERIFIED postback response, perform the following checks on data in the IPN:

    1. Check that the payment_status is Completed.
    2. If the payment_status is Completed, check the txn_id against the previous PayPal transactions that you processed to ensure it is not a duplicate.
    3. Check that the receiver_email is an email address registered in your PayPal account.
    4. Check that the price, carried in mc_gross, and the currency, carried in mc_currency, are correct for the item, carried in item_name or item_number.

    After you complete the above checks, notification validation is complete. You can update your database with the information provided, and you can initiate other appropriate automated back-end processing.

        <form action="https://www.paypal.com/cgi-bin/webscr" method="post">
            <input type="hidden" name="cmd" value="_cart" />
            <input type="hidden" name="upload" value="1" />
            <input type="hidden" name="business" value="GXLC9H9VFPLQE">
            .....
            <input type="submit" name="Submit" value="Submit" />
        </form>

    In step 3 I should check receiver_email, but I don't want to. I don't want to keep my PayPal account email in my application. My question is: can I check the business variable instead?
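
    The IPN message does carry a business variable alongside receiver_email, so a sketch of the step-3 check against it might look like the following PHP, run only after PayPal has answered VERIFIED; here the value compared is the secure merchant ID used in the form above:

        // reject notifications aimed at some other account
        $expectedBusiness = 'GXLC9H9VFPLQE'; // or your primary PayPal email
        if (strcasecmp($_POST['business'], $expectedBusiness) !== 0) {
            error_log('IPN business mismatch: ' . $_POST['business']);
            exit; // not our payment -- stop processing
        }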

    Read the article

  • MonoTouch: using embedded resx files on iPhone build

    - by bright
    I'm able to load and access resx files in Simulator builds of my iPhone app built using MonoTouch. The resx file entry in the csproj file looks like:

        <ItemGroup>
          <EmbeddedResource Include="MapMenu\Resources\MapMenu.resx">
            <Generator>ResXFileCodeGenerator</Generator>
            <LastGenOutput>MapMenu.Designer.cs</LastGenOutput>
          </EmbeddedResource>
        </ItemGroup>

    The .resx file itself has an entry like this:

        <data name="Main_Menu" type="System.Resources.ResXFileRef, System.Windows.Forms">
          <value>Main Menu.mm;System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089;Windows-1252</value>
        </data>

    and the generated MapMenu.Designer.cs file has this:

        internal static string Main_Menu {
            get {
                return ResourceManager.GetString("Main_Menu", resourceCulture);
            }
        }

    As mentioned above, calling the Main_Menu accessor works fine on the simulator. On the device, however, the log shows:

        Unhandled Exception: System.MissingMethodException: No constructor found for System.Resources.RuntimeResourceSet::.ctor(System.IO.UnmanagedMemoryStream)
          at System.Activator.CreateInstance (System.Type type, BindingFlags bindingAttr, System.Reflection.Binder binder, System.Object[] args, System.Globalization.CultureInfo culture, System.Object[] activationAttributes) [0x00000] in <filename unknown>:0
          at System.Activator.CreateInstance (System.Type type, System.Object[] args, System.Object[] activationAttributes) [0x00000] in <filename unknown>:0
          at System.Activator.CreateInstance (System.Type type, System.Object[] args) [0x00000] in <filename unknown>:0
          at System.Resources.ResourceManager.InternalGetResourceSet (System.Globalization.CultureInfo culture, Boolean createIfNotExists, Boolean tryParents) [0x00000] in <filename unknown>:0
          at System.Resources.ResourceManager.GetString (System.String name, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          at MapMenu.Resources.MapMenu.get_Main_Menu () [0x00000] in <filename unknown>:0

    I did a few sanity checks, and am wondering at this point if this really is missing functionality in MonoTouch. Thanks!

    Read the article

  • Boost in Visual Studio 2010, IntelliSense error

    - by Peretz
    Hello, I would like to see if you could orient me. It happens that I compiled and referenced the Boost libraries in order to use them with Visual Studio 2010. When building my test project I get these two IntelliSense errors:

        1 IntelliSense: #error directive: "Macro BOOST_LIB_NAME not set (internal error)" c:\boost_1_43_0\boost\config\auto_link.hpp
        2 IntelliSense: #error directive: "some required macros where not defined (internal logic error)." c:\boost_1_43_0\boost\config\auto_link.hpp

    Checking the auto_link.hpp header file, the first error is on this line:

        #ifndef BOOST_LIB_NAME
        #  error "Macro BOOST_LIB_NAME not set (internal error)"
        #endif

    Tracing the definition of BOOST_LIB_NAME, it seems that it is defined in config.hpp by boost_regex, whose code I am including below:

        #if !defined(BOOST_REGEX_NO_LIB) && !defined(BOOST_REGEX_SOURCE) && !defined(BOOST_ALL_NO_LIB) && defined(__cplusplus)
        #  define BOOST_LIB_NAME boost_regex
        #  if defined(BOOST_REGEX_DYN_LINK) || defined(BOOST_ALL_DYN_LINK)
        #    define BOOST_DYN_LINK
        ...more code

    Strangely, when I point at BOOST_LIB_NAME it defines BOOST_LIB_NAME and the IntelliSense errors disappear. My program builds and executes fine using the Boost.Regex library, with or without the IntelliSense errors; however, I do not understand first why these IntelliSense errors appear at all, and second why pointing at the macro in config.hpp defines BOOST_LIB_NAME. Any guidance will be greatly appreciated. Thanks, Jaime
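
    One workaround worth trying, offered as an assumption about the cause rather than a confirmed diagnosis: VS2010's IntelliSense uses its own compiler front end, which defines the __INTELLISENSE__ macro and can disagree with the real compiler about what is defined by the time auto_link.hpp runs its checks. Since the real build is clean, the squiggles can be silenced for the IntelliSense pass alone, for example:

        // the real compiler never defines __INTELLISENSE__, so the build
        // is unaffected; this only pacifies auto_link.hpp's check during
        // the IntelliSense parse
        #ifdef __INTELLISENSE__
        #  define BOOST_LIB_NAME boost_regex
        #endif
        #include <boost/regex.hpp>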

    Read the article

  • Good DB Migrations for CakePHP?

    - by Martin Westin
    Hi, I have been trying a few migration scripts for CakePHP, but I ran into problems with all of them in some form or another. Please advise me on a migration option for Cake that you use live and know works. I'd like the following "features":

    - Support CakePHP 1.2 (e.g. CakeDC's migrations will only be an option when 1.3 is stable and my app is migrated to the new codebase)
    - Support for (or at least not halting on) models with a different database config
    - Support models in sub-folders of app/models
    - Support models in plugins
    - Support tables that do not conform to Cake conventions (I have a few special tables that do not have a single primary key field and need to keep them)
    - Play well with automated deployment via Capistrano and Git. I do not need Rails-style versioned files; a git-versioned schema file that is compared live to the existing schema will do. That is: I like the SchemaShell in Cake, apart from it not being compatible with most of my requirements above.

    I have looked at and tested:

    - CakePHP Schema Shell: http://book.cakephp.org/view/734/Schema-management-and-migrations
    - CakeDC migrations: http://cakedc.com/downloads/view/cakephp_migrations_plugin
    - YAML migrations: http://github.com/georgious/cakephp-yaml-migrations-and-fixtures
    - joelmoss migrations: http://code.google.com/p/cakephp-migrations

    Read the article

  • Rails Nested Forms Attributes not saving if Fields Added with jQuery

    - by looloobs
    Hi, I have a Rails form with a nested form. I used Ryan Bates' nested form with jQuery tutorial, and I have it working fine as far as adding the new fields dynamically. But when I go to submit the form, it does not save any of the associated attributes. However, if the partial builds when the form loads, it creates the attribute just fine. I cannot figure out what is not being passed in the JavaScript that is failing to communicate that the form object needs to be saved. Any help would be great.

        class Itinerary < ActiveRecord::Base
          accepts_nested_attributes_for :trips
        end

    itinerary/new.html:

        <% form_for ([@move, @itinerary]), :html => {:class => "new_trip" } do |f| %>
          <%= f.error_messages %>
          <%= f.hidden_field :move_id, :value => @move.id %>
          <% f.fields_for :trips do |builder| %>
            <%= render "trip", :f => builder %>
          <% end %>
          <%= link_to_add_fields "Add Another Leg to Your Trip", f, :trips %>
          <p><%= f.submit "Submit" %></p>
        <% end %>

    application_helper.rb:

        def link_to_remove_fields(name, f)
          f.hidden_field(:_destroy) + link_to_function(name, "remove_fields(this)")
        end

        def link_to_add_fields(name, f, association)
          new_object = f.object.class.reflect_on_association(association).klass.new
          fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder|
            render(association.to_s.singularize, :f => builder)
          end
          link_to_function(name, h("add_fields(this, \"#{association}\", \"#{escape_javascript(fields)}\")"))
        end

    application.js:

        function add_fields(link, association, content) {
          var new_id = new Date().getTime();
          var regexp = new RegExp("new_" + association, "g")
          $(link).parent().before(content.replace(regexp, new_id));
        }
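
    One difference from the Railscast 197 helper that is worth checking, noted here as a possibility rather than a certain diagnosis: the extra h() around the onclick string. link_to_function already escapes the attribute it generates, so the added h() can double-escape the quotes inside the injected field markup, which would corrupt the name attributes of the dynamically added inputs (Rails then never sees them as trips_attributes). The Railscast version of the helper, for comparison:

        def link_to_add_fields(name, f, association)
          new_object = f.object.class.reflect_on_association(association).klass.new
          fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder|
            render(association.to_s.singularize, :f => builder)
          end
          # no h() here -- link_to_function handles the attribute escaping
          link_to_function(name, "add_fields(this, \"#{association}\", \"#{escape_javascript(fields)}\")")
        end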

    Read the article

  • Subscription Management with Merchant Account via API

    - by Josh
    I'm researching gateways/vendors that provide the ability to create subscription-based transactions for merchant accounts. In other words, I want to allow customers to sign up for a subscription to a website service that charges once a month.

    Authorize.Net has an ARB (Automated Recurring Billing) module. The cost is cheap, $10 a month for the service with unlimited subscriptions, and it has an API that allows XML or SOAP access to create, update and cancel. The LARGE negative of the service is that it doesn't have an elegant way to obtain the current status of a subscription: it can send a daily email with an attached CSV file, or someone can log in to the site and review statuses. Neither is an enterprise solution. The parent company, CyberSource, has a "Recurring Billing Service" which implies a more robust solution, including API access to subscription information. I'm currently waiting for a sales call back on costs related to the service.

    I also looked at PayPal's Recurring Billing Service, but that appears to require that users are redirected to the PayPal site to sign up for the subscription; again, not an elegant solution.

    Does anyone know of any other vendors/gateways that offer a subscription service that meets the following criteria:

    1. The vendor/gateway must host the credit card number and be PCI compliant
    2. It must have an API that is accessible via a web service, POST over HTTPS, or SOAP
    3. It must have an API that allows querying the status of subscriptions and/or the ability to query for activity since a certain date

    Thanks in advance for your suggestions.

    Read the article

  • Determine whether .NET assemblies were built from the same source

    - by Clayton
    Does anyone know of a way to compare two .NET assemblies to determine whether they were built from the "same" source files? I am aware that there are some differencing utilities available, such as the plugin for Reflector, but I am not interested in viewing differences in a GUI; I just want an automated way to compare a collection of binaries to see whether they were built from the same (or equivalent) source files. I understand that multiple different source files could produce the same IL, and realise that the process would only be sensitive to differences in the IL, not the original source.

    The main obstacle to just comparing the byte streams of the two assemblies is that .NET includes a field called the "MVID" (Module Version Identifier) in the assembly. This appears to have a different value for every compilation, so if you build the same code twice, the assemblies will be different. A related question is: does anyone know how to force the MVID to be the same for each compilation? This would avoid us needing a comparison process that is insensitive to differences in the value of the MVID. A consistent MVID would be preferable, as it means that standard checksums could be used.

    The background behind this is that a third-party company is responsible for independently reviewing and signing off our releases prior to us being permitted to release to Production. This includes reviewing the source code. They want to independently confirm that the source code we give them matches the binaries that we earlier built, tested and currently plan to deploy. We are looking for a process that allows them to independently build the system from the source we supply them with, and then compare the checksums against the checksums for the binaries we have tested. Thanks.
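
    In the absence of a way to pin the MVID, one commonly used approach is to compare the disassembly rather than the bytes, filtering out the volatile header comments. A rough sketch as a batch script, assuming ildasm.exe from the SDK is on the PATH (the exact set of comment lines worth filtering may need tuning):

        :: round-trip both assemblies to IL text, drop the per-compilation
        :: comment lines (MVID, timestamp, image base), then compare
        ildasm /text /all A.dll | findstr /v "MVID Time-date Image" > a.il
        ildasm /text /all B.dll | findstr /v "MVID Time-date Image" > b.il
        fc a.il b.il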

    Read the article

  • Trouble running setup package after Publishing in Visual Studio 2008

    - by Andrew Cooper
    I've got a small WinForms application that I've written that runs fine in the IDE. It builds with no errors or warnings, and it's not using any third-party controls. I'm coding in C# in Visual Studio 2008. When I Build -> Publish the application, everything seems to work fine. However, when I attempt to install the application via the setup.exe file, I get an error message that says "Application cannot be started." The error details are below:

        ERROR DETAILS
        Following errors were detected during this operation.
        * [3/18/2010 10:50:56 AM] System.Runtime.InteropServices.COMException
          - The referenced assembly is not installed on your system. (Exception from HRESULT: 0x800736B3)
          - Source: System.Deployment
          - Stack trace:
            at System.Deployment.Internal.Isolation.IStore.GetAssemblyInformation(UInt32 Flags, IDefinitionIdentity DefinitionIdentity, Guid& riid)
            at System.Deployment.Internal.Isolation.Store.GetAssemblyManifest(UInt32 Flags, IDefinitionIdentity DefinitionIdentity)
            at System.Deployment.Application.ComponentStore.GetAssemblyManifest(DefinitionIdentity asmId)
            at System.Deployment.Application.ComponentStore.GetSubscriptionStateInternal(DefinitionIdentity subId)
            at System.Deployment.Application.SubscriptionStore.GetSubscriptionStateInternal(SubscriptionState subState)
            at System.Deployment.Application.ComponentStore.CollectCrossGroupApplications(Uri codebaseUri, DefinitionIdentity deploymentIdentity, Boolean& identityGroupFound, Boolean& locationGroupFound, String& identityGroupProductName)
            at System.Deployment.Application.SubscriptionStore.CommitApplication(SubscriptionState& subState, CommitApplicationParams commitParams)
            at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
            at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
            at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

    I'm not sure what else to do. The only slightly unusual thing I used in this application is SQL Server Compact. Any help would be appreciated. Thanks, Andrew

    Read the article

  • trigger config transformation in TFS 2010 or msbuild

    - by grenade
    I'm attempting to make use of configuration transformations in a continuous integration environment. I need a way to tell the TFS build agent to perform the transformations. I was kind of hoping it would just work after discovering the config transform files (web.qa-release.config, web.production-release.config, etc...), but it doesn't. I have a TFS build definition that builds the right configurations (qa-release, production-release, etc...), and I have some specific .proj files that get built within these definitions and contain some environment-specific parameters, e.g.:

        <PropertyGroup Condition=" '$(Configuration)'=='production-release' ">
          <TargetHost Condition=" '$(TargetHost)'=='' ">qa.web</TargetHost>
          ...
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)'=='qa-release' ">
          <TargetHost Condition=" '$(TargetHost)'=='' ">production.web</TargetHost>
          ...
        </PropertyGroup>

    I know from the output that the correct configurations are being built. Now I just need to learn how to trigger the config transformations. Is there some hocus pocus that I can add to the final .proj in the build to kick off the transform and blow away the individual transform files?
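
    One way to trigger the transforms on the build agent, sketched under the assumption that Visual Studio 2010 (and so the Web Publishing Pipeline) is installed on the agent: invoke the same TransformXml task the publishing pipeline uses from an AfterBuild target, keyed off $(Configuration) so web.qa-release.config and friends are picked up automatically:

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />

        <Target Name="AfterBuild" Condition="Exists('web.$(Configuration).config')">
          <!-- writes the transformed config next to the build output -->
          <TransformXml Source="web.config"
                        Transform="web.$(Configuration).config"
                        Destination="$(OutDir)\web.config" />
        </Target>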

    Read the article

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter, and none of them really fits my specific needs. My SVN repository is laid out as follows:

        \projects
            \projectA (a product)
                \tags
                    \1.0.0.1
                    \1.0.0.2
                    ...
                \trunk
                    \src
                        \proj1 (a VS C# project)
                        \proj2
                    \documentation

    Then I have a network share with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc; each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk). Basically, what I want to do in a scheduled batch build are these steps:

    1. Examine the project's SVN folder and identify tags not present on the network share
    2. For each of these tags: check out the tag folder, build (with Release config), copy the resulting binaries to the network share, search for .ndoc files, generate CHM files via NDoc, and copy the resulting CHM files to the network share
    3. Do the same as in 2, but for the HEAD revision of trunk

    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet, i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task, as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.
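
    The tag-discovery step is probably the easiest place to start; a minimal MSBuild sketch with hypothetical URLs and share paths, assuming svn.exe is on the build machine's PATH (NAnt could do the equivalent with <exec> and <foreach>):

        <Target Name="FindNewTags">
          <!-- list the repository tags into a scratch file -->
          <Exec Command="svn ls http://svnserver/projects/projectA/tags > tags.txt" />
          <ReadLinesFromFile File="tags.txt">
            <Output TaskParameter="Lines" ItemName="RepoTag" />
          </ReadLinesFromFile>
          <!-- note: svn ls prints each tag as "1.0.0.1/"; trim the trailing
               slash in a separate step if Exists() is picky about it -->
          <CreateItem Include="@(RepoTag)"
                      Condition="!Exists('\\server\share\projectA\%(RepoTag.Identity)')">
            <Output TaskParameter="Include" ItemName="NewTag" />
          </CreateItem>
          <Message Text="Tags to build: @(NewTag)" />
        </Target>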

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?

    The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly, if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, when they didn't have any feature changes or hot fixes. They had to be tested simply because the business is rolling updates every month. And the risk that is involved... if a mission-critical application is deployed, that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application?

    One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing.

    I'm aware of TDD and automated tests, but that doesn't exist at the moment. Any advice?

    Read the article

  • Regex to extract this semi-formatted data

    - by Codygman
    Alright, I can't quite figure out how to do this. Given the following text:

        Roland AX-1:
        /start
        Roland's AX-1 strap-on remote MIDI controller has a very impressive 45-note velocity sensitive keyboard, and has switchable velocity curves, goes octave up/down, transpose, split/layering zones, and has fun tempo control for sequencers and more. Roland's AX-1 comes with a built-in GS control for total MIDI control of GM/GS synths. Its "Expression Bar" can control pitch and mod via an almost ribbon-like controller. It's also the newest and most advanced remote controller for your synths or midi modules.
        /end

        Roland AX-7:
        /start
        Roland's AX-7 builds on the infamous Roland AX-1 design. You just strap it on and put it to the front of the stage. Offering several controllers, such as: a D-Beam, then you can open the door to amazing live performance. 7-segment LED display, larger patch memory (Around 128 patches with MIDI data backup), and comes with GM2/GS compatibility make it extra easy to use. The 45-note, velocity-sensitive keyboard. 5 realtime controllers including a data entry knob, touch controller knob, opression bar, a hold button, and D-Beam. 128 patches with MIDI data backup. 2 MIDI zones.
        /end

    I'm trying to use the following:

        /^([\w\d \-]*):\s\s\s\s^\/start([^\:]*)\/end$/im

    You can see it on Rubular here: http://rubular.com/r/BVRRHsnWdp

    Thanks for any help. I guess I'm trying to match blocks of text until I hit the next title, which always ends with a colon at end-of-line (the :$ in the pattern).
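
    A sketch of one regex that handles this in Ruby (matching the Rubular link), fixing two issues in the attempt above: [^\:]* dies as soon as a body contains a colon (the AX-7 text does), and the embedded ^ plus the fixed run of \s characters fight the multiline anchoring. A lazy dot with the m flag (dot matches newline in Ruby) sidesteps both:

        text.scan(/^([\w \-]+):\s*\/start\s*(.*?)\s*\/end/m) do |title, body|
          # title => "Roland AX-1", body => the text between /start and /end
          puts title
        end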

    Read the article

  • WPF UI Automation - AutomationElement.FindFirst fails when there are lots of elements

    - by Orion Edwards
    We've got some automated UI tests for our WPF app (.NET 4); these tests use the UI Automation APIs. We call AutomationElement.FindFirst to find a target element, and then interact with it. Example (pseudocode):

        var nameEquals = new PropertyCondition(AutomationElement.NameProperty, "OurAppWindow");
        var appWindow = DesktopWindow.FindFirst(TreeScope.Children, nameEquals); // this succeeds

        var idEquals = new PropertyCondition(AutomationElement.AutomationIdProperty, "ControlId");
        var someItem = appWindow.FindFirst(TreeScope.Descendants, idEquals); // this succeeds sometimes, and fails sometimes!

    The problem is, appWindow.FindFirst will sometimes fail and return null, even when the element is present. I've written a helper function which walks the UI Automation tree manually and prints it out, and the element with the correct ID is present in all cases. It seems to be related to how many other items are also being displayed in the window. If there are no other items then it always succeeds, but when there are many other complex UI elements being displayed alongside it, the find fails. I can't find any documented element limit mentioned for any of the automation APIs; is there some way around this? I'm thinking I might have to write my own implementation of FindFirst which does the tree walk manually itself... As far as I can tell this should work, because my tree-printer utility function does exactly that, and it's OK, but it seems like this would be unnecessary and slow :-( Any help would be greatly appreciated.
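
    If it does come to a hand-rolled FindFirst, a minimal sketch of the recursive walk, mirroring what the tree-printer already does, using TreeWalker from the same UI Automation API:

        using System.Windows.Automation;

        static AutomationElement FindByAutomationId(AutomationElement root, string automationId)
        {
            // match the current node first
            if (root.Current.AutomationId == automationId)
                return root;

            // then recurse through the control view, child by child
            TreeWalker walker = TreeWalker.ControlViewWalker;
            for (AutomationElement child = walker.GetFirstChild(root);
                 child != null;
                 child = walker.GetNextSibling(child))
            {
                AutomationElement match = FindByAutomationId(child, automationId);
                if (match != null)
                    return match;
            }
            return null;
        }

    It will indeed be slower than the native search, so scoping root to the nearest stable container rather than the whole window helps.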

    Read the article
