Search Results

Search found 17285 results on 692 pages for 'incremental build'.


  • Linq to SQL not inserting data onto the DB

    - by Jesus Rodriguez
    Hello! I have a slightly weird behaviour here; I've looked around the internet and SO and didn't find an answer. I have to admit this is my first time using databases: I know SQL, but I've never actually used it in an application. Anyway, my app isn't inserting data; I created a very simple project to test this and still have no solution. I have an example SQL Server database with Id - int (identity, primary key) and Name - nchar(10) (not null). The table is called "Person", simple as pie. I have this:

        static void Main(string[] args)
        {
            var db = new ExampleDBDataContext {Log = Console.Out};
            var jesus = new Person {Name = "Jesus"};
            db.Persons.InsertOnSubmit(jesus);
            db.SubmitChanges();

            var query = from person in db.Persons select person;
            foreach (var p in query)
            {
                Console.WriteLine(p.Name);
            }
        }

    As you can see, nothing strange. It shows Jesus in the console. But if you look at the table data, there is no data; it is just empty. If I comment out the object creation and insertion, the foreach doesn't print a thing (normal, since there is no data in the database). The weird thing is that when I created a row in the database manually, the Id was 2, not 1 (was LINQ really talking to the database but not creating the row?). Here is the log:

        INSERT INTO [dbo].Person VALUES (@p0)
        SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
        -- @p0: Input NChar (Size = 10; Prec = 0; Scale = 0) [Jesus]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

        SELECT [t0].[Id], [t0].[Name] FROM [dbo].[Person] AS [t0]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

    I am really confused; all the blogs/books use this kind of snippet to insert an element into a database. Thank you for helping.
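
    One way to make this kind of behaviour easier to diagnose is to have the program report where it is actually writing. The sketch below is only illustrative and assumes the same ExampleDBDataContext and Person types generated in the question's project; printing the connection string and the identity value assigned after SubmitChanges makes it obvious whether the insert went to the database file you are inspecting or to another copy of it.

        // A minimal diagnostic sketch (assuming the ExampleDBDataContext and Person types from
        // the question). It prints which database the context is actually connected to and the
        // identity value assigned on insert, which helps rule out the common case of looking at
        // a different copy of the database file than the one the app writes to.
        using System;
        using System.Linq;

        static class InsertDiagnostics
        {
            static void Main()
            {
                using (var db = new ExampleDBDataContext { Log = Console.Out })
                {
                    // Show exactly which database/file this context writes to.
                    Console.WriteLine("Connection: " + db.Connection.ConnectionString);

                    var person = new Person { Name = "Jesus" };
                    db.Persons.InsertOnSubmit(person);
                    db.SubmitChanges();

                    // After SubmitChanges, LINQ to SQL populates the identity column,
                    // so a non-zero Id shows the INSERT was committed somewhere.
                    Console.WriteLine("Assigned Id: " + person.Id);
                    Console.WriteLine("Rows now in table: " + db.Persons.Count());
                }
            }
        }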

    Read the article

  • Modify code to change timestamp timezone in sitemap

    - by Aahan Krish
    Below is the code from a plugin I use for sitemaps. I would like to know if there's a way to "enforce" a different timezone on all date variable and functions in it. If not, how do I modify the code to change the timestamp to reflect a different timezone? Say for example, to America/New_York timezone. Please look for date to quickly get to the appropriate code blocks. Code on Pastebin. <?php /** * @package XML_Sitemaps */ class WPSEO_Sitemaps { ..... /** * Build the root sitemap -- example.com/sitemap_index.xml -- which lists sub-sitemaps * for other content types. * * @todo lastmod for sitemaps? */ function build_root_map() { global $wpdb; $options = get_wpseo_options(); $this->sitemap = '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n"; $base = $GLOBALS['wp_rewrite']->using_index_permalinks() ? 'index.php/' : ''; // reference post type specific sitemaps foreach ( get_post_types( array( 'public' => true ) ) as $post_type ) { if ( $post_type == 'attachment' ) continue; if ( isset( $options['post_types-' . $post_type . '-not_in_sitemap'] ) && $options['post_types-' . $post_type . '-not_in_sitemap'] ) continue; $count = $wpdb->get_var( $wpdb->prepare( "SELECT COUNT(ID) FROM $wpdb->posts WHERE post_type = %s AND post_status = 'publish' LIMIT 1", $post_type ) ); // don't include post types with no posts if ( !$count ) continue; $n = ( $count > 1000 ) ? (int) ceil( $count / 1000 ) : 1; for ( $i = 0; $i < $n; $i++ ) { $count = ( $n > 1 ) ? $i + 1 : ''; if ( empty( $count ) || $count == $n ) { $date = $this->get_last_modified( $post_type ); } else { $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt ASC LIMIT 1 OFFSET %d", $post_type, $i * 1000 + 999 ) ); $date = date( 'c', strtotime( $date ) ); } $this->sitemap .= '<sitemap>' . "\n"; $this->sitemap .= '<loc>' . home_url( $base . $post_type . '-sitemap' . $count . '.xml' ) . '</loc>' . "\n"; $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n"; $this->sitemap .= '</sitemap>' . "\n"; } } // reference taxonomy specific sitemaps foreach ( get_taxonomies( array( 'public' => true ) ) as $tax ) { if ( in_array( $tax, array( 'link_category', 'nav_menu', 'post_format' ) ) ) continue; if ( isset( $options['taxonomies-' . $tax . '-not_in_sitemap'] ) && $options['taxonomies-' . $tax . '-not_in_sitemap'] ) continue; // don't include taxonomies with no terms if ( !$wpdb->get_var( $wpdb->prepare( "SELECT term_id FROM $wpdb->term_taxonomy WHERE taxonomy = %s AND count != 0 LIMIT 1", $tax ) ) ) continue; // Retrieve the post_types that are registered to this taxonomy and then retrieve last modified date for all of those combined. $taxobj = get_taxonomy( $tax ); $date = $this->get_last_modified( $taxobj->object_type ); $this->sitemap .= '<sitemap>' . "\n"; $this->sitemap .= '<loc>' . home_url( $base . $tax . '-sitemap.xml' ) . '</loc>' . "\n"; $this->sitemap .= '<lastmod>' . htmlspecialchars( $date ) . '</lastmod>' . "\n"; $this->sitemap .= '</sitemap>' . "\n"; } // allow other plugins to add their sitemaps to the index $this->sitemap .= apply_filters( 'wpseo_sitemap_index', '' ); $this->sitemap .= '</sitemapindex>'; } /** * Build a sub-sitemap for a specific post type -- example.com/post_type-sitemap.xml * * @param string $post_type Registered post type's slug */ function build_post_type_map( $post_type ) { $options = get_wpseo_options(); ............ 
// We grab post_date, post_name, post_author and post_status too so we can throw these objects into get_permalink, which saves a get_post call for each permalink. while ( $total > $offset ) { $join_filter = apply_filters( 'wpseo_posts_join', '', $post_type ); $where_filter = apply_filters( 'wpseo_posts_where', '', $post_type ); // Optimized query per this thread: http://wordpress.org/support/topic/plugin-wordpress-seo-by-yoast-performance-suggestion // Also see http://explainextended.com/2009/10/23/mysql-order-by-limit-performance-late-row-lookups/ $posts = $wpdb->get_results( "SELECT l.ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM ( SELECT ID FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset ) o JOIN $wpdb->posts l ON l.ID = o.ID ORDER BY l.ID" ); /* $posts = $wpdb->get_results("SELECT ID, post_content, post_name, post_author, post_parent, post_modified_gmt, post_date, post_date_gmt FROM $wpdb->posts {$join_filter} WHERE post_status = 'publish' AND post_password = '' AND post_type = '$post_type' {$where_filter} ORDER BY post_modified ASC LIMIT $steps OFFSET $offset"); */ $offset = $offset + $steps; foreach ( $posts as $p ) { $p->post_type = $post_type; $p->post_status = 'publish'; $p->filter = 'sample'; if ( wpseo_get_value( 'meta-robots-noindex', $p->ID ) && wpseo_get_value( 'sitemap-include', $p->ID ) != 'always' ) continue; if ( wpseo_get_value( 'sitemap-include', $p->ID ) == 'never' ) continue; if ( wpseo_get_value( 'redirect', $p->ID ) && strlen( wpseo_get_value( 'redirect', $p->ID ) ) > 0 ) continue; $url = array(); $url['mod'] = ( isset( $p->post_modified_gmt ) && $p->post_modified_gmt != '0000-00-00 00:00:00' ) ? $p->post_modified_gmt : $p->post_date_gmt; $url['chf'] = 'weekly'; $url['loc'] = get_permalink( $p ); ............. } /** * Build a sub-sitemap for a specific taxonomy -- example.com/tax-sitemap.xml * * @param string $taxonomy Registered taxonomy's slug */ function build_tax_map( $taxonomy ) { $options = get_wpseo_options(); .......... // Grab last modified date $sql = "SELECT MAX(p.post_date) AS lastmod FROM $wpdb->posts AS p INNER JOIN $wpdb->term_relationships AS term_rel ON term_rel.object_id = p.ID INNER JOIN $wpdb->term_taxonomy AS term_tax ON term_tax.term_taxonomy_id = term_rel.term_taxonomy_id AND term_tax.taxonomy = '$c->taxonomy' AND term_tax.term_id = $c->term_id WHERE p.post_status = 'publish' AND p.post_password = ''"; $url['mod'] = $wpdb->get_var( $sql ); $url['chf'] = 'weekly'; $output .= $this->sitemap_url( $url ); } } /** * Build the <url> tag for a given URL. * * @param array $url Array of parts that make up this entry * @return string */ function sitemap_url( $url ) { if ( isset( $url['mod'] ) ) $date = mysql2date( "Y-m-d\TH:i:s+00:00", $url['mod'] ); else $date = date( 'c' ); $output = "\t<url>\n"; $output .= "\t\t<loc>" . $url['loc'] . "</loc>\n"; $output .= "\t\t<lastmod>" . $date . "</lastmod>\n"; $output .= "\t\t<changefreq>" . $url['chf'] . "</changefreq>\n"; $output .= "\t\t<priority>" . str_replace( ',', '.', $url['pri'] ) . "</priority>\n"; if ( isset( $url['images'] ) && count( $url['images'] ) > 0 ) { foreach ( $url['images'] as $img ) { $output .= "\t\t<image:image>\n"; $output .= "\t\t\t<image:loc>" . esc_html( $img['src'] ) . "</image:loc>\n"; if ( isset( $img['title'] ) ) $output .= "\t\t\t<image:title>" . 
_wp_specialchars( html_entity_decode( $img['title'], ENT_QUOTES, get_bloginfo('charset') ) ) . "</image:title>\n"; if ( isset( $img['alt'] ) ) $output .= "\t\t\t<image:caption>" . _wp_specialchars( html_entity_decode( $img['alt'], ENT_QUOTES, get_bloginfo('charset') ) ) . "</image:caption>\n"; $output .= "\t\t</image:image>\n"; } } $output .= "\t</url>\n"; return $output; } /** * Get the modification date for the last modified post in the post type: * * @param array $post_types Post types to get the last modification date for * @return string */ function get_last_modified( $post_types ) { global $wpdb; if ( !is_array( $post_types ) ) $post_types = array( $post_types ); $result = 0; foreach ( $post_types as $post_type ) { $key = 'lastpostmodified:gmt:' . $post_type; $date = wp_cache_get( $key, 'timeinfo' ); if ( !$date ) { $date = $wpdb->get_var( $wpdb->prepare( "SELECT post_modified_gmt FROM $wpdb->posts WHERE post_status = 'publish' AND post_type = %s ORDER BY post_modified_gmt DESC LIMIT 1", $post_type ) ); if ( $date ) wp_cache_set( $key, $date, 'timeinfo' ); } if ( strtotime( $date ) > $result ) $result = strtotime( $date ); } // Transform to W3C Date format. $result = date( 'c', $result ); return $result; } } global $wpseo_sitemaps; $wpseo_sitemaps = new WPSEO_Sitemaps();

    Read the article

  • When should one use the following: Amazon EC2, Google App Engine, Microsoft Azure and Salesforce.com

    - by vicky21
    I am asking this in very general sense. Both from cloud provider and cloud consumer's perspective. Also the question is not for any specific kind of application (in fact the intention is to know which type of applications/domains can fit into which of the cloud slab -SaaS PaaS IaaS). My understanding so far is: IaaS: Raw Hardware (Processors, Networks, Storage). PaaS: OS, System Softwares, Development Framework, Virtual Machines. SaaS: Software Applications. It would be great if Stackoverflower's can share their understanding and experiences of cloud computing concept. EDIT: Ok, I will put it in more specific way - Amazon EC2: You don't have control over hardware layer. But you can take your choice of OS image, Dev Framework (.NET, J2EE, LAMP) and Application and put it on EC2 hardware. Can you deploy an applications built with Google App Engine or Azure on EC2? Google App Engine: You don't have control over hardware and OS and you get a specific Dev Framework to build your application. Can you take any existing Java or Python application and port it to GAE? Or vice versa, can applications that were built on GAE be taken out of GAE and ported to any Application Server like Websphere or Weblogic? Azure: You don't have control over hardware and OS and you get a specific Dev Framework to build your application. Can you take any existing .NET application and port it to Azure? Or vice versa, can applications that were built on Azure be taken out of Azure and ported to any Application Server like Biztalk?

    Read the article

  • Linking errors when building against Boost Unit Test Framework

    - by Rafid
    I am trying to use Boost Unit Test Framework by building a stand alone library as detailed here: http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/utf/compilation.html So I created a VC library project containing the mentioned files and build it and it was successful. Then I created a test project and referenced the library project I just created, but when I tried to build it, I got the following linking errors: 1>Type.obj : error LNK2019: unresolved external symbol "bool __cdecl boost::test_tools::tt_detail::check_impl(class boost::test_tools::predicate_result const &,class boost::unit_test::lazy_ostream const &,class boost::unit_test::basic_cstring<char const >,unsigned __int64,enum boost::test_tools::tt_detail::tool_level,enum boost::test_tools::tt_detail::check_type,unsigned __int64,...)" (?check_impl@tt_detail@test_tools@boost@@YA_NAEBVpredicate_result@23@AEBVlazy_ostream@unit_test@3@V?$basic_cstring@$$CBD@63@_KW4tool_level@123@W4check_type@123@3ZZ) referenced in function "public: void __cdecl test1::test_method(void)" (?test_method@test1@@QEAAXXZ) 1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::break_memory_alloc(long)" (?break_memory_alloc@debug@boost@@YAXJ@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z) 1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::detect_memory_leaks(bool)" (?detect_memory_leaks@debug@boost@@YAX_N@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z) 1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::attach_debugger(bool)" (?attach_debugger@debug@boost@@YA_N_N@Z) referenced in function "public: int __cdecl boost::detail::system_signal_exception::operator()(unsigned int,struct _EXCEPTION_POINTERS *)" (??Rsystem_signal_exception@detail@boost@@QEAAHIPEAU_EXCEPTION_POINTERS@@@Z) 1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::under_debugger(void)" (?under_debugger@debug@boost@@YA_NXZ) referenced in function "public: int __cdecl boost::execution_monitor::execute(class boost::unit_test::callback0<int> const &)" (?execute@execution_monitor@boost@@QEAAHAEBV?$callback0@H@unit_test@2@@Z) 1>BoostUnitTestFramework.lib(unit_test_main.obj) : error LNK2019: unresolved external symbol "class boost::unit_test::test_suite * __cdecl init_unit_test_suite(int,char * * const)" (?init_unit_test_suite@@YAPEAVtest_suite@unit_test@boost@@HQEAPEAD@Z) referenced in function main 1>C:\Users\Rafid\Workspace\MyPhysics\Builds\VC10\Tests\Debug\Tests.exe : fatal error LNK1120: 6 unresolved externals They seem to be mainly caused by Boost debug library, but I can't see a reason why I should get linking errors putting in mind that Boost debug library only need to be included as header files, rather than linking against as a library! Any ideas?!

    Read the article

  • Installing Grails with Apache Camel plugin

    - by Abdullah Jibaly
    I'm having trouble getting the Apache Camel plugin to run in grails-1.1.1. Here's the steps I took: $ grails create-app camelapp Welcome to Grails 1.1.1 - http://grails.org/ ... $ cd camelapp $ grails run-app ... Running Grails application.. Server running. Browse to http://localhost:8080/camelapp $ grails install-plugin camel ... Camel Route directory was created. Plugin camel-0.2 installed Plug-in provides the following new scripts: ------------------------------------------ grails create-route $ grails run-app ... [groovyc] org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed, Compile error during compilation with javac. [groovyc] /Users/abdullah/.grails/1.1.1/projects/camelapp/plugins/camel-0.2/src/java/org/ix/grails/plugins/camel/ClosureProcessor.java:22: method does not override a method from its superclass [groovyc] @Override [groovyc] ^ ... : Compilation Failed at org.codehaus.groovy.ant.Groovyc.compile(Groovyc.java:807) at org.codehaus.groovy.ant.Groovyc.execute(Groovyc.java:540) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) at org.apache.tools.ant.Task.perform(Task.java:348) at _GrailsCompile_groovy$_run_closure3_closure7.doCall(_GrailsCompile_groovy:102) at _GrailsCompile_groovy$_run_closure3_closure7.doCall(_GrailsCompile_groovy) at _GrailsSettings_groovy$_run_closure10.doCall(_GrailsSettings_groovy:274) at _GrailsSettings_groovy$_run_closure10.call(_GrailsSettings_groovy) at _GrailsCompile_groovy$_run_closure3.doCall(_GrailsCompile_groovy:89) at _GrailsCompile_groovy$_run_closure2.doCall(_GrailsCompile_groovy:55) at _GrailsPackage_groovy$_run_closure2_closure9.doCall(_GrailsPackage_groovy:79) at _GrailsPackage_groovy$_run_closure2_closure9.doCall(_GrailsPackage_groovy) at _GrailsSettings_groovy$_run_closure10.doCall(_GrailsSettings_groovy:274) at _GrailsSettings_groovy$_run_closure10.call(_GrailsSettings_groovy) at _GrailsPackage_groovy$_run_closure2.doCall(_GrailsPackage_groovy:78) at RunApp$_run_closure1.doCall(RunApp.groovy:28) at gant.Gant$_dispatch_closure4.doCall(Gant.groovy:324) at gant.Gant$_dispatch_closure6.doCall(Gant.groovy:334) at gant.Gant$_dispatch_closure6.doCall(Gant.groovy) at gant.Gant.withBuildListeners(Gant.groovy:344) at gant.Gant.this$2$withBuildListeners(Gant.groovy) at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source) at gant.Gant.dispatch(Gant.groovy:334) at gant.Gant.this$2$dispatch(Gant.groovy) at gant.Gant.invokeMethod(Gant.groovy) at gant.Gant.processTargets(Gant.groovy:495) at gant.Gant.processTargets(Gant.groovy:480) Caused by: org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed, Compile error during compilation with javac. /Users/abdullah/.grails/1.1.1/projects/camelapp/plugins/camel-0.2/src/java/org/ix/grails/plugins/camel/ClosureProcessor.java:22: method does not override a method from its superclass @Override ^ ... Compilation error: Compilation Failed $ java -version java version "1.6.0_07" Java(TM) SE Runtime Environment (build 1.6.0_07-b06-153) Java HotSpot(TM) 64-Bit Server VM (build 1.6.0_07-b06-57, mixed mode)

    Read the article

  • Izpack: Creating custom panels

    - by bguiz
    Hi, I am trying to create a custom panel for an IzPack installer. This means that I have to extend IzPanel. However, it appears that if I do this, the extended panel needs to be in the com.izforge.izpack.panels package. Then I found this post, which stipulates that:

    1. As such, you must include installer.jar from the lib folder of IzPack in the build path of your custom panel project.
    2. Your custom panel /must/ extend com.izforge.izpack.installer.IzPanel.
    3. Furthermore, it /must/ reside in the com.izforge.izpack.panels package.
    4. On top of it all, your build jar's name /must/ be the same as the unqualified name of your custom panel class.

    I take issue with the 1st and 4th points. They imply that I have to create an additional JAR file for each custom IzPanel that I create. Also, I would have to modify the IzPack installation by adding these JARs to one of its subdirectories. Is this article outdated (2008) and can it be safely ignored, or is this still true? If not how can I avoid this and simply have the extended IzPanel on the classpath instead? Thank you!

    Read the article

  • How do I debug a crash when I run my garbage-collected app in Rosetta?

    - by Rob Keniger
    I have a Universal app which is targeting 10.5 and which uses garbage collection. I am building for ppc, i386 and x86_64. I don't have access to a physical PowerPC machine so I am trying to use Rosetta to confirm that the PowerPC portion of the app works correctly. However, as soon as the app is launched in Rosetta it immediately crashes with the following crash log: Process: FooApp [91567] Path: /Users/rob/Development/src/FooApp/build/Release 64-bit/FooApp.app/Contents/MacOS/FooApp Identifier: com.companyX.FooApp Version: 0.9 (build d540e05) (2) Code Type: PPC (Translated) Parent Process: launchd [708] Date/Time: 2010-04-09 18:32:23.962 +1000 OS Version: Mac OS X 10.6.3 (10D573) Report Version: 6 Exception Type: EXC_CRASH (SIGTRAP) Exception Codes: 0x0000000000000000, 0x0000000000000000 Crashed Thread: 5 ...snip non-relevant threads... Thread 5 Crashed: 0 libSystem.B.dylib 0x8023656a __pthread_kill + 10 1 libSystem.B.dylib 0x80235e17 pthread_kill + 95 2 com.companyX.FooApp 0xb80bfb30 0xb8000000 + 785200 3 com.companyX.FooApp 0xb80c0037 0xb8000000 + 786487 4 com.companyX.FooApp 0xb80dd8e8 0xb8000000 + 907496 5 com.companyX.FooApp 0xb8145397 spin_lock_wrapper + 1791 6 com.companyX.FooApp 0xb801ceb7 0xb8000000 + 118455 I have used the Apple docs on debugging translated apps and the information on this page to attach gdb to the app when it's running in Rosetta. The app immediately breaks into the debugger upon launch: Program received signal SIGTRAP, Trace/breakpoint trap. [Switching to thread 15107] 0x9151fdd4 in auto_fatal () (gdb) bt #0 0x9151fdd4 in auto_fatal () #1 0x91536d84 in Auto::Thread::get_register_state () #2 0x915372f8 in Auto::Thread::scan_other_thread () #3 0x91529be4 in Auto::Zone::scan_registered_threads () #4 0x91539114 in Auto::MemoryScanner::scan_thread_ranges () #5 0x9153b000 in Auto::MemoryScanner::scan () #6 0x9153049c in Auto::Zone::collect () #7 0x915198f4 in auto_collect_internal () #8 0x9151a094 in auto_collection_work () #9 0x96687434 in _dispatch_call_block_and_release () #10 0x9668912c in _dispatch_queue_drain () #11 0x96689350 in _dispatch_queue_invoke () #12 0x966895c0 in _dispatch_worker_thread2 () #13 0x966896fc in _dispatch_worker_thread () #14 0x965a97e8 in _pthread_body () (gdb) I have no idea where to start with this. It looks like the Garbage Collector is failing very badly. Are garbage-collected PowerPC apps not supported in Rosetta? I can't see any mention of this limitation in the docs if so. Does anyone have any ideas?

    Read the article

  • Can YAML have inheritance?

    - by Jason
    This question involves a lot of symfony, but it should be easy enough to follow for someone who only knows YAML and not symfony. My symfony models come from a three-step process: first, I create the tables in MySQL; second, I run a symfony command (symfony doctrine:build-schema) to convert my table structure into a YAML file; third, I run another symfony command (symfony doctrine:build-model) to convert the YAML file into PHP code. Here's the problem: there are some tables in the database that I don't want to end up in my symfony code. For example, let's say I have two tables: one called my_table and another called wordpress. The YAML file I end up with might look like this:

        MyTable:
          connection: doctrine
          tableName: my_table
        Wordpress:
          connection: doctrine
          tableName: wordpress

    That's great, except the wordpress table has nothing to do with my symfony models. The result is that every single time I make a change to my database and regenerate this YAML file, I have to manually remove wordpress. It's annoying! I'd like to be able to create a file called baseConfig.php or something that looks like this:

        $config = array(
          'MyTable' => array(
            'connection' => 'doctrine',
            'tableName' => 'my_table',
          ),
          'Wordpress' => array(
            'connection' => 'doctrine',
            'tableName' => 'wordpress',
          ),
        );

    And then I could have a separate file called config.php or something where I could make modifications to the base config:

        unset($config['Wordpress']);

    So my question is: is there any way to convert YAML into executable PHP code (as opposed to loading YAML into PHP code like sfYaml::load() does) to achieve this sort of thing? Or is there maybe some other way to achieve YAML inheritance? Thanks, Jason

    Read the article

  • Public key of Android project and keystore created in Eclipse?

    - by user578056
    I created an Android project using Eclipse (under Windows, FWIW) and let Eclipse create the keypair during the Export Android Application process. I successfully used Eclipse to make a signed release build that is now on the Market. Now I want to use ProGuard, which I believe means using Ant instead of Eclipse to build. It was a pain, but the Ant build works in both debug and release, until it tries to sign the APK. I get:

        [signjar] jarsigner: Certificate chain not found for: redacted.
        redacted must reference a valid KeyStore key entry containing a private key and corresponding public key certificate chain.

    keytool -list -keystore redacted gives me:

        Keystore type: JKS
        Keystore provider: SUN

        Your keystore contains 1 entry

        redacted, Jan 16, 2011, PrivateKeyEntry,
        Certificate fingerprint (MD5): BD:0F:70:C1:39:F5:FE:5B:BC:CD:89:0B:C8:66:95:E0

    Which brings me to the actual question: where is my public key? I have some sort of public key on my Android Market profile, but is that the pair for my private key? If so, how do I store it in the keystore so that jarsigner will work?

    Read the article

  • Visual studio 2010 setup project problem.

    - by Guru
    Hi there, I've made an application that uses .NET framework 3.5 SP1 and SQL Server 2008 Express. Application is fine and now i'm going to to make a setup project for this. When I first build my setup it was fine as all the prerequisites were not included in setup. But I want my setup to install .NET 3.5 SP1 and SQL SERVER 2008 Express also. So for this I've changed the options in setup project's properties from "Download prerequisites from following location" to "Download prerequisites from the same location as my application". In addition to that I've also checked the options above like .NET 3.5 SP1 and SQL Server 2008 Express etc. After doing all this I build my project again. This time I'm Getting 57 Errors. Error 1 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 2 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 3 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 4 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup As the question will become too large so I'm just pasting 3 errors but there are totally 57 errors. Please help me . Thanks in advance Guru

    Read the article

  • Dealing with asynchronous control structures (Fluent Interface?)

    - by Christophe Herreman
    The initialization code of our Flex application does a series of asynchronous calls to check user credentials, load external data, connect to a JMS topic, etc. Depending on the context the application runs in, some of these calls are not executed, or are executed with different parameters. Since all of these calls happen asynchronously, the code controlling them is hard to read, understand, maintain and test. For each call, we need some callback mechanism in which we decide what call to execute next. I was wondering if anyone had experimented with wrapping these calls in executable units and having a Fluent Interface (FI) that would connect and control them. Off the top of my head, the code might look something like:

        var asyncChain:AsyncChain = execute(LoadSystemSettings)
            .execute(LoadAppContext)
            .if(IsAutologin)
                .execute(AutoLogin)
            .else()
                .execute(ShowLoginScreen)
            .etc;
        asyncChain.execute();

    The AsyncChain would be an execution tree, built with the FI (and we could of course also build one without an FI). This might be an interesting idea for environments that run in a single-threaded model like the Flash Player, Silverlight, JavaFX, ... Before I dive into the code to try things out, I was hoping to get some feedback.
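
    For what it's worth, here is a minimal sketch of that fluent-chain idea, written in C# only to keep the example self-contained (the original is ActionScript, and every type and step name below is invented for illustration). Each step is an executable unit that signals completion through a callback, and the chain runs them one after another, skipping conditional steps whose predicate is false at run time.

        // An illustrative sketch (in C#, not ActionScript) of the fluent-chain idea described
        // above: each step is an async unit, the chain is built fluently, and steps run one
        // after another via callbacks. All type and step names here are made up for the example.
        using System;
        using System.Collections.Generic;

        class AsyncChain
        {
            // Each step receives a "done" callback it must invoke when its async work completes.
            private readonly List<Action<Action>> steps = new List<Action<Action>>();

            public AsyncChain Execute(Action<Action> step)
            {
                steps.Add(step);
                return this; // fluent: return the chain itself
            }

            public AsyncChain ExecuteIf(Func<bool> condition, Action<Action> step)
            {
                // Wrap the step so it is skipped when the condition is false at run time.
                steps.Add(done => { if (condition()) step(done); else done(); });
                return this;
            }

            public void Run(int index = 0)
            {
                if (index >= steps.Count) return;
                steps[index](() => Run(index + 1)); // continue only when the current step signals completion
            }
        }

        static class Program
        {
            static void Main()
            {
                bool isAutoLogin = false; // assumption for the example
                new AsyncChain()
                    .Execute(done => { Console.WriteLine("Load system settings"); done(); })
                    .Execute(done => { Console.WriteLine("Load app context"); done(); })
                    .ExecuteIf(() => isAutoLogin, done => { Console.WriteLine("Auto login"); done(); })
                    .ExecuteIf(() => !isAutoLogin, done => { Console.WriteLine("Show login screen"); done(); })
                    .Run();
            }
        }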

    Read the article

  • C++ resize a docked Qt QDockWidget programmatically?

    - by Zac
    I've just started working on a new C++/Qt project. It's going to be an MDI-based IDE with docked widgets for things like the file tree, object browser, compiler output, etc. One thing is bugging me so far though: I can't figure out how to programmatically make a QDockWidget smaller. For example, this snippet creates my bottom dock window, "Build Information":

        m_compilerOutput = new QTextEdit;
        m_compilerOutput->setReadOnly(true);
        dock = new QDockWidget(tr("Build Information"), this);
        dock->setWidget(m_compilerOutput);
        addDockWidget(Qt::BottomDockWidgetArea, dock);

    When launched, my program looks like this: http://yfrog.com/6ldreamidep (bear in mind the early stage of development). However, I want it to appear like this: http://yfrog.com/20dreamide2p. I can't seem to get this to happen. The Qt Reference on QDockWidget says this:

        Custom size hints, minimum and maximum sizes and size policies should be implemented in the child widget. QDockWidget will respect them, adjusting its own constraints to include the frame and title. Size constraints should not be set on the QDockWidget itself, because they change depending on whether it is docked.

    Now, this suggests that one way of going about this would be to sub-class QTextEdit and override the sizeHint() method. However, I would prefer not to do that just for this purpose, nor have I tried it to confirm that it is a working solution. I have tried calling dock->resize(m_compilerOutput->width(), m_compilerOutput->minimumHeight()) and calling m_compilerOutput->setSizePolicy() with each of its options... nothing so far has affected the size. Like I said, I would prefer a simple solution in a few lines of code to having to create a sub-class just to change sizeHint(). All suggestions are appreciated.

    Read the article

  • How does lucene index documents?

    - by Mehdi Amrollahi
    Hello, I have read some documentation about Lucene, including the talk at this link (http://lucene.sourceforge.net/talks/pisa), but I don't really understand how Lucene indexes documents or which algorithms it uses for indexing. The above link says Lucene uses this incremental algorithm for indexing:

        maintain a stack of segment indices
        create index for each incoming document
        push new indexes onto the stack
        let b=10 be the merge factor; M=8

        for (size = 1; size < M; size *= b) {
            if (there are b indexes with size docs on top of the stack) {
                pop them off the stack;
                merge them into a single index;
                push the merged index onto the stack;
            } else {
                break;
            }
        }

    How does this algorithm provide optimized indexing? Does Lucene use a B-tree or some other similar algorithm for indexing, or does it have its own particular algorithm? Thank you for reading my post.
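
    To make the quoted merge pattern more concrete, here is a small self-contained simulation of it in C#. It is a sketch of the pseudocode above, not Lucene's real implementation, and the loop bound M is deliberately enlarged here (an assumption, purely so the cascade is visible); each "index" is represented only by its document count.

        // A small simulation of the incremental merge policy quoted above (a sketch of the
        // pseudocode, not Lucene's real code). Whenever b indexes of the same size sit on top
        // of the stack they are merged into one, which keeps the number of segments small.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class MergeSimulation
        {
            const int B = 10;        // merge factor from the quoted algorithm
            const long M = 1000000;  // loop bound; the quoted pseudocode uses M=8 - a larger value is used here so the simulation cascades

            static readonly List<long> Segments = new List<long>(); // doc counts, top of stack = end of list

            static void AddDocument()
            {
                Segments.Add(1); // a new single-document index is pushed for every incoming document

                for (long size = 1; size < M; size *= B)
                {
                    // Are there B indexes of exactly `size` docs on top of the stack?
                    if (Segments.Count >= B && Segments.Skip(Segments.Count - B).All(s => s == size))
                    {
                        Segments.RemoveRange(Segments.Count - B, B);
                        Segments.Add(size * B); // merge them into a single larger index
                    }
                    else
                    {
                        break;
                    }
                }
            }

            static void Main()
            {
                for (int i = 0; i < 12345; i++) AddDocument();
                // With 12345 docs and b=10 the stack mirrors the decimal digits:
                // one 10000-doc segment, two 1000s, three 100s, four 10s and five 1s.
                Console.WriteLine(string.Join(", ", Segments));
            }
        }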

    Read the article

  • Resolving MSBuild 4.0 warnings

    - by Hadi Eskandari
    I've upgraded a solution to use MSBuild 4.0. It compiles but I get lots of warnings, for example: "T:\projects\Castle.Core\buildscripts\Build.proj" (Package target) (1) - "T:\projects\Castle.Core\Castle.Core-vs2008.sln" (Build target) (2:2) - "T:\projects\Castle.Core\src\Castle.DynamicProxy.Tests\Castle.DynamicProxy.Tests-vs2008.csproj" (default target) (3:2) - D:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(847,9): warning MSB3644: The reference assemblies for framework ".NETFramework,Version=v4.0.30319" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend. [T:\projects\Castle.Core\src\Castle.DynamicProxy.Tests\Castle.DynamicProxy.Tests-vs2008.csproj] How can I fix these warnings? It is related to .NET 4.0 Multitargeting pack or SDK, but there's no SDK for .NET 4.0 AFAIK and Multi-Target pack can not be installed separatly. Any ideas would be appreciated.

    Read the article

  • GH-Unit for unit testing Objective-C code, why am I getting linking errors?

    - by djhworld
    Hi there, I'm trying to dive into the quite frankly terrible world of unit testing using Xcode (such a convoluted process it seems.) Basically I have this test class, attempting to test my Show.h class #import <GHUnit/GHUnit.h> #import "Show.h" @interface ShowTest : GHTestCase { } @end @implementation ShowTest - (void)testShowCreate { Show *s = [[Show alloc] init]; GHAssertNotNil(s,@"Was nil."); } @end However when I try to build and run my tests it moans with this error: - Undefined symbols: "_OBJC_CLASS_$_Show", referenced from: __objc_classrefs__DATA@0 in ShowTest.o ld: symbol(s) not found collect2: ld returned 1 exit status Now I'm presuming this is a linking error. I tried following every step in the instructions located here: - http://github.com/gabriel/gh-unit/blob/master/README.md And step 2 of these instructions confused me: - In the Target 'Tests' Info window, General tab: Add a linked library, under Mac OS X 10.5 SDK section, select GHUnit.framework Add a linked library, select your project. Add a direct dependency, and select your project. (This will cause your application or framework to build before the test target.) How am I supposed to add my project to the linked library list when all it accepts it .dylib, .framework and .o files. I'm confused! Thanks for any help that is received.

    Read the article

  • Creating a dynamic linq query

    - by Bas
    I have the following query: from p in dataContext.Repository<IPerson>() join spp1 in dataContext.Repository<ISportsPerPerson>() on p.Id equals spp1.PersonId join s1 in dataContext.Repository<ISports>() on spp1.SportsId equals s1.Id join spp2 in dataContext.Repository<ISportsPerPerson>() on p.Id equals spp2.PersonId join s2 in dataContext.Repository<ISports>() on spp2.SportsId equals s2.Id where s1.Name == "Soccer" && s2.Name == "Tennis" select new { p.Id }; It selects all the person who play Soccer and Tennis. On runtime the user can select other tags to add to the query, for instance: "Hockey". now my question is, how could I dynamically add "Hockey" to the query? If "Hockey" is added to the query, it would look like this: from p in dataContext.Repository<IPerson>() join spp1 in dataContext.Repository<ISportsPerPerson>() on p.Id equals spp1.PersonId join s1 in dataContext.Repository<ISports>() on spp1.SportsId equals s1.Id join spp2 in dataContext.Repository<ISportsPerPerson>() on p.Id equals spp2.PersonId join s2 in dataContext.Repository<ISports>() on spp2.SportsId equals s2.Id join spp3 in dataContext.Repository<ISportsPerPerson>() on p.Id equals spp3.PersonId join s3 in dataContext.Repository<ISports>() on spp3.SportsId equals s3.Id where s1.Name == "Soccer" && s2.Name == "Tennis" && s3.Name == "Hockey" select new { p.Id }; It would be preferable if the query is build up dynamically like: private void queryTagBuilder(List<string> tags) { IDataContext dataContext = new LinqToSqlContext(new L2S.DataContext()); foreach(string tag in tags) { //Build the query? } } Anyone has an idea on how to set this up correctly? Thanks in advance!
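
    One way to avoid adding a join pair for every extra sport is to compose the filter in a loop, narrowing the same IQueryable once per tag (as sketched below). The sketch uses small stand-in classes and LINQ to Objects so it is self-contained; the real repositories, and whether a given LINQ provider can translate the nested Any calls, are outside what the question shows, so treat it as an illustration of the loop shape rather than a drop-in answer.

        // A hedged sketch of composing the "plays all of these sports" filter per tag.
        // The entity classes below are minimal stand-ins for IPerson / ISportsPerPerson / ISports.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Person { public int Id; public string Name; }
        class SportsPerPerson { public int PersonId; public int SportsId; }
        class Sports { public int Id; public string Name; }

        static class DynamicTagQuery
        {
            // Returns the ids of persons that play every sport named in `tags`,
            // narrowing the query once per tag instead of adding a join pair per sport.
            static IEnumerable<int> PersonIdsPlayingAll(
                IQueryable<Person> persons,
                IQueryable<SportsPerPerson> sportsPerPerson,
                IQueryable<Sports> sports,
                IEnumerable<string> tags)
            {
                foreach (string tag in tags)
                {
                    string sportName = tag; // fresh capture per iteration (matters before C# 5)
                    persons = persons.Where(p =>
                        sportsPerPerson.Any(spp =>
                            spp.PersonId == p.Id &&
                            sports.Any(s => s.Id == spp.SportsId && s.Name == sportName)));
                }
                return persons.Select(p => p.Id).ToList();
            }

            static void Main()
            {
                var sports = new[] { new Sports { Id = 1, Name = "Soccer" }, new Sports { Id = 2, Name = "Tennis" }, new Sports { Id = 3, Name = "Hockey" } }.AsQueryable();
                var persons = new[] { new Person { Id = 1, Name = "A" }, new Person { Id = 2, Name = "B" } }.AsQueryable();
                var spp = new[]
                {
                    new SportsPerPerson { PersonId = 1, SportsId = 1 },
                    new SportsPerPerson { PersonId = 1, SportsId = 2 },
                    new SportsPerPerson { PersonId = 1, SportsId = 3 },
                    new SportsPerPerson { PersonId = 2, SportsId = 1 },
                }.AsQueryable();

                // Person 1 plays all three sports, person 2 only soccer, so this prints "1".
                foreach (int id in PersonIdsPlayingAll(persons, spp, sports, new List<string> { "Soccer", "Tennis", "Hockey" }))
                    Console.WriteLine(id);
            }
        }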

    Read the article

  • Deploy multiple instances of an EAR (representing versions) to Glassfish

    - by Thorbjørn Ravn Andersen
    I basically want to be able to deploy multiple versions of the same EAR file to the same server (Glassfish instance?) , and have a unique path to each version separating them. From my reading on this it appears that multiple EARs deploy to the root of the web server namespace so that they can coexist if they do not have colliding context-root's of WAR's. In my case I'd rather have that instead of everything going under "/", I'd like to be able to brand a given EAR-file build to ALWAYS deploy under a given path like "/foo-20100319" or "/foo-CUSTOMER-20010101". This can easily be done with a single WAR file just by renaming it. I do not need or want them to disturb each other. It is my understanding that this remapping is outside the scope of the application.xml file, so I found that http://docs.sun.com/app/docs/doc/820-7693/beayr?a=view says that I can specify web-uri and context-root, but I am not certain that what I wish to do, can be specified with these in Glassfish. How should I approach this? I have full control over the build process. (I have found http://stackoverflow.com/questions/877390/deploying-multiple-java-web-apps-to-glassfish-in-one-go but I am not certain how to apply this to what I need).

    Read the article

  • NHibernate.Search - async mode

    - by Atul
    Hi, I am using NHibernate Lucene search in my project (Lucene.Net.dll v2.3.1.3, NHibernate.dll v2.1.0.4000). At this point I am trying to use the async option for indexing, with the following settings:

        config.SetProperty(NHibernate.Search.Environment.WorkerExecution, "async");
        config.SetProperty(NHibernate.Search.Environment.WorkerThreadPoolSize, "1");
        config.SetProperty(NHibernate.Search.Environment.WorkerWorkQueueSize, "5000");

    Questions: 1) My initial index was not built with this option; when I used these settings for the first time, I got an error saying NHibernate.Search.dll was not found. After I deleted the existing index and started over, it went fine. Do we need to rebuild indexes whenever we change config settings like the above? 2) How should the size of the index be interpreted? Initially my index was about 400MB (built over the last few months), which I deleted. Later, when I reindexed, the size of the index went down to 5MB! Search appears to be alright after limited testing, but such a change looked a bit scary. Should we delete/rebuild indexes once in a while, and is it normal for the size to change this drastically? 3) Are my settings above OK? When I had WorkerThreadPoolSize=5, I once got a Dr Watson kind of error. Please advise on best practices for using the async configuration for search. Regards, Atul

    Read the article

  • Feasibility of using Silverlight for web and windows client with common code base for data intensive

    - by Kabeer
    Hello. Recently in a conversation, someone suggested me to make use of Silverlight if I am targeting a web client and a windows client for the same application. This will cut down my effort for supporting the contrast in both presentation layers. Mine is a product, that will be deployed in enterprises. Both web and windows clients are desirable. With the above context, I have few queries: Is it advisable to adopt the recommended approach and whether this approach is becoming a trend? Besides, some configuration & deployment tweaking, will this significantly reduce effort on the presentation layer? Is there a possibility that my future prospects (for this product) will resist Silverlight footprint? Will I be able to make use of the ASP.Net MVC pattern? Will there be any performance implication for the web client? Will Silverlight support incremental load of controls? If my back-end includes SSRS, will I be able to harness all its front end features with Silverlight? Will I be able to support additional devices with same code base in future? Mine is a very data intensive application from both, data entry and reporting perspective. Is it advisable to use 3rd party controls (like Telerik) for improved user experience and developer productivity? Are their any professional quality open source Silverlight controls (library) available? Further, I seek information of best practices in the context I shared above.

    Read the article

  • xcodebuild + iPhone fail under ssh with Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport'

    - by mamcx
    I'm triying to compile my iPhone app from ssh. This is for my build tool that run in another machine. The base sdk is iPhone Device 3.0. The error is : "Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport'" However, executing from the regular terminal run ok. Also directly from xcode. This is the log: [trtrrtrtr@mac-pro-de-trtrr-trtr ~/mamcx/projects/JhonSell/iPhone]$ xcodebuild -target BestSeller -configuration Debug=== BUILDING NATIVE TARGET Three20 OF PROJECT Three20 WITH CONFIGURATION Debug === Checking Dependencies... No architectures to compile for (ONLY_ACTIVE_ARCH=YES, active arch=armv6, VALID_ARCHS=i386). 2010-04-27 16:16:50.369 xcodebuild[1168:4b1b] Error loading /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice: dlopen(/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice, 265): no suitable image found. Did find: /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice: GC capability mismatch 2010-04-27 16:16:50.371 xcodebuild[1168:4b1b] Exception caught: Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport' 2010-04-27 16:16:50.373 xcodebuild[1168:4b1b] Error loading /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice: dlopen(/Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice, 265): no suitable image found. Did find: /Developer/Platforms/iPhoneOS.platform/Developer/Library/Xcode/Plug-ins/iPhoneRemoteDevice.xcodeplugin/Contents/MacOS/iPhoneRemoteDevice: GC capability mismatch 2010-04-27 16:16:50.373 xcodebuild[1168:4b1b] Exception caught: Couldn't load plug-in 'com.apple.Xcode.iPhoneSupport' ** BUILD FAILED **

    Read the article

  • iPhone app: How to implement in-app purchased game levels

    - by Wonderflonium
    So, I understand that it's possible to set up in-app purchases for iPhone apps to sell non-consumables like game levels. I understand the logic behind the purchase part, but what I don't understand is how to deliver the new game level. For example: I build an app that contains the first level, and users purchase additional levels. Is it better to build all the other levels into the app, and whenever they purchase a level, unlock it with a plist entry or something? That doesn't seem very update-able to me; every time I come up with a new level, I'd have to update the app. So what I don't understand is how to package up a level and download it as a separate entity that can be accessed by the game. Would the level just be some XML with images in a ZIP folder or something? How does the level get added to the game? What are the best practices for this type of thing? I Googled and found NOTHING about this. I'm a little bit confused by the concept and any help would be appreciated. I'm not looking for someone to write the game for me, I just need to be pointed in the right direction so I can develop it on my own.

    Read the article

  • Parameterized Queries /Without/ using queries.

    - by Aren B
    I've got a bit of a poor situation here. I'm stuck working with commerce server, which doesn't do a whole lot of sanitization/parameterization. I'm trying to build up my queries to prevent SQL Injection, however some things like the searches / where clause on the search object need to be built up, and there's no parameterized interface. Basically, I cannot parameterize, however I was hoping to be able to use the same engine to BUILD my query text if possible. Is there a way to do this, aside from writing my own parameterizing engine which will probably still not be as good as parameterized queries? Update: Example The where clause has to be built up as a sql query where clause essentially: CatalogSearch search = /// Create Search object from commerce server search.WhereClause = string.Format("[cy_list_price] > {0} AND [Hide] is not NULL AND [DateOfIntroduction] BETWEEN '{1}' AND '{2}'", 12.99m, DateTime.Now.AddDays(-2), DateTime.Now); *Above Example is how you refine the search, however we've done some testing, this string is NOT SANITIZED. This is where my problem lies, because any of those inputs in the .Format could be user input, and while i can clean up my input from text-boxes easily, I'm going to miss edge cases, it's just the nature of things. I do not have the option here to use a parameterized query because Commerce Server has some insane backwards logic in how it handles the extensible set of fields (schema) & the free-text search words are pre-compiled somewhere. This means I cannot go directly to the sql tables What i'd /love/ to see is something along the lines of: SqlCommand cmd = new SqlCommand("[cy_list_price] > @MinPrice AND [DateOfIntroduction] BETWEEN @StartDate AND @EndDate"); cmd.Parameters.AddWithValue("@MinPrice", 12.99m); cmd.Parameters.AddWithValue("@StartDate", DateTime.Now.AddDays(-2)); cmd.Parameters.AddWithValue("@EndDate", DateTime.Now); CatalogSearch search = /// constructor search.WhereClause = cmd.ToSqlString();
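
    Since real parameters are not available here, one stop-gap is a tiny builder that keeps the familiar placeholder syntax but renders conservatively escaped literals into the final string. The sketch below is only an illustration of that idea (the class and method names are invented, and it deliberately mirrors the ToSqlString() wish above); it reduces, but does not eliminate, the injection risk of building SQL text by hand.

        // A hedged sketch of a placeholder-based where-clause builder for APIs that only accept
        // a raw SQL string. Illustrative only - string building can never give the same
        // guarantees as real parameterized queries.
        using System;
        using System.Collections.Generic;
        using System.Globalization;

        class WhereClauseBuilder
        {
            private readonly string template;
            private readonly Dictionary<string, string> values = new Dictionary<string, string>();

            public WhereClauseBuilder(string template) { this.template = template; }

            public WhereClauseBuilder Add(string name, decimal value)
            {
                values[name] = value.ToString(CultureInfo.InvariantCulture);
                return this;
            }

            public WhereClauseBuilder Add(string name, DateTime value)
            {
                // ISO 8601 keeps the literal unambiguous regardless of server locale.
                values[name] = "'" + value.ToString("yyyy-MM-ddTHH:mm:ss", CultureInfo.InvariantCulture) + "'";
                return this;
            }

            public WhereClauseBuilder Add(string name, string value)
            {
                // Double any embedded single quotes - the minimum escaping for a string literal.
                values[name] = "'" + value.Replace("'", "''") + "'";
                return this;
            }

            public string ToSqlString()
            {
                // Naive textual replacement; fine as long as placeholder names are distinct
                // and no placeholder is a prefix of another.
                string result = template;
                foreach (var pair in values)
                    result = result.Replace(pair.Key, pair.Value);
                return result;
            }
        }

        static class Example
        {
            static void Main()
            {
                string where = new WhereClauseBuilder(
                        "[cy_list_price] > @MinPrice AND [DateOfIntroduction] BETWEEN @StartDate AND @EndDate")
                    .Add("@MinPrice", 12.99m)
                    .Add("@StartDate", DateTime.Now.AddDays(-2))
                    .Add("@EndDate", DateTime.Now)
                    .ToSqlString();

                Console.WriteLine(where); // assign to search.WhereClause in the real code
            }
        }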

    Read the article

  • Replacing a Namespace with XSLT

    - by er4z0r
    Hi I want to work around a 'bug' in certain RSS-feeds, which use an incorrect namespace for the mediaRSS module. I tried to do it by manipulating the DOM programmatically, but using XSLT seems more flexible to me. Example: <media:thumbnail xmlns:media="http://search.yahoo.com/mrss" url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" /> <media:thumbnail url="http://www.suedkurier.de/storage/pic/dpa/infoline/brennpunkte/4311018_0_merkelxI_24280028_original.large-4-3-800-199-0-3131-2202.jpg" /> Where the namespace must be http://search.yahoo.com/mrss/ (mind the slash). This is my stylesheet: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="//*[namespace-uri()='http://search.yahoo.com/mrss']"> <xsl:element name="{local-name()}" namespace="http://search.yahoo.com/mrss/" > <xsl:apply-templates select="@*|*|text()" /> </xsl:element> </xsl:template> </xsl:stylesheet> Unfortunately the result of the transformation is an invalid XML and my RSS-Parser (ROME Library) does not parse the feed anymore: java.lang.IllegalStateException: Root element not set at org.jdom.Document.getRootElement(Document.java:218) at com.sun.syndication.io.impl.RSS090Parser.isMyType(RSS090Parser.java:58) at com.sun.syndication.io.impl.FeedParsers.getParserFor(FeedParsers.java:72) at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:273) at com.sun.syndication.io.WireFeedInput.build(WireFeedInput.java:251) ... 8 more What is wrong with my stylesheet?

    Read the article

  • Java & Maven generating -and using- my own archetype

    - by Random
    Hello again! I have been busy in my project creating a webapp (in Struts) that manages Maven, using maven-2.2.1-uber.jar. The problem comes when the boss says it has to use some archetypes the company has created (so no predefined archetypes for you, naughty boy!). So OK, I use the -DarchetypeRepository option (with ServletWrapper I get my complete web address, because the repository will be inside the app), and the log seems to find it, but then the build fails -miserably- with this little text: 'Build Failure - The defined artifact is not an archetype', as simple as that. Of course I have a lot of INFO lines that say absolutely nothing related. I have read the Maven definitive guide searching for some kind of help, but it has been disappointing at best. My thought is that maybe, somewhere in the folder tree, I am missing some XML that actually tells Maven that my pom.xml is an archetype, not a project. But I really, really can't find anything on the net or in the manuals that explains plainly how archetype:generate (with special parameters) works and where I have to put every folder and/or file. So, just to say my thoughts aloud (and hopefully you understand what I am trying to ask): I have a template where I make some XML changes (variables, etc.), then I have to call Maven and do an archetype:generate with a variable project. The problem seems to be that my current configuration doesn't like what I am doing. After generating from the archetype, which will luckily create some directory trees and leave me a pom.xml somewhere, I still have to do some variable changes and more XML handling, so it would be very kind of Maven not to destroy anything in this process. Any ideas why this Maven thing is not happily-ever-after, assuming that my archetype is definitely an archetype? Although I think the code is OK, it could be wrong, as I am using the uber jar and calling the actual CSMavenCli.main(String[], ClassWorld), but I don't think that is the issue this time. Thanks and all! :) Random.

    Read the article

  • F# and .Net versions

    - by rwallace
    I'm writing a program in F# at the moment, which I specified in the Visual Studio project setup to target .Net 3.5, this being the highest offered, on the theory that I might as well get the best available. Then I tried just now running the compiled program on an XP box, not expecting it to work, but just to see what would happen. Unsurprisingly I just got an error message demanding an appropriate version of the framework, but surprisingly it wasn't 3.5 it demanded, but 2.0.50727. An additional puzzle is the version of MSBuild I'm using to compile the release version of the program, which I found in the framework 3.5 directory but claims to be framework 2.0 and build engine 3.5. I just guessed it was the right version of MSBuild to use because it seemed to correspond with the highest framework version F# seems to be able to target, but should I be using a different version? Anyone have any idea what's going on? C:\Windows>dir/s msbuild.exe Volume in drive C is OS Volume Serial Number is 0422-C2D0 Directory of C:\Windows\Microsoft.NET\Framework\v2.0.50727 27/07/2008 19:03 69,632 MSBuild.exe 1 File(s) 69,632 bytes Directory of C:\Windows\Microsoft.NET\Framework\v3.5 29/07/2008 23:40 91,136 MSBuild.exe 1 File(s) 91,136 bytes Directory of C:\Windows\Microsoft.NET\Framework\v4.0.30319 18/03/2010 16:47 132,944 MSBuild.exe 1 File(s) 132,944 bytes Directory of C:\Windows\winsxs\x86_msbuild_b03f5f7f11d50a3a_6.0.6000.16386_none_815e96e1b0e084be 20/10/2006 02:14 69,632 MSBuild.exe 1 File(s) 69,632 bytes Directory of C:\Windows\winsxs\x86_msbuild_b03f5f7f11d50a3a_6.0.6000.16720_none_81591d45b0e55432 27/07/2008 19:00 69,632 MSBuild.exe 1 File(s) 69,632 bytes Directory of C:\Windows\winsxs\x86_msbuild_b03f5f7f11d50a3a_6.0.6000.20883_none_6a9133e9ca879925 27/07/2008 18:55 69,632 MSBuild.exe 1 File(s) 69,632 bytes C:\Windows>cd Microsoft.NET\Framework\v3.5 C:\Windows\Microsoft.NET\Framework\v3.5>msbuild /ver Microsoft (R) Build Engine Version 3.5.30729.1 [Microsoft .NET Framework, Version 2.0.50727.3053] Copyright (C) Microsoft Corporation 2007. All rights reserved. 3.5.30729.1

    Read the article
