Search Results

Search found 865 results on 35 pages for 'binaries'.

Page 25/35

  • Antenna Aligner Part 7: Connecting the dots

    - by Chris George
    The app is basically ready, so I eagerly started to sort out creating the application entry in iTunes Connect. It's mostly intuitive actually, although I did have to create yet another icon for iTunes, sized 512x512 pixels; damn lucky I did the original graphics as vectors! It took me longer to write the application description than anything else, I'm so not a tech author! I didn't like the way you have to 'make up' an SKU (Stock Keeping Unit) number. I had to do some googling to find out that it really doesn't matter what it is! It should be more obvious what to do from the actual website itself. That aside, the rest of it was actually fairly straightforward. As well as the details of the application, iPhone and iPad screenshots were also required. This posed somewhat of a problem. The iPhone ones were easy (as I have one!), but I do not (yet) own an iPad. So I thought I'd leave the iPad screenshots out for now. Once the application details were sorted, I moved onto the rights and pricing. At the start of the project I had made the decision that I wouldn't charge any more than the lowest amount, £0.59. I believe there is a market for this, but as my first foray into app development I didn't want to take the mick. I did realise, however, that I had built my app with a developer certificate and provisioning profile. This was fairly quickly corrected, and again Nomad made it very easy to switch over to the distribution certificate and provisioning profile. With a sense of excitement I cracked open iTunes Connect and clicked the upload button... ...slight snag... When the Nomad project was started, Apple allowed uploads of these binaries via iTunes Connect. But this is no longer possible; the only upload path is via the Application Loader available from the Apple Developer program. This itself has one limitation: it only runs on a Mac! D'OH!!! Actually my language was somewhat more colourful when this fact came to light. After picking my laptop up off the floor and putting it back together... ok, only joking, but I did nearly throw it out of frustration!... I started to consider the options: I briefly entertained the idea of buying a cheap Mac from eBay... no, that defeats the whole object of what I'm doing, plus my wife wouldn't be impressed. There are some guys out there on the interweb who will upload your app for a small fee... but I don't really like the idea of giving some faceless email address my Apple developer login details, as well as my app binary! Find some willing friend with a Mac who would kindly let me use it... obviously this is the only sensible option. In the meantime, I informed the Nomad team about this slight 'issue' and they are currently investigating possible solutions...

    Read the article

  • Stepping outside Visual Studio IDE [Part 2 of 2] with Mono 2.6.4

    - by mbcrump
    Continuing part 2 of my Stepping outside the Visual Studio IDE is the open-source Mono Project. Mono is a software platform designed to allow developers to easily create cross-platform applications. Sponsored by Novell (http://www.novell.com/), Mono is an open source implementation of Microsoft's .NET Framework based on the ECMA standards for C# and the Common Language Runtime. A growing family of solutions and an active and enthusiastic contributing community is helping position Mono to become the leading choice for development of Linux applications. So, to clarify: you can use Mono to develop .NET applications that will run on Linux, Windows or Mac. It's basically an IDE with its roots in Linux. Let's first look at compatibility: Compatibility If you already have an application written in .NET, you can scan your application with the Mono Migration Analyzer (MoMA) to determine if your application uses anything not supported by Mono. The current release version of Mono is 2.6 (released December 2009). The easiest way to describe what Mono currently supports is: everything in .NET 3.5 except WPF and WF, with limited WCF. Here is a slightly more detailed view, by .NET framework version: Implemented C# 3.0 System.Core LINQ ASP.NET 3.5 ASP.NET MVC C# 2.0 (generics) Core Libraries 2.0: mscorlib, System, System.Xml ASP.NET 2.0 - except WebParts ADO.NET 2.0 Winforms/System.Drawing 2.0 - does not support right-to-left C# 1.0 Core Libraries 1.1: mscorlib, System, System.Xml ASP.NET 1.1 ADO.NET 1.1 Winforms/System.Drawing 1.1 Partially Implemented LINQ to SQL - mostly done, but a few features missing WCF - Silverlight 2.0 subset completed Not Implemented WPF - no plans to implement WF - will implement WF 4 instead in future versions of Mono System.Management - does not map to Linux System.EnterpriseServices - deprecated Links to documentation: The Official Mono FAQs. Links to binaries: Mono IDE, latest version 2.6.4. That's it; nothing more is required to compile and run .NET code on Linux. Installation After landing on the Mono project home page, you can select which platform you want to download. I typically pick the Virtual PC image since I spend all of my day using Windows 7. Go ahead and pick whatever version is best for you. The Virtual PC image comes with SUSE Linux. Once the image is launched, you will see the following: I'm not going to go through each option, but it's best to start with the "Start Here" icon. It will provide you with information on new projects or existing VS projects. After you get Mono installed, it's probably a good idea to run a quick Hello World program to make sure everything is set up properly. This lets you know that Mono is working before you try writing or running a more complex application. To write a "Hello World" program follow these steps: Start the Mono Development Environment. Create a new project: File->New->Solution. Select "Console Project" in the category list. Enter a project name into the Project name field, for example "HW Project". Click "Forward", then click "Packaging", then OK. You should see a screen very similar to a VS console app. Click the "Run" button in the toolbar (Ctrl-F5). Look in the Application Output and you should see "Hello World!". Your screen should look like the screen below. That should do it for a simple console app in Mono; a sketch of the program itself follows at the end of this excerpt.
To test out an ASP.NET application, simply copy your code to a new directory in /srv/www/htdocs, then visit the following URL: http://localhost/directoryname/page.aspx where directoryname is the directory where you deployed your application and page.aspx is the initial page for your software. Databases You can continue to use a SQL Server database, or use MySQL, Postgres, Sybase, Oracle, IBM's DB2 or SQLite. Conclusion I hope this brief look at the Mono IDE helps someone get acquainted with development outside of VS. As always, I welcome any suggestions or comments.
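    For reference, here is roughly what the finished Hello World program from the steps above looks like. This is a sketch only; the exact namespace and class names the Console Project template generates may differ:

        using System;

        namespace HWProject
        {
            class MainClass
            {
                // Output appears in the Application Output pad when run with Ctrl-F5
                public static void Main(string[] args)
                {
                    Console.WriteLine("Hello World!");
                }
            }
        }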

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you're in charge of your company or team's Team Foundation Server.  Wish it was easier to manage, administer, extend?  Well, here are a few utilities that I highly recommend looking at. I've recently had need to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1.  This gave me cause to enumerate some of the utilities I like to have on hand. One of the reasons I love to use TFS on projects is that it's basically a complete ALM toolkit.  Everything from Task Management, Version Control, Build Management, Test Management, Metrics and Reporting is there 'in the box'.  However, no matter how complete a product set is, there are always ways to make it better.  Here is a list of utilities and libraries that are pretty generally useful.  This is not intended to be an exhaustive list of TFS extensions but rather a set that I recommend you look at.  There are many more out there that may be applicable in one scenario or another.  This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex. General TFS Power Tools – This is 'the' collection of utilities and extensions delivered by the Product Group.  Highly recommended from here are the Best Practice Analyzer for ensuring your TFS implementation is healthy and the Team Foundation Server Backups to ensure your TFS databases are backed up correctly. TFS Administrators Toolkit – Helps make updates to work item types and reports across many team projects.  Also provides visibility of disk usage by finding large files in version control or test attachments to assist in managing storage utilization. Version Control Git-TF – A set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository, and configure it to share changes with a TFS server.  Great for all Git lovers who must integrate with a TFS repository. Testing TFS 2012 Tester Power Tool – A utility for bulk copying test cases which assists in an approach for managing test cases across multiple releases.  A little plug that this utility was written and is maintained by Anna Russo of Imaginet, where I also work. Test Scribe – A documentation power tool designed to construct documents directly from TFS for test plan and test run artifacts for the purpose of discussion, reporting etc. Reporting Community TFS Report Extensions – A single repository of SQL Server Reporting Services reports for Team Foundation Server 2010 (and above).  Check out the Test Plan Status report by Imaginet's Steve St. Jean.  Very valuable for your test managers. Builds TFS Build Manager – A great utility if you are the build manager over a complex build environment with many TFS build definitions. Community TFS Build Extensions – Contains many custom build activities.  Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012. While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use/need in TFS 2012 because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I've missed?  I'd love to hear about them in the comments!

    Read the article

  • Why Apple’s New SDK Limitation is So Offensive

    - by TStewartDev
    I am not an Apple fanboy, nor have I ever been. However, I have owned a Mac, an iPod, and an iPhone in my lifetime, and for more than a decade I have defended Apple against the untruths that the haters so enjoy spewing. I encouraged my wife to buy a MacBook when she needed a new laptop two years ago, and I often recommend them to my friends and relatives. I have proudly and happily used my first generation iPhone for nearly three years. Now, for the first time in well over ten years, I find myself ready to swear off Apple and encourage everyone I know to do the same. I was disappointed when Apple wouldn't allow native apps, but I still bought the iPhone. I've stomached their ambiguous app approval process even though it's apparent that Steve may just reject your app because he doesn't like you or feels threatened by you (I'm still lamenting the rejection of the Google Voice app). But, as a developer, I can no longer tolerate Apple's terms and the kind of totalitarian control they indicate Apple wants. In case you are not already familiar, Apple has dictated in their OS 4.0 SDK license agreement (the now infamous Section 3.3.1) that all apps developed for the iPhone must be coded in C, C++, or Objective-C, and moreover, that using any cross-compiling platform is a violation of the agreement. For those of you who aren't developers, let me try to illustrate why this angers those of us who are. Imagine you're a professional writer. You've had articles published in some journals and magazines, and you've got a couple of popular books out there, too. You've got an idea for a new book, and so you take it to your publisher. Your publisher agrees that it's a good idea. "But," says the publisher, "we want to hold our books to a tighter standard so that our readers get the experience we want them to have. Therefore, from now on, all our writers may only use words from this list of the 10,000 most common English words. Furthermore, if you cite any other works or quote anyone, they must comply with that same list, or you'll have to rewrite the cited passages as well in case our readers want to look up your citation." What do you do? If your work is a children's book, this probably isn't a big deal to you. If it's an autobiography, textbook, or even a novel, though, you're going to have a lot of trouble describing your content with only common words. It's going to take you longer to complete your book, too, since you'll be looking up less common words frequently to see if you can use them. You could always go to another publisher, but this one has the best ability to distribute your book. The next largest distributor can only do a quarter as much. You could abandon the project altogether, but then everyone loses. Isn't this a silly scenario? Who would put such a limitation on writers? Yet this is very much what Apple is doing. They are using their dominant position in the market to coerce developers into writing their apps exclusively for the iPhone OS by making it too expensive to write for multiple platforms. It is at least a threefold attack: striking at Adobe, which is set to release a compiler that lets Flash source be compiled to iPhone binaries; striking at Google, whose Android platform stands the best chance at the moment of providing serious competition to the iPhone; and reinforcing their own strong position by keeping popular apps exclusive to the iPhone.
And while developers are already very upset about this, the sad fact is that most of us will cave and give in to Apple because consumers don't know any better. They will continue to buy Apple's toy, forcing developers to play Apple's maniacal game in order to make any money, at least until Steve Jobs decides he doesn't like them or intends to release a competing application (bye-bye OpenFeint). Apple has been kept in check on the desktop front by a very dominant Microsoft, but I'm afraid that their success with iPods, iTunes, and iPhones has created a monster that we may have to bear until it is slain by an anti-trust suit or dies with the retirement of Steve Jobs.

    Read the article

  • JEP 124: Enhance the Certificate Revocation-Checking API

    - by smullan
    Revocation checking is the mechanism to determine the revocation status of a certificate. If it is revoked, it is considered invalid and should not be used. Currently, as of JDK 7, the PKIX implementation of java.security.cert.CertPathValidator includes a revocation checking implementation that supports both OCSP and CRLs, the two main methods of checking revocation. However, there are very few options that allow you to configure the behavior. You can always implement your own revocation checker, but that's a lot of work. JEP 124 (Enhance the Certificate Revocation-Checking API) is one of the 11 new security features in JDK 8. This feature enhances the java.security.cert API to support various revocation settings such as best-effort checking, end-entity certificate checking, and mechanism-specific options and parameters. Let's describe each of these in more detail and show some examples. The features are provided through a new class named PKIXRevocationChecker. A PKIXRevocationChecker instance is returned by a PKIX CertPathValidator as follows: CertPathValidator cpv = CertPathValidator.getInstance("PKIX"); PKIXRevocationChecker prc = (PKIXRevocationChecker)cpv.getRevocationChecker(); You can now set various revocation options by calling different methods of the returned PKIXRevocationChecker object. For example, the best-effort option (called soft-fail) allows the revocation check to succeed if the status cannot be obtained due to a network connection failure or an overloaded server. It is enabled as follows: prc.setOptions(EnumSet.of(Option.SOFT_FAIL)); When the SOFT_FAIL option is specified, you can still obtain any exceptions that may have been thrown due to network issues. This can be useful if you want to log this information or treat it as a warning. You can obtain these exceptions by calling the getSoftFailExceptions method: List<CertPathValidatorException> exceptions = prc.getSoftFailExceptions(); Another new option called ONLY_END_ENTITY allows you to check the revocation status of only the end-entity certificate. This can improve performance, but you should be careful using this option, as the revocation status of CA certificates will not be checked. To set more than one option, simply specify them together, for example: prc.setOptions(EnumSet.of(Option.SOFT_FAIL, Option.ONLY_END_ENTITY)); By default, PKIXRevocationChecker will try to check the revocation status of a certificate using OCSP first, and then CRLs as a fallback. However, you can switch the order using the PREFER_CRLS option, or disable the fallback altogether using the NO_FALLBACK option. For example, here is how you would only use CRLs to check the revocation status: prc.setOptions(EnumSet.of(Option.PREFER_CRLS, Option.NO_FALLBACK)); There are also a number of other useful methods which allow you to specify various options such as the OCSP responder URI, the trusted OCSP responder certificate, and OCSP request extensions. However, one of the most useful features is the ability to specify a cached OCSP response with the setOCSPResponse method. This can be quite useful if the OCSP response has already been obtained, for example in a protocol that uses OCSP stapling.
After you have set all of your preferred options, you must add the PKIXRevocationChecker to your PKIXParameters object as one of your custom CertPathCheckers before you validate the certificate chain, as follows: PKIXParameters params = new PKIXParameters(keystore); params.addCertPathChecker(prc); CertPathValidatorResult result = cpv.validate(path, params); Early access binaries of JDK 8 can be downloaded from http://jdk8.java.net/download.html

    Read the article

  • Exporting makefile from Eclipse CDT

    - by Alex Farber
    I have a C++ project on Ubuntu, in Eclipse CDT. My final goal is to build the project binaries for FreeBSD. The first test: I create a simple C++ CDT project with a main.cpp file: cout << "OK" << endl; and build it. Then I open a Terminal window in the Release directory: alex@alex-linux:~/workspace/HelloWorld/Release$ ls HelloWorld main.d main.o makefile objects.mk sources.mk subdir.mk alex@alex-linux:~/workspace/HelloWorld/Release$ rm HelloWorld main.d main.o alex@alex-linux:~/workspace/HelloWorld/Release$ ls makefile objects.mk sources.mk subdir.mk alex@alex-linux:~/workspace/HelloWorld/Release$ make Building file: ../main.cpp Invoking: GCC C++ Compiler g++ -O3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp" Finished building: ../main.cpp Building target: HelloWorld Invoking: GCC C++ Linker g++ -o"HelloWorld" ./main.o Finished building target: HelloWorld alex@alex-linux:~/workspace/HelloWorld/Release$ ./HelloWorld OK alex@alex-linux:~/workspace/HelloWorld/Release$ So far, so good. Now I copy the whole project tree to FreeBSD and try to build it: $ cd /home/alex/project $ ls main.cpp release $ cd release $ ls makefile objects.mk sources.mk subdir.mk $ make "makefile", line 5: Need an operator "makefile", line 10: Need an operator "makefile", line 11: Need an operator "makefile", line 12: Need an operator The CDT-generated makefile doesn't work. This is the beginning of the makefile: $ Automatically-generated file. Do not edit! -include ../makefile.init RM := rm -rf $ All of the sources participating in the build are defined here -include sources.mk -include subdir.mk -include objects.mk ... Line 5 is -include ../makefile.init. Really, there is no such file, but it somehow works on the Ubuntu machine. What is the trick? How can I build this? BTW, a manually written makefile works: all: g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"main.d" -MT"main.d" -o"main.o" "../main.cpp" g++ -o"HelloWorld" ./main.o Note: $ in the makefile is actually #; I replaced it because # creates formatting problems inside of a stackoverflow pre block.

    Read the article

  • Managing Dependency Hell with WiX and C#

    - by Tom the Junglist
    We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX3 project with separate outputs for x86 and x64 builds. These have been an ongoing problem that I always thought were fixed, only to find out that they were still lurking. The product itself is a collection of binaries that communicate with each other via .NET Remoting, including a Windows service and a small COM component that is loaded as an addon in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .NET Remoting interfaces. I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS' anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted), and having to work around Wow64 virtualization (x86 MSIs can only write to registry/HD locations via Wow64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system. What I am looking for are tips, tricks, techniques, or suggestions on how to properly do things so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around function declarations) and GPF'ing the whole app. I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure that if a file needs to go to the GAC, it actually goes to the GAC and stays there until it is appropriate to remove it. Thanks! Tom

    Read the article

  • Build 32-bit with 64-bit llvm-gcc

    - by Jay Conrod
    I have a 64-bit version of llvm-gcc, but I want to be able to build both 32-bit and 64-bit binaries. Is there a flag for this? I tried passing -m32 (which works on the regular gcc), but I get an error message like this: [jay@andesite]$ llvm-gcc -m32 test.c -o test Warning: Generation of 64-bit code for a 32-bit processor requested. Warning: 64-bit processors all have at least SSE2. /tmp/cchzYo9t.s: Assembler messages: /tmp/cchzYo9t.s:8: Error: bad register name `%rbp' /tmp/cchzYo9t.s:9: Error: bad register name `%rsp' ... This is backwards; I want to generate 32-bit code for a 64-bit processor! I'm running llvm-gcc 4.2, the one that comes with Ubuntu 9.04 x86-64. EDIT: Here is the relevant part of the output when I run llvm-gcc with the -v flag: [jay@andesite]$ llvm-gcc -v -m32 test.c -o test.bc Using built-in specs. Target: x86_64-linux-gnu Configured with: ../llvm-gcc4.2-2.2.source/configure --host=x86_64-linux-gnu --build=x86_64-linux-gnu --prefix=/usr/lib/llvm/gcc-4.2 --enable-languages=c,c++ --program-prefix=llvm- --enable-llvm=/usr/lib/llvm --enable-threads --disable-nls --disable-shared --disable-multilib --disable-bootstrap Thread model: posix gcc version 4.2.1 (Based on Apple Inc. build 5546) (LLVM build) /usr/lib/llvm/gcc-4.2/libexec/gcc/x86_64-linux-gnu/4.2.1/cc1 -quiet -v -imultilib . test.c -quiet -dumpbase test.c -m32 -mtune=generic -auxbase test -version -o /tmp/ccw6TZY6.s I looked in /usr/lib/llvm/gcc-4.2/libexec/gcc hoping to find another binary, but the only directory there is x86_64-linux-gnu. I will probably look at compiling llvm-gcc from source with appropriate options next.

    Read the article

  • How to convert datatable to json string using json.net?

    - by Pandiya Chendur
    How to convert a DataTable to JSON using Json.NET? Any suggestions... I've downloaded the necessary binaries... Which class should I use to get the conversion of my DataTable to JSON? Thus far I've used this method to get a JSON string by passing in my DataTable... public string GetJSONString(DataTable table) { StringBuilder headStrBuilder = new StringBuilder(table.Columns.Count * 5); //pre-allocate some space, default is 16 bytes for (int i = 0; i < table.Columns.Count; i++) { headStrBuilder.AppendFormat("\"{0}\" : \"{0}{1}¾\",", table.Columns[i].Caption, i); } headStrBuilder.Remove(headStrBuilder.Length - 1, 1); // trim away last , StringBuilder sb = new StringBuilder(table.Rows.Count * 5); //pre-allocate some space sb.Append("{\""); sb.Append(table.TableName); sb.Append("\" : ["); for (int i = 0; i < table.Rows.Count; i++) { string tempStr = headStrBuilder.ToString(); sb.Append("{"); for (int j = 0; j < table.Columns.Count; j++) { table.Rows[i][j] = table.Rows[i][j].ToString().Replace("'", ""); tempStr = tempStr.Replace(table.Columns[j] + j.ToString() + "¾", table.Rows[i][j].ToString()); } sb.Append(tempStr + "},"); } sb.Remove(sb.Length - 1, 1); // trim last , sb.Append("]}"); return sb.ToString(); } Now I've thought of using Json.NET but don't know where to get started...
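    For what it's worth, Json.NET ships a DataTable converter that can replace the hand-rolled string building above. A minimal sketch, assuming the Newtonsoft.Json assembly is referenced and your build includes Newtonsoft.Json.Converters.DataTableConverter:

        using System.Data;
        using Newtonsoft.Json;
        using Newtonsoft.Json.Converters;

        public static class JsonHelper
        {
            public static string ToJson(DataTable table)
            {
                // The converter walks the rows and columns for us;
                // Formatting.Indented just makes the output human-readable.
                return JsonConvert.SerializeObject(table, Formatting.Indented,
                                                   new DataTableConverter());
            }
        }

    Note the output shape differs from the hand-built format above (an array of row objects rather than a named table wrapper), so any client code parsing the string would need adjusting.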

    Read the article

  • Could not load type from assembly error

    - by George Mauer
    I have written the following simple test while trying to learn Castle Windsor's fluent interface: using NUnit.Framework; using Castle.Windsor; using System.Collections; using Castle.MicroKernel.Registration; namespace WindsorSample { public class MyComponent : IMyComponent { public MyComponent(int start_at) { this.Value = start_at; } public int Value { get; private set; } } public interface IMyComponent { int Value { get; } } [TestFixture] public class ConcreteImplFixture { [Test] public void ResolvingConcreteImplShouldInitialiseValue() { IWindsorContainer container = new WindsorContainer(); container.Register(Component.For<IMyComponent>().ImplementedBy<MyComponent>().Parameters(Parameter.ForKey("start_at").Eq("1"))); IMyComponent resolvedComp = container.Resolve<IMyComponent>(); Assert.AreEqual(resolvedComp.Value, 1); } } } When I execute the test through TestDriven.NET I get the following error: System.TypeLoadException : Could not load type 'Castle.MicroKernel.Registration.IRegistration' from assembly 'Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc'. at WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue() When I execute the test through the NUnit GUI I get: WindsorSample.ConcreteImplFixture.ResolvingConcreteImplShouldInitialiseValue: System.IO.FileNotFoundException : Could not load file or assembly 'Castle.Windsor, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc' or one of its dependencies. The system cannot find the file specified. If I open the assembly that I am referencing in Reflector I can see its information is: Castle.MicroKernel, Version=1.0.3.0, Culture=neutral, PublicKeyToken=407dd0808d44fbdc and that it definitely contains Castle.MicroKernel.Registration.IRegistration. What could be going on? I should mention that the binaries are taken from the latest build of Castle, though I have never worked with NAnt, so I didn't bother re-compiling from source and just took the files in the bin directory. I should also point out that my project compiles with no problem.

    Read the article

  • A layout for maven project with a patched dependency

    - by zamza
    Suppose I have an open-source project that depends on some library that must be patched in order to fix some issues. How do I do that? My ideas are: Have the library's sources set up as a module, and keep them in my VCS. Pros: simple. Cons: some third-party sources in my repo, might slow down the build process, hard to find the patched places (though that can be fixed in the README). Have a module, like in 1, but keep the patched source files only, compile them with the original library jar on the classpath and somehow replace the *.class files in the library jar on build. Pros: builds faster, easy to find patched places. Cons: hard to configure, that jar hackery is non-obvious (the library jar in the repository and in my project assembly would be different). Keep patched *.class files in main/resources, and replace on packaging like in 2). Pros: almost none. Cons: binaries in VCS, hard to recompile a patched class as patch compilation is not automated. One nice solution is to create a distinct project with the patched library sources, and deploy it on a local/enterprise repository with a -patched qualifier. But that would not fit an open-sourced project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?

    Read the article

  • Ruby through RVM fails

    - by TheLQ
    In constant battle to install Ruby 1.9.2 on an RPM system (OS is based off of CentOS), I'm trying again with RVM. So once I install it, I then try to use it: [root@quackwall ~]# rvm use 1.9.2 Using /usr/local/rvm/gems/ruby-1.9.2-p136 [root@quackwall ~]# ruby bash: ruby: command not found [root@quackwall ~]# which ruby /usr/bin/which: no ruby in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin) Now that's interesting; rvm info says something completely different: [root@quackwall bin]# rvm info ruby-1.9.2-p136: system: uname: "Linux quackwall.highwow.lan 2.6.18-194.8.1.v5 #1 SMP Thu Jul 15 01:14:04 EDT 2010 i686 i686 i386 GNU/Linux" bash: "/bin/bash => GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)" zsh: " => not installed" rvm: version: "rvm 1.2.2 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]" ruby: interpreter: "ruby" version: "1.9.2p136" date: "2010-12-25" platform: "i686-linux" patchlevel: "2010-12-25 revision 30365" full_version: "ruby 1.9.2p136 (2010-12-25 revision 30365) [i686-linux]" homes: gem: "/usr/local/rvm/gems/ruby-1.9.2-p136" ruby: "/usr/local/rvm/rubies/ruby-1.9.2-p136" binaries: ruby: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/ruby" irb: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/irb" gem: "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/gem" rake: "/usr/local/rvm/gems/ruby-1.9.2-p136/bin/rake" environment: PATH: "/usr/local/rvm/gems/ruby-1.9.2-p136/bin:/usr/local/rvm/gems/ruby-1.9.2-p136@global/bin:/usr/local/rvm/rubies/ruby-1.9.2-p136/bin:bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/rvm/bin" GEM_HOME: "/usr/local/rvm/gems/ruby-1.9.2-p136" GEM_PATH: "/usr/local/rvm/gems/ruby-1.9.2-p136:/usr/local/rvm/gems/ruby-1.9.2-p136@global" MY_RUBY_HOME: "/usr/local/rvm/rubies/ruby-1.9.2-p136" IRBRC: "/usr/local/rvm/rubies/ruby-1.9.2-p136/.irbrc" RUBYOPT: "" gemset: "" So I have RVM that says one thing and bash which says another. Any suggestions on how to get this working?

    Read the article

  • Starting an Erlang slave node in escript fails when using custom Erlang in Ubuntu 10.4

    - by Adam Lindberg
    I have the following escript: #!/usr/bin/env escript %%! -name [email protected] main(_) -> NodeName = test, Host = '127.0.0.1', Args = "", {ok, _Node} = slave:start_link(Host, NodeName, Args), io:format("Node started successfully!"). When running it on Ubuntu 10.04 I get this: $ ./start_slave Node started successfully! $ I want to install my own Erlang (latest version, debug-compiled files for dialyzer, etc.) since the stock install of Erlang on Ubuntu lacks some features. I put my Erlang binaries inside ~/Applications/bin. Starting Erlang normally works, and starting slave nodes inside an Erlang shell works as well. However, now my escript doesn't work. After about 60 seconds it returns an error: $ ./start_slave escript: exception error: no match of right hand side value {error,timeout} Even if I change the first line of the escript to use my Erlang version, it still does not work: #!/home/user/Applications/bin/escript The slave node is started with a call to erlang:open_port/2, which seems to use sh, which in turn does not read my .bashrc file that sets my custom PATH environment variable. The timeout seems to occur when slave:start_link/3 waits for the slave node to respond, which it never does. How can I roll my own installation of Erlang and start slave nodes inside escripts on Ubuntu 10.04?

    Read the article

  • Odd Infragistics UltraComboEditor data binding non-bug

    - by Richard Dunlap
    Within an Infragistics 8.2 UltraComboEditor, we had the following properties set via C#: DataSource = dataSource; ValueMember = "Measure"; DisplayMember = "Name"; DataBindings.Add("Value", repository, "Measure"); DataBindings["Value"].DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged; where dataSource was an array of objects, each with a property Measure, and repository was an object with a property Measure. (Those strings are actually constructor parameters -- just using explicit strings to simplify the example.) In the course of some refactoring, the name of the property on the objects in the array was changed to BaseEnum (the objects are actually wrapped enumerations, for the curious), but the name of ValueMember above was not changed. And yet, the combo box binding continued to work through initial testing, beta testing, and even after release... until two customers emailed in noting that the combo box was no longer changing the underlying parameter. We were able to dig out the problem by careful study of the source code repository... despite being in the awkward position of not being able to replicate the buggy behavior internally. Two-part question: What's happening under the hood that allowed the binding to continue to function, and/or what might be unique about those two users that caused the binding to (correctly) fail? (OS version alone isn't the answer, and we get the unexpectedly functioning binding on machines that have never had a version of the software before, so we're not looking at rogue binaries.) Are there tools that might have been able to warn us about the misbind, even if something was cleaning up behind?
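    For context, a sketch of what the binding setup presumably should have become after the refactoring described above (property names are taken from the question; everything else is unchanged):

        // ValueMember must track the renamed property on the items in the data source array:
        DataSource = dataSource;
        ValueMember = "BaseEnum";    // was "Measure" before the refactoring
        DisplayMember = "Name";

        // The repository object still exposes Measure, so the Value binding stays as-is:
        DataBindings.Add("Value", repository, "Measure");
        DataBindings["Value"].DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged;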

    Read the article

  • ASP.NET MVC 2 with NServiceBus unable to load requested types

    - by dp
    I am trying to use NServiceBus with an ASP.NET MVC 2 website (using VS 2010 and the .NET 4.0 framework). However, when I run the site on my local machine, I get the following error: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information. Here are the relevant steps I have taken: Downloaded the NServiceBus.2.0.0.1145 binaries. In my ASP.NET MVC app, I've added references to NServiceBus.dll and NServiceBus.Core.dll. In Global.asax.cs I've added: public static IBus Bus { get; private set; } protected void Application_Start() { AreaRegistration.RegisterAllAreas(); RegisterRoutes(RouteTable.Routes); Bus = NServiceBus.Configure .WithWeb() .Log4Net() .DefaultBuilder() .XmlSerializer() .MsmqTransport() .IsTransactional(false) .PurgeOnStartup(false) .UnicastBus() .ImpersonateSender(false) .CreateBus() .Start(); } In web.config, I've added: <MsmqTransportConfig InputQueue="MyWebClient" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5"/> <UnicastBusConfig> <MessageEndpointMappings> <add Messages="Messages" Endpoint="MyServerInputQueue"/> </MessageEndpointMappings> </UnicastBusConfig> The error indicates that the problem is with the first line in the Global.asax.cs file. Is it possible that there is a problem with NServiceBus running under .NET 4.0?

    Read the article

  • AjaxControlToolkit TabContainer with weird rendering behavior

    - by sohum
    I've built a web application that contains a page that uses the AjaxControlToolkit's TabContainer/TabPanel objects. I've developed a custom stylesheet as well. I'm developing using Visual Studio 2010. The following is the behavior of my application: VS2010 Development Server (localhost:XXXXX): Works as expected with the custom stylesheet. Local IIS: The TabContainer rendered but the stylesheet wasn't applied. I fixed this by doing a CTRL+F5. It seems that IIS caches stylesheets pretty aggressively. Remote Server: The TabContainer and TabPanel are completely hidden. Looking at the HTML, all of them have their visibility set to hidden. The way I got my files onto my remote server was as follows (I haven't yet set up WebDAV or remote publishing because the server is a Windows 7 box and, as far as I am aware, does not support FrontPage Extensions): The entire solution is under source code control (SVN). Checked in all pending changes (including projects, aspx files, css, AjaxControlToolkit binaries). Synced on the server. Rebuilt everything on the server. Deployed to the local IIS on the server (which is externally accessible). Both on the server's local IIS and on the server's development server, the TabContainers are completely hidden. Looking at the SVN status of the server project, only "AjaxControlToolkit.dll" is under source code control. None of the locale-specific DLLs are on the server. Could this be a potential issue? I'm not sure what's going on and would appreciate any help. Thanks!

    Read the article

  • Webdriver with Python

    - by vishal kharge
    I had written a script in Java with WebDriver and it worked fine; below is the code for the sample: import org.junit.After; import org.junit.AfterClass; import org.junit.Before; import org.junit.BeforeClass; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebDriverBackedSelenium; import org.openqa.selenium.firefox.FirefoxDriver; import com.thoughtworks.selenium.Selenium; import java.util.*; import java.lang.Thread.*; public class Login { @BeforeClass public static void setUpBeforeClass() throws Exception { } @AfterClass public static void tearDownAfterClass() throws Exception { } @Before public void setUp() throws Exception { } @After public void tearDown() throws Exception { } public static void main(String[] args) { WebDriver driver = new FirefoxDriver(); Selenium selenium = new WebDriverBackedSelenium(driver, "http://192.168.10.10:8080/"); selenium.open("/"); selenium.keyPress("name=user_id", "admin"); } } But my requirement is to implement the same in Python with WebDriver. Can you please let me know how this can be done with the above example and the WebDriver binaries, and how to do the setup for the same?

    Read the article

  • Problems installing PHP's PECL sphinx module

    - by Camsoft
    I've installed the sphinx binaries and libraries and am now trying to install the PECL sphinx module. My system is running OS X 10.6 with MAMP 1.8.2 installed. I try to install sphinx using the following command: sudo pecl install sphinx The PECL command outputs the following: running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 The versions above don't match the versions listed when doing a phpinfo(). It seems that PECL is trying to compile against the built-in version of PHP. If I ignore the errors and continue, it will successfully compile and place the sphinx.so file in: /usr/lib/php/extensions/no-debug-non-zts-20090626/sphinx.so when in fact it should be: /Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/ I've tried copying the sphinx.so file to the MAMP extensions dir, but when I restart Apache, PHP displays the following warning: PHP Startup: Unable to load dynamic library '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/sphinx.so I think this is because MAMP is 32-bit and the built-in PHP is 64-bit, so PECL compiles for 64-bit. I might be completely wrong, but I did read this when I googled the topic. Does anyone know how to get PECL to map to the MAMP version of PHP instead of the built-in version?

    Read the article

  • configure batch to send minute info instead of entire stdout

    - by Daniel
    Hi all, I am working on a RedHat server along with several other users. We use the batch utility to set up a job queue. Some of the programs that I use write stuff to stdout during the run, with info on how much data has been processed so far and the estimated time until completion, etc. batch -q z at> myScript -i somefile -o someotherfile By default, the batch util sends an email to me (since I configured it using .forward) with the entire output from stdout. Since the scripts write something to stdout a few times each second, the amount of log-stuff I get from a two-day script can be ~20 MB. Clearly not what I want. I can of course pipe stdout to a file like so: batch -q z at> myScript -i somefile -o someotherfile > myscript.stdout.log but then I get a blank e-mail from the util. So to my question: Is it possible to configure batch so that it sends the time the job started and ended, or the run time, or some other valuable information to me, instead of a 20 MB mail or a blank mail? Note that the scripts that I use are binaries and I cannot customize the code to output less info in the first place (which would be the optimal solution I guess). Thanks /Daniel

    Read the article

  • how to compile with llvm and g++?

    - by Sriram
    Hi, I use a Fedora 11 system and recently installed LLVM (sudo yum -y install llvm llvm-docs llvm-devel). When I search for LLVM I find the binaries in /usr/bin. Some of the links to the binaries are broken (llvm-gcc, llvm-g++, llvm-cpp, etc.); the include files are found within /usr/include/llvm and the libs at /usr/lib/llvm. How do I compile with these using g++? I tried to compile the Kaleidoscope code given in the tutorial (http://llvm.org/docs/tutorial/LangImpl3.html) as directed, but it fails to compile. I get this... toy.cpp:5:30: error: llvm/LLVMContext.h: No such file or directory toy.cpp:352: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In member function ‘virtual llvm::Value* NumberExprAST::Codegen()’: toy.cpp:358: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In member function ‘virtual llvm::Value* BinaryExprAST::Codegen()’: toy.cpp:379: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’ toy.cpp:379: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In member function ‘llvm::Function* PrototypeAST::Codegen()’: toy.cpp:407: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’ toy.cpp:407: error: ‘getGlobalContext’ was not declared in this scope toy.cpp:408: error: ‘getDoubleTy’ is not a member of ‘llvm::Type’ toy.cpp: In member function ‘llvm::Function* FunctionAST::Codegen()’: toy.cpp:454: error: ‘getGlobalContext’ was not declared in this scope toy.cpp: In function ‘int main()’: toy.cpp:543: error: ‘LLVMContext’ was not declared in this scope toy.cpp:543: error: ‘Context’ was not declared in this scope toy.cpp:543: error: ‘getGlobalContext’ was not declared in this scope I cannot find the LLVMContext.h file either, so I guess this might be a version problem. What should I do to make it work? Some help would be good! Thanks in advance... :)

    Read the article

  • Sql Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution to implement the idea of managing schema on a SQL Server Compact 3.5 db. I know of several ways of managing schema on SQL Express/Std/Enterprise, but Compact Edition doesn't support the necessary tools required to use the same methodology. Any suggestions/tips? I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method by which to publish this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a beast. A buddy of mine disclosed a partial script on how to handle the SQL Server piece of my task, but never worked on Compact Edition... It looks like I'll be on my own for this. What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool, much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.
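    For illustration, the core of such a tool, stripped of the XML-driven configuration, might look something like the sketch below. The SchemaVersion table and the upgrade statements are hypothetical, and note that SQL Server Compact executes one statement per command, so each upgrade step is a list of single statements:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlServerCe;

        class SchemaUpgrader
        {
            // Hypothetical upgrade steps, keyed by the schema version they produce.
            static readonly SortedDictionary<int, string[]> Upgrades =
                new SortedDictionary<int, string[]>
                {
                    { 2, new[] { "ALTER TABLE Customer ADD Email NVARCHAR(256)" } },
                    { 3, new[] { "CREATE TABLE AuditLog (Id INT IDENTITY PRIMARY KEY, Note NTEXT)" } }
                };

            static void Upgrade(string connectionString)
            {
                using (var conn = new SqlCeConnection(connectionString))
                {
                    conn.Open();

                    // The client database records its current schema version in a one-column table.
                    int current = Convert.ToInt32(new SqlCeCommand(
                        "SELECT MAX(Version) FROM SchemaVersion", conn).ExecuteScalar());

                    foreach (var step in Upgrades)
                    {
                        if (step.Key <= current) continue;  // step already applied

                        foreach (string sql in step.Value)
                            new SqlCeCommand(sql, conn).ExecuteNonQuery();

                        new SqlCeCommand("INSERT INTO SchemaVersion (Version) VALUES ("
                            + step.Key + ")", conn).ExecuteNonQuery();
                    }
                }
            }
        }

    Shipping a runner like this alongside the new binaries means the client never has to touch SSMSE.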

    Read the article

  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments. These environments are MS Win 2008 R2 servers with SQL 2008 SP1 databases. IIS and SQL are on different machines. Now, on the production environment, which is an MS Win 2003 x64 web farm + geoclustered SQL 2008, it DOES NOT work. All I get is the exception System.IndexOutOfRangeException: Index was outside the bounds of the array. at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager3.TryCreateKeyFromValues(Object[] values, MultiKey& k) at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache2.Find(Object[] keyValues) at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance) at System.Data.Linq.ChangeProcessor.BuildEdgeMaps() at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID) Services\ExposuresSrv.cs My question is: what the hell? They have precisely the same DBML, the DB has exactly THE SAME structure (when I take the DB from prod to TEST and mount it, everything works just great), and the binaries on the web server are the same. I seriously do not know what to do... Has anyone found that LINQ works in one environment and does not in the other? I am really lost here. I really hope you can help me :)

    Read the article

  • Upgrading VSTO project to .net 4 - What references do I actually need?

    - by Dana
    I'm developing an application for Office. It originally targeted .net 3.5, but I decided to upgrade to .net 4 because of some WPF issues that I've run into. When I switched all the projects in my solution and rebuilt, I got an error saying to include System.Xaml. I did that and rebuilt, and VS2010 told me to include another reference, so I did. This happened a couple more times, and finally it asked me to include Microsoft.Office.Tools.Common.v9.0, and when I did I got this error: Microsoft.Office.Tools.CustomTaskPaneCollection exists in both Microsoft.Office.Tools.Common.v9.0.dll and Microsoft.Office.Tools.Common.dll I have both Microsoft.Office.Tools.Common.v9.0 and Microsoft.Office.Tools.Common referenced in my project, but the problem is that if I remove either, I get an error. Am I doing something wrong? Is it odd that I would need both references? I find it strange that CustomTaskPaneCollection would be defined in two different binaries. If I remove Microsoft.Office.Tools.Common, the error that I get is "Cannot find the interop type that matches the embedded interop type 'Microsoft.Office.Tools.IAddInExtension'. Are you missing an assembly reference?"

    Read the article

  • What version control system is best designed to *prevent* concurrent editing?

    - by Fred Hamilton
    We've been using CVS (with the TortoiseCVS interface) for years for both source control and wide-ranging document control (including binaries such as Word, Excel, FrameMaker, test data, simulation results, etc.). Unlike typical version control systems, 99% of the time we want to prevent concurrent editing - when a user starts editing a file, the pre-edit version of the file becomes read-only to everyone else. Many of the people who will be using this are not programmers or even that computer savvy, so we're also looking for a system that lets people simply add documents to the repository, check out and edit a document (unless someone else is currently editing it), and check it back in with a minimum of fuss. We've gotten this to work reasonably well with CVS + TortoiseCVS, but we're now considering Subversion and Mercurial (and are open to others if they're a better fit) for their better version tracking, so I was wondering which one supports locking files most transparently. For example, we'd like exclusive locking enabled as the default, and we want to make it as difficult as possible for someone to accidentally start editing a file that someone else has checked out. For example, when someone checks out a file for editing, it should check with the master database first even if they have not recently updated their sandbox. Maybe it even won't let a user check out a document if it's off the network and can't check in with the mothership.

    Read the article
