Search Results

Search found 17076 results on 684 pages for 'nightly build'.

  • How do I change the Scala version that sbt works with?

    - by ashy_32bit
    Firing up the sbt console, it reads: [info] Building project AYLIEN 1.0 against Scala 2.8.1 [info] using MyProject with sbt 0.7.4 and Scala 2.7.7 How can I make it use MyProject with sbt 0.7.4 and Scala 2.8.1? Please note that I'm not asking about the Scala version used to build my project (that is already 2.8.1, as you can see); rather, I want to make sbt use MyProject with Scala 2.8.1. Apparently sbt uses its own Scala version to work with the project definition (MyProject here), which is different from the one it uses to actually build the project! Or perhaps I'm missing something?
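
    For what it's worth, in sbt 0.7.x the Scala version used to compile and run the project definition (MyProject) is the one bundled with sbt itself (2.7.7 for sbt 0.7.4), and as far as I know it cannot be changed without moving to an sbt release built against 2.8. The 2.8.1 you see only comes from project/build.properties, roughly like this sketch (values are illustrative):

        # project/build.properties -- sbt 0.7.x style
        project.organization=com.example
        project.name=AYLIEN
        project.version=1.0
        sbt.version=0.7.4
        # controls the Scala version the *project sources* are built against,
        # not the Scala version sbt uses for the project definition
        build.scala.versions=2.8.1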

  • Building FFmpeg for Android

    - by varevarao
    I've spent almost a week on this now, trying to get FFmpeg "Angel" to build for Android. I've tried build scripts from all over the internet to no avail. The closest I got was using this. As the author himself says, the script doesn't work for newer versions of FFmpeg due to this bug, which has been dismissed on that ticket with "I found a Makefile that does it." This was disheartening, as it was the only post in the whole vast Google world that was anywhere close to my problem. So, question time: Is there a way to get around the above bug? I'm trying to use the newest FFmpeg API, and "Love" just gives me "undefined reference" errors when I try to use av_encode_video2() and av_free_frame(). The code I was working along the lines of is in the FFmpeg git repo, under /doc/examples/decoding_encoding.c (the function starting on line 338).
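
    For reference, most of those build scripts boil down to a cross-compile configure call along these lines; this is only a sketch, and the NDK location, toolchain version and platform level are assumptions that will differ per setup:

        #!/bin/sh
        # cross-compile FFmpeg for ARM with the NDK toolchain (paths/versions assumed)
        NDK=$HOME/android-ndk
        SYSROOT=$NDK/platforms/android-9/arch-arm
        PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86
        ./configure \
            --target-os=linux --arch=arm --enable-cross-compile \
            --cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
            --sysroot=$SYSROOT \
            --disable-doc --disable-ffplay --disable-ffprobe --disable-ffserver \
            --enable-shared --disable-static
        make clean && make && make install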

  • Maintenance window and recovery for a large database

    - by NYSystemsAnalyst
    One of our teams is developing a database that will be somewhat large (~500GB) and grow from there (I know 500 gigs may seem small to many of you, but it will be one of the larger databases in our shop). One of the issues they are grappling with is backing up and restoring the database. Basically, the database will have several "data" tables and one table used for storing images / documents. We need to accomplish the following: Be able to quickly back up and restore only the data tables (sans images) to our test server for debugging and testing purposes. In the event of a catastrophic database failure, restore the data tables only, to get most of the application up and running ASAP; then restore the images table when possible. Back up the database within the allotted nightly time window (a few hours). My questions are: Is it possible to accomplish the first two goals while still having the images stored in the same database? If so, would we use filegroups, filestream, or something else? How do other shops back up their databases in a reasonable time window while maintaining high availability? Do you replicate to a second server and back up from there?
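
    If the images get their own filegroup, filegroup backups plus a piecemeal restore can cover the first two goals; below is a rough T-SQL sketch, assuming a secondary IMAGES filegroup (all names and paths are made up, log-backup steps are omitted, and the exact sequence depends on the recovery model and edition):

        -- nightly: back up the data and image filegroups separately
        BACKUP DATABASE BigDb FILEGROUP = 'PRIMARY' TO DISK = 'D:\bak\BigDb_data.bak';
        BACKUP DATABASE BigDb FILEGROUP = 'IMAGES'  TO DISK = 'D:\bak\BigDb_images.bak';

        -- disaster: piecemeal restore, data tables first, images later
        RESTORE DATABASE BigDb FILEGROUP = 'PRIMARY'
            FROM DISK = 'D:\bak\BigDb_data.bak' WITH PARTIAL, NORECOVERY;
        -- (restore/apply log backups here as required, then recover the restored part)
        RESTORE DATABASE BigDb WITH RECOVERY;
        -- later, when time allows
        RESTORE DATABASE BigDb FILEGROUP = 'IMAGES'
            FROM DISK = 'D:\bak\BigDb_images.bak' WITH RECOVERY;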

  • Symantec Antivirus Corporate -- two problems

    - by Alex C.
    We have a Windows network with a domain and about 50 clients. A few months ago, we installed Symantec Antivirus, Corporate Edition ver. 10.1.8.8000. There are two problems. The larger problem is that the software isn't very good at stopping viruses. In the last month, four different machines have become infected with those viruses that masquerade as antivirus software. Two machines I was able to clean with Malwarebytes. The other two were hopeless, and I had to reinstall Windows. Is there something I can do to make the Symantec product more effective? As far as I can tell, it successfully updates definitions nightly and pushes the definitions to the clients. The smaller problem is that the Symantec client applications sometimes initiate scans at random (and inappropriate) times. One of my co-workers complained to me yesterday that her computer was running very slowly. I looked at the scan history and found that Symantec had scanned the computer three times during the past two days, each time during the workday. No threats were found. I'm not sure why it's doing this, but I'd like it to stop. Any help would be appreciated. Thanks.

  • Building a code search engine for java code in git repositories

    - by zero1
    I'm trying to build a Java code search engine. Apart from just searching for keywords, I would also like cross-referencing between classes to work. It should work the way Eclipse's referencing works: click on anything to open its definition. A bonus would be if something like search-all-usages-of-foo worked too. I'm thinking of using Apache Solr to index the files and build the basic search, but I'm not sure how I'd do the cross-referencing part, since Solr doesn't understand Java code. Any suggestions on what I could use here? EDIT: I mainly want to index a lot of Java git repositories.
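
    Solr itself only sees tokens, so the usual pattern is to parse the Java first and index structured fields that queries like "who uses Foo" can hit. A rough Java sketch, assuming a local Solr core named "code", SolrJ and JavaParser on the classpath, and made-up field names; imports are used here as a crude stand-in for real reference resolution:

        import com.github.javaparser.StaticJavaParser;
        import com.github.javaparser.ast.CompilationUnit;
        import com.github.javaparser.ast.body.ClassOrInterfaceDeclaration;
        import org.apache.solr.client.solrj.impl.HttpSolrClient;
        import org.apache.solr.common.SolrInputDocument;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class CodeIndexer {
            public static void main(String[] args) throws Exception {
                HttpSolrClient solr =
                        new HttpSolrClient.Builder("http://localhost:8983/solr/code").build();
                Path source = Paths.get(args[0]);            // one .java file from a git checkout
                CompilationUnit cu = StaticJavaParser.parse(source);

                for (ClassOrInterfaceDeclaration type : cu.findAll(ClassOrInterfaceDeclaration.class)) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", source + "#" + type.getNameAsString());
                    doc.addField("class_s", type.getNameAsString());
                    // imports give a rough "uses" relation; proper cross-referencing
                    // needs symbol resolution (e.g. a compiler front end)
                    cu.getImports().forEach(imp -> doc.addField("uses_ss", imp.getNameAsString()));
                    solr.add(doc);
                }
                solr.commit();
                solr.close();
            }
        }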

  • Getting error in integrating Contacts APIs in Android 1.6 and 2.0

    - by dhaiwat
    Hi all, I have seen the BusinessCard example provided in the Android samples. I am using the ContactAccessor abstract class to separate out the SDK versions. My code runs fine from 2.0 onwards, but when I try to build it against Android 1.6 I get the following errors: Build.VERSION_CODES.ECLAIR cannot be resolved, plus errors throughout the class in which I have used the Contacts APIs from 2.0 (say, the class ContactAccessorSdk5.java). How do I resolve these issues? I want to run my app on both versions. Please help me. Regards, Dhaiwat Bhavsar.
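
    The approach the BusinessCard sample takes is to compile against the 2.0 (or newer) SDK while keeping minSdkVersion at 4, and to load the Eclair-only subclass lazily so the 1.6 VM never has to verify it; Build.VERSION_CODES.ECLAIR simply does not exist in the 1.6 SDK, so either build against a newer SDK or compare against the literal 5. A sketch with hypothetical package names:

        import android.os.Build;

        public abstract class ContactAccessor {
            private static ContactAccessor sInstance;

            public static ContactAccessor getInstance() {
                if (sInstance == null) {
                    // 5 == Build.VERSION_CODES.ECLAIR; the literal avoids the 1.6 build error
                    String className = Integer.parseInt(Build.VERSION.SDK) < 5
                            ? "com.example.contacts.ContactAccessorSdk3_4"   // hypothetical
                            : "com.example.contacts.ContactAccessorSdk5";    // hypothetical
                    try {
                        sInstance = Class.forName(className)
                                .asSubclass(ContactAccessor.class)
                                .newInstance();
                    } catch (Exception e) {
                        throw new IllegalStateException(e);
                    }
                }
                return sInstance;
            }

            // version-specific Contacts work lives in the subclasses
            public abstract android.content.Intent getPickContactIntent();
        }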

  • How to tell the MinGW linker not to export all symbols?

    - by James R.
    Hello, I'm building a Windows dynamic library using the MinGW toolchain. To build this library I'm statically linking against two other libraries that offer an API, and I have a .def file listing the only symbol I want my library to export. The problem is that GCC is exporting all of the symbols, including the ones from the libraries I'm linking against. Is there any way to tell the linker to export only the symbols in the def file? I know there is the option --export-all-symbols, but there doesn't seem to be an opposite of it. Right now the last line of the build script has this structure: g++ -shared CXXFLAGS DEFINES INCLUDES -o library.dll library.cpp DEF_FILE \ OBJECT_FILES LIBS -Wl,--enable-stdcall-fixup EDIT: The linker docs say that --export-all-symbols is the default behavior and that it is disabled if you provide a def file without passing the option explicitly, except that here it apparently isn't: the symbols from the third-party libs are being exported anyway. EDIT: Adding the option --exclude-libs LIBS doesn't keep their symbols from being exported either.
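
    For reference, there are two linker switches aimed at exactly this, both assuming a reasonably recent binutils: --exclude-libs (which also accepts the special value ALL) and the PE-specific --exclude-all-symbols. Mirroring the placeholders from the build line above, a sketch:

        # export only what the def file lists; placeholders as in the original command
        g++ -shared CXXFLAGS DEFINES INCLUDES -o library.dll library.cpp DEF_FILE \
            OBJECT_FILES LIBS \
            -Wl,--enable-stdcall-fixup \
            -Wl,--exclude-all-symbols \
            -Wl,--exclude-libs,ALL

    Note that --exclude-libs wants archive names (e.g. libfoo.a) when it is not given ALL, so a mismatch between the names passed and the archives actually pulled in would make it appear to do nothing.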

  • How to correctly load 32-bit DLL dependencies when running a program from a batch file

    - by neilwhitaker1
    I have written a tool that references Microsoft.TeamFoundation.VersionControl.Client.dll, which is a 32-bit DLL. When I build my tool on 64-bit Windows, I set Visual Studio to specifically target x86 in order to force a 32-bit build. Targeting x86 instead of Any CPU prevents me from getting a BadImageFormatException, as long as I invoke the tool directly (e.g. by typing "myTool.exe" on the command line). However, if I run a batch file that invokes the tool, I still get the exception. This happens even if the batch file runs in a 32-bit command prompt (%WINDIR%\SysWOW64\cmd.exe). What else can I do to make this work?

  • Is there a convention for organizing the include/exports in a large C++ project ?

    - by BlueTrin
    Hello, in a large C++ solution, is there a best/standard way to separate the include files needed to build an intermediate DLL from the include files that will be used by the DLL's clients? We have grouped all the include files in a folder called Interface (for "DLL interface"), but then clients have to either add the Interface folder as a default include directory or spell out the full path: #include "ProjectName/Interface/myinterface.h" Wouldn't it be better to create a separate folder called exports, containing a folder called ProjectName, and put the include files there? Then clients would write: #include "ProjectName/myinterface.h" If I do that, should I keep the files within the solution and add a post-build event (I use Visual Studio 2005) to copy them into the "export" folder (/ProjectName/)? Or is it better to include the files directly from that folder within my project (which is more direct and has fewer chances of causing maintenance issues)? I am looking more for advice than for a definite solution. Thank you for reading this! Anthony
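
    If you go the post-build route, the copy itself can be a single post-build event; a sketch for VS2005, with hypothetical paths (an exports folder at the solution root):

        xcopy /Y /I /D "$(ProjectDir)Interface\*.h" "$(SolutionDir)exports\ProjectName\"

    Clients would then add the exports folder to their include path and write #include "ProjectName/myinterface.h"; the /D switch keeps the copy incremental, and the main maintenance risk is simply remembering that the exports folder is generated rather than hand-edited.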

  • What is a django QuerySet?

    - by gath
    Guys, when I do this >>> b = Blog.objects.all() >>> b I get this >>> [<Blog: Blog Title>, <Blog: Blog Tile>] When I ask what type b is, >>> type(b) I get this >>> <class 'django.db.models.query.QuerySet'> What does this mean? Is it a data type like dict, list, etc.? An example of how I can build a data structure like a QuerySet would be appreciated. I would like to know how Django builds that QuerySet (the gory details). Gath.
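
    It is neither a dict nor a list: a QuerySet is a lazy wrapper that remembers the query it would run and only hits the database when you iterate, slice or len() it, caching the rows once fetched. A toy Python sketch of that idea (not Django's actual implementation):

        class LazyResults:
            """Toy stand-in for the idea behind QuerySet: hold a query, run it late, cache it."""

            def __init__(self, fetch):
                self._fetch = fetch      # callable that actually talks to the database
                self._cache = None

            def _evaluate(self):
                if self._cache is None:
                    self._cache = list(self._fetch())
                return self._cache

            def __iter__(self):
                return iter(self._evaluate())

            def __len__(self):
                return len(self._evaluate())

            def __repr__(self):
                return repr(self._evaluate())

        # nothing touches the "database" until b is printed or iterated
        b = LazyResults(lambda: ["<Blog: Blog Title>", "<Blog: Blog Tile>"])
        print(b)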

  • Can I use WCF NET.TCP Protocol from Silverlight 4 for a public website ?

    - by pixel3cs
    Does anyone know if the new NET.TCP feature of Silverlight 4 can be used for public websites? From what I know, in the Silverlight 4 beta they announced that WCF NET.TCP could only be used for intranet applications. The reason I am asking is that I want to recode my Silverlight multiplayer chess game (built with SL 3 socket support) on top of the new SL 4 WCF TCP communication protocol. My socket implementation is built from scratch and has big issues with thread safety, plus a few bugs I haven't been able to solve. I am sure the SL 4 team did a great job and simplified all the hard parts, letting us developers concentrate more on the game code instead of on the underlying communication layers.

  • How can I include utility functions from another file into a Sproutcore unit test file?

    - by Lauri
    Let's say I have a few utility functions in the file tests/utils/functions.js. I would like to use these functions from several unit test files. However, I'm not able to, as the Sproutcore build system does not include any external files in the HTML page used to run the unit tests: only application code and the code from the unit tests being run are included. So, is it possible to somehow include JavaScript files for use in unit test files in Sproutcore? I could move functions.js into some other directory inside my application to be able to use it. However, this is not what I want to do, as the utility functions are useless in the final production build and would only make my application larger.

  • Qt on Mac: where to find "configure"

    - by Gil
    Hi, I am very new to the Mac. I downloaded the Qt SDK for Mac, open source edition (http://get.qt.nokia.com/qtsdk/qt-sdk-mac-opensource-2010.02.dmg), and installed the package. I can run qmake, build samples and run demos, but I cannot run configure (in order to build the Qt libraries statically). It says: -bash: No such file or directory. The documentation says I should run this in the "Qt root folder", but what is that folder on the Mac? I looked for it in /usr/bin, /usr/local/Qt4.6 and /Developer/Tools/Qt. Anyway, what is "configure" on the Mac? Is it an executable or a script? Thanks a lot
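
    configure is a shell script that lives in the root of the Qt source tree, not in the binary SDK install, so to build the libraries statically you need the source package and run it from there. A sketch (the version number and path are assumptions):

        # unpack the Qt source package first, e.g. qt-everywhere-opensource-src-4.6.x
        cd ~/qt-everywhere-opensource-src-4.6.2
        ./configure -static -release -nomake examples -nomake demos
        make
        sudo make install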

  • Organization of linking to external libraries in C++

    - by Nicholas Palko
    In a cross-platform (Windows, FreeBSD) C++ project I'm working on, I am making use of two external libraries, Protocol Buffers and ZeroMQ. In both projects I am tracking the latest development branch, so these libraries are recompiled / replaced often. For a development scenario, where is the best place to keep libprotobuf.{a,lib} and zeromq.{so,dll}? Should I have my build script copy them from their respective project directories into my local project's directory (say MyProjectRoot/lib or MyProjectRoot/bin) before I build my project? This seems preferable to tossing things into /usr/local/lib, as I wouldn't want to replace a system-wide stable version with the latest experimental one. CMake warns me whenever I specify a relative path for linking, so I suspect copying is a better solution than relative linking? Is this the best approach? Thanks for your help!
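
    One way to keep the build self-contained and CMake quiet is to copy the freshly built libraries into a project-local lib/ directory and let find_library hand the linker absolute paths; a CMakeLists.txt sketch, with the directory layout and target name as assumptions:

        set(THIRDPARTY_LIB_DIR ${CMAKE_SOURCE_DIR}/lib)    # e.g. MyProjectRoot/lib

        find_library(PROTOBUF_LIBRARY protobuf PATHS ${THIRDPARTY_LIB_DIR} NO_DEFAULT_PATH)
        find_library(ZEROMQ_LIBRARY   zmq      PATHS ${THIRDPARTY_LIB_DIR} NO_DEFAULT_PATH)

        add_executable(myapp main.cpp)
        target_link_libraries(myapp ${PROTOBUF_LIBRARY} ${ZEROMQ_LIBRARY})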

  • ZF-Autoloader not working in UnitTests on Ubuntu

    - by Sam
    I've got a problem regarding unit-testing a Zend Framework application under Ubuntu 12.04. The project structure is a default Zend application, with the models laid out as follows:

        ./application
            ./models
                ./DbTable
                    ./ProjectStatus.php (Application_Model_DbTable_ProjectStatus)
                ./Mappers
                    ./ProjectStatus.php (Application_Model_Mapper_ProjectStatus)
                ./ProjectStatus.php (Application_Model_ProjectStatus)

    The problem is with the Zend-specific autoloading. The naming convention appears to be that the folder Mappers loads all classes ending in _Mapper but not _Mappers; this is some internal Zend behaviour, which is fine so far. On my Windows machine phpunit runs without any problems, instantiating all of those classes. On my Ubuntu machine, however, with Jenkins running on it, phpunit fails to find the appropriate classes, giving me the following error:

        Fatal error: Class 'Application_Model_Mapper_ProjectStatus' not found in /var/lib/jenkins/jobs/PAM/workspace/tests/application/models/Mapper/ProjectStatusTest.php on line 39

    The error really appears to be that the Zend autoloader doesn't load on the Ubuntu machine, but I can't figure out how or why. I think I've double-checked every point of contact with the Zend autoloading machinery, but I just can't figure this out. I'll paste the (from my point of view relevant) snippets and hope someone has some insight.

    Jenkins snippet for PHPUnit:

        <target name="phpunit" description="Run unit tests with PHPUnit">
            <exec executable="phpunit" failonerror="true">
                <arg line="--configuration '${basedir}/tests/phpunit.xml' --coverage-clover '${basedir}/build/logs/clover.xml' --coverage-html '${basedir}/build/coverage/.' --log-junit '${basedir}/build/logs/junit.xml'" />
            </exec>
        </target>

    ./tests/phpunit.xml:

        <phpunit bootstrap="./bootstrap.php">
            ... this shouldn't be of relevance ...
        </phpunit>

    ./tests/bootstrap.php:

        <?php
        // Define path to application directory
        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

        // Define application environment
        defined('APPLICATION_ENV')
            || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'testing'));

        // Ensure library/ is on include_path
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(APPLICATION_PATH . '/../library'),
            get_include_path(),
        )));

        require_once 'Zend/Loader/Autoloader.php';
        Zend_Loader_Autoloader::getInstance();

    Any help will be appreciated.
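
    A likely culprit, hedged: the default Zend_Application_Module_Autoloader maps the Model_Mapper_ prefix to models/mappers (lower case), which a Windows filesystem matches against the Mappers folder case-insensitively while Ubuntu does not. Renaming the folder to mappers, or registering the resource type explicitly in the test bootstrap, should confirm it; a sketch, assuming the folder really is models/Mappers:

        // tests/bootstrap.php (sketch): point the autoloader at models/Mappers explicitly
        $autoloader = new Zend_Application_Module_Autoloader(array(
            'namespace' => 'Application',
            'basePath'  => APPLICATION_PATH,
        ));
        $autoloader->addResourceType('mappers', 'models/Mappers', 'Model_Mapper');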

  • Building NDK app with Android ADT on Windows

    - by Michael Sh
    While there is a ton of information on the topic, there is no clear guide on how to compile C++ code in ADT. Is Cygwin required? Where do the build artifacts go? How do I configure the destination folder for the packaged build? Are there debug and release versions? Is it possible to debug and step through the C++ code in ADT? Maybe it is all described in a single resource, in which case a link would be welcome!
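
    Hedged short answers, for what they are worth: since NDK r7 the NDK ships an ndk-build.cmd, so Cygwin is no longer required on Windows; native sources and an Android.mk (plus an optional Application.mk) go under jni/, the resulting .so lands in libs/<abi>/ (intermediates under obj/) and is packaged into the APK automatically; and ndk-build NDK_DEBUG=1 produces a debuggable build, though stepping through C++ inside ADT depends on the NDK plugin version. A minimal Android.mk sketch (module and file names are hypothetical):

        # jni/Android.mk
        LOCAL_PATH := $(call my-dir)

        include $(CLEAR_VARS)
        LOCAL_MODULE    := native-lib
        LOCAL_SRC_FILES := native-lib.cpp
        LOCAL_LDLIBS    := -llog
        include $(BUILD_SHARED_LIBRARY)

    Running ndk-build (or ndk-build.cmd) from the project root then builds libnative-lib.so for each target ABI.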

  • What are real-world examples of Gradle's dependency graph?

    - by Michael Easter
    As noted in the documentation, Gradle uses a directed acyclic graph (DAG) to build a dependency graph. From my understanding, having separate phases for evaluation and execution is a major feature for a build tool; the Gradle doc states, for example, that this enables some features that would otherwise be impossible. I'm interested in real-world examples that illustrate the power of this feature. What are some use cases for which a dependency graph is important? I'm especially interested in personal stories from the field, whether with Gradle or a similarly equipped tool. I am making this 'community wiki' from the outset, as it will be difficult to assess a 'correct' answer.
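
    The textbook illustration, adapted from the Gradle user guide, is acting on the graph after evaluation but before execution, e.g. choosing a version number depending on whether the release task is about to run:

        task release {
            doLast { println "releasing $version" }
        }

        gradle.taskGraph.whenReady { graph ->
            // the complete task graph is known here, before any task has executed
            version = graph.hasTask(release) ? '1.0' : '1.0-SNAPSHOT'
        }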

  • PHP Framework Benefits / Downfalls

    - by Lizard
    I have been a PHP developer for about 10 years now, and until about a month ago I had never used a framework. The framework I am now using, due to an existing codebase, is CakePHP 1.2. I can see certain benefits of frameworks, with the basic helpers, default layouts and so on, and I can definitely see the benefits of MVC keeping the logic separate, etc. But the query building just seems bloated. Is this expected? Am I likely to be able to build better queries than the framework could build? I just feel I could get my apps running better without a framework. What are your thoughts?
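
    For what it's worth, the generated SQL is usually only as bloated as the find() call that asks for it: constraining fields and recursion keeps it close to hand-written SQL, and Model::query() remains as an escape hatch. A CakePHP 1.2-style sketch (model and field names are made up):

        // fetch only what the view needs instead of every column and associated model
        $posts = $this->Post->find('all', array(
            'fields'     => array('Post.id', 'Post.title', 'Post.created'),
            'conditions' => array('Post.status' => 'published'),
            'order'      => 'Post.created DESC',
            'limit'      => 20,
            'recursive'  => -1,   // skip the automatic joins on associated models
        ));

        // when the ORM really is in the way
        $rows = $this->Post->query("SELECT id, title FROM posts WHERE status = 'published'");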

  • Conditional Batch file renaming with mysql data

    - by Paul Stevens
    Hello, I wonder if anyone knows how I could rename multiple files, all of them originally named with the same structure, adding data extracted from a MySQL DB according to specific rules. For example, I have 500 files named with these variables: ID NAME ADDRESS PHONE.wav = 1234567 PAULSIMON WESTDR122 9942213456.wav Now I need to rename the files, taking some data from the database for each file and appending the result of a query to the filename. The data used to build the query is taken from the original file name, such as the ID or NAME. In other words, let's say I want to build a query using the ID & NAME from the file 1234567 PAULSIMON WESTDR123 9942213456.wav in the WHERE clause, fetch another value such as BirthDate, and add it to the new filename, so the final result would be: ID NAME ADDRESS PHONE BIRTHDATE.wav I will appreciate any help on this. I need this to be done on a Linux server.
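
    A rough shape of such a script, assuming the filename fields really are space-separated and that a table like employees(id, birthdate) exists; table, column and credential names are all hypothetical:

        #!/bin/bash
        # for each "ID NAME ADDRESS PHONE.wav", look the ID up and append the result before .wav
        for f in *.wav; do
            id=${f%% *}                                   # first space-separated token
            birthdate=$(mysql -N -B -u dbuser -p'secret' mydb \
                -e "SELECT birthdate FROM employees WHERE id='${id}' LIMIT 1")
            if [ -n "$birthdate" ]; then
                mv -v -- "$f" "${f%.wav} ${birthdate}.wav"
            fi
        done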

  • How can the number of modifications be changed in the TeamCity success email?

    - by Jason Slocomb
    I would like to list an arbitrary number of changes rather than the default 10 that are listed now. We have a dev build on check-in where this isn't necessary, but for the once-daily build that goes out I would like to have all of the day's changes listed. If that is not possible (a range, perhaps?), then the last n would be fine. I've looked in the .ftl files, specifically common.ftl. It contains the macro that uses data from jetbrains.buildServer.notification.impl.ChangesBean to acquire the changes. I am hopeful there is a way to set properties externally that ChangesBean will look at, but I haven't been able to discover anything. Ideas?

  • desktop module for existing web application

    - by maxxxee
    My client has a running web application which has been online for more than a year. Recently the client introduced smart cards for his employees. Because of the difficulty of integrating the smart card and its API with a web interface (I will post another, more detailed question on this later), we are planning to build a desktop interface for this. There are 10-20 terminals which will use the desktop interface. Three approaches I have considered for doing this: Direct connection and operations on the DB - not using this because of data integrity and consistency issues. Build web service endpoints and use them from the desktop interface. Build a DLL with common functions and use it from both the web and desktop applications. Questions: 1. What are your opinions on approaches 2 and 3? 2. Any other approach that I should consider? Note: I am using the .NET Framework; the web application is in ASP.NET.
