Search Results

Search found 22951 results on 919 pages for 'debug build'.

Page 81/919

  • Error related to pkg-config when installing frei0r as part of another package

    - by Anentropic
    I am trying to build https://github.com/mltframework/shotcut on OS X Lion (using their script in scripts/build_shotcut.sh) and after numerous hurdles I'm stuck on this error: ./configure: line 16062: syntax error near unexpected token `OPENCV,' ./configure: line 16062: `PKG_CHECK_MODULES(OPENCV, opencv >= 1.0.0, HAVE_OPENCV=true, true)' ERROR: Unable to configure frei0r From what I already googled this means that the PKG_CHECK_MODULES macro hasn't been defined, which probably means there's something wrong with my pkg-config, which I installed via Homebrew. Sounds like the pkg.m4 file isn't found. When I brew install pkg-config I get the following warning: Warning: m4 macros were installed to "share/aclocal". Homebrew does not append "/usr/local/share/aclocal" to "/usr/share/aclocal/dirlist". If an autoconf script you use requires these m4 macros, you'll need to add this path manually. Well I've appended that line to the dirlist file and it doesn't fix the problem above. Can anyone suggest a way forward here? I have briefly tried building my own pkg-config from source but (bizarrely) when I tried to ./configure I got the following error: checking for pkg-config... no ./configure: line 13540: --exists: command not found configure: error: pkg-config and glib-2.0 not found, please set GLIB_CFLAGS and GLIB_LIBS to the correct values if building pkg-config needs pkg-config it seems like a weird catch 22 situation... I think this is probably an unnecessary sidetrack anyway.
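    One possible way forward (a sketch, not from the article): make Homebrew's pkg.m4 visible to aclocal and regenerate frei0r's configure script. This assumes an autotools-based frei0r checkout and the default Homebrew prefix of /usr/local.
      cd frei0r
      aclocal -I /usr/local/share/aclocal   # pick up pkg.m4 so PKG_CHECK_MODULES is defined
      autoconf                              # regenerate ./configure with the macro expanded
      ./configure && make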

    Read the article

  • Delay index build until SQL Server table load is complete with SSIS

    - by Mattew
    I have a large table that I am updating. Is it possible to disable index updates on the destination table until the load is complete? It seems like a waste for it to be constantly updating the index with each commit. I can just drop and recreate the index before and after the load, I just want to know if there is a quick way to configure that in the OLEDB or SQL Server destination. Server is Windows Server 2003 Datacenter Edition, running SQL Server 2008 Standard Edition with SSIS.
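    A minimal sketch of the drop-free alternative, using placeholder names (IX_MyIndex, dbo.MyTable, MyServer); only nonclustered indexes should be disabled this way, since disabling the clustered index makes the table unreadable. The two statements can be wired into Execute SQL or Execute Process tasks around the data flow:
      sqlcmd -S MyServer -d MyDatabase -Q "ALTER INDEX IX_MyIndex ON dbo.MyTable DISABLE"
      # ... run the SSIS data flow / bulk load into dbo.MyTable here ...
      sqlcmd -S MyServer -d MyDatabase -Q "ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD"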

    Read the article

  • Built and migrated to software raid (mdadm) on GPT disk, now can't assemble array

    - by John H
    mdadm, gpt issues, unrecognized partitions. Simplified question: How do I get mdadm to recognize GPT partitions? I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software raid 1. I have done similar in the past, but in this case I was adding in a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array of /dev/md127 (the OS assigned that name and keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the raid array accessible so I can pull the data off and start fresh (without using GPT).
      /dev/md127
      /dev/sda
      /dev/sda1 <- GPT type partition
      /dev/sda1 <- exists within the GPT part, member of md127
      /dev/sda2 <- exists within the GPT part, empty
      /dev/sdb
      /dev/sdb1 <- GPT type partition
      /dev/sdb1 <- exists within the GPT part, member of md127
    History:
    POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a raid1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition.
    POINT B: I now had two copies of my data: one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1 as fully active and synced.
    POINT C: I rebooted onto linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing.
    POINT D (current): I rebooted again to test if the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions.
    --
      root@freshdesk /root % uname -a
      Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux
    === /proc/partitions and parted look good:
      root@freshdesk /root % cat /proc/partitions
      major minor  #blocks  name
        7    0     301788  loop0
        8    0  976762584  sda
        8    1  732579840  sda1
        8    2  244181703  sda2
        8   16  732574584  sdb
        8   17  732573543  sdb1
        8   32    7876607  sdc
        8   33    7873349  sdc1
      (parted) print all
      Model: ATA ST31000528AS (scsi)
      Disk /dev/sda: 1000GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Number  Start   End     Size   File system  Name                Flags
       1      1049kB  750GB   750GB  ext4
       2      750GB   1000GB  250GB               Linux/Windows data
      Model: ATA SAMSUNG HD753LJ (scsi)
      Disk /dev/sdb: 750GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Number  Start   End    Size   File system  Name        Flags
       1      1049kB  750GB  750GB  ext4         Linux RAID  raid
      Model: SanDisk SanDisk Cruzer (scsi)
      Disk /dev/sdc: 8066MB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start   End     Size    Type     File system  Flags
       1      31.7kB  8062MB  8062MB  primary  fat32        boot, lba
    === # no sda2, and I double the sdb1 is the one shown in parted
      root@freshdesk /root % blkid
      /dev/loop0: TYPE="squashfs"
      /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4"
      /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat"
      /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member"
      root@freshdesk /root % mdadm -E /dev/sda1
      mdadm: No md superblock detected on /dev/sda1.
      # this is probably a result of me attempting to force the array up, putting superblocks on the GPT partition
      root@freshdesk /root % mdadm -E /dev/sdb1
      /dev/sdb1:
                Magic : a92b4efc
              Version : 0.90.00
                 UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41
        Creation Time : Fri Mar 30 19:25:30 2012
           Raid Level : raid1
        Used Dev Size : 732568320 (698.63 GiB 750.15 GB)
           Array Size : 732568320 (698.63 GiB 750.15 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 127
          Update Time : Sat Mar 31 12:39:38 2012
                State : clean
       Active Devices : 1
      Working Devices : 2
       Failed Devices : 1
        Spare Devices : 1
             Checksum : a7d038b3 - correct
               Events : 20195
            Number   Major   Minor   RaidDevice   State
      this     2       8      17         2        spare          /dev/sdb1
         0     0       8       1         0        active sync    /dev/sda1
         1     1       0       0         1        faulty removed
         2     2       8      17         2        spare          /dev/sdb1
    ===
      root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1
      mdadm: no recogniseable superblock on /dev/sda1
      mdadm: /dev/sda1 has no superblock - assembly aborted
      root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1
      mdadm: cannot open device /dev/sdb1: Device or resource busy
      mdadm: /dev/sdb1 has no superblock - assembly aborted
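    A first recovery step many would try (a sketch, not a guaranteed fix: the superblock above reports /dev/sdb1 only as a spare, so even a forced, degraded assemble may refuse to start):
      mdadm --stop /dev/md127                              # release sdb1 ("Device or resource busy")
      mdadm --assemble --run --force /dev/md127 /dev/sdb1  # try to start the mirror degraded
      mkdir -p /mnt/md127
      mount -o ro /dev/md127 /mnt/md127                    # if it starts, mount read-only and copy the data off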

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on; does it need anything special? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think it is good to have a general answer on what direction to choose, but here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with the content changing every day. We want the ability to make regular live broadcasts. I guess we will need several streaming-quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate), so if you could point out what parameters we should be aware of, that would be great. Uplink to the internet is to be determined. We plan to start with something and scale up along the way. If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.

    Read the article

  • I want to build a Debian apt site for local LAN updates

    - by user73504
    Hi, I have downloaded all of Debian's DVD disks, and I have set up the Apache httpd service. I combined the files from all the DVD disks, but I found that a .gpg file is needed and I can't create it; it looks like the repository's signature file. So when I set my /etc/apt/sources.list file as follows: deb http://192.168.1.102/apt/debian squeeze main contrib it tells me that GPG verification of the files failed. So I want to know: how do I create the GPG file, and do I need to do any other work besides putting the DVDs' files into Apache's htdocs path?
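    A hedged sketch of generating the missing signature with apt-ftparchive and gpg, assuming the merged DVD tree is served from /var/www/apt/debian and that a GnuPG key named "LAN repo" already exists on the server (both are assumptions, not from the question):
      cd /var/www/apt/debian/dists/squeeze
      apt-ftparchive release . > Release                         # rebuild the Release file for the merged tree
      gpg --default-key "LAN repo" -abs -o Release.gpg Release   # detached signature that apt verifies
      gpg --export -a "LAN repo" > /var/www/apt/lan-repo.key     # publish the public key
      # on each client:
      #   wget -qO - http://192.168.1.102/apt/lan-repo.key | apt-key add -
      #   apt-get update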

    Read the article

  • Can I build a laptop from scratch?

    - by Ciaran
    I've been thinking of this as the same approach as building a desktop computer from scratch, but I've never really come across the core components. Can I buy a laptop "case"? Can I buy a motherboard/PSU to fit that case? Can I choose a CPU fan? What are my battery options, obviously taking the case into consideration? Why isn't this as easy as putting together a desktop computer? Shouldn't there be standards such as ATX? If they exist, what are they?

    Read the article

  • CPU temperatures high on new build after gaming

    - by Reznor
    My friend had a problem with his computer a while back. His games were crashing, even within the menus. He was stumped as to what the problem was, so I posted on here requesting help. He found out a day later, when his computer would start up but wouldn't display anything on the screen. His video card must have come screwed up. So, he got a replacement. Now, there's a new problem. His temperatures, which were acceptable before, are now insanely high. His GPU temperature runs 70-80C, which is understandable considering he's running his games maxed out, but the real problem here is his processor and motherboard temperatures. All four of his cores are running at 88-90C after coming out of a game. His motherboard temperature was also 70C at one point. In terms of cooling, his case should definitely be adequate. He has an Antec Twelve Hundred. He's using stock fans. The cable management in his case is very good; better than average. He's using the stock heatsink with the processor too, but note, it was fine before the replacement, so it isn't like there's some inherent problem. He has checked the case too. Everything's fine! No cables in the way. The heatsink is seated properly. He turned his case fans up to high as well, but the temperatures are persisting. Could the processor be overheating due to running games maxed out? Any ideas?

    Read the article

  • Workstation 7 build 203739 - Capture Movie Mouse Pointer Not Visible

    - by BMIVM
    Hi, I have noticed that whenever I create a Capture Movie, the movie is fine but the mouse pointer is not visible at all. Clicks on buttons and file menus are executed, but it is hard for a viewer of the captured movie session to follow the recording smoothly. This was not a problem in previous versions of Workstation. Is this a bug, or is there a setting I can set to see the mouse pointer? Note: VM Tools are installed, the host is Vista Ultimate Edition SP1 x64, and the guest is Win XP SP2. Thanks in advance for your help.

    Read the article

  • Is there any way to shut up my ATI HD 5770?

    - by slpsys
    So to preface, I basically built Jeff's machine; I already had some of the components, including (scarily enough) the exact same case1. I've been buying bits and pieces over the past few months, which coincided perfectly with his recent post about three monitors, though not being a gamer outright, I opted for the second-from-the-bottom option. After finally plopping all the pieces lovingly into the case this evening, I turn it on...and it sounds like four professional-grade hair dryers. Some quick regression analysis determined that with the video card out, the running machine sounded no louder than our house's vents. Basically, my last desktop build included a $45-at-the-time graphics card, and it's been Macbook Pros and workstations since then, so I have zero idea whether I'll just be able to tune the fan speed later on. Will I be able to get this thing to quiet down every time I'm not playing Modern Warfare 2 at maximum framerate, or should I just send this thing back now and get the quietest card in my price range? 1 One thing of note is that I do not have noise-absorbing foam in the case, as is pictured in the article. I'm only mentioning that because I suspect it could drop the overall output a few decibels, but obviously not that many.

    Read the article

  • Is there a chroot build script somewhere?

    - by Nils
    I am about to develop a little script to gather information for a chroot jail. In my case this looks (at first glance) pretty simple: the application has a clean rpm install and put almost all files into a sub-directory of /opt. My idea is:
    1. Do a find of all binaries
    2. Check their library dependencies
    3. Record the results into a list
    4. Do an rsync of that list into the chroot target directory before startup of the application
    Now I wonder: is there any script around that already does such a job (perl/bash/python)? So far I have found only specialized solutions for single applications (like sftp-chroot). Update: I see three close-votes for the reason "off topic". This question arose because I have to install that ancient piece of software on a server at work. So if you still feel this is off-topic, leave a comment...
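    A rough sketch of the gather step, assuming the application lives under /opt/myapp and the jail under /srv/jail (both placeholders); the dynamic loader, config files and any /dev nodes still have to be added separately:
      APP=/opt/myapp; JAIL=/srv/jail
      BINS=$(find "$APP" -type f -perm -u+x)                              # candidate executables
      LIBS=$(ldd $BINS 2>/dev/null | awk '/=> \//{print $3}' | sort -u)   # resolved shared libraries
      printf '%s\n' $BINS $LIBS > /tmp/jail-files.txt
      rsync -a --files-from=/tmp/jail-files.txt / "$JAIL"                 # copy everything, preserving paths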

    Read the article

  • VMware Workstation 7 build 203739 - Capture Movie Mouse Pointer Not Visible

    - by BMIVM
    Hi, I have noticed that whenever I create a Capture Movie, the movie is fine but the mouse pointer is not visible at all. Clicks on buttons and file menus are executed, but it is hard for a viewer of the captured movie session to follow the recording smoothly. This was not a problem in previous versions of Workstation. Is this a bug, or is there a setting I can set to see the mouse pointer? Note: VM Tools are installed, the host is Vista Ultimate Edition SP1 x64, and the guest is Win XP SP2. Thanks in advance for your help.

    Read the article

  • How to build a network across buildings?

    - by Omie
    Hi! I need some help building a network between hundreds of computers spread across multiple buildings of my college. Yes, I'll be doing this as a part of my college project. Please see this image, it will give you enough of an idea of what I'm trying to achieve. http://i.imgur.com/rOohx.png All the computers in all buildings should be able to connect to the server. Once the network is up, there will be a set of services over the intranet and network use will be moderate; say there will be an email server and an http server. My point is, I cannot afford much performance loss. It feels easy to connect the computers inside one building to each other; however, I'm clueless as to how to connect all of them to the server. I mean, just one cable won't be enough to connect one building to the server, right? How should I go about it? I am not expecting a detailed configuration, just a heads-up will do :) Thanks

    Read the article

  • Trying to build a history of popular laptop models

    - by John
    A requirement on a software project is it should run on typical business laptops up to X years old. However while given a specific model number I can normally find out when it was sold, I can't find data to do the reverse... for a given year I want to see what model numbers were released/discontinued. We're talking big-name, popular models like Dell Latitude/Precision/Vostro, Thinkpads, HP, etc. The data for any model is out there but getting a timeline is proving hard. Sites like Dell are (unsurprisingly) geared around current products, and even Wikipedia isn't proving very reliable. You'd think this data must have been collated by manufacturers or enthusiasts, surely?

    Read the article

  • Build problems when adding `__str__` method to Boost Python C++ class

    - by Rickard
    I have started to play around with boost python a bit and ran into a problem. I tried to expose a C++ class to python, which posed no problems, but I can't seem to manage to implement the __str__ functionality for the class without getting build errors I don't understand. I'm using boost 1_42 prebuilt by boostpro. I build the library using cmake and the vs2010 compiler. I have a very simple setup. The header file (tutorial.h) looks like the following:
      #include <iostream>
      namespace TestBoostPython{
          class TestClass {
          private:
              double m_x;
          public:
              TestClass(double x);
              double Get_x() const;
              void Set_x(double x);
          };
          std::ostream &operator<<(std::ostream &ostr, const TestClass &ts);
      };
    and the corresponding cpp-file looks like:
      #include <boost/python.hpp>
      #include "tutorial.h"
      using namespace TestBoostPython;
      TestClass::TestClass(double x) { m_x = x; }
      double TestClass::Get_x() const { return m_x; }
      void TestClass::Set_x(double x) { m_x = x; }
      std::ostream &operator<<(std::ostream &ostr, TestClass &ts)
      {
          ostr << ts.Get_x() << "\n";
          return ostr;
      }
      BOOST_PYTHON_MODULE(testme)
      {
          using namespace boost::python;
          class_<TestClass>("TestClass", init<double>())
              .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
              .def(str(self))
              ;
      }
    The CMakeLists.txt looks like the following:
      CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
      project (testme)
      FIND_PACKAGE( Boost REQUIRED )
      FIND_PACKAGE( Boost COMPONENTS python REQUIRED )
      FIND_PACKAGE( PythonLibs REQUIRED )
      set(Boost_USE_STATIC_LIBS OFF)
      set(Boost_USE_MULTITHREAD ON)
      INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS})
      INCLUDE_DIRECTORIES ( ${PYTHON_INCLUDE_PATH} )
      add_library(testme SHARED tutorial.cpp)
      target_link_libraries(testme ${Boost_PYTHON_LIBRARY})
      target_link_libraries(testme ${PYTHON_LIBRARY})
    The build error I get is the following:
      Compiling...
      tutorial.cpp
      C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(31) : error C2780: 'void boost::python::api::object_operators::visit(ClassT &,const char *,const boost::python::detail::def_helper &) const' : expects 3 arguments - 1 provided
          with [ U=boost::python::api::object ]
      C:\Program Files (x86)\boost\boost_1_42\boost/python/object_core.hpp(203) : see declaration of 'boost::python::api::object_operators::visit'
          with [ U=boost::python::api::object ]
      C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(67) : see reference to function template instantiation 'void boost::python::def_visitor_access::visit,classT>(const V &,classT &)' being compiled
          with [ DerivedVisitor=boost::python::api::object, classT=boost::python::class_, V=boost::python::def_visitor ]
      C:\Program Files (x86)\boost\boost_1_42\boost/python/class.hpp(225) : see reference to function template instantiation 'void boost::python::def_visitor::visit>(classT &) const' being compiled
          with [ DerivedVisitor=boost::python::api::object, W=TestBoostPython::TestClass, classT=boost::python::class_ ]
      .\tutorial.cpp(29) : see reference to function template instantiation 'boost::python::class_ &boost::python::class_::def(const boost::python::def_visitor &)' being compiled
          with [ W=TestBoostPython::TestClass, U=boost::python::api::object, DerivedVisitor=boost::python::api::object ]
    Does anyone have any idea on what went wrong? If I remove the .def(str(self)) part from the wrapper code, everything compiles fine and the class is usable from python. I'd be very grateful for assistance. Thank you, Rickard

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
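    For the source side of the proposed workflow, a minimal sketch of the branch-and-cherry-pick commands (branch name and revision number are placeholders; the ^/ repository-relative syntax assumes Subversion 1.6+ and is run from within a working copy):
      svn copy ^/trunk ^/branches/2.0-maint -m "Create maintenance branch for the 2.0 release"
      # later, working in a checkout of branches/2.0-maint, port one specific trunk fix:
      svn merge -c 1234 ^/trunk .
      svn commit -m "Merge r1234 (bug fix) from trunk into 2.0-maint"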

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
    However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • TFS: cannot build my app, CS0042: Unexpected error creating debug information file. Access is denied.

    - by zeta
    I am trying to deploy my MVC app, but in the TFS build I get this error message; CSC : fatal error CS0042: Unexpected error creating debug information file 'c:\Builds\2\STAS\STAS\Sources\Documents and Settings\jyothisrinivasa\My Documents\Visual Studio 2010\Projects\STAS\STAS\obj\Debug\STAS.PDB' -- 'c:\Builds\2\STAS\STAS\Sources\Documents and Settings\jyothisrinivasa\My Documents\Visual Studio 2010\Projects\STAS\STAS\obj\Debug\STAS.pdb: Access is denied. I have excluded the Debug directory from my application, so why am I getting this?

    Read the article

  • "SignTool error: Access is denied" in TFS 2010 build process

    - by user351352
    I'm getting "SignTool Error: Access is Denied" when I attempt to sign a file. When I use an administrator cmd, all works fine. However, this process is going to be used in a TFS 2010 build process and using the InvokeProcess task with signtool gives the same access denied message as a non-administrator command prompt. More info: On a Win2008 R2 enterprise machine. User is machine admin and on the domain. The TFS Build service is also set to run as this user. Using a self signed certificate created using these instructions: How do I create a self-signed certificate for code signing on Windows? After following these instructions I have the following files: MyCA.cer MyCA.pvk MySPC.cer MySPC.pvk MySPC.pfx MyCA is in my Trusted Root Certification Authorities I imported MySPC.pfx into personal certificates, following the advice here: SignTool error: Access is denied To do the signing I'm using the thumbprint of the MySPC.pfx that was imported into the Personal section so my signtool command looks like: sign /sha1 1e9d7b5ad98552d9c58944e3f3903e6b929f4819 /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" Once again this works in Admin mode. This also works when running cmd as administrator: sign /f "C:\Code Signing Non-Release\MySPC.pfx" /t http://timestamp.verisign.com/scripts/timestamp.dll "FileName" New to code signing in general, so any help is welcome.

    Read the article

  • Is make -j distcc possible to scale over 5 times?

    - by holmes
    Since distcc cannot keep state and can only send jobs and headers, letting the servers use just the data sent to preprocess and compile, I think the latest distcc has a scalability problem. In my local build environment, which has approx. 10,000 C/C++ files to build, I could only build about 2 times faster than without distcc (but still using make -j), even with 20 build servers. What do you think is the problem? If anyone has achieved scalability of more than 10-20 times using make -j and distcc, please let me know. The following product claims that it is impossible to scale make -j and distcc more than 5 times faster. http://www.electric-cloud.com/products/electricaccelerator.php I think this can be improved by:
    1. Letting the distccd server maintain sessions
    2. Tied to those sessions, having the servers cache their own header directories
    3. Doing preprocessing on demand on the distccd server
    4. Doing this through an LD_PRELOADed library libdistcc.so which replaces stat/open syscalls and fetches the header files over the network
    ... Has anyone done this kind of thing?
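    For comparison, a sketch of what stock distcc offers today: pump mode (distcc >= 3.0) ships headers to the servers so preprocessing is distributed too, which is close to the session/header-cache idea above. Host names and job counts are placeholders:
      # plain mode: every file is preprocessed locally, which becomes the bottleneck
      export DISTCC_HOSTS="server1/8 server2/8 server3/8"
      make -j24 CC=distcc
      # pump mode: ",cpp,lzo" lets the servers preprocess and accept compressed sources
      export DISTCC_HOSTS="server1/8,cpp,lzo server2/8,cpp,lzo server3/8,cpp,lzo"
      pump make -j24 CC=distcc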

    Read the article

  • Can't see why JavaMail doesn't work in a Spring MVC application on Tomcat

    - by Kartoch
    I have written a small web application using Spring MVC. It runs on tomcat and uses the JavaMail library. At the present time I can't find out why the mails are not sent, and I have no pertinent log messages in the tomcat log files to find where the problem is. At the present time, my logging is configured in a log4j.properties file in the root of my CLASSPATH:
      log4j.rootLogger= DEBUG, CONSOLE
      log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
      log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
      log4j.appender.CONSOLE.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
    How can I see the JavaMail log messages in debug mode? I think it is related to the JDK logging system (as I'm using Sun's JavaMail) but I don't know how to configure it. Edit: Well, one problem is solved and another arises. I changed my bean definition to include debug support for javamail: mail.debug=true. But there is still no pertinent info in it:
      DEBUG: JavaMail version 1.4.1ea-SNAPSHOT
      DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers
      DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.providers (No such file or directory)
      DEBUG: !anyLoaded
      DEBUG: not loading resource: /META-INF/javamail.providers
      DEBUG: successfully loaded resource: /META-INF/javamail.default.providers
      DEBUG: Tables of loaded providers
      DEBUG: Providers Listed By Class Name: {com.sun.mail.smtp.SMTPSSLTransport=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], com.sun.mail.smtp.SMTPTransport=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc], com.sun.mail.imap.IMAPSSLStore=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3SSLStore=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], com.sun.mail.imap.IMAPStore=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], com.sun.mail.pop3.POP3Store=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc]}
      DEBUG: Providers Listed By Protocol: {imaps=javax.mail.Provider[STORE,imaps,com.sun.mail.imap.IMAPSSLStore,Sun Microsystems, Inc], imap=javax.mail.Provider[STORE,imap,com.sun.mail.imap.IMAPStore,Sun Microsystems, Inc], smtps=javax.mail.Provider[TRANSPORT,smtps,com.sun.mail.smtp.SMTPSSLTransport,Sun Microsystems, Inc], pop3=javax.mail.Provider[STORE,pop3,com.sun.mail.pop3.POP3Store,Sun Microsystems, Inc], pop3s=javax.mail.Provider[STORE,pop3s,com.sun.mail.pop3.POP3SSLStore,Sun Microsystems, Inc], smtp=javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]}
      DEBUG: successfully loaded resource: /META-INF/javamail.default.address.map
      DEBUG: !anyLoaded
      DEBUG: not loading resource: /META-INF/javamail.address.map
      DEBUG: not loading file: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map
      DEBUG: java.io.FileNotFoundException: /usr/lib/jvm/java-6-sun-1.6.0.12/jre/lib/javamail.address.map (No such file or directory)

    Read the article

  • NAnt or TFS Build: which is better?

    - by Leszek Wachowicz
    There was a question about MSBuild and NAnt advantages and disadvantages. Now let's see which is better: TFS Build (with MSBuild) or NAnt. In my opinion NAnt, because you can easily move the build environment to another machine in a few seconds (it just depends on copying files), it's easier to manage, it's much faster to debug, and it's not integrated with Team Foundation Server. What do you think?

    Read the article

  • Eclipse: What is the minimum Eclipse installation needed for a headless PDE build?

    - by Christoph
    Hi, I am currently using PDE build in headless mode to build my OSGI bundle project. The PDE Antrunner task uses an Eclipse installation, and I am just pointing it to my local Eclipse installation. Unfortunately my Eclipse installation is about 260MB, but I assume that a PDE build does NOT require all of the plugins in a standard Eclipse installation. Does anyone know what the minimum list of plugins is that I need for doing a headless PDE build? All of my dependencies I actually have in a custom target platform folder, so I guess the only things I need from my Eclipse installation are the dependencies which PDE build itself actually needs. But what are those? Can I shrink my installation to a bare minimum? My goal is to also check this "build-eclipse" folder into my project's SVN so that when you check it out, you have everything you need to start a full build, without touching any build.properties. But I don't want to commit 266MB of Eclipse if I maybe need only 20MB of it. Thanks Christoph
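    For reference, a sketch of the headless invocation this boils down to, assuming ECLIPSE_HOME points at the trimmed installation and a builder/ directory holds build.properties; the plugin version numbers in the paths vary per install:
      java -jar $ECLIPSE_HOME/plugins/org.eclipse.equinox.launcher_*.jar \
        -application org.eclipse.ant.core.antRunner \
        -buildfile $ECLIPSE_HOME/plugins/org.eclipse.pde.build_*/scripts/build.xml \
        -Dbuilder=/path/to/builder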

    Read the article

  • Android Studio Could not call IncrementalTask.taskAction() on task ':project:dexDebug'

    - by akenawell85x
    I recently decided to switch from Eclipse to Android Studio. I imported a project I was working on and am now getting this error when I try to run the project:
      Gradle: Execution failed for task ':project:dexDebug'.
      > Could not call IncrementalTask.taskAction() on task ':project:dexDebug'
    I've been cruising this site for 2 days now and trying different suggestions to no avail. I did run gradlew compileDebug --stacktrace and this is what I got:
      C:\Users\adam\AndroidStudioProjects\projectProject>gradlew compileDebug --stacktrace
      Relying on packaging to define the extension of the main artifact has been deprecated and is scheduled to be removed in Gradle 2.0
      :project:preBuild UP-TO-DATE
      :project:preDebugBuild UP-TO-DATE
      :project:preReleaseBuild UP-TO-DATE
      :project:prepareComAndroidSupportAppcompatV71800Library UP-TO-DATE
      :project:prepareComGoogleAndroidGmsPlayServices3225Library UP-TO-DATE
      :project:prepareDebugDependencies
      :project:compileDebugAidl UP-TO-DATE
      :project:compileDebugRenderscript UP-TO-DATE
      :project:generateDebugBuildConfig UP-TO-DATE
      :project:mergeDebugAssets UP-TO-DATE
      :project:mergeDebugResources UP-TO-DATE
      :project:processDebugManifest UP-TO-DATE
      :project:processDebugResources UP-TO-DATE
      :project:generateDebugSources UP-TO-DATE
      :project:compileDebug UP-TO-DATE
      BUILD SUCCESSFUL
      Total time: 10.459 secs
    However I am still getting that error when I try to actually run the project. Here is my build.gradle (I do have a 'libs' folder in my project with all the jars for a google maps/places app):
      buildscript {
          repositories {
              mavenCentral()
          }
          dependencies {
              classpath 'com.android.tools.build:gradle:0.6.+'
          }
      }
      apply plugin: 'android'
      repositories {
          mavenCentral()
      }
      android {
          compileSdkVersion 18
          buildToolsVersion "18.1.1"
          defaultConfig {
              minSdkVersion 8
              targetSdkVersion 18
          }
      }
      dependencies {
          compile fileTree(dir: 'libs')
          compile 'com.google.android.gms:play-services:3.2.25'
          compile 'com.android.support:support-v4:18.0.0'
          compile 'com.android.support:appcompat-v7:+'
      }
    and my settings.gradle:
      include ':project',
              ':project:libs:android-support-v4',
              ':project:libs:google-api-client-1.10.3-beta',
              ':project:libs:google-api-client-android2-1.10.3-beta',
              ':project:libs:google-http-client-1.10.3-beta',
              ':project:libs:google-http-client-android2-1.10.3-beta',
              ':project:libs:google-oauth-client-1.10.1-beta',
              ':project:libs:gson-2.1',
              ':project:libs:guava-11.0.1',
              ':project:libs:jackson-core-asl-1.9.4',
              ':project:libs:jsr305-1.3.9',
              ':project:libs:protobuf-java-2.2.0',
              ':project:libs:GoogleAdMobAdsSdk-6.4.1'
    As I said, I've tried pretty much everything I have read on here about this error and am having no luck. Any help would be greatly appreciated.
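    A hedged first diagnostic step, using standard Gradle flags; with a layout like the one above, a common culprit is the same support library arriving twice (once from libs/ via fileTree and once from Maven), which makes the dex merge fail:
      ./gradlew :project:dexDebug --stacktrace --info   # surfaces the underlying dx/dex merge error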

    Read the article
