Search Results

Search found 22301 results on 893 pages for 'software sources'.


  • android compile error: could not reserve enough space for object heap

    - by moonlightcheese
    I'm getting this error during compilation:

        Error occurred during initialization of VM
        Could not create the Java virtual machine.
        Could not reserve enough space for object heap

    What's worse, the error occurs intermittently: sometimes it happens, sometimes it doesn't. It seems to depend on the amount of code in the application. If I get rid of some variables or drop some imported libraries, it compiles; when I add more, I get the error again. I've included the following sources in the application under the [project_root]/src/ directory:

        org.apache.httpclient (I've stripped all references to log4j from the sources, so I don't need it)
        org.apache.codec (as a dependency)
        org.apache.httpcore (a dependency of httpclient)

    plus my own activity code, consisting of nothing more than an instance of HttpClient. I know this has something to do with the amount of memory needed at compile time, or with some compiler options, and I'm not really stressing my system while I'm coding. I've got 2 GB of memory on this Core Duo laptop, and Windows reports only 860 MB of page file usage (I haven't used any other memory tools). I should have plenty of memory and processing power for this, and I'm only compiling some common HTTP libraries, 406 source files in total. What gives? Android API level: 5, Android SDK release 5, JDK version: 1.6.0_12.

    Read the article

  • Bad linking in Qt unit test -- missing the link to the moc file?

    - by dwj
    I'm trying to unit test a class that inherits QObject; the class itself is located up one level in my directory structure. When I build the unit test I get the standard unresolved-external errors that appear when a class's MOC file cannot be found:

        test.obj : error LNK2001: unresolved external symbol "public: virtual void * __thiscall UnitToTest::qt_metacast(char const *)" (?qt_metacast@UnitToTest@@UAEPAXPBD@Z)

    plus two more missing functions. The MOC file is created but appears not to be linked. I've been poking around SO, the web, and Qt's docs for quite a while and have hit a wall. How do I get the unit test to include the MOC file in the link?

    My project file is dead simple:

        TEMPLATE = app
        TARGET = test
        DESTDIR = .
        CONFIG += qtestlib
        INCLUDEPATH += . ..
        DEPENDPATH += .
        HEADERS += test.h
        SOURCES += test.cpp ../UnitToTest.cpp stubs.cpp
        DEFINES += UNIT_TEST

    My directory structure and files (Makefiles removed for clarity):

        C:.
        |   UnitToTest.cpp
        |   UnitToTest.h
        |
        \---test
            |   test.cpp
            |   test.h
            |   test.pro
            |   stubs.cpp
            |
            +---debug
                    UnitToTest.obj
                    test.obj
                    test.pdb
                    moc_test.cpp
                    moc_test.obj
                    stubs.obj

    Edit: Additional information. The generated Makefile.Debug shows the moc file missing:

        SOURCES = test.cpp \
                  ..\test.cpp \
                  stubs.cpp \
                  debug\moc_test.cpp
        OBJECTS = debug\test.obj \
                  debug\UnitToTest.obj \
                  debug\stubs.obj \
                  debug\moc_test.obj

    Read the article

  • NAnt not running NUnit tests

    - by ctford
    I'm using NUnit 2.5 and NAnt 0.85 to compile a .NET 3.5 library. Because NAnt 0.85 doesn't support .NET 3.5 out of the box, I've added an entry for the 3.5 framework to NAnt.exe.config. 'MyLibrary' builds, but when I hit the "test" target to execute the NUnit tests, none of them seem to run:

        [nunit2] Tests run: 0, Failures: 0, Not run: 0, Time: 0.012 seconds

    Here are the entries in my NAnt.build file for building and running the tests:

        <target name="build_tests" depends="build_core">
            <mkdir dir="Target" />
            <csc target="library" output="Target\Test.dll" debug="true">
                <references>
                    <include name="Target\MyLibrary.dll"/>
                    <include name="Libraries\nunit.framework.dll"/>
                </references>
                <sources>
                    <include name="Test\**\*.cs" />
                </sources>
            </csc>
        </target>

        <target name="test" depends="build_tests">
            <nunit2>
                <formatter type="Plain" />
                <test assemblyname="Target\Test.dll" />
            </nunit2>
        </target>

    Is there some versioning issue I need to be aware of? Test.dll runs fine in the NUnit GUI. The testing DLL is definitely being found, because if I move it I get the following error:

        Failure executing test(s). If you assembly is not build using NUnit 2.2.8.0...
        Could not load file or assembly 'Test' or one of its dependencies...

    I would be grateful if anyone could point me in the right direction or describe a similar situation they have encountered. Edit: I have since tried it with NAnt 0.86 beta 1, and the same problem occurs.

    Read the article

  • ruby gem not found although it is installed

    - by Eimantas
    I found some similar problems here on SO, but none seem to match my case (sorry if I overlooked any). Here's my problem: I installed the oauth-plugin gem into the Ruby gems directory, but trying to use it in a Rails app tells me that it cannot be found. Here's the output of the relevant commands.

    Installation:

        % s gem install oauth-plugin
        Successfully installed oauth-plugin-0.3.14
        1 gem installed
        Installing ri documentation for oauth-plugin-0.3.14...
        Installing RDoc documentation for oauth-plugin-0.3.14...

    gem which oauth-plugin output:

        % gem which oauth-plugin
        /usr/lib/ruby/gems/1.8/gems/oauth-plugin-0.3.14/lib/oauth-plugin.rb

    gem env output:

        % gem env
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.2.0]
          - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8
          - RUBY EXECUTABLE: /usr/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86-darwin-10
          - GEM PATHS:
            - /usr/lib/ruby/gems/1.8
            - /Users/eimantas/.gem/ruby/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => true
            - :bulk_threshold => 1000
            - :gem => ["--no-ri", "--no-rdoc"]
            - :sources => ["http://gems.ruby.lt/", "http://rubygems.org/"]
          - REMOTE SOURCES:
            - http://gems.ruby.lt/
            - http://rubygems.org/

    Doing ls -l /usr/lib/ruby shows this:

        % ls -l /usr/lib/ruby
        lrwxr-xr-x 1 root wheel 76 Aug 14 2009 /usr/lib/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby

    And the gem in question is in the intended location. This is not the only gem that is not being found by RubyGems even though it is located where it should be. Any guidance towards a solution is much appreciated.

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when that happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider. The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer not depend on System.Web (or any other implementation/environment dependency), as it will be consumed in different environments; nor do we want our websites communicating directly with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think would better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
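
    One way to picture the seam being described, as a rough sketch only (all type and member names below are hypothetical, not taken from the question), is a repository interface owned by the domain assembly, with the ASP.NET-specific provider reduced to a thin adapter in the web project:

        using System;

        namespace Domain.Membership
        {
            // Plain domain entity: nothing in this layer references System.Web.
            public class DomainUser
            {
                public Guid Id { get; set; }
                public string UserName { get; set; }
                public string Email { get; set; }
            }

            // The repository seam the domain model programs against. The concrete
            // implementation (ASP.NET membership tables today, something else later)
            // lives in an infrastructure assembly and can be swapped without touching
            // business logic.
            public interface IMembershipRepository
            {
                DomainUser FindByUserName(string userName);
                bool ValidateCredentials(string userName, string password);
                void Save(DomainUser user);
            }

            // Domain logic that stays ignorant of where the membership data lives.
            public class AccountService
            {
                private readonly IMembershipRepository repository;

                public AccountService(IMembershipRepository repository)
                {
                    this.repository = repository;
                }

                public bool SignIn(string userName, string password)
                {
                    return repository.ValidateCredentials(userName, password);
                }
            }
        }

    A "DomainMembershipProvider" in the website project would then forward its MembershipProvider overrides to something like AccountService, so System.Web.Security stays at the boundary rather than leaking into the domain model.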

    Read the article

  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing an XmlReader?

    - by Oppositional
    Is there any easy/general way to clean an XML-based data source prior to using it in an XmlReader, so that I can gracefully consume XML data that does not conform to the hexadecimal character restrictions placed on XML?

    Note: the solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding in the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal-encoded values, as you can often find href values in the data that happen to contain a string that would match a hexadecimal character.

    Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but I want to be able to consume data sources that have been published containing invalid hexadecimal characters per the XML specification. In .NET, if you have a Stream that represents the XML data source and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.
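
    One common shape for the regular-expression approach the question already mentions is a character class built from the XML 1.0 valid-character ranges. A minimal sketch, assuming the stream has already been decoded to a string using the encoding named in the XML declaration (the class and method names are made up for illustration):

        using System.Text.RegularExpressions;

        public static class XmlSanitizer
        {
            // Negation of the XML 1.0 Char production: #x9 | #xA | #xD | [#x20-#xD7FF]
            // | [#xE000-#xFFFD] | [#x10000-#x10FFFF]. The surrogate range is allowed
            // through so that supplementary-plane characters, stored as surrogate
            // pairs in .NET strings, survive untouched.
            private static readonly Regex InvalidXmlChars = new Regex(
                @"[^\x09\x0A\x0D\x20-\uD7FF\uD800-\uDFFF\uE000-\uFFFD]",
                RegexOptions.Compiled);

            // Removes only the offending character values; markup and entity
            // references are left exactly as they were.
            public static string StripInvalidXmlChars(string xml)
            {
                return InvalidXmlChars.Replace(xml, string.Empty);
            }
        }

    The cleaned string could then be handed to XmlReader.Create(new StringReader(...)) as usual; getting the decoding step right for non-UTF-8 sources (reading the declared encoding before converting the bytes) remains the harder part, as the question notes.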

    Read the article

  • Object created in Interface Builder getting dealloc'ed too soon

    - by Collin Allen
    The Project: I'm working on a relatively simple iPhone OS project that's navigation-controller based, with a root table view and a detail table view. Tap an item in the main list to see its details in a pushed table view.

    The Setup: I broke out the data source for both views into their own objects so as not to muddy the purpose of a view controller. Having done this, the table views no longer have data sources, since those methods are now in separate files, so I created an instance of each data source class in the appropriate XIB files with the Object item (dragged it in, then set its class). Then, to actually connect the table views to their data sources, I set the dataSource outlet of each table view to the yellow data source object in Interface Builder. The table view delegates are still set to their view controllers.

    The Problem: The root table view works just fine, but when you tap a row to push to the detail view, the data source object gets instantiated as expected, then immediately dealloc'ed, causing a crash (numberOfSectionsInTableView: gets called on the freed object). I can't figure out why the data source is getting automatically dealloc'ed when I need it right then and there for the detail view, as indicated by my data source object creation and table view connection in Interface Builder. What's more perplexing is that the very same approach works fine for the root table view!

    The Question: Is there anything obvious I'm missing that would cause this to happen? Or is this even the right way to instantiate a data source for a table view controller? It seems like poor object-oriented programming to do it from within the view controller, which should only be concerned with the view. I could cram everything into two table view controller classes and it would probably work, but it would not be as modular as I'd like. Thanks!

    Read the article

  • Manually Writing the HTML in TWebBrowser Pt. 2

    - by nomad311
    As the name suggests, this is a continuation (sort of) of http://stackoverflow.com/questions/2784679/manually-writing-the-html-in-twebbrowser. This time around I'm trying to add some auto-refresh logic to the HTML I get. I have pieced together an approach from several sources (see below). In short, I am trying to locate the title node and add a meta node after it (in the HTML head node), but I get an access violation. Here is the source:

        iHtmlDoc := IHTMLDocument3(WebBrowser1.Document);
        iHtmlEleTitle := IHTMLElement2(iHtmlDoc.getElementsByName('title').item(0, 0));
        iHtmlEle := IHTMLElement2(IHTMLDocument2(iHtmlDoc).createElement(
          Format('<meta http-equiv="refresh" content="%d">', [1])));
        iHtmlEleTitle.insertAdjacentElement('afterEnd', IHTMLElement(iHtmlEle));

    And a (technically, not functionally) different way of doing it; the casting is slightly different here:

        IHTMLElement2(IHTMLDocument3(WebBrowser1.Document).getElementsByName('title').item(0, 0)).insertAdjacentElement(
          'afterEnd',
          IHTMLDocument2(WebBrowser1.Document).createElement(
            Format('<meta http-equiv="refresh" content="%d">', [VPI_ISSUANCE_AUTO_RELOAD])));

    Again, all I get from Delphi is an access violation. I fished through the MSDN documentation on it, but now I'm hoping someone out there has gone through the same thing and has some insight. Any help? Sources (I think this is all of them):

        http://webdesign.about.com/od/metataglibraries/a/aa080300a.htm (auto-reload)
        http://delphi.about.com/od/adptips2005/qt/webbrowserhtml.htm (web browser document as an HTML document)
        http://msdn.microsoft.com/en-us/library/system.windows.forms.htmlelement.insertadjacentelement(VS.80).aspx (GetElementsByName)
        http://www.experts-exchange.com/Web_Development/Components/ActiveX/Q_26131034.html (insertAdjacentElement)
        http://www.experts-exchange.com/Programming/Languages/Pascal/Delphi/Q_23407977.html (GetElementsByName)

    Read the article

  • How to write C++ audio processing applications?

    - by cesko82
    Hi everyone, I'm an Electronics and Telecommunications student, close to graduation. I'm going to work on a project that involves my knowledge of DSP, music and audio in general. I already know the basic mathematical tools and everything I need to manage it, such as the FFT, circular convolution, etc. I want to learn C++ programming, basically for one reason: it's very important in the professional world, and I think it's one of the most used languages for writing audio applications, especially when it comes to real-time processing. OK, after this small introduction I would like to know, first, which are the most used libraries for audio processing in C++? I looked around the web for a while but couldn't find a lot of working material. (I work under Linux with the Eclipse CDT environment.) Then I would like to know whether there are good sources for learning how to write some working code, for example how to write a simple low-pass filter. For now I will not write real-time applications; I would like to start with processing a WAV file, or even better an MP3 file, so basically with vectors of samples. Let's say that for now I would like to extract the waveform from an audio file and save it to a thumbnail or a PNG image. I think that's all I need for now. Any ideas, advice, libraries, books, interesting sources about that? Thanks a lot in advance for any kind of answer. Giovanni.

    Read the article

  • How will Arel affect Rails' includes() capabilities?

    - by Tim Snowhite
    I've looked over the Arel sources, and some of the ActiveRecord sources for Rails 3.0, but I can't seem to glean a good answer for myself as to whether Arel will be changing our ability to use includes(), when constructing queries, for the better.

    There are instances when one might want to modify the conditions on an ActiveRecord :include query in 2.3.5 and earlier, for the association records which would be returned. But as far as I know, this is not programmatically tenable for all :include queries. (I know some AR find-includes rename all the attributes as t#{n}.c#{m}, and one could conceivably add conditions to those queries to limit the joined sets' results; but others do n_joins + 1 queries over the id sets iteratively, and I'm not sure how one might hack AR to edit those iterated queries.)

    Will Arel allow us to construct ActiveRecord queries which specify the resulting associated model objects when using includes()? Example: User has_many :posts, Post has_many :comments.

        User.all(:include => :posts)
        # Say I wanted the post objects to have their comment counts loaded
        # without adding a comment_count column to `posts`.
        # At the post level, one could do so by:
        posts_with_counts = Post.all(
          :select => 'posts.*, count(comments.id) as comment_count',
          :joins => 'left outer join comments on comments.post_id = posts.id',
          :group_by => 'posts.id') # i believe
        # But it seems impossible to do so while linking these post objects to each
        # user as well, without running User.all() and then zippering the objects into
        # some other collection (ugly), OR running posts.group_by(&:user)
        # (even uglier, with the n user queries).

    Read the article

  • rails HABTM versus view (formtastic)

    - by VP
    I have two models. The NetworkObject model tries to describe "hosts". I want a rule with a source and a destination, so I'm trying to use both objects from the same class, since it doesn't make sense to create two different classes.

        class NetworkObject < ActiveRecord::Base
          attr_accessible :ip, :netmask, :name
          has_many :statements
          has_many :rules, :through => :statements
        end

        class Rule < ActiveRecord::Base
          attr_accessible :active, :destination_ids, :source_ids
          has_many :statements
          has_many :sources, :through => :statements, :source => :network_object
          has_many :destinations, :through => :statements, :source => :network_object
        end

    To build the HABTM I chose a join model, so I created a model named Statement with:

        class Statement < ActiveRecord::Base
          attr_accessible :source_id, :rule_id, :destination_id
          belongs_to :network_object, :foreign_key => :source_id
          belongs_to :network_object, :foreign_key => :destination_id
          belongs_to :rule
        end

    The problem is: is it right to add two belongs_to to the same class using different foreign keys? I tried all combinations, like:

        belongs_to :sources, :class_name => :network_object, :foreign_key => :source_id

    but with no success. Is there anything I am doing wrong?

    Read the article

  • XPath: How can I select a node based on both its attributes and content?

    - by Andrew Kirk
    Sample XML:

        <assignments>
          <assignment id="911990211" section-id="1942268885" item-count="21" sources="foo">
            <options>
              <value name="NumRetakes">4</value>
              <value name="MultipleResultGrading">6</value>
              <value name="MaxFeedbackAttempts">-1</value>
              <value name="ItemTakesBeforeHint">1</value>
              <value name="TimeAllowed">0</value>
            </options>
          </assignment>
          <assignment id="1425185257" section-id="1505958877" item-count="4" sources="bar">
            <options>
              <value name="NumRetakes">0</value>
              <value name="MultipleResultGrading">6</value>
              <value name="MaxFeedbackAttempts">3</value>
              <value name="ItemTakesBeforeHint">1</value>
              <value name="TimeAllowed">0</value>
            </options>
          </assignment>
        </assignments>

    Using XPath, I would like to select all assignments/assignment/options/value nodes whose "name" attribute is "MaxFeedbackAttempts" and whose content is "-1". That is to say, I want to return each node that looks like:

        <value name="MaxFeedbackAttempts">-1</value>

    I can get each assignments/assignment/options/value node with the specified attribute using:

        //assignment/options/value[@name="MaxFeedbackAttempts"]

    I am just not sure how to refine this path to also limit the results based on the node's content. Is there any way to do this using XPath?
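
    For reference, XPath 1.0 lets a single predicate test both the attribute and the element's own text by joining the conditions with "and", e.g. //assignment/options/value[@name='MaxFeedbackAttempts' and . = '-1']. A small C# illustration of running that expression (the file name and variable names are hypothetical):

        using System;
        using System.Xml;

        class XPathExample
        {
            static void Main()
            {
                // Load the sample document shown above.
                var doc = new XmlDocument();
                doc.Load("assignments.xml");

                // One predicate tests the attribute and the text content together.
                XmlNodeList hits = doc.SelectNodes(
                    "//assignment/options/value[@name='MaxFeedbackAttempts' and . = '-1']");

                foreach (XmlNode hit in hits)
                {
                    // Prints: <value name="MaxFeedbackAttempts">-1</value>
                    Console.WriteLine(hit.OuterXml);
                }
            }
        }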

    Read the article

  • Biztalk vs API for databroker layer

    - by jdt199
    My company is about to undergo a large project in which our client wants a large customer portal with a CMS and CRM implemented. This will require interaction with data from multiple sources across our customer's business; these sources include XML office back-end systems, SQL databases, web services, etc. Our proposed solution would be to write an API in C# to provide a common interface to all these systems. This would be scalable for future and concurrent projects within the company. Our client expressed an interest in using BizTalk rather than a custom API for this integration, as they feel it is an enterprise solution that any of their suppliers could pick up and use, and that it will be better supported. We feel that the configuration work using BizTalk would be rather heavy for all their custom business rules, and an interface for the new application to get data to and from BizTalk would still need to be written. Are we right to prefer a custom API solution over BizTalk? Would BizTalk be suitable as a data-broker layer to provide an interface for the new customer portal we are writing? We have no experience with BizTalk, so any input would be appreciated.

    Read the article

  • bing search api ajax does not work

    - by jhon
    Hi guys, I want to use the Bing Search API with JavaScript. I want the user to write something and query Bing in order to get just images, so I tried it using Ajax. If I use the URL http://api.search.live.net/xml.aspx?Appid=[YOURAPIKEY]&sources=image&query=home directly (in the browser) I do get an XML document, but if I use XMLHttpRequest it does not work.

        <html>
        <body>
        <script>
        var xhr = new XMLHttpRequest();
        var url = "http://api.search.live.net/xml.aspx?Appid=[YOURAPIKEY]&sources=image&query=home";
        xhr.open("GET", url, true);
        xhr.onreadystatechange = function(){
            /*if( xhr.readyState == 4 && xhr.status == 200) {
                document.write( xhr.responseText );
            }*/
            alert( xhr.readyState + " " + xhr.status + xhr.statusText + xhr );
        };
        xhr.send(null);
        </script>
        </body>
        </html>

    Questions: 1) Why does the code above not work? 2) Is there any other way to do this without XMLHttpRequest? Thanks. By the way, I'm only interested in fixing this for Firefox, and without external libraries (jQuery and so on).

    Read the article

  • Unexplained crashes with CoreGraphics

    - by Ziggy
    Hello there, I've been on this bug for a week now and I can't solve it. I have some crashes with CoreGraphics calls; they happen randomly (sometimes after 2 minutes, or right at the start), but often at the same places in the code. I have a class that just wraps a CGContext; it has a CGContextRef as a member. This object is re-created each time DrawRect() is called, so the CGContextRef is always up to date. The draw calls come from the main thread only. After looking into this kind of error, it appears to be object-release related. Here is an example of an error:

        #0  0x90d8a7a7 in ___forwarding___
        #1  0x90d8a8b2 in __forwarding_prep_0___
        #2  0x90d0d0b6 in CFRetain
        #3  0x95e54a5d in CGColorRetain
        #4  0x95e5491d in CGGStateCreateCopy
        #5  0x95e5486d in CGGStackSave
        #6  0x95e54846 in CGContextSaveGState
        #7  0x00073500 in CAutoContextState::CAutoContextState at Context.cpp:47

    The CAutoContextState class looks like this:

        class CAutoContextState
        {
        private:
            CGContextRef m_Hdc;

        public:
            CAutoContextState(const CGContextRef& Hdc)
            {
                m_Hdc = Hdc;
                CGContextSaveGState(m_Hdc);
            }

            virtual ~CAutoContextState()
            {
                CGContextRestoreGState(m_Hdc);
            }
        };

    It crashes at CGContextSaveGState(m_Hdc). Here is what I see in GDB:

        * -[Not A Type retain]: message sent to deallocated instance 0x16a148b0

    When I run malloc-history on the address, I get this:

        0: 0x954cf10c in malloc_zone_malloc
        1: 0x90d0d201 in _CFRuntimeCreateInstance
        2: 0x95e3fe88 in CGTypeCreateInstanceWithAllocator
        3: 0x95e44297 in CGTypeCreateInstance
        4: 0x95e58f57 in CGColorCreate
        5: 0x71fdd in _ZN4Flux4Draw8CContext10DrawStringERKNS_7CStringEPKNS0_5CFontEPKNS0_6CBrushERKNS_5CRectENS0_12tagAlignmentESE_NS0_17tagStringTrimmingEfiPKf
           at /Volumes/Sources Mac/Flux/Sources/DotFlux/Projects/../Draw/CoreGraphic/Context.cpp:1029

    which points me at this line of code:

        f32 components[] = { pSolidBrush->GetColor().GetfRed(),
                             pSolidBrush->GetColor().GetfGreen(),
                             pSolidBrush->GetColor().GetfBlue(),
                             pSolidBrush->GetColor().GetfAlpha() };  // { 1.0, 0.0, 0.0, 0.8 }
        CGColorRef TextColor = CGColorCreate(rgbColorSpace, components);

    that is, at CGColorCreate(). Any help would be appreciated; I need to finish this task very soon, but I don't know how to resolve this :( Thanks.

    Read the article

  • Side-by-side madness: running binaries on a different computer (with a twist)

    - by sbk
    Here's my configuration:

        Computer A - Windows 7, MS Visual Studio 2005 patched for Win7 compatibility (8.0.50727.867)
        Computer B - Windows XP SP2, MS Visual Studio 2005 installed (8.0.50727.42)

    My project has some external dependencies (prebuilt DLLs, either built on A or downloaded from the Internet), a couple of DLLs built from sources, and one executable. I am mostly developing on A and all is fine there. At some point I try to build my project on computer B, copying the prebuilt DLLs to the output folder. Everything builds fine, but trying to start my application I get "The application failed to initialize properly (0xc0150002)...". The event log contains two of these:

        Dependent Assembly Microsoft.VC80.CRT could not be found and Last Error was The referenced assembly is not installed on your system.

    plus the slightly more amusing:

        Generate Activation Context failed for some.dll. Reference error message: The operation completed successfully.

    At this point I tried my Google-fu, but in vain: virtually all hits are about running binaries on machines without Visual Studio installed. In my case, however, the executables fail to run on the computer on which they were built. The next step was to try Dependency Walker, and it baffled me even more: my DLLs, built from sources on the same box, cannot find MSVCR80.DLL and MSVCP80.DLL; however, the executable seems to be all right with respect to those two DLLs, i.e. when I open the executable with Dependency Walker it shows that the MSVC?80.DLLs can be found, but when I open one of my DLLs it says they cannot. That's where I am completely out of ideas, so I'm asking you, dear stackoverflow :) I admit I'm a bit blurry on the whole side-by-side thing, so general reading on the topic would also be appreciated.

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about the "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

        DataSource
            Id (PK)
            Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages"; for audio, "bit rate"; for video, "duration". Each DataSource will have different requirements for the attributes that need to be stored, so I have modeled "data source attribute" this way:

        DataSourceAttribute
            Id (PK)
            DataSourceId (FK)
            Name
            Value

    Thus, I would have records like these:

        DataSource->Id = 1
        DataSource->Filename = 'mydoc.pdf'

        DataSource->Id = 2
        DataSource->Filename = 'mysong.mp3'

        DataSource->Id = 3
        DataSource->Filename = 'myvideo.avi'

        DataSourceAttribute->Id = 1
        DataSourceAttribute->DataSourceId = 1
        DataSourceAttribute->Name = 'TotalPages'
        DataSourceAttribute->Value = '10'

        DataSourceAttribute->Id = 2
        DataSourceAttribute->DataSourceId = 2
        DataSourceAttribute->Name = 'BitRate'
        DataSourceAttribute->Value = '16'

        DataSourceAttribute->Id = 3
        DataSourceAttribute->DataSourceId = 3
        DataSourceAttribute->Name = 'Duration'
        DataSourceAttribute->Value = '1:32'

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

        Filename,          TotalPages
        'mydoc.pdf',       '10'
        'myotherdoc.pdf',  '23'
        ...

    The JOINs needed to produce the above result are just too costly. How should I address this problem?

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to setup Eclipse CDT for local editing and remote building? I am working on a C++ project that uses GNU make in Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSH's to the Linux-x86 PC, rsync's the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Liunx-x86 PC, specifying the correct path for the Makefile. In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsync'd back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)

    Read the article

  • Compiling a gstreamer plugin in Windows

    - by utnapistim
    Hello all. My question: what is the correct way to compile a gstreamer plugin on Windows so that it will be accepted by gstreamer (actually Songbird, on top of gstreamer)?

    My setup: I have downloaded the Songbird sources following the steps described here, and I have a trunk/dependencies/windows-i686-msvc8 directory within my svn sources with all the gstreamer binaries. I have created an empty gstreamer plugin skeleton following the steps detailed in the GStreamer Plugin Writer's Guide, and compiled it against the gstreamer binaries in the Songbird dependencies folder. The compilation was done with VS2010 RC1 (Visual Studio 2008 yielded the same results), using an empty DLL project with the .h and .c files generated using the GStreamer Plugin Writer's Guide. The DLL was linked with libcpmt.lib, libcmt.lib, ws2_32.lib, gobject-2.0.lib, gthread-2.0.lib, gstreamer-0.10-0.lib, glib-2.0.lib, kernel32.lib and nspr4.lib, ignoring all default libraries. I have compiled the files as both .c and .cpp, with the same results.

    Testing: I have installed the Songbird binaries corresponding to the correct svn version, then installed the Songbird Developer Tools addon and used it to create an addon for testing my gstreamer plugin. Songbird will not load the plugin. I have also tried to load it with gst-launch.exe from the trunk/dependencies/windows-i686-msvc8/[...] directory, and that generated runtime error R6034: "An application has made an attempt to load the C runtime library incorrectly." Most resources I found for this problem recommended restarting or reinstalling Windows :(.

    Read the article

  • Maven - 'all' or 'parent' project for aggregation?

    - by disown
    For educational purposes I have set up a project layout like so (flat, in order to suit Eclipse better):

        -product
         |
         |-parent
         |-core
         |-opt
         |-all

    Parent contains an aggregate project with core, opt and all. Core implements the mandatory part of the application. Opt is an optional part. All is supposed to combine core with opt, and has these two modules listed as dependencies. I am now trying to make the following artifacts:

        product-core.jar
        product-core-src.jar
        product-core-with-dependencies.jar
        product-opt.jar
        product-opt-src.jar
        product-opt-with-dependencies.jar
        product-all.jar
        product-all-src.jar
        product-all-with-dependencies.jar

    Most of them are fairly straightforward to produce. I do have some problems with the aggregating artifacts, though. I have managed to make product-all-src.jar with a custom assembly descriptor in the 'all' module which downloads the sources for all non-transitive deps, and this works fine. This technique also allows me to make product-all-with-dependencies.jar. However, I recently found out that you can use the source:aggregate goal in the source plugin to aggregate the sources of the entire aggregate project. This is also true for the javadoc plugin, which also aggregates through the usage of the parent project. So I am torn between my 'all' module approach and ditching the 'all' module to just use the 'parent' module for all aggregation. It feels unclean to have some aggregate artifacts produced in 'parent' and others produced in 'all'. Is there a way of making a 'product-all' jar in the parent project, or of aggregating javadoc in the 'all' project? Or should I just keep both? Thanks

    Read the article

  • Error installing FeedZirra

    - by Gautam
    Hi, I am new to Ruby on Rails. I am excited about feed parsing, but when I install FeedZirra I get this error. I use Windows 7 and Ruby 1.8.7. Please help. Thanks in advance.

        C:\Ruby187>gem sources -a http://gems.github.com
        http://gems.github.com added to sources

        C:\Ruby187>gem install pauldix-feedzirra
        Building native extensions.  This could take a while...
        ERROR:  Error installing pauldix-feedzirra:
                ERROR: Failed to build gem native extension.

        C:/Ruby187/bin/ruby.exe extconf.rb
        checking for curl-config... no
        checking for main() in -lcurl... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=C:/Ruby187/bin/ruby
          --with-curl-dir
          --without-curl-dir
          --with-curl-include
          --without-curl-include=${curl-dir}/include
          --with-curl-lib
          --without-curl-lib=${curl-dir}/lib
          --with-curllib
          --without-curllib
        extconf.rb:12: Can't find libcurl or curl/curl.h (RuntimeError)
        Try passing --with-curl-dir or --with-curl-lib and --with-curl-include
        options to extconf.

        Gem files will remain installed in C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0 for inspection.
        Results logged to C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0/ext/gem_make.out

    Read the article

  • Needed environment for building gstreamer plugins in Windows

    - by utnapistim
    Hi, I've been struggling for two weeks to create an environment for building a gstreamer plugin on Windows (needed for a Songbird addon). I've installed MSYS, MinGW and Cygwin, then installed GStreamer OSSBuild, and I also downloaded the sources for Songbird, which come with their own precompiled version of gstreamer. I was unable to run gst-inspect (or any other gstreamer application) from the Songbird sources, so I figured I would settle for OSSBuild. When following the instructions for building a GST plugin (found here), though, Cygwin will not recognize the OSSBuild and the build fails when running autogen, with the following error:

        checking for GST... no
        configure: error: You need to install or upgrade the GStreamer development
        packages on your system. On debian-based systems these are
        libgstreamer0.10-dev and libgstreamer-plugins-base0.10-dev. on RPM-based
        systems gstreamer0.10-devel, libgstreamer0.10-devel or similar. The
        minimum version required is 0.10.16.
        configure failed

    I could also not use MSYS or MinGW, as they are unable to run autogen at all. I understand that Cygwin should have its own gstreamer development packages, but I couldn't find how to install them. My question: how do I install the gstreamer packages in Cygwin, or how do I build using Cygwin with the OSSBuild dependencies? In short, how do I get an environment where I can build a gstreamer plugin under Windows?

    Read the article

  • How to handle not-enough-isolatedstorage issue deep in data loader?

    - by Edward Tanguay
    I have a Silverlight application which loads data from many external data sources into IsolatedStorage. While loading any of these sources, if there is not enough IsolatedStorage, it ends up in a catch statement. At that point, in that catch statement, I would like to ask the user to click a button to approve Silverlight increasing the IsolatedStorage capacity. The problem is that although I have a "SwitchPage()" method with which I display a page, if I access it at this point it is too deep in the loading process, and the application always goes into an endless loop, hangs and crashes. I need a way to branch out of the application completely, somehow, to an independent UserControl which has a button and code-behind that performs the increase logic. What is a solution for an application to be able to branch out of a loading-process catch statement like this and display a user control with a button asking the user to increase the IsolatedStorage?

        public static void SaveBitmapImageToIsolatedStorageFile(OpenReadCompletedEventArgs e, string fileName)
        {
            try
            {
                using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Create, isf))
                    {
                        Int64 imgLen = (Int64)e.Result.Length;
                        byte[] b = new byte[imgLen];
                        e.Result.Read(b, 0, b.Length);
                        isfs.Write(b, 0, b.Length);
                        isfs.Flush();
                        isfs.Close();
                        isf.Dispose();
                    }
                }
            }
            catch (IsolatedStorageException)
            {
                // handle: present user with button to increase isolated storage
            }
            catch (TargetInvocationException)
            {
                // handle: not saved
            }
        }
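
    For context on the constraint involved: Silverlight only grants a quota increase from a user-initiated event such as a button click, via IsolatedStorageFile.IncreaseQuotaTo. A minimal sketch of the kind of stand-alone control the question is reaching for (the control name, event and the 10 MB figure are all hypothetical):

        using System;
        using System.IO.IsolatedStorage;
        using System.Windows;
        using System.Windows.Controls;

        // A small UserControl whose only job is to ask the user for more space.
        public partial class IncreaseStoragePrompt : UserControl
        {
            private const long RequestedQuota = 10L * 1024 * 1024;  // arbitrary example size

            // The loader can subscribe to this and retry the failed save afterwards.
            public event EventHandler QuotaIncreased;

            // Wired to the approval button's Click in XAML. IncreaseQuotaTo shows the
            // consent dialog and returns true only when the user accepts, and it only
            // works when called from a user-initiated event like this one.
            private void ApproveButton_Click(object sender, RoutedEventArgs e)
            {
                using (var store = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    if (store.Quota < RequestedQuota && store.IncreaseQuotaTo(RequestedQuota))
                    {
                        if (QuotaIncreased != null)
                        {
                            QuotaIncreased(this, EventArgs.Empty);
                        }
                    }
                }
            }
        }

    With something like this, the catch (IsolatedStorageException) block would only need to record that the quota was exceeded and surface the control, rather than trying to grow the store from deep inside the loading code.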

    Read the article

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers. I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably. However, I find this RSS mashup scheme kind of clumsy; it requires quite a bit of time to initially organize all of the feeds. I would also like to be able to do further natural language processing on the data, and eventually to rank the collected list of URLs in some order of media prominence. Right now I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition is telling me that with a bit of elbow grease I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science; my background is in physical science/statistics, so is my thinking on the right track? I guess I am imagining an application that allows me to query in a refined manner: a manner that lets me search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources (like a collection of blogs, or Twitter, or social networking communities), and then save the results of my queries in a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

    Read the article

  • How do I tell NAnt to only call csc when there are .cs files to compile?

    - by rob_g
    In my NAnt script I have a compile target that calls csc. Currently it fails because no inputs are specified:

        <target name="compile">
            <csc target="library" output="${umbraco.bin.dir}\Mammoth.${project::get-name()}.dll">
                <sources>
                    <include name="project/*.cs" />
                </sources>
                <references>
                </references>
            </csc>
        </target>

    How do I tell NAnt not to execute the csc task if there are no CS files? I read about the 'if' attribute, but I am unsure what expression to use with it, as ${file::exists('*.cs')} does not work. The build script is a template for Umbraco (a CMS) projects and may or may not ever have .cs source files in the project. Ideally, developers should not need to remember to modify the NAnt script to include the compile task when .cs files are added to the project (or to exclude it when all .cs files are removed).

    Read the article
