Search Results


  • Socket.Receive Failing When Multithreaded

    - by Qua
    The following piece of code runs fine when parallelized to 4-5 threads, but starts to fail as the number of threads increases somewhere beyond 10 concurrent threads:

        int totalRecieved = 0;
        int recieved;
        StringBuilder contentSB = new StringBuilder(4000);
        while ((recieved = socket.Receive(buffer, SocketFlags.None)) > 0)
        {
            contentSB.Append(Encoding.ASCII.GetString(buffer, 0, recieved));
            totalRecieved += recieved;
        }

    The Receive method returns with zero bytes read, and if I continue calling it then I eventually get an "An established connection was aborted by the software in your host machine" exception. So I'm assuming that the host actually sent data and then closed the connection, but for some reason I never received it. I'm curious as to why this problem arises when there are a lot of threads. I'm thinking it must have something to do with the fact that each thread doesn't get as much execution time, and that the resulting idle time for the threads causes this error. I just can't figure out why idle time would cause the socket not to receive any data.
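
    For context on the zero return: on a connected TCP socket, Receive returning 0 is the peer's orderly-shutdown signal rather than an error, while an abort surfaces as a SocketException. A minimal sketch of a loop that separates the two cases (the socket is assumed to exist as in the snippet above; the buffer size is hypothetical):

        // Assumes: using System; using System.Net.Sockets; using System.Text;
        var contentSB = new StringBuilder(4000);
        var buffer = new byte[4096];   // hypothetical size
        try
        {
            int read;
            while ((read = socket.Receive(buffer, SocketFlags.None)) > 0)
            {
                contentSB.Append(Encoding.ASCII.GetString(buffer, 0, read));
            }
            // read == 0 here: the peer finished sending and closed its side.
        }
        catch (SocketException ex)
        {
            // An abort (e.g. WSAECONNABORTED / 10053) lands here instead.
            Console.WriteLine("Receive failed: " + ex.SocketErrorCode);
        }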


  • MS Word opens documents hosted on WebDav share read-only on Windows Vista and 7 but only if no other

    - by rjmunro
    We have a WebDav server with some Word documents on it. (We are using PHP's HTTP_WebDAV_Server but get the same issue in tests with Apache mod_dav - both use digest authentication; basic auth doesn't work on Vista or later.) We have a web page that opens the Word documents using javascript like:

        Doc = new ActiveXObject("Sharepoint.OpenDocuments.3");
        Doc.EditDocument(url, 'Word.Document');

    which causes Word to connect to the WebDav server and open the document, bypassing IE and most of Windows' built-in WebDav client. On Windows XP, this works perfectly, and (after prompting you to log in) allows you to edit the Word document and save it back to the server. On Windows 7 and Windows Vista, this usually opens the document read-only, but not in all cases. After quite a bit of trial and error, we found that it worked (i.e. opened read/write) if Explorer happened to be already connected to a WebDav server. Note that this works with any WebDav server, not necessarily the one with the document that you are trying to edit. So other than telling our users to change settings on their machine, is there anything we can do in the javascript SharePoint call, or on the WebDav server, that will fix this issue? PS. We have the same problem when launching Word from an HTA file version of our system, with javascript like:

        wordApp = new ActiveXObject("Word.application");
        wordApp.Visible = true;
        doc = wordApp.Documents.Open(url);

    PPS. Sorry if you think this question should be on Serverfault (or even SuperUser). I couldn't decide, but because we are programming the WebDav server ourselves (in PHP) and I have more rep on this site than the others, I decided to post it here :-)


  • Unable to create PDB file

    - by Ryan Smith
    For some reason this error started popping up today on one of my projects:

        Error 1 Unable to write to output file 'C:\MyProject\Release\MyProject.pdb': Unspecified error

    If I go into advanced compile options and change it to not generate any debug info, my project compiles fine. I have tried setting the permissions on the Release folder to full for everyone, so I would assume it's not a permissions issue. Also, I don't see anything in my log files that would provide me with more information about the issue. Does anyone know why this error would just start showing up, or a way to fix it? Thanks.

    Update: I have rebooted my machine, restarted VS several times and have even completely deleted the existing OBJ file where the issue is happening. It's still giving me the same error. This is a simple one-project solution that was working fine just last week. It appears to be an issue with VS trying to build the PDB file, because I can delete the PDBs out of the Release and Debug folders without issue. When I try rebuilding them, VS will start creating the file (about 1.4 MB in size) but I still get the error.


  • Fix hard-coded display setting without source (24-bit, need 32-bit)

    - by FerretallicA
    I wrote a program about 10 years ago in Visual Basic 6 which was basically a full-screen game similar to Breakout / Arkanoid but had 'demoscene'-style backgrounds. I found the program, but not the source code. Back then I hard-coded the display mode to 800x600x24, and as a result the program crashes whenever I try to run it. No virtual machine seems to support a 24-bit display when the host display mode is 16/32-bit. It uses DirectX 7, so DOSBox is no use. I've tried all sorts of decompilers, and at best they give me the form names and a bunch of assembly calls which mean nothing to me. The display mode setting was a DirectX 7 call, but there's no clear reference to it in the decompilation. In this situation, are there any pointers on how I can:

    - pin-point the function call in the program which is setting the display mode to 800x600x24 (ResHacker maybe?) and change the value being passed to it so it sets 800x600x32
    - view/intercept DirectX calls being made while it's running
    - or, if that's not possible, at least run the program in an environment that emulates a 24-bit display

    I don't need to recover the source code (as nice as it would be) so much as I just want to get it running.


  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for a Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled on this quote, which refers to the C/C++ language in general, not just the Mac platform:

        With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert back the result to char.

    So my question is: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit / 64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
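
    As a side note, the promotion behaviour the quote describes is easy to observe in C#, which follows the same convention of performing sub-int arithmetic at int width; a small illustration with arbitrary values (not tied to any code in the question):

        // byte + byte is computed as int, mirroring C's char promotion.
        byte a = 200;
        byte b = 100;
        // byte c = a + b;               // compile error: the sum is an int
        int sum = a + b;                 // 300 - promoted, no cast needed
        byte wrapped = (byte)(a + b);    // 44 - explicit narrowing back to byte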


  • C# ConfigurationManager.ConnectionStrings

    - by Yoda
    I have a console app containing an application configuration file with one connection string, as shown below:

        <configuration>
          <connectionStrings>
            <add name="Target"
                 connectionString="server=MYSERVER; Database=MYDB; Integrated Security=SSPI;" />
          </connectionStrings>
        </configuration>

    When I pass this to my connection using:

        ConfigurationManager.ConnectionStrings[1].ToString()

    I have two values in there, hence using the second in the collection. My question is: where is this second one coming from? I have checked the bin version and the original, and it's not mine! It's obviously a system-generated one, but I have not seen this before. Can anyone enlighten me? The mystery connection string is:

        data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true

    This isn't a problem as such; I would just like to know why this is occurring. Thanks in advance! For future reference to those who may or may not stumble on this: after discovering machine.config, it has become apparent that it is bad practice to refer to a config entry by its index, as each stack will potentially be different, which is why "keys" are used. Cheers all!
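
    For future readers, the key-based lookup the poster settled on, plus a <clear/> element to drop entries inherited from machine.config, looks roughly like this (a sketch using the "Target" name from the question; it assumes a project reference to System.Configuration.dll):

        // Assumes: using System; using System.Configuration;
        ConnectionStringSettings target = ConfigurationManager.ConnectionStrings["Target"];
        Console.WriteLine(target.ConnectionString);

    and, in the app.config, clearing inherited entries before adding your own:

        <connectionStrings>
          <clear />  <!-- removes the LocalSqlServer entry inherited from machine.config -->
          <add name="Target"
               connectionString="server=MYSERVER; Database=MYDB; Integrated Security=SSPI;" />
        </connectionStrings>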


  • Cross compiling from MinGW on Fedora 12 to Windows - console window?

    - by elcuco
    After reading this article http://lukast.mediablog.sk/log/?p=155 I decided to use MinGW on Linux to compile Windows applications. This means I can compile, test, debug and release directly from Linux. I hacked up this build script, which will cross-compile the application and even package it in a ZIP file. Note that I am using out-of-source builds for QMake (did you even know this is supported? wow...). Also note that the script will pull in the needed DLLs automagically. Here is the script for all the internets to use and abuse:

        #! /bin/sh
        set -x
        set -e

        VERSION=0.1
        PRO_FILE=blabla.pro
        BUILD_DIR=mingw_build
        DIST_DIR=blabla-$VERSION-win32

        # clean up old shite
        rm -fr $BUILD_DIR
        mkdir $BUILD_DIR
        cd $BUILD_DIR

        # start building
        QMAKESPEC=fedora-win32-cross qmake-qt4 QT_LIBINFIX=4 config=\"release\ quiet\" ../$PRO_FILE
        #qmake-qt4 -spec fedora-win32-cross
        make

        DLLS=`i686-pc-mingw32-objdump -p release/*.exe | grep dll | awk '{print $3}'`
        for i in $DLLS mingwm10.dll ; do
            f=/usr/i686-pc-mingw32/sys-root/mingw/bin/$i
            if [ ! -f $f ]; then continue; fi
            cp -av $f release
        done

        mkdir -p $DIST_DIR
        mv release/*.exe $DIST_DIR
        mv release/*.dll $DIST_DIR
        zip -r ../$DIST_DIR.zip $DIST_DIR

    The compiled binary works on the Windows 7 machine I tested. Now to the questions:

    1. When I execute the application on Windows, the theme is not the Windows 7 theme. I assume I am missing a style module; I am not really sure yet.
    2. The application gets a console window for some reason.

    The second point (the console window) is critical. How can I remove this background window? Please note that the extra config lines are not working for me; what am I missing there?


  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message after launching the application executable:

        This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem.

    Another user could run the application properly after installation. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Thanks in advance. Additional details appear below.

    The application uses FLTK 1.1.9 for a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am trying to deploy the release version (not the debug version) of the application. The Runtime Library in the Code Generation settings is Multi-threaded DLL (/MD). The Dependency Walker output for myapp.exe lists the following DLLs: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, msvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, msvcr90.dll and Microsoft.VC90.CRT.manifest. The installation process on the target computers getting the error message requires having the .NET Framework 3.5 installed first. Any suggestion? Thanks in advance!


  • GWT RPC and GoDaddy Shared Hosting

    - by Mike Apolis
    Hi, I've deployed the sample Stock Watcher app to my GoDaddy hosting site, and I get the error below. I've tried compiling the project in Eclipse with JRE 1.5 because my host is using JRE 1.5. I think the issue is that gwt-servlet.jar is not compatible with JRE 1.5 - can anyone confirm this? The project runs fine on my local machine using JRE 1.6. Unfortunately GoDaddy will not upgrade my shared hosting account's JRE to 1.6.

    GoDaddy server setup: Tomcat version 5.0.27, JRE 1.5_22.

    Error:

        HTTP Status 500 -

        type Exception report

        description The server encountered an internal error () that prevented it from fulfilling this request.

        exception
        javax.servlet.ServletException: Error allocating a servlet instance
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
            org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:535)
            org.apache.catalina.authenticator.SingleSignOn.invoke(SingleSignOn.java:417)
            org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
            org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
            org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
            org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
            org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
            org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
            org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
            java.lang.Thread.run(Thread.java:595)

        root cause
        java.lang.UnsupportedClassVersionError: Bad version number in .class file
            java.lang.ClassLoader.defineClass1(Native Method)
            java.lang.ClassLoader.defineClass(ClassLoader.java:621)
            java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
            org.apache.catalina.loader.WebappClassLoader.findClassInternal(WebappClassLoader.java:1634)
            org.apache.catalina.loader.WebappClassLoader.findClass(WebappClassLoader.java:860)
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1307)
            org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1189)
            java.security.AccessController.doPrivileged(Native Method)
            org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
            org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:535)
            org.apache.catalina.authenticator.SingleSignOn.invoke(SingleSignOn.java:417)
            org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:160)
            org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:300)
            org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:374)
            org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:743)
            org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:675)
            org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java:866)
            org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
            java.lang.Thread.run(Thread.java:595)

        note The full stack trace of the root cause is available in the Apache Tomcat/5.0.27 logs.

        Apache Tomcat/5.0.27


  • App hosting Report Viewer crashes on exit after export

    - by Paul Sasik
    We have a .NET WinForms application that hosts the Crystal Reports viewer control (version XI). It works well for the most part, but when an export of data from the viewer is performed, the application will crash on exit, in unmanaged code. The error message is not very useful; it just says that an incorrect memory location was accessed. No other info, such as a specific DLL, is provided. This only happens after the viewer is used to export a report to CSV, XML, etc. My guess is that at some point in the export process Crystal creates a resource that, on shutdown, attempts an action on a parent window (perhaps) that no longer exists. I've seen a number of memory leak and shutdown issues with Crystal, but this one's new. Has anyone seen it and come up with a workaround, or has ideas for workarounds? So far we've tried explicitly disposing of all Crystal-related objects, setting them to null, and even adding a Thread.Sleep cycle on shutdown to "give Crystal time to clean up."

    Update:

    - The crash happens only on shutdown (so it is not immediate)
    - All export formats work
    - All export files are created properly
    - CR is installed on the same machine as the hosting .NET app
    - Not sure about exporting from the IDE... is that even possible?


  • Calling SDL/OpenGL from Assembly code on Linux

    - by Lie Ryan
    I'm writing a simple graphics-based program in assembly for learning purposes; for this, I intend to use either OpenGL or SDL. I'm trying to call OpenGL/SDL functions from assembly. The problem is that, unlike in many assembly and OpenGL/SDL tutorials I found on the internet, OpenGL/SDL on my machine apparently doesn't use the C calling convention I expected. I wrote a simple program in C and compiled it to assembly (using the -S switch), and apparently the assembly code generated by GCC calls the OpenGL/SDL functions by passing parameters in registers instead of pushing them onto the stack. Now, the question is: how do I determine how to pass arguments to these OpenGL/SDL functions? That is, how do I figure out which argument corresponds to which register? Obviously, since GCC can compile C code to call OpenGL/SDL, there must be a way to figure out the correspondence between function arguments and registers. In the C calling convention as I knew it, the rule is easy - push parameters backwards and return the value in eax/rax - so I can simply read the C documentation and easily figure out how to pass the parameters. But how about these? Is there a way to call OpenGL/SDL using that convention? BTW, I'm using yasm, with gcc/ld as the linker, on Gentoo Linux amd64.


  • Rails Rake and MySQL SSH port forwarding

    - by rube_noob
    Hello, I need to create a rake task to do some ActiveRecord operations via an SSH tunnel. The rake task is run on a remote Windows machine, so I would like to keep things in Ruby. This is my latest attempt:

        desc "Syncronizes the tablets DB with the Server"
        task(:sync => :environment) do
          require 'rubygems'
          require 'net/ssh'

          begin
            Thread.abort_on_exception = true
            tunnel_thread = Thread.new do
              Thread.current[:ready] = false
              hostname = 'host'
              username = 'tunneluser'
              Net::SSH.start(hostname, username) do |ssh|
                ssh.forward.local(3333, "mysqlhost.com", 3306)
                Thread.current[:ready] = true
                puts "ready thread"
                ssh.loop(0) { true }
              end
            end

            until tunnel_thread[:ready] == true do
            end
            puts "tunnel ready"

            Importer.sync
          rescue StandardError => e
            puts "The Database Sync Failed."
          end
        end

    The task seems to hang at "tunnel ready" and never attempts the sync. I have had success when first running a rake task to create the tunnel and then running the rake sync in a different terminal. I want to combine these, however, so that if there is an error with the tunnel it will not attempt the sync. This is my first time using Ruby threads and Net::SSH forwarding, so I am not sure what the issue is here. Any ideas!? Thanks


  • WinForms - DateTimePicker default month selection behavior for Server 2003 vs Server 2008?

    - by Mike Loux
    Good Afternoon! Has anybody else noticed a change in the default behavior of the "next" and "previous" month arrows in the standard WinForms DateTimePicker control? I have users running on both Windows Server 2003 and Windows Server 2008 R2, and they are reporting that on 2008 (and Vista/Win7), clicking the right or left arrows on the drop-down calendar now selects the first day of the month, rather than retaining the same day as 2003 (and XP) does. I have checked this out (I have a Win7 machine) and I have confirmed this behavior. I would prefer that the behavior remain consistent whenever possible. Does anybody know what causes this, and whether there is a way to get around it? Is there a way to trap the arrow-click event and force the resulting date to retain the original day rather than be reset to the first of the month? I thought about seeing if there was a way to hit-test the control on a MouseUp event and determine if the arrow buttons were clicked, and then override the month value being set, but I'm not sure if that is even possible. Can anybody provide some wisdom or insight? Thanks!
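
    One direction worth sketching (untested; dtp and _lastValue are hypothetical members of the form): watch ValueChanged, and when a change looks like the arrow-click reset - the new value lands on day 1 of a different month while the old value did not - reapply the previous day, clamped to the new month's length.

        // Assumes: using System; using System.Windows.Forms;
        private DateTime _lastValue = DateTime.Today;

        private void dtp_ValueChanged(object sender, EventArgs e)
        {
            DateTime now = dtp.Value;
            // Heuristic for the Vista/7 month-arrow behaviour: day resets to 1.
            bool looksLikeArrowReset = now.Day == 1 && _lastValue.Day != 1
                && (now.Month != _lastValue.Month || now.Year != _lastValue.Year);

            if (looksLikeArrowReset)
            {
                int day = Math.Min(_lastValue.Day,
                                   DateTime.DaysInMonth(now.Year, now.Month));
                dtp.Value = new DateTime(now.Year, now.Month, day); // re-enters this handler
                return;
            }
            _lastValue = now;
        }

    The obvious caveat: a user who genuinely picks the 1st of another month trips the same heuristic, so this is a starting point rather than a finished fix.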


  • What's a good Java-based Master-Slave communication mechanism?

    - by plecong
    I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a JEE application server (i.e. JBoss) that will have "slave" clients connect to it and dynamically register themselves for communication (that is, the master will not know the IP addresses/ports of the slaves, so they cannot be configured in advance). The master server acts as a controller that will dole work out to the slaves, and the slaves will periodically respond with notifications, so there would be bi-directional communication. I was originally thinking of RPC-based systems where each side would be a server, but that could get complicated, so I'd prefer a mechanism where there's an open socket and they talk back and forth. I'm looking for a communication mechanism that would be low-latency, where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at:

    - RMI
    - JMS: built into Java; the "slave" clients would connect to the existing ConnectionFactory in the application server.
    - JAX-WS/RS: both master and slave would be servers exposing an RPC interface for bi-directional communication.
    - JGroups/Hazelcast: use shared distributed data structures to facilitate communication.
    - Memcached/MongoDB: use these as "queues" to facilitate communication, though the clients would have to poll, so there would be some latency.
    - Thrift: this does seem to keep a persistent connection, but I'm not sure how to integrate/embed a Thrift server into JBoss.
    - WebSocket/raw socket: this would work, but would require a lot more custom code than I'd like.

    Is there any technology I'm missing?

    Edit: I also looked at JMX - have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.


  • List local printers

    - by vladimir
    Hi all, I am using this routine to list the local printers installed on a machine:

        var
          p: pointer;
          hpi: _PRINTER_INFO_2A;
          hGlobal: cardinal;
          dwNeeded, dwReturned: DWORD;
          bFlag: boolean;
          i: dword;
        begin
          p := nil;
          EnumPrinters(PRINTER_ENUM_LOCAL, nil, 2, p, 0, dwNeeded, dwReturned);
          if (dwNeeded = 0) then exit;
          GetMem(p, dwNeeded);
          if (p = nil) then exit;
          bFlag := EnumPrinters(PRINTER_ENUM_LOCAL, nil, 2, p, dwNeeded, dwNeeded, dwReturned);
          if (not bFlag) then exit;
          CbLblPrinterPath.Properties.Items.Clear;
          for i := 0 to dwReturned - 1 do
          begin
            CbLblPrinterPath.Properties.Items.Add(TPrinterInfos(p^)[i].pPrinterName);
          end;
          FreeMem(p);

    TPrinterInfos(p^)[i].pPrinterName returns just a 'P' for the printer name. I have PDFCreator installed as a printer. TPrinterInfos is an array of _PRINTER_INFO_2A. How can I fix this?


  • Google Maps API and "rightclick" events on Macs

    - by samc
    Using the Google Maps API (v3), I can create a map and handle normal click events just fine, but when I want to handle rightclick events, it doesn't work on Macs. I assume this is because a rightclick on a Mac is actually converted to a ctrl-click, but the Google Maps API MouseEvent doesn't provide information about modifier keys, so I can't check for the ctrl key. I tried adding a "capture" event listener to the document that converts the click event to a rightclick event:

        function convertClick(e) {
            if (e.ctrlKey) {
                e.button = 2;
            }
        }
        document.addEventListener("click", convertClick, true);

    I added an alert to verify that the condition is correct, but modifying the event in this way didn't work. So, I decided to have my event handler set a global flag that my click handler could check. If the flag is set, it means ctrl was pressed, so the click handler just invokes the rightclick handler:

        var ctrl;
        function captureCtrl(e) {
            ctrl = e.ctrlKey;
        }

    This approach worked great, except for one thing: the ctrl flag gets set on the click after the one that occurred while ctrl was pressed. That means the event handler is being called during the bubble phase rather than the capture phase, which could also explain why the event-modification approach didn't work. So, my question is: how can you detect "rightclick" events from Macs with the Google Maps API? I can't be the first person to want to do this. That said, when I right-click on the map at http://maps.google.com from a Windows or Linux machine, I get a popup box with options like "Directions from here...", etc. On a Mac, nothing happens. So not even the main Google Maps page has solved this problem. ...maybe I am the first person to want to do this.


  • Inconsistent behavior working with "Flex on Rails" example.

    - by kmontgom
    I'm experimenting with Flex and Rails right now (Rails is cool). I'm following the examples in the book "Flex on Rails", and I'm getting some puzzling and inconsistent behavior. Here's the Flex MXML:

        <mx:HTTPService id="index" url="http://localhost:3000/people.xml" resultFormat="e4x" />

        <mx:DataGrid dataProvider="{index.lastResult.person}" width="100%" height="100%">
            <mx:columns>
                <mx:DataGridColumn headerText="First Name" dataField="first-name"/>
                <mx:DataGridColumn headerText="Last Name" dataField="last-name"/>
            </mx:columns>
        </mx:DataGrid>

        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;

                private function main():void {
                    Alert.show( "In main()" );
                }
            ]]>
        </mx:Script>

    When I run the app from my IDE (Amethyst beta, also cool), the DataGrid appears but is not populated. The Alert.show() also triggers. When I go to a web browser and manually enter the URL (http://localhost:3000/people.xml), the Mongrel console shows the request coming through and the browser shows the web response. No exceptions or other error messages occur. What's the difference? Do I need to alter some OS setting? I'm using Win7 on an x64 machine.


  • AnkhSVN: Cannot checkout Subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development, and I have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. Following some "best practices", my project folder looks like this:

        MySolution
            main.sln
            Services
                services.sln
                Service A
                    files
                Service A Test
                    files
            View
                projectfiles
            Persistence
                persistence.sln
                PersistenceXml
                    files
                PersistenceXml Test
                    files
                PersistenceDB
                    files
                PersistenceDB Test
                    files

    The idea is that main.sln only contains the projects for the application, meaning no test projects. The subsolutions contain the project(s) and their corresponding test projects. I was able to put all these projects under version control with AnkhSVN, so I have the same structure in the repository's trunk. Committing changes was also no problem. Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything that was inside that solution. It skipped services.sln, persistence.sln and all the test projects. Up to here, everything is fine. Now, here comes the problem: when I am trying to check out a subsolution (e.g. services.sln) I get an error; I think it was UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the folder Service A again and create its hidden .svn folder, which is already present. The only workaround I can think of for now is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?


  • Trying to not need two separate solutions for an x86 and x64 program

    - by Sean Anderson
    Hi all, I have a program which needs to function in both x86 and x64 environments. It is using Oracle's ODBC drivers, and I have a reference to Oracle.DataAccess.dll. This DLL is different depending on whether the system is x64 or x86, though. Currently, I have two separate solutions and I am maintaining the code in both. This is atrocious. I was wondering what the proper solution is. I have my platform set to "Any CPU", and it is my understanding that VS should compile the DLL to an intermediate language, so that it should not matter whether I use the x86 or x64 version. Yet if I attempt to use the x64 DLL, I receive the error:

        Could not load file or assembly 'Oracle.DataAccess, Version=2.102.3.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I am running on a 32-bit machine, so the error message makes sense, but it leaves me wondering how I am supposed to efficiently develop this program when it needs to work on x64. Thanks.
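
    One runtime approach that is sometimes used for this (a sketch only, with a hypothetical lib\x86 / lib\x64 folder layout that the question does not confirm): build once as "Any CPU", compile against either copy of the managed assembly, keep both copies out of the output folder, and hook AppDomain.AssemblyResolve so the copy matching the bitness of the running process is loaded on first use.

        // Sketch: resolve the matching Oracle.DataAccess.dll at runtime.
        // Assumes: using System; using System.IO; using System.Reflection;
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            if (!args.Name.StartsWith("Oracle.DataAccess,"))
                return null;                        // let the default binder handle the rest

            string arch = (IntPtr.Size == 8) ? "x64" : "x86";   // process bitness
            string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                       @"lib\" + arch + @"\Oracle.DataAccess.dll");
            return Assembly.LoadFrom(path);
        };

    The handler has to be attached before the first type from the assembly is touched, so it belongs at the very top of Main().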


  • Build a Visual Studio Project without access to referenced dlls

    - by David Reis
    I have a project which has a set of binary dependencies (assembly DLLs for which I do not have the source code). At runtime those dependencies are required to be pre-installed on the machine, and at compile time they are required in the source tree, e.g. in a lib folder. As I'm also making the source code available for this program, I would like to enable a simple download-and-build experience for it. Unfortunately I cannot redistribute the DLLs, which complicates things, since VS won't link the project without access to the referenced DLLs. Is there any way to enable this project to be built and linked in the absence of the real referenced DLLs? Maybe there's a way to tell VS to link against an auto-generated stub of the DLL, so that it can rebuild without the original? Maybe there's a third-party tool that will do this? Any clues or best practices at all in this area? I realize a person must have access to the DLLs to run the code, so it makes sense that they could add them to the build process; I'm just trying to save them the pain of collecting all the DLLs and placing them in the lib folder manually.
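
    On the "auto-generated stub" idea: one manual version of it is a compile-time-only stand-in assembly that mirrors just the public members the code actually calls, so the open-source tree builds without the real DLL and the genuine assembly takes its place at runtime. A minimal sketch - every name here is hypothetical, invented purely for illustration:

        // Stub project compiled to Vendor.Widgets.dll and dropped in lib\.
        // It exists only to satisfy the compiler; the real, licensed DLL
        // replaces it on machines where the code actually runs.
        namespace Vendor.Widgets
        {
            public class WidgetEngine
            {
                public string Render(int widgetId)
                {
                    // Never executed when the real assembly is present.
                    throw new System.NotImplementedException();
                }
            }
        }

    Note this hinges on the loader matching the assembly by name; if the real DLL is strong-named, the stub swap gets considerably trickier.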


  • How do I debug into an ILMerged assembly?

    - by Rory Becker
    Summary: I want to alter the build process of a 2-assembly solution, such that a call to ILMerge is invoked and the build results in a single assembly. Further, I would like to be able to debug into the resultant assembly.

    Preparation - a simple example:

    1. New solution - ClassLibrary1
    2. Create a static function GetMessage in Class1 which returns the string "Hello world"
    3. Create a new console app which references the class library
    4. Output GetMessage from main() via the console

    You now have a 2-assembly app which outputs "Hello World" to the console. So what next..? I would like to alter the console app's build process to include a post-build step which uses ILMerge to merge the ClassLibrary assembly into the console assembly. After this step I should be able to:

    - Run the console app directly with no ClassLibrary1.dll present
    - Run the console app via F5 (or F11) in VS and be able to debug into each of the 2 projects

    Limited success: I read this blog post and managed to achieve the merge I was after with a post-build command of...

        "$(ProjectDir)ILMerge.bat" "$(TargetDir)" $(ProjectName)

    ...and an ILMerge.bat file which read...

        CD %1
        Copy %2.exe temp.exe
        ILMerge.exe /out:%2.exe temp.exe ClassLibrary1.dll
        Del temp.exe
        Del ClassLibrary1.*

    This works fairly well, and does in fact produce an exe which runs outside the VS environment as required. However, it does not appear to produce symbols (a .pdb file) which VS is able to use to debug into the code. I think this is the last piece of the puzzle. Does anyone know how I can make this work? FWIW I am running VS2010 on an x64 Win7 machine.


  • Ogre material scripts; how do I give a technique multiple lod_indexes?

    - by BlueNovember
    I have an Ogre material script that defines 4 rendering techniques: one using GLSL shaders, then 3 others that just use textures of different resolutions. I want to use the GLSL shader unconditionally if the graphics card supports it, and the other 3 textures depending on camera distance. At the moment my script is:

        material foo
        {
            lod_distances 1600 2000

            technique shaders
            {
                lod_index 0
                lod_index 1
                lod_index 2
                //various passes here
            }

            technique high_res
            {
                lod_index 0
                //various passes here
            }

            technique medium_res
            {
                lod_index 1
                //various passes here
            }

            technique low_res
            {
                lod_index 2
                //various passes here
            }
        }

    Extra information: the Ogre manual says that increasing indexes denote lower levels of detail, and that you can (and often will) assign more than one technique to the same LOD index. What this means is that OGRE will pick the best technique of the ones listed at the same LOD index; OGRE determines which one is 'best' by which one is listed first. Currently, on a machine supporting the GLSL version I am using, the script behaves as follows:

        Camera > 2000           : Shader technique
        Camera > 1600, <= 2000  : Medium
        Camera <= 1600          : High

    If I change the lod order in the shaders technique to

        lod_index 2
        lod_index 1
        lod_index 0

    the behaviour becomes:

        Camera > 2000           : Low
        Camera > 1600, <= 2000  : Medium
        Camera <= 1600          : Shader

    implying only the last lod_index listed is used. If I change it to

        lod_index 0 1 2

    it shouts at me:

        Compiler error: fewer parameters expected in foo.material(#): lod_index only supports 1 argument

    So how do I specify a technique to have 3 lod_indexes? Duplication works:

        technique shaders
        {
            lod_index 0
            //various passes here
        }

        technique shaders1
        {
            lod_index 1
            //passes repeated here
        }

        technique shaders2
        {
            lod_index 2
            //passes repeated here
        }

    ...but it's ugly.


  • OleDB connection only working when debugging

    - by Francesc
    I have a C# application that connects to a named SQL Express instance on the local machine using an OleDbConnection:

        _connection = new OleDbConnection(_strConn);
        _connection.Open();

    _strConn is something like this:

        "Provider=sqloledb;Data Source=.\NAMEDINSTANCE;Initial Catalog=dbname;User Id=sa;Password=password;"

    If I debug the application, the connection works fine. If I run the application from Windows Explorer (the same debug compilation), I get an "OleDbException: Login timeout expired" on the Open() line after 30 seconds. The strange thing is that the exception happens even if I attach the debugger to the exe. I can see that the connection string is correct and everything seems fine. I can't find any extra information in the SQL Express error log or SQL Activity Monitor either. If it helps, here is the exception:

        System.Data.OleDb.OleDbException: Login timeout expired
            at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
            at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
            at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
            at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
            at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
            at System.Data.OleDb.OleDbConnection.Open()

    I imagine that finding the issue with the information I give here might be difficult, but I don't know where else to look or what other tests to do, so any ideas on what it could be, or what test I could do to find out, will be really appreciated.


  • Maven assemblies: putting each dependency, with its transitive dependencies, in its own directory?

    - by jr
    I have a Maven project which consists of a few modules. It is to be deployed on a client machine, will involve installing Tomcat, and will make use of NSIS for the installer. There is a separate application which monitors Tomcat and can restart it, perform updates, etc. So, I have the modules set up as follows:

        project
        +-- client         (all code, handlers, for the war)
        +-- client-common  (shared code, shared between monitor and client)
        +-- client-web     (the war; basically just the war pieces - applicationcontext, web.xml, etc.)
        +-- monitor        (the monitor application jar; uses wrapper to run)

    So, I need to create an installer. I was planning on creating another module which would be the installer. This is where I would have the Tomcat directory, and I'd like Maven to "assemble" everything and then run NSIS so I can create the final installer. However, I need to have the monitor jar file in a directory and then have all the monitor's dependencies in a lib/ directory. The final directory structure should be:

        project-installer-directory/monitor/monitor-version.jar
        project-installer-directory/monitor/lib/monitor-dep-1.jar
        project-installer-directory/monitor/lib/monitor-dep-2.jar
        project-installer-directory/monitor/lib/monitor-dep-3.jar
        project-installer-directory/webapps/client-web.war

    where in the client-web\WEB-INF\lib directory we will have all of client-web's dependencies after the war is exploded. That works - I have the .war file. What I am having problems with is getting the monitor module's dependencies independent of the dependencies of the client-web module. I tried to just create the installer module and make monitor and client-web dependencies of it, but when I use dependency:copy-dependencies it gives me everything. Not what I want. I'm leaning towards creating a new module called monitor-assembly or something, to give me a zip file which contains the directory format I need, but that is yet another module. Can someone please help me with the correct way to accomplish this? Thanks!


  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C program. A hexdump of the binary data shows that, after the first few data points, it starts again at a location 20000 hex addresses away. The hexdump output is as shown below:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread call:

        fread(data, sizeof(*data), filelength/sizeof(*data), fd);

    my data array fills up with all zeros until it reaches the 20000 location. After that, it reads the data correctly. Why is it reading regions where my file data is not there? Or how can I make it read only my file, not anything in between which is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling for a night. Can anyone suggest where I am doing it wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific). My C compiler is gcc.

