Search Results

Search found 7554 results on 303 pages for 'shared secret'.


  • What would be the safest way to store objects of classes derived from a common interface in a common container?

    - by Svenstaro
    I'd like to manage a bunch of objects of classes derived from a shared interface class in a common container. To illustrate the problem, let's say I'm building a game which will contain different actors. Let's call the interface IActor and derive Enemy and Civilian from it. Now, the idea is to have my game main loop be able to do this:

        // somewhere during init
        std::vector<IActor> ActorList;
        Enemy EvilGuy;
        Civilian CoolGuy;
        ActorList.push_back(EvilGuy);
        ActorList.push_back(CoolGuy);

    and

        // main loop
        while(!done) {
            BOOST_FOREACH(IActor CurrentActor, ActorList) {
                CurrentActor.Update();
                CurrentActor.Draw();
            }
        }

    ...or something along those lines. This example obviously won't work, but that is pretty much the reason I'm asking here. I'd like to know: what would be the best, safest, highest-level way to manage those objects in a common heterogeneous container? I know about a variety of approaches (Boost::Any, void*, a handler class with boost::shared_ptr, Boost.Pointer Container, dynamic_cast) but I can't decide which would be the way to go here. I'd also like to emphasize that I want to stay as far away as possible from manual memory management or nested pointers. Help much appreciated :).

  • 3 servers, is this a cluster?

    - by Andy Barlow
    Hello,

    At the moment I have one Ubuntu 9.10 server running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers, exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no-one can access their emails - especially in the morning when everyone is signing in at once.

    Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about a setup where 3 servers are all identical but perhaps one acts as the main load balancer. If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route.

    Cheers in advance.
    Andy

  • Producer/consumer system using database (MySql), is this feasible?

    - by johnrl
    Hi all. I need something to coordinate my system of several consumers/producers, each running on different machines with different operating systems. I have been researching using MySql for this, but it seems ridiculously difficult. My requirements are simple: I want to be able to add or remove consumers/producers at any time, so they should not depend on each other at all. Naturally a database would separate the two nicely.

    I have been looking at the Q4M message queuing plugin for MySql, but it seems complicated to use: I have to recompile it every time I upgrade MySql (can this really be true?), because when I try to install it on Ubuntu 9.10 with MySql 5.1.37 it says:

        Can't open shared library 'libqueue_engine.so' (errno: 0 API version for STORAGE ENGINE plugin is too different)

    There is apparently no precompiled version for MySql 5.1.37. Also, what if I want to run MySql on my Windows machine? I can't rely on this plugin, as it only seems to run on Linux and OS X. I really need some input on how best to construct my system.

  • Using git pull to track a remote branch without merging

    - by J Barlow
    I am using git to track content which is changed by some people and shared "read-only" with others. The "readers" may from time to time need to make a change, but mostly they will not. I want to allow the git "writers" to rebase pushed branches** if need be, and to ensure that the "readers" never accidentally get a merge. That's normally easy enough:

        git pull origin +master

    There's one case that seems to cause problems: if a reader makes a local change, the command above will merge. I want pull to be fully automatic if the reader has not made local changes, while if they have made local changes it should stop and ask for input. I want to track any upstream changes while being careful about merging downstream changes. In a way, I don't really want to pull; I want to track the master branch exactly.

    ** (I know this is not a best practice, but it seems necessary in our case: we have one main branch that contains most of the work and some topic branches for specific customers with minor changes that need to be isolated. It seems easiest to frequently rebase to keep the topics up to date.)

  • ExternalInterface

    - by Jesse
    Hey, so I'm having a bunch of trouble getting ExternalInterface to work, which is odd, because I use it somewhat often. I'm hoping it's something I just missed because I've been looking at it too long. The flash_ready function is correctly returning the objectID, and as far as I can tell everything else is in order. Unfortunately, when I run it, I get an error (varying by browser) telling me that basically document.getElementById(<moviename>).test() is not a valid method. Here's the code.

    JavaScript:

        function flash_ready(i){
            document.getElementById(i).test('success!');
        }

    Embed HTML (generated):

        <script type="text/javascript">
        swfobject.embedSWF("/chainmaille/includes/media/flash/upload_image.swf",
            "/chainmaille/includes/media/flash/upload_image", "500", "50", "9.0.0",
            "expressInstall.swf", {},
            {allowScriptAccess:'always', wmode:'transparent'},
            {id:'uploader_flash', name:'uploader_flash'});
        </script>
        <object type="application/x-shockwave-flash" id="uploader_flash"
            name="uploader_flash"
            data="/chainmaille/includes/media/flash/upload_image.swf"
            width="500" height="50">
            <param name="allowScriptAccess" value="always">
            <param name="wmode" value="transparent">
        </object>

    AS3:

        package com.jesseditson.uploader {
            import flash.display.MovieClip;
            import flash.external.ExternalInterface;
            import flash.net.URLRequest;     // needed for URLRequest below
            import flash.net.navigateToURL;  // needed for navigateToURL below
            import flash.system.Security;

            public class UI extends MovieClip {
                // Initialization:
                public function UI() {
                    Security.allowDomain('*');
                    ExternalInterface.addCallback("test", test);
                    var jscommand:String = "flash_ready('" + ExternalInterface.objectID + "');";
                    var url:URLRequest = new URLRequest("javascript:" + jscommand + " void(0);");
                    navigateToURL(url, "_self");
                }

                public function test(t){
                    trace(t);
                }
            }
        }

    Swfobject is being included via Google Code, and the flash embeds just fine, so that's not the problem. I've got a very similar setup working on another server, but can't seem to get it working on this one. It's a HostGator shared server. Could it be the server's fault? Anybody see any obvious syntax problems? Thanks in advance!

  • Apache mod_deflate not compressing responses from Adobe BlazeDS

    - by DumCoder
    Hello, I have the following setup:

        Apache Box <=> Weblogic Box1 <=> Weblogic Box2

        Apache Box:    mod_weblogic (weblogic apache plugin), mod_deflate
        Weblogic Box1: Weblogic 10.3, Weblogic Portal, Adobe BlazeDS
        Weblogic Box2: Weblogic 10.3, SUN Jersey for REST API

    Apache forwards the request to Box1, where some of the REST requests get forwarded to Box2 by Adobe BlazeDS. On Apache I have set up mod_deflate and mod_weblogic as follows:

        <IfModule mod_weblogic.c>
            WebLogicHost portalappeng.xxx.com
            WebLogicPort 7001
        </IfModule>

        <Location /Portal>
            SetHandler weblogic-handler
            SetOutputFilter DEFLATE
        </Location>

    But when I look at the Apache deflate log I only see the JSP responses from Box1 being compressed; the responses which BlazeDS forwarded to Box2 are not compressed. Also the .js and .css files which are served from Box1 are not compressed.

    Here are samples from the log file: the first response came directly from Box1 and got compressed; the second and third also came from Box1 but were not compressed; the fourth came from Box2 to Box1 (BlazeDS) and then to Apache, and was not compressed. What am I missing?

        "GET /Portal/resources/services/userService/users?AppId=CM&CMUserType=ContentProducer&token=PSWNV8kb8db4WMBgWUjAbw%3D%3D&UserId=user123 HTTP/1.1" 711/7307 (9%)
        "GET /Portal/css/jquery-ui-1.7.2.custom.css HTTP/1.1" -/- (-%)
        "GET /Portal/framework/skins/shared/js/console.js HTTP/1.1" -/- (-%)
        "POST /Portal/messagebroker/http HTTP/1.1" -/- (-%)

  • Using boost::random as the RNG for std::random_shuffle

    - by Greg Rogers
    I have a program that uses the mt19937 random number generator from boost::random. I need to do a random_shuffle and want the random numbers generated for this to come from this shared state, so that they can be deterministic with respect to the mersenne twister's previously generated numbers. I tried something like this:

        void foo(std::vector<unsigned> &vec, boost::mt19937 &state)
        {
            struct bar {
                boost::mt19937 &_state;
                unsigned operator()(unsigned i) {
                    boost::uniform_int<> rng(0, i - 1);
                    return rng(_state);
                }
                bar(boost::mt19937 &state) : _state(state) {}
            } rand(state);
            std::random_shuffle(vec.begin(), vec.end(), rand);
        }

    But I get a template error calling random_shuffle with rand. However, this works:

        unsigned bar(unsigned i)
        {
            boost::mt19937 no_state;
            boost::uniform_int<> rng(0, i - 1);
            return rng(no_state);
        }

        void foo(std::vector<unsigned> &vec, boost::mt19937 &state)
        {
            std::random_shuffle(vec.begin(), vec.end(), bar);
        }

    Probably because it is an actual function call. But obviously this doesn't keep the state from the original mersenne twister. What gives? Is there any way to do what I'm trying to do without global variables?

  • Subversion svn:externals - What's wrong here?

    - by Brandon Montgomery
    I first want to say I've read the Subversion manual. I've read this question. I've also read this question. Here's my dilemma. Let's say I have 3 repositories laid out like this:

        DataAccessObject/
            branches/
            tags/
            trunk/
                DataAccessObject/
                DataAccessObjectTests/

        PlanObject/
            branches/
            tags/
            trunk/
                PlanObject/
                PlanObjectTests/

        WinFormsPlanViewer/
            branches/
            tags/
            trunk/
                WinFormsPlanViewer/

    The PlanObject and DataAccessObject repositories contain shared projects. They are used by the WinFormsPlanViewer, but also by several other projects in several other repositories. Bear with me here. I put an svn:externals definition on the WinFormsPlanViewer/trunk folder like this:

        https://server/svn/PlanObject/trunk Objects
        https://server/svn/DataAccessObject/trunk Objects

    And here's what I see after I do an svn update:

        WinFormsPlanViewer/
            branches/
            tags/
            trunk/
                WinFormsPlanViewer/
                Objects/
                    DataAccessObject/
                    DataAccessObjectTests/

    The PlanObject stuff doesn't even come down in the update! I don't know if this has anything to do with it, but there's an externals definition on the PlanObject/trunk folder also:

        https://server/svn/DataAccessObject/trunk Objects

    What's going on here? What am I doing wrong? Are there bad consequences of referencing the PlanObject and the DataAccessObject from the WinFormsPlanViewer using svn:externals when the PlanObject references the DataAccessObject using svn:externals also?

  • SharePoint 2010 / ASP.Net Integration - Looking for advice

    - by jpennal
    I have been Googling a problem I have with trying to integrate the web application I am working on with SharePoint 2010. The web application is a wiki-style tool that allows users to log in via forms authentication or WIA against Active Directory and create content for themselves and others. What we would like to do is to allow a user to have a page with the content they have created in our web application mixed in with content that they have living on the SharePoint server. For example, they may want to see a list of documents that they have on the SharePoint server mixed in with some of their content.

    To accomplish this, we would like to take the credentials the user logged into our web application with (for example MYDOMAIN\jsmith) and be able to query SharePoint for the documents of that same user (MYDOMAIN\jsmith) WITHOUT the user being prompted to re-enter their credentials to access the SharePoint server (we are trying to avoid the double-hop problem).

    We have come up with some options for how we want to do this, but we are unsure of what the best approach is. For example, we could:

        - Have a global user, shared by all users, to get the information we need from SharePoint. The downside is that we cannot filter SharePoint content to a particular user.
        - Store the users' credentials when they log in, but that would only work for users authenticating via forms auth, and would be a security issue that some users/clients would not like.
        - Write a SharePoint extension using WCF to allow us to access the information we need; however, we'd still have the issue of figuring out how to impersonate the user we want.

    None of these options is ideal, and in our investigation we came across the Claims Authentication/STS option, which seems like it is trying to solve the problem we are having. So my question is: based on what I have written, is Claims/STS the best approach for us? We have not been able to find much direction on how to use this method to call into SharePoint from a web application and pass along the existing credentials. Does anyone have any experience with any of these issues?

  • FFMpeg Error av_interleaved_write_frame():

    - by rajaneesh
    This is the output I get after running my php code:

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/usr --libdir=/usr/lib --shlibdir=/usr/lib --mandir=/usr/share/man --incdir=/usr/include --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Nov 6 2009 19:05:03, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)

        Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 25.00 (25/1)
        Input #0, flv, from 'demo.flv':
          Duration: 00:00:30.83, start: 0.000000, bitrate: 546 kb/s
            Stream #0.0: Video: h264, yuv420p, 640x360 [PAR 1:1 DAR 16:9], 546 kb/s, 25 tbr, 1k tbn, 50 tbc
            Stream #0.1: Audio: aac, 44100 Hz, stereo, s16
        Output #0, image2, to 'demo.jpg':
            Stream #0.0: Video: mjpeg, yuvj420p, 640x360 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 1 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        av_interleaved_write_frame(): I/O error occurred
        Usually that means that input file is truncated and/or corrupted.

  • Newtonsoft JSON Interface Serialization error

    - by Ben
    I am using C# .NET 4.0 and Newtonsoft JSON 4.5.0.

        public class Recipe
        {
            [JsonProperty(TypeNameHandling = TypeNameHandling.All)]
            public List<IFood> Foods { get; set; }
            ...
        }

    I want to serialize and deserialize this Recipe object. If I serialize and deserialize the object during the application's lifetime, this succeeds; but if I serialize the object, exit the application and then deserialize it, it throws an exception that it cannot instantiate IFood (since it is an interface). The problem is that it does not serialize the implementation of the interface:

        "$type": "System.Collections.Generic.List`1[[NSM.Shared.Models.IFood, NSMShared]], mscorlib"

    I tried using TypeNameHandling.Object and Array and Auto, but it didn't help. Is there any way to serialize it properly? Or at least to define the class mapping before deserializing?

    EDIT: I am using JSON coupled with Hammock (http://code.google.com/p/relax-net/), a C# driver for CouchDB, which internally serializes and deserializes objects. As mentioned, the problem is that it does not serialize the interface implementation.
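
    A minimal round-trip sketch of the usual fix, assuming you control both calls (recipe is a hypothetical instance): the same TypeNameHandling setting has to be in effect when deserializing, not just when serializing, otherwise the embedded "$type" hints for the list elements are never honoured. Since Hammock serializes internally, this only helps if it lets you pass in JsonSerializerSettings:

        // Sketch only: Json.NET with type-name hints on both sides of the trip.
        // JsonConvert and JsonSerializerSettings live in Newtonsoft.Json.
        var settings = new JsonSerializerSettings
        {
            // Auto records the concrete type wherever it differs from the
            // declared type, so each IFood element gets its own "$type".
            TypeNameHandling = TypeNameHandling.Auto
        };

        string json = JsonConvert.SerializeObject(recipe, settings);
        // ...application exits, json is persisted in CouchDB, app restarts...
        Recipe restored = JsonConvert.DeserializeObject<Recipe>(json, settings);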

  • Calling a SLSB with Seam security from a servlet

    - by wilth
    Hello, I have an existing application written in Seam that uses Seam Security (http://docs.jboss.org/seam/2.1.1.GA/reference/en-US/html/security.html). In a stateless EJB, I might find something like this:

        @In
        Identity identity;
        ...
        if (!identity.hasRole("admin"))
            throw new AuthException();

    As far as I understand, Seam injects the Identity object from the SessionContext of the servlet that invokes the EJB (this happens "behind the scenes", since Seam doesn't really use servlets) and removes it after the call. Is this correct?

    Is it now possible to access this EJB from another servlet (in this case, that servlet is the server side of a GWT application)? Do I have to "inject" the correct Identity instance? If I don't do anything, Seam injects an instance, but doesn't correctly correlate the sessions and instances of Identity (so the instances of Identity are shared between sessions and sometimes calls get new instances, etc.). Any help and pointers are very welcome - thanks!

    Technology: EJB3, Seam 2.1.2. The servlets are actually the server side of a GWT app, although I don't think this matters much. I'm using JBoss 5.

  • Model binding failing in VS2010 ASP.NET MVC 2

    - by Rob Ellis
    The contactAddModel.Search always comes through as null - any ideas?

    View declaration:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<rs30UserWeb.Models.StatusIndexModel>" %>

    View models:

        public class StatusIndexModel
        {
            public ContactAddModel contactAddModel;
            public StatusMessageModel statusMessageModel;
        }

        public class ContactAddModel
        {
            [Required(ErrorMessage = "Contact search string")]
            [DisplayName("Contact Search")]
            public string Search { get; set; }
        }

    View content:

        <% using (Html.BeginForm("AddContact", "Status")) { %>
        <div>
            <fieldset>
                <legend>Add a new Contact</legend>
                <div class="editor-label">
                    <%= Html.LabelFor(m => m.contactAddModel.Search) %>
                </div>
                <div class="editor-field">
                    <%= Html.TextBoxFor(m => m.contactAddModel.Search) %>
                    <%= Html.ValidationMessageFor(m => m.contactAddModel.Search) %>
                </div>
                <p>
                    <input type="submit" value="Add Contact" />
                </p>
            </fieldset>
        </div>
        <% } %>

    Controller:

        [HttpPost]
        public ActionResult AddContact(Models.ContactAddModel model)
        {
            if (u != null)
            {
            }
            else
            {
                ModelState.AddModelError("contactAddModel.Search", "User not found");
            }
            return View("Index");
        }
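
    For what it's worth, two things commonly cause exactly this symptom in MVC 2; a hedged sketch of both follows (not necessarily the poster's root cause). The nested members above are public fields, and the DefaultModelBinder only populates public properties; and the form fields are named with the "contactAddModel" prefix, which the binder must be told about when the action accepts the nested type directly:

        // Sketch: nested members as properties, not fields, so the
        // DefaultModelBinder can populate them.
        public class StatusIndexModel
        {
            public ContactAddModel contactAddModel { get; set; }
            public StatusMessageModel statusMessageModel { get; set; }
        }

        // Sketch: tell the binder about the prefix the view helpers generated.
        [HttpPost]
        public ActionResult AddContact(
            [Bind(Prefix = "contactAddModel")] ContactAddModel model)
        {
            // ...
            return View("Index");
        }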

  • Performance of java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines. But it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for this comparison.

    For participation, please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable, and post this line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        disc: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results.

    EDIT: Up to now only a few members have shared their results. I'd really be interested in the results obtained with some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 with less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.

  • stdexcept On Android

    - by David R.
    I'm trying to compile SoundTouch on Android. I started with this configure line:

        ./configure CPPFLAGS="-I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/" LDFLAGS="-Wl,-rpath-link=/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -L/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/lib -nostdlib -lc" --host=arm-eabi --enable-shared=yes CFLAGS="-nostdlib -O3 -mandroid" host_alias=arm-eabi --no-create --no-recursion

    Because the Android NDK targets ARM, I also had to change the Makefile to remove the -msse2 flags to make progress. When I run 'make', I get:

        /bin/sh ../../libtool --tag=CXX --mode=compile arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c -o FIRFilter.lo FIRFilter.cpp
        libtool: compile: arm-eabi-g++ -DHAVE_CONFIG_H -I. -I../../include -I../../include -I/Volumes/android-build/mydroid/development/ndk/build/platforms/android-3/arch-arm/usr/include/ -O3 -fcheck-new -I../../include -g -O2 -MT FIRFilter.lo -MD -MP -MF .deps/FIRFilter.Tpo -c FIRFilter.cpp -o FIRFilter.o
        FIRFilter.cpp:46:21: error: stdexcept: No such file or directory
        FIRFilter.cpp: In member function 'virtual void soundtouch::FIRFilter::setCoefficients(const soundtouch::SAMPLETYPE*, uint, uint)':
        FIRFilter.cpp:177: error: 'runtime_error' is not a member of 'std'
        FIRFilter.cpp: In static member function 'static void* soundtouch::FIRFilter::operator new(size_t)':
        FIRFilter.cpp:225: error: 'runtime_error' is not a member of 'std'
        make[2]: *** [FIRFilter.lo] Error 1
        make[1]: *** [all-recursive] Error 1
        make: *** [all-recursive] Error 1

    This isn't very surprising, since the -nostdlib flag was required. Android seems to have neither stdexcept nor stdlib. How can I get past this block in compiling SoundTouch? At a guess, there may be some flag I don't know about that I should use. I could refactor the code not to use stdexcept. There may be a way to pull in the original stdexcept source and reference that. I might be able to link to a precompiled stdexcept library.

  • How to solve concurrency problems in ASP.NET Windows-Workflow and ActiveRecord/NHibernate?

    - by Famous Nerd
    I have found that ActiveRecord uses the SessionScope object within the ASP.NET application, and that if the web site is read-write we can have a tug-of-war between the workflow's own data-access SessionScope and that of the ASP.NET site. I would really like to have the Windows Workflow runtime use the same session object as the web site; however, they have different lifetimes. Sometimes a web request may save a very simple piece of data, which executes quickly. But if the web site kicks off a workflow process, how can that workflow make data modifications while still allowing Application_EndRequest to dispose the ASP.NET SessionScope? It's as if ownership of the SessionScope should be shared between the workflow runtime and the ASP.NET website.

    The ManualWorkflowSchedulerService may be the savior: if a workflow is synchronous and merely uses CallExternalMethod to interact with the host, then we could constrain all the data access to the host, and the SessionScope could exist once. This, however, won't solve the problem of a delay activity: if the delay fires, we could need to update data, and in that case we'd need an isolated SessionScope, and concurrency issues may arise.

    This differs from SharePoint workflows, where it seems that the SharePoint workflow can save data from the web and from the workflow, and concurrency is handled through other means. Can anyone offer any suggestions on how to allow the workflow to manage data and play nicely with ASP.NET web sites?
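
    A minimal hosting sketch of the manual-scheduler idea from the paragraph above, using the standard System.Workflow.Runtime types (MyWorkflow is a placeholder for an actual workflow class). With this service installed, the workflow runs synchronously on the request thread, so it naturally shares whatever SessionScope that thread owns; as noted, delay activities still fall outside this model:

        // Sketch only: WF runtime driven by the host's own thread.
        using System;
        using System.Workflow.Runtime;
        using System.Workflow.Runtime.Hosting;

        static class WorkflowHost
        {
            static void Run()
            {
                WorkflowRuntime runtime = new WorkflowRuntime();
                var scheduler = new ManualWorkflowSchedulerService();
                runtime.AddService(scheduler);
                runtime.StartRuntime();

                // MyWorkflow is a placeholder workflow type.
                WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
                instance.Start();

                // Nothing executes until the host donates its thread:
                scheduler.RunWorkflow(instance.InstanceId);
            }
        }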

  • Unpacking gems [Rails 2.3.5]

    - by yuval
    I have the following gems defined in my environment.rb file:

        config.gem "authlogic"
        config.gem "paperclip"
        config.gem "pauldix-feedzirra", :lib => "feedzirra", :source => "http://gems.github.com"
        config.gem 'whenever', :lib => false, :source => 'http://gemcutter.org/'

    I have them installed on my local computer and everything is working well. Since I am working on a shared server (DreamHost), I need to unpack those gems to get them to work (I can't install them on the server the way I did on my own machine). Before uploading, I ran the following on my local machine:

        rake gems:unpack

    This created the following folders in /vendor/gems:

        authlogic-2.1.3, paperclip-2.3.1.1, pauldix-feedzirra-0.0.18, whenever-0.4.1

    So it looks like they're all there. When I run rake db:migrate on the server, though, I get the following error:

        Missing these required gems:
          pauldix-feedzirra

    For some reason, the unpacked feedzirra gem is not detected. Could anybody give me a clue as to why this is happening and how to solve it? Thanks!

  • Disable caching in JPA (EclipseLink)

    - by James
    Hi, I want to use JPA (EclipseLink) to get data from my database. The database is changed by a number of other sources, and I therefore want to go back to the database for every find I execute. I have read a number of posts on disabling the cache, but it does not seem to be working. Any ideas? I am trying to execute the following code:

        EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("default");
        EntityManager em = entityManagerFactory.createEntityManager();
        MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        System.out.println(one == two);

    one == two is true, while I want it to be false. I have tried adding each/all of the following to my persistence.xml:

        <property name="eclipselink.cache.shared.default" value="false"/>
        <property name="eclipselink.cache.size.default" value="0"/>
        <property name="eclipselink.cache.type.default" value="None"/>

    I have also tried adding the @Cache annotation to the entity itself:

        @Cache(
            type=CacheType.NONE,  // Cache nothing
            expiry=0,
            alwaysRefresh=true
        )

    Am I misunderstanding something? Thank you, James

  • Unregistering COM dll with a C# Setup Project

    - by lb
    Hi All. I've been stuck on this one for a while. I'll try to explain in the simplest terms and to the best of my knowledge. I will honour any help.

    I've got a C# project which uses a VB6-compiled ActiveX DLL that I'm constantly updating. I compile the setup project, send it to the client and they run the setup. When building the updated setup project, I would increase the 'Version' of the setup project so it wouldn't bother with 'Another version is already installed'. What started happening after a few updates is that the DLL would not be updated to the new version by the installer. The client computer had the original DLL both installed and registered. First symptom: method-not-found exceptions from the client C# code. This is not a shared DLL; only this application needs it.

    I've noticed that when uninstalling the application (through the usual procedure) the DLL is also not removed from the application folder, although I set this file's 'Permanent' property to false. The registration entries in the registry are maintained also. I do update the version of the DLL in VS 6.0 (usually increasing the build number) before building it. Then in VS 2008, I remove it from the References and add it again from the 'Browse' tab, without re-registering it on my dev machine and adding it from the COM tab.

    I've thought of these options:

        - A custom step in the setup project to run regsvr32.exe /u 'hardcoded path of my dll' at uninstall (ugly).
        - Somehow find out how the 'Isolate' property can work for me without registering.
        - Find out how to execute setup project 'Conditions' that would actually check the version of the library and update the file accordingly at every install.

    Any help would be incredibly welcome.

  • GDB not breaking on breakpoints set on object creation in C++

    - by Drew
    Hi all, I've got a C++ app with the following main.cpp:

        1: #include <stdio.h>
        2: #include "HeatMap.h"
        3: #include <iostream>
        4:
        5: int main (int argc, char * const argv[])
        6: {
        7:     HeatMap heatMap();
        8:     printf("message");
        9:     return 0;
        10: }

    Everything compiles without errors. I'm using gdb (GNU gdb 6.3.50-20050815 (Apple version gdb-1346) (Fri Sep 18 20:40:51 UTC 2009)), and compiled the app with gcc (gcc version 4.2.1 (Apple Inc. build 5646) (dot 1)) with the flags "-c -g". When I add breakpoints to lines 7, 8, and 9 and run gdb, I get the following:

        (gdb) break main.cpp:7
        Breakpoint 1 at 0x10000177f: file src/main.cpp, line 8.
        (gdb) break main.cpp:8
        Note: breakpoint 1 also set at pc 0x10000177f.
        Breakpoint 2 at 0x10000177f: file src/main.cpp, line 8.
        (gdb) break main.cpp:9
        Breakpoint 3 at 0x100001790: file src/main.cpp, line 9.
        (gdb) run
        Starting program: /DevProjects/DataManager/build/DataManager
        Reading symbols for shared libraries ++. done

        Breakpoint 1, main (argc=1, argv=0x7fff5fbff960) at src/main.cpp:8
        8           printf("message");
        (gdb)

    So why, oh why, does my app not break on the breakpoints for the object creation, but does break on the printf line?

    Drew J. Sonne.

  • .NET assembly loading problem

    - by Simon
    I'm maintaining the build process for our application, which consists of an ASP.Net application, two different Win32 services and other sysadmin-related applications. I want to end up with the following configuration, to be used both when debugging and deploying:

        libraries/  -- Contains shared assemblies used by all other apps.
        web/        -- ASP.Net site
        service1/   -- Win32 service 1 (seen under the service control manager)
        service2/   -- Win32 service 2
        adminstuff/ -- Sysadmin / support stuff used for troubleshooting

    The problem is that assembly probing privatePath in the app.config does not support relative directories outside the application root, i.e. you can't use ../libraries. Very frustrating...

    If I strong-name our assemblies, I could use the codeBase config element, which seems to support absolute paths, but you need to specify each assembly individually. I also tried hooking into the AppDomain.AssemblyResolve event, but I'm getting a FileNotFoundException from .Net Fusion before I can even register the event handler in Main(). I don't like the idea of registering the assemblies in the GAC; too much hassle when deploying / upgrading the application. Is there another way to do this without having to specify the path of each required assembly?
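
    For the record, the FileNotFoundException before the handler registration usually means the JIT compiler needs one of the shared assemblies just to compile Main() itself. A hedged sketch of the usual workaround (paths and names are illustrative): wire up AssemblyResolve in a method that references nothing from ../libraries, and keep the real work in a separate, non-inlined method:

        using System;
        using System.IO;
        using System.Reflection;
        using System.Runtime.CompilerServices;

        static class Program
        {
            static int Main(string[] args)
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, e) =>
                {
                    // e.Name is the full assembly name; map it to ../libraries.
                    string file = new AssemblyName(e.Name).Name + ".dll";
                    string path = Path.Combine(
                        AppDomain.CurrentDomain.BaseDirectory, @"..\libraries", file);
                    return File.Exists(path) ? Assembly.LoadFrom(path) : null;
                };
                return Run(args);
            }

            // Must not be inlined into Main, or the JIT will demand the
            // shared assemblies before the handler above is registered.
            [MethodImpl(MethodImplOptions.NoInlining)]
            static int Run(string[] args)
            {
                // ...code that actually uses the shared assemblies...
                return 0;
            }
        }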

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if one exists, else it creates a new context.

    The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity around detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable and retrieving it from Session on subsequent page requests - therefore maintaining the same context across all posts until the profile is saved.

    This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable, or more importantly the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
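
    On the first question, one common alternative (sketched here with an assumed MyEntities context type; this is not from the article): keep the context per-request in HttpContext.Items so every repository within a request shares it, and persist only the in-progress profile data itself (e.g. selected entity IDs) in Session between posts, reattaching on each request. That keeps Session small and its contents knowable:

        // Sketch only: one shared ObjectContext per request.
        using System;
        using System.Web;

        public static class ContextProvider
        {
            private const string Key = "__objectContext";

            public static MyEntities Current
            {
                get
                {
                    var items = HttpContext.Current.Items;
                    if (items[Key] == null)
                        items[Key] = new MyEntities();  // assumed context type
                    return (MyEntities)items[Key];
                }
            }

            // Call from Application_EndRequest in Global.asax.
            public static void DisposeCurrent()
            {
                var ctx = HttpContext.Current.Items[Key] as IDisposable;
                if (ctx != null) ctx.Dispose();
            }
        }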

  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data. It's currently working, but I want to redesign the architecture to be more robust. My main goals are:

        - sequentially present the time-series data to the expression trees;
        - allow expression trees to access previous data rows when needed;
        - optimize performance of the data access while evaluating the expression trees;
        - keep a common interface so various types of data can be used.

    Here are the possible approaches I've thought about:

        1. Evaluate the expression tree by passing a data row into the root node and letting each child node use the same data row.
        2. Evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data).
        3. Hybrid: an immutable data set is accessible by all of the expression trees, and each expression tree is evaluated by passing in a data row.

    The benefit of the first approach is that the data row is passed into the expression tree and no further query is done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows).

    The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify which row that is, I'll have to iterate through the rows and figure out which one is the last one.

    The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data. It supports two basic "views" of data: the latest row and the previous rows.

    Do you guys know of any design patterns, or do you have any tips, that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: all of my code is written in C#.
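
    A hedged sketch of what the hybrid "two views" idea could look like (names are illustrative, not an established pattern): the series is loaded once and treated as immutable, and each evaluation receives a cheap row cursor exposing the current row plus read-only access to earlier rows, so no queries hit a shared DataSet while trees evaluate in parallel:

        // Sketch only: immutable series + per-evaluation row view.
        public sealed class SeriesView
        {
            private readonly double[][] rows;   // never mutated after load
            private readonly int current;

            public SeriesView(double[][] rows, int current)
            {
                this.rows = rows;
                this.current = current;
            }

            // The "latest row" view, handed to the root node.
            public double[] Current
            {
                get { return rows[current]; }
            }

            // The "previous rows" view, for functions that need history.
            public double[] Previous(int n)
            {
                return rows[current - n];
            }
        }

        // Driving the evaluation row by row:
        // for (int i = 0; i < rows.Length; i++)
        //     foreach (var tree in population)
        //         tree.Evaluate(new SeriesView(rows, i));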

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database.

    I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface, with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please?

    Many thanks, Bill, billpg.com

    (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but the people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread.

    The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
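
    As a sanity check that the class is usable this way, here is a minimal accept-loop sketch (the port and the reply are placeholders). Mono implements HttpListener in managed code rather than over Http.sys, but the public API below is the same on both runtimes:

        // Sketch only: one accept loop, one thread per connection.
        using System;
        using System.IO;
        using System.Net;
        using System.Threading;

        class Server
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");
                listener.Start();

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext(); // blocks
                    new Thread(() => Handle(ctx)).Start();
                }
            }

            static void Handle(HttpListenerContext ctx)
            {
                // Look up which conversation this request belongs to, then reply.
                using (var writer = new StreamWriter(ctx.Response.OutputStream))
                    writer.Write("ok");
                ctx.Response.Close();
            }
        }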

  • permutations gone wrong

    - by vbNewbie
    I have written code to implement an algorithm I found on string permutations. What I have is an ArrayList of words (up to 200) and I need to permute the list in levels of 5 - basically, group the string words in fives and permute them. What I have takes the first 5 words, generates the permutations and ignores the rest of the ArrayList. Any ideas appreciated.

        Private Function permute(ByVal chunks As ArrayList, ByVal k As Long) As ArrayList
            ReDim ItemUsed(k)
            pno = 0
            Permutate(k, 1)
            Return chunks
        End Function

        Private Shared Sub Permutate(ByVal K As Long, ByVal pLevel As Long)
            Dim i As Long, Perm As String
            Perm = pString ' Save the current Perm
            ' for each value currently available
            For i = 1 To K
                If Not ItemUsed(i) Then
                    If pLevel = 1 Then
                        pString = chunks.Item(i)
                        'pString = inChars(i)
                    Else
                        pString = pString & chunks.Item(i)
                        'pString += inChars(i)
                    End If
                    If pLevel = K Then 'got next Perm
                        pno = pno + 1
                        SyncLock outfile
                            outfile.WriteLine(pno & " = " & pString & vbCrLf)
                        End SyncLock
                        outfile.Flush()
                        Exit Sub
                    End If
                    ' Mark this item unavailable
                    ItemUsed(i) = True
                    ' gen all Perms at next level
                    Permutate(K, pLevel + 1)
                    ' Mark this item free again
                    ItemUsed(i) = False
                    ' Restore the current Perm
                    pString = Perm
                End If
            Next
        End Sub

    K above is equal to 5 for the number of words in one permutation, but when I change the for loop to the ArrayList size I get an index-out-of-bounds error.
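
    Since the question is really about the grouping, here is a hedged sketch (in C# rather than VB, purely for illustration) of the intended flow: split the 200-word list into chunks of five and enumerate each chunk's 120 permutations independently, instead of permuting only the first K items of the whole list:

        // Sketch only: permute each 5-word chunk on its own.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ChunkPermutations
        {
            static IEnumerable<IList<string>> Permutations(IList<string> items)
            {
                if (items.Count <= 1)
                {
                    yield return new List<string>(items);
                    yield break;
                }
                for (int i = 0; i < items.Count; i++)
                {
                    var rest = new List<string>(items);
                    rest.RemoveAt(i);
                    foreach (var tail in Permutations(rest))
                    {
                        tail.Insert(0, items[i]);   // prefix the chosen word
                        yield return tail;
                    }
                }
            }

            static void Run(List<string> words)
            {
                for (int start = 0; start < words.Count; start += 5)
                    foreach (var p in Permutations(words.Skip(start).Take(5).ToList()))
                        Console.WriteLine(string.Join(" ", p));
            }
        }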
