Search Results

Search found 8389 results on 336 pages for 'shared calendar'.


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selecting and linking multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and their associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists; otherwise it creates a new context.

    The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity of detaching and attaching entities between them. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable or, more importantly, how large the Session variable is. So two questions, I suppose: firstly, should I look for a better solution to handle the shared-context-across-posts issue (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
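
    For reference, a minimal sketch of the pattern described above; all names (ProfileViewModel, GenericRepository, MyEntities) are hypothetical stand-ins, not the asker's actual types. With in-process session state, Session holds a live reference to this object graph, so the size of the Session entry is effectively the context plus every entity it is tracking:

        // All names below (ProfileViewModel, GenericRepository, MyEntities) are invented.
        public class ProfileViewModel
        {
            // One context for the whole multi-post edit; every repository shares it.
            public ObjectContext Context { get; private set; }
            public GenericRepository<Skill> Skills { get; private set; }
            public GenericRepository<Language> Languages { get; private set; }

            public ProfileViewModel()
            {
                Context = new MyEntities();
                Skills = new GenericRepository<Skill>(Context);
                Languages = new GenericRepository<Language>(Context);
            }
        }

        // In the page: reuse the model (and therefore the context) across posts.
        var model = (ProfileViewModel)Session["ProfileEdit"] ?? new ProfileViewModel();
        Session["ProfileEdit"] = model;

    With the default in-process session state nothing is serialized; Session simply keeps these objects alive on the server, which is also why the entry can quietly grow as the context tracks more entities.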


  • Unregistering COM dll with a C# Setup Project

    - by lb
    Hi all. I've been stuck on this one for a while; I'll try to explain in the simplest terms and to the best of my knowledge. I will honour any help. I've got a C# project which uses a VB6-compiled ActiveX DLL that I'm constantly updating. I compile the setup project, send it to the client, and they run the setup. When building the updated setup project, I would increase the 'Version' of the setup project so it wouldn't complain that 'Another version is already installed'. After a few updates, I began to notice that the installer was not updating the DLL to the new version. The client computer still had the original DLL both installed and registered. First symptom: method-not-found exceptions from the client C# code. This is not a shared DLL; only this application needs it. I've also noticed that when uninstalling the application (through the usual procedure), the DLL is not removed from the application folder, although I set that file's 'Permanent' property to false. The registration entries in the registry are maintained as well. I do update the version of the DLL in VS6.0 (usually increasing the build number) before building it. Then, in VS2008, I remove it from the References and add it again from the 'Browse' tab, without re-registering it on my dev machine and adding it from the COM tab. I've thought of these options:

    1. A custom step in the setup project to run regsvr32.exe /u 'hardcoded path of my dll' at uninstall (ugly).
    2. Somehow find out how the 'Isolate' property can work for me without registering.
    3. Find out how to use setup project 'Conditions' to check the version of the library and update the file accordingly at every install.

    Any help would be incredibly welcome.
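
    A hedged sketch of option 1 without the hardcoded path: a .NET installer-class custom action can unregister the DLL at uninstall time, resolving the install directory from CustomActionData (the setup project would pass /targetdir="[TARGETDIR]\" to the custom action). The DLL file name here is a placeholder:

        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.Diagnostics;
        using System.IO;

        [RunInstaller(true)]
        public class UnregisterComAction : Installer
        {
            public override void Uninstall(IDictionary savedState)
            {
                base.Uninstall(savedState);
                // TARGETDIR is forwarded by the setup project as CustomActionData.
                string dir = Context.Parameters["targetdir"];
                string dll = Path.Combine(dir, "MyActiveX.dll");   // placeholder name
                if (File.Exists(dll))
                    Process.Start("regsvr32.exe", "/u /s \"" + dll + "\"").WaitForExit();
            }
        }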


  • GDB not breaking on breakpoints set on object creation in C++

    - by Drew
    Hi all, I've got a C++ app with the following main.cpp:

        1:  #include <stdio.h>
        2:  #include "HeatMap.h"
        3:  #include <iostream>
        4:
        5:  int main (int argc, char * const argv[])
        6:  {
        7:      HeatMap heatMap();
        8:      printf("message");
        9:      return 0;
        10: }

    Everything compiles without errors. I'm using gdb (GNU gdb 6.3.50-20050815 (Apple version gdb-1346) (Fri Sep 18 20:40:51 UTC 2009)), and compiled the app with gcc (gcc version 4.2.1 (Apple Inc. build 5646) (dot 1)) using the flags -c -g. When I add breakpoints to lines 7, 8, and 9 and run gdb, I get the following:

        (gdb) break main.cpp:7
        Breakpoint 1 at 0x10000177f: file src/main.cpp, line 8.
        (gdb) break main.cpp:8
        Note: breakpoint 1 also set at pc 0x10000177f.
        Breakpoint 2 at 0x10000177f: file src/main.cpp, line 8.
        (gdb) break main.cpp:9
        Breakpoint 3 at 0x100001790: file src/main.cpp, line 9.
        (gdb) run
        Starting program: /DevProjects/DataManager/build/DataManager
        Reading symbols for shared libraries ++. done
        Breakpoint 1, main (argc=1, argv=0x7fff5fbff960) at src/main.cpp:8
        8       printf("message");
        (gdb)

    So, does anyone know why my app does not break on the breakpoint set on the object creation, but does break on the printf line? Drew J. Sonne.
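
    The gdb output is consistent with C++'s "most vexing parse": HeatMap heatMap(); declares a function rather than constructing an object, so line 7 emits no executable code for a breakpoint to bind to, and gdb slides the breakpoint to line 8. A two-line illustration:

        HeatMap heatMap();   // declares a function taking nothing and returning HeatMap
        HeatMap heatMap;     // defines (default-constructs) an object; a breakpoint can bind here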


  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database. I've read a little about Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface, with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module, please? Many thanks, Bill, billpg.com

    (A little background to my question, for the interested.) Some time ago I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but the people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread: the thread that calls HttpListener in a loop can look up which thread each incoming connection is for and pass the reference to that thread. The alternative, for an ASP.NET-driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
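
    A minimal sketch of the dispatch loop the question envisages; it runs on either .NET or Mono (where HttpListener is implemented over ordinary sockets rather than Http.sys), and the port and handler are arbitrary choices:

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class Server
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");   // port is arbitrary
                listener.Start();
                while (true)
                {
                    // Blocks until a request arrives, then hands it to a worker thread,
                    // which could locate the long-running "conversation" it belongs to.
                    HttpListenerContext ctx = listener.GetContext();
                    ThreadPool.QueueUserWorkItem(_ => Handle(ctx));
                }
            }

            static void Handle(HttpListenerContext ctx)
            {
                byte[] body = Encoding.UTF8.GetBytes("hello");
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.Close();
            }
        }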


  • Unpacking gems [Rails 2.3.5]

    - by yuval
    I have the following gems defined in my environment.rb file:

        config.gem "authlogic"
        config.gem "paperclip"
        config.gem "pauldix-feedzirra", :lib => "feedzirra", :source => "http://gems.github.com"
        config.gem 'whenever', :lib => false, :source => 'http://gemcutter.org/'

    I have them installed on my local computer and everything is working well. Since I am working on a shared server (DreamHost), I need to unpack those gems to get them to work (I can't install them the way I did on my own computer). Before uploading, I ran the following on my local machine:

        rake gems:unpack

    This created the following folders in /vendor/gems: authlogic-2.1.3, paperclip-2.3.1.1, pauldix-feedzirra-0.0.18, whenever-0.4.1. So it looks like they're all there. When I run rake db:migrate on the server, though, I get the following error:

        Missing these required gems:
        pauldix-feedzirra

    For some reason, the unpacked feedzirra gem is not detected. Could anybody give me a clue as to why this is happening and how to solve it? Thanks!


  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file linked further below). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file referred to by the appender-ref in that executable's logger configuration. For instance, when I start my sending-daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates all three files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then, when I start another of the executables, for instance the rule-scheduler daemon, this other process cannot write to Log/RuleScheduler.txt, because it has been created and locked by the sending-daemon process. I am guessing there may be three different solutions to my problem:

    1. GetLogger should only create the rolling-file appenders that are referenced in the config.
    2. I could have one config file per executable; each config file would then list only one rolling-file appender, and starting each executable would not create the rolling files of the other daemons. I am reluctant to do this, however, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way to have a config file include another one?
    3. Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because none of the daemons should be creating the rolling files of the other daemons.

    Thanks in advance for your help! I had difficulty posting the config file properly here (this website interprets it as HTML); please see the following link for my log4net configuration file: log4Net configuration file
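
    On option 3: log4net's FileAppender family does have a locking knob. A hedged sketch (appender name and file path taken from the question, the rest assumed) that releases the file handle between writes so another process can open the same file:

        <appender name="RuleSchedulerAppender" type="log4net.Appender.RollingFileAppender">
          <file value="Log/RuleScheduler.txt" />
          <!-- MinimalLock acquires the file only for the duration of each write -->
          <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
          <appendToFile value="true" />
          <rollingStyle value="Size" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
          </layout>
        </appender>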


  • How can I sync files in two different git repositories (not clones) and maintain history?

    - by brian d foy
    I maintain two different git repos that need to share some files, and I'd like the commits in one repo to show up in the other. What's a good way to do that for ongoing maintenance?

    I've been one of the maintainers of the perlfaq (Github), and recently I fell into the role of maintaining the Perl core documentation, which is also in git. Long before I started maintaining the perlfaq, it lived in a separate source-control repository, which I recently converted to git. Periodically, one of the perl5-porters would sync the shared files between the perlfaq repo and the perl repo. Since we've switched to git, we've been a bit lazy converting the tools, and I'm now the one who does the syncing. For the time being, the two repos are going to stay separate.

    Currently, to sync the FAQ for a new (monthly) release of perl, I'm almost ashamed to say that I merely copy the perlfaq*.pod files from the perlfaq repo and overlay them in the perl repo. That loses history, etc. Additionally, sometimes someone makes a change to those files in the perl repo and I end up overwriting it (yes, check git diff, you idiot!). The files do not have the same paths in the two repos, but that's something I could change, I think.

    What I'd like to do, in the magical universe of rainbows and ponies, is pull the objects from the perlfaq repo and apply them in the perl repo, and vice versa, so the history and commit ids correspond in each.

    - Creating patches works, but it's also a lot of work to manage.
    - Git submodules seem to only work to pull in the entire external repo.
    - I haven't found something like svn's file externals, but that would work in both directions anyway.
    - I'd love to just fetch objects from one and cherry-pick them in the other.

    What's a good way to manage this?
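
    A sketch of the fetch-and-cherry-pick idea, assuming the file paths match between the repos (which the question notes could be arranged) and using a hypothetical local path for the remote. Note that cherry-picking replays changes as new commits, so authorship and messages carry over but the commit ids themselves will differ between the two repos:

        # inside a clone of the perl repo, with the perlfaq clone at ../perlfaq (hypothetical path)
        git remote add perlfaq ../perlfaq
        git fetch perlfaq
        git log perlfaq/master -- perlfaq1.pod     # find the commits to bring over
        git cherry-pick <sha>                      # replays the change, keeping author and message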


  • Disable caching in JPA (EclipseLink)

    - by James
    Hi, I want to use JPA (EclipseLink) to get data from my database. The database is changed by a number of other sources, and I therefore want to go back to the database for every find I execute. I have read a number of posts on disabling the cache, but it does not seem to be working. Any ideas? I am trying to execute the following code:

        EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("default");
        EntityManager em = entityManagerFactory.createEntityManager();
        MyLocation one = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        MyLocation two = em.createNamedQuery("MyLocation.findMyLoc").getResultList().get(0);
        System.out.println(one==two);

    one==two is true, while I want it to be false. I have tried adding each/all of the following to my persistence.xml:

        <property name="eclipselink.cache.shared.default" value="false"/>
        <property name="eclipselink.cache.size.default" value="0"/>
        <property name="eclipselink.cache.type.default" value="None"/>

    I have also tried adding the @Cache annotation to the entity itself:

        @Cache(
          type=CacheType.NONE, // Cache nothing
          expiry=0,
          alwaysRefresh=true
        )

    Am I misunderstanding something? Thank you, James
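
    One detail worth separating out when reading the snippet above: those persistence.xml properties control EclipseLink's shared (second-level) cache, but both queries run in the same EntityManager, and JPA's persistence context (first-level cache) guarantees that a single EntityManager returns the same instance for the same row. A hedged variation that takes the first-level cache out of play:

        MyLocation one = (MyLocation) em.createNamedQuery("MyLocation.findMyLoc")
                                        .getResultList().get(0);
        em.clear();  // detach everything; the persistence context starts empty again
        MyLocation two = (MyLocation) em.createNamedQuery("MyLocation.findMyLoc")
                                        .getResultList().get(0);
        System.out.println(one == two);  // false once the first-level cache is out of play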


  • Design suggestion for expression tree evaluation with time-series data

    - by Lirik
    I have a (C#) genetic program that uses financial time-series data. It's currently working, but I want to re-design the architecture to be more robust. My main goals are to:

    - sequentially present the time-series data to the expression trees;
    - allow expression trees to access previous data rows when needed;
    - optimize performance of the data access while evaluating the expression trees;
    - keep a common interface so various types of data can be used.

    Here are the possible approaches I've thought about:

    1. Evaluate the expression tree by passing a data row into the root node and letting each child node use the same data row.
    2. Evaluate the expression tree by passing in the data row index and letting each node get the data row from a shared DataSet (currently I'm passing the row index and going to multiple synchronized arrays to get the data).
    3. Hybrid: an immutable data set is accessible by all of the expression trees, and each expression tree is evaluated by passing in a data row.

    The benefit of the first approach is that the data row is passed into the expression tree and no further query is done on the data set (which should increase performance in a multithreaded environment). The drawback is that the expression tree does not have access to the rest of the data (in case some of the functions need to do calculations using previous data rows). The benefit of the second approach is that the expression trees can access any data up to the latest data row, but unless I specify what that row is, I'll have to iterate through the rows and figure out which one is the last one. The benefit of the hybrid is that it should generally perform better and still provide access to the earlier data; it supports two basic "views" of the data: the latest row and the previous rows.

    Do you know of any design patterns, or do you have any tips, that can help me build this type of system? Should I use a DataSet to hold and present the data, or are there more efficient ways to present rows of data while maintaining a simple interface? FYI: all of my code is written in C#.
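
    A hedged sketch of what the hybrid could look like; all type names are invented for illustration. Each node receives the current row plus a read-only window over earlier rows, so evaluation needs no further queries against the data set, but lagging functions still work:

        public interface IDataRow
        {
            double this[string column] { get; }
        }

        // Read-only view over the immutable history; offset 0 is the current row.
        public interface IHistory
        {
            IDataRow RowAt(int offset);
            int Count { get; }
        }

        public abstract class Node
        {
            public abstract double Evaluate(IDataRow current, IHistory history);
        }

        // Example terminal: a lagged column reference, e.g. the Close five rows back.
        public sealed class Lag : Node
        {
            private readonly string _column;
            private readonly int _periods;

            public Lag(string column, int periods)
            {
                _column = column;
                _periods = periods;
            }

            public override double Evaluate(IDataRow current, IHistory history)
            {
                return _periods == 0 ? current[_column] : history.RowAt(_periods)[_column];
            }
        }

    Because the history view is immutable, many trees can evaluate against it concurrently without locking, which keeps the first approach's multithreading benefit.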


  • permutations gone wrong

    - by vbNewbie
    I have written code to implement an algorithm I found on string permutations. What I have is an ArrayList of words (up to 200), and I need to permute the list in groups of five: basically, group the words in fives and permute each group. What I have takes the first 5 words, generates the permutations, and ignores the rest of the ArrayList. Any ideas appreciated.

        Private Function permute(ByVal chunks As ArrayList, ByVal k As Long) As ArrayList
            ReDim ItemUsed(k)
            pno = 0
            Permutate(k, 1)
            Return chunks
        End Function

        Private Shared Sub Permutate(ByVal K As Long, ByVal pLevel As Long)
            Dim i As Long, Perm As String
            Perm = pString ' Save the current Perm
            ' for each value currently available
            For i = 1 To K
                If Not ItemUsed(i) Then
                    If pLevel = 1 Then
                        pString = chunks.Item(i)
                        'pString = inChars(i)
                    Else
                        pString = pString & chunks.Item(i)
                        'pString += inChars(i)
                    End If
                    If pLevel = K Then 'got next Perm
                        pno = pno + 1
                        SyncLock outfile
                            outfile.WriteLine(pno & " = " & pString & vbCrLf)
                        End SyncLock
                        outfile.Flush()
                        Exit Sub
                    End If
                    ' Mark this item unavailable
                    ItemUsed(i) = True
                    ' gen all Perms at next level
                    Permutate(K, pLevel + 1)
                    ' Mark this item free again
                    ItemUsed(i) = False
                    ' Restore the current Perm
                    pString = Perm
                End If
            Next

    K above is equal to 5 (the number of words in one permutation), but when I change the for loop to the ArrayList's size, I get an index-out-of-bounds error.
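
    For what it's worth, the out-of-range error is consistent with ArrayList being zero-based while the loop runs from 1 to K; a hedged adjustment of the loop bounds alone:

        ' ArrayList indices run from 0 to Count - 1, so iterate zero-based:
        For i = 0 To K - 1
            If Not ItemUsed(i) Then
                pString = chunks.Item(i)   ' now always in range, even when K = chunks.Count
            End If
        Next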


  • Switch between multiple views while respecting orientation

    - by zoul
    Hello! I have an MVC application with a single model and several views (something like skins). I want the user to be able to switch the views, and I can't get it working with interface orientation. The most simple approach looks like this:

        - (void) switchToADifferentView: (UIView*) newView {
            // self is a descendant of UIViewController
            self.view = newView;
        }

    This does not work, because the incoming view does not get rotated according to the current orientation (until the next orientation change, test case). Is there a way to force the orientation on a view? It looks like the system is trying really hard to keep the interface controls for itself. (Or is it as simple as setting the right transform by hand?)

    I figured I'd better not switch the views directly and switch controllers instead. This makes sense, as it makes the initial code simpler. But how do I switch between controllers that have no "navigation relation" between them? I guess I could use presentModalViewController:, but that seems like a hack. The same goes for a navigation controller. If I exchange the controllers by hand, I get the wrong orientation again:

        - (void) switchToAController: (id) incoming {
            [currentController.view removeFromSuperview];
            [window addSubview:incoming.view]; // does not respect current orientation
        }

    Now how the heck do I simply exchange the current controller for another one? Again, the controllers are something like "skins" operating above a shared model, so it really makes no sense to pretend that skin A is a "modal" dialog above skin B, or that they're part of a navigation stack.


  • NHibernate: Mapping multiple classes from a single table row

    - by Michael Kurtz
    I couldn't find an answer to this specific question. I am trying to keep my domain model object-oriented and to re-use objects where possible, and I am having trouble determining how to map multiple classes from a single row. Let me explain with an example: I have a single table, call it Customer. A customer has several attributes, but for brevity assume it has Id, Name, Address, City, State and ZipCode. I would like to create Customer and Address classes that look like this:

        public class Customer
        {
            public virtual long Id { get; set; }
            public virtual string Name { get; set; }
            public virtual Address Address { get; set; }
        }

        public class Address
        {
            public virtual string Address { get; set; }
            public virtual string City { get; set; }
            public virtual string State { get; set; }
            public virtual string ZipCode { get; set; }
        }

    What I am having trouble with is determining what the mapping for the Address class within the Customer class would be. There is no Address table, and there isn't a "set" of addresses associated with a Customer; I just want a more object-oriented view of the Customer table in code. Several other tables also have address information in them, and it would be nice to have a reusable Address class to deal with them. Addresses are not shared, so breaking all addresses into a separate table with foreign keys seems to be overkill and, actually, more painful, since I would need foreign keys to multiple tables. Can someone enlighten me on this type of mapping? Please provide an example if you can. Thanks for any insights! -Mike
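
    This is the shape of problem NHibernate's component mapping is designed for. A hedged sketch of what the Customer mapping could look like (column names assumed to match the question's table):

        <class name="Customer" table="Customer">
          <id name="Id">
            <generator class="native" />
          </id>
          <property name="Name" />
          <!-- Address is not an entity: its columns live in the Customer row -->
          <component name="Address" class="Address">
            <property name="Address" column="Address" />
            <property name="City" />
            <property name="State" />
            <property name="ZipCode" />
          </component>
        </class>

    The same Address class can then be reused as a component inside the mappings of the other tables that carry address columns.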


  • What do these errors mean? ISO C++ forbids assignment of arrays...

    - by xunlinkx
    I'm trying to compile some code on one of our systems for our DBA. I've edited the makefiles to include the pertinent libraries listed in the documentation, but I keep getting these errors. Can you discern any obvious problems with my command lines in reference to the errors listed? Thank you!

        make -f /u01/app/banner/ban8/TEST3/links/Makefile_tm_linux64_redhat5_ban8.mk
        gcc -m64 -D_NOFIXARGPTR -fpic -shared -DTMCILIB_EXPORTS -D_TMUNICODE
            -I/usr/local/ban_icu -I/usr/local/src/icu/source/i18n/
            -I/usr/local/src/icu/source/common/ -I/usr/local/src/icu/source/extra/ustdio/
            -I/usr/local/src/icu/source/io -L/usr/lib64 -L/usr/lib
            -L/usr/local/src/icu/source/data/ -L/usr/local/src/icu/source/data/out/
            -L/usr/local/src/icu/source/tools/toolutil/ -L/usr/lib/im/icuconv/
            -L/usr/local/lib/ -L. -licui18n -licudata -licuuc -licu-toolutil -licuio
            msgfmttm.cpp umsgtm.cpp tmcilib.cpp -o /u01/app/banner/ban8/TEST3/general/exe/libtmciuc.so

        umsgtm.cpp: In function 'void fixArgPtr(const UChar*, __va_list_tag (*)[1])':
        umsgtm.cpp:158: error: array must be initialized with a brace-enclosed initializer
        umsgtm.cpp:194: error: ISO C++ forbids assignment of arrays
        umsgtm.cpp: In function 'int32_t tmumsg_vformat(void*, UChar*, int32_t, __va_list_tag*, UErrorCode*)':
        umsgtm.cpp:305: error: cannot convert '__va_list_tag**' to '__va_list_tag (*)[1]' for argument '2' to 'void fixArgPtr(const UChar*, __va_list_tag (*)[1])'
        tmcilib.cpp: In function 'int tmprintf(TMBundle*, const UChar*, ...)':
        tmcilib.cpp:743: error: array must be initialized with a brace-enclosed initializer
        tmcilib.cpp: In function 'int tmfprintf(TMBundle*, UFILE*, const UChar*, ...)':
        tmcilib.cpp:757: error: array must be initialized with a brace-enclosed initializer
        tmcilib.cpp: In function 'int tmsprintf(TMBundle*, UChar*, const UChar*, ...)':
        tmcilib.cpp:808: error: array must be initialized with a brace-enclosed initializer
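
    The errors themselves point at va_list handling rather than at the link line: on x86-64, va_list is an array type, so it cannot be copied by plain assignment or default-initialized, which is exactly what lines 158, 194 and 305 complain about. The portable idiom, for reference (a sketch, not the actual Banner source):

        #include <stdarg.h>

        void use_args_twice(const char *fmt, va_list args)
        {
            va_list copy;
            va_copy(copy, args);   /* plain 'copy = args' is ill-formed where va_list is an array */
            /* ... consume 'copy' here ... */
            va_end(copy);
        }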


  • .NET assembly loading problem

    - by Simon
    I'm maintaining the build process for our application, which consists of an ASP.NET application, two different Win32 services and other sysadmin-related applications. I want to end up with the following layout, to be used both when debugging and when deploying:

        libraries/  -- Contains shared assemblies used by all other apps.
        web/        -- ASP.Net site
        service1/   -- Win32 service 1 (seen under the service control manager)
        service2/   -- Win32 service 2
        adminstuff/ -- Sysadmin / support stuff used for troubleshooting

    The problem is that the assembly-probing privatePath in app.config does not support relative directories outside the application root, i.e. you can't use ../libraries. Very frustrating... If I strong-named our assemblies, I could use the codeBase config element, which seems to support absolute paths, but you need to specify each assembly individually. I also tried hooking into the AppDomain.AssemblyResolve event, but I'm getting a FileNotFoundException from .NET Fusion before I can even register the event handler in Main(). I don't like the idea of registering the assemblies in the GAC: too much hassle when deploying / upgrading the application. Is there another way to do this without having to specify the path of each required assembly?
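
    On the AssemblyResolve attempt: the handler never gets a chance if Main() itself references types from the missing assemblies, because Fusion fails while JIT-compiling Main, before its first statement runs. A common sketch of the workaround keeps Main free of such references and hangs the real work on a separate, non-inlined method (the ..\libraries path mirrors the layout above):

        using System;
        using System.IO;
        using System.Reflection;
        using System.Runtime.CompilerServices;

        static class Program
        {
            static int Main(string[] args)
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, e) =>
                {
                    string file = new AssemblyName(e.Name).Name + ".dll";
                    string path = Path.Combine(@"..\libraries", file);
                    return File.Exists(path) ? Assembly.LoadFrom(path) : null;
                };
                return Run(args);
            }

            [MethodImpl(MethodImplOptions.NoInlining)]
            static int Run(string[] args)
            {
                // First use of types from the shared assemblies goes here,
                // after the resolver is in place.
                return 0;
            }
        }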


  • ASP.NET MVC 2 RTM + ModelState error on Id property

    - by Zote
    I have these classes:

        public class GroupMetadata
        {
            [HiddenInput(DisplayValue = false)]
            public int Id { get; set; }

            [Required]
            public string Name { get; set; }
        }

        [MetadataType(typeof(GroupMetadata))]
        public partial class Group
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
        }

    And this action:

        [HttpPost]
        public ActionResult Edit(Group group)
        {
            if (ModelState.IsValid)
            {
                // Logic to save
                return RedirectToAction("Index");
            }
            return View(group);
        }

    This is my view:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
                 Inherits="System.Web.Mvc.ViewPage<Group>" %>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <% using (Html.BeginForm()) {%>
                <fieldset>
                    <%= Html.EditorForModel() %>
                    <p>
                        <input type="submit" value="Save" />
                    </p>
                </fieldset>
            <% } %>
            <div>
                <%=Html.ActionLink("Back", "Index") %>
            </div>
        </asp:Content>

    But ModelState is always invalid! As far as I can see, MVC validation treats 0 as invalid, but for me it is valid. How can I fix this, given that I didn't put any kind of validation on the Id property?

    UPDATE: I don't know how or why, but renaming Id (in my case, to PK) solves the problem. Do you know whether this is an issue in my logic/configuration, or is it a bug or expected behavior? Thank you


  • How do attribute classes work?

    - by AaronLS
    My searches keep turning up only guides explaining how to use and apply attributes to a class. I want to learn how to create my own attribute classes and the mechanics of how they work.

    How are attribute classes instantiated? Are they instantiated when the class they are applied to is instantiated? Is one instantiated for each instance of the class it is applied to? E.g. if I apply the SerializableAttribute class to a MyData class, and I instantiate 5 MyData instances, will there be 5 instances of the SerializableAttribute class created behind the scenes? Or is there just one instance shared between all of them?

    How do attribute class instances access the class they are associated with? How does a SerializableAttribute class access the class it is applied to so that it can serialize its data? Does it have some sort of SerializableAttribute.ThisIsTheInstanceIAmAppliedTo property? :) Or does it work in the reverse direction, in that whenever I serialize something, the Serialize function I pass the MyData instance to will reflectively go through the attributes and find the SerializableAttribute instance?
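
    A small runnable sketch of the second model, which is the one .NET actually uses: attribute instances are not created when the decorated class is instantiated, but on demand, each time reflection is asked for them:

        using System;

        [AttributeUsage(AttributeTargets.Class)]
        sealed class MarkerAttribute : Attribute
        {
            public MarkerAttribute() { Console.WriteLine("MarkerAttribute constructed"); }
        }

        [Marker]
        class MyData { }

        static class Demo
        {
            static void Main()
            {
                var a = new MyData();   // prints nothing: no attribute instance exists yet
                var b = new MyData();   // still nothing

                // Each GetCustomAttributes call constructs a fresh MarkerAttribute:
                typeof(MyData).GetCustomAttributes(typeof(MarkerAttribute), false);
                typeof(MyData).GetCustomAttributes(typeof(MarkerAttribute), false);
                // "MarkerAttribute constructed" is printed twice, once per call.
            }
        }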


  • Using WCF to expose underlying process

    - by Steven
    I think I must be a little dull, because I'm having so much difficulty with this. I use WCF for pretty much everything in-house; it's the most appropriate technology. I have a new Silverlight 3 app that connects to the WCF service, and that's working fine. Where the problem begins: because of the expense of creating the objects within this service, and the high correlation of individual objects being shared between clients, I want to have a console application that gathers/calculates/caches all the data for the service 24/7, with the service connecting to the console app (or whatever it is) and getting the pre-processed data. E.g., think of it in terms of a stock-reporting app (which it is). Person A has a portfolio of x, y, z; person B has a portfolio of x, q, z, r. The service needs to provide updated metrics on how each portfolio is performing. So instead of processing person A and then person B every second, the app independently gathers the stock prices and each person's position information into memory, and the service just queries the in-memory result. Thanks for your help, I really am feeling dumb right now.
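
    One conventional shape for this, sketched with invented names: the console app hosts its own WCF endpoint over named pipes and keeps the cache as a singleton service instance, so the web-facing service can query the pre-processed data cheaply:

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        public interface IQuoteCache
        {
            [OperationContract]
            double GetMetric(string portfolioId);
        }

        // One shared instance holds the state maintained 24/7 by background threads.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class QuoteCache : IQuoteCache
        {
            private readonly Dictionary<string, double> _metrics = new Dictionary<string, double>();
            private readonly object _sync = new object();

            public double GetMetric(string portfolioId)
            {
                lock (_sync)
                {
                    double value;
                    return _metrics.TryGetValue(portfolioId, out value) ? value : 0.0;
                }
            }
        }

        class Program
        {
            static void Main()
            {
                var host = new ServiceHost(new QuoteCache());
                host.AddServiceEndpoint(typeof(IQuoteCache),
                                        new NetNamedPipeBinding(),
                                        "net.pipe://localhost/quotecache");
                host.Open();   // background workers keep _metrics up to date
                Console.ReadLine();
            }
        }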


  • Manually start session with specific id / transitioning session cookie between domains

    - by deceze
    My host requires me to use a different domain for SSL-secured access (shared SSL), so I need to transition the user session between two domains. One part of the site lives at http://example.com, while the SSL'd part is at https://example.hosting.com. As such, I can't set a domain-spanning cookie. What I'm trying to do is carry the session id over and re-set the cookie, like this:

        http://example.com/normal/page
            (user clicks a link to the secure area and goes to:)
        http://example.com/secure/page
            (which causes a redirect to:)
        https://example.hosting.com/secure/page?sess=ikub...
            (which resurrects the session, sets a new cookie valid for the domain, and redirects to:)
        https://example.hosting.com/secure/page

    This works up to the point where the session should be resurrected. I'm doing:

        function beforeFilter() {
            ...
            $this->Session->id($_GET['sess']);
            $this->Session->activate();
            ...
        }

    As far as I can tell, this should start the session with the given id. It actually generates a new session id, though, and this session is empty; the data is not restored. This is on CakePHP 1.2.4. Do I need to do something else, or is there a better way to do what I'm trying to do?


  • How do I import and call unmanaged C dll with ansi string "char *" pointer string from VB.net?

    - by Warren P
    I have written my own function, which in C would be declared like this, using standard Win32 calling conventions:

        int Thing(char * command, char * buffer, int * BufSize);

    I have the following amount of VB figured out, which should import the DLL and call this function, wrapping it up to make it easy to call Thing("CommandHere", GetDataBackHere):

        Imports Microsoft.VisualBasic
        Imports System.Runtime.InteropServices
        Imports System
        Imports System.Text

        Namespace dllInvocationSpace
            Public Class dllInvoker
                ' tried attributes but could not make it build:
                ' <DllImport("Thing1.dll", False, CallingConvention.Cdecl, CharSet.Ansi, "Baton", True, True, False, True)>
                Declare Ansi Function Thing Lib "Thing1.dll" (ByVal Command As String, ByRef Buffer As String, ByRef BufferLength As Integer)

                Shared Function dllCall(ByVal Command As String, ByRef Results As String) As Integer
                    Dim Buffer As StringBuilder = New StringBuilder(65536)
                    Dim retCode As Integer
                    Dim bufsz As Integer
                    bufsz = 65536
                    retCode = Thing(Command, Buffer, bufsz)
                    Results = Buffer
                    Return retCode
                End Function
            End Class
        End Namespace

    The current code doesn't build. Although I think I should be able to create a "buffer" that the C DLL can write data back into using a StringBuilder, I haven't got it quite right ("Value of type System.Text.StringBuilder cannot be converted to 'String'"). I have looked all over the newsgroups and forums and cannot find an example where the C DLL needs to pass between 1 and 64 kbytes of data back (char *buffer, int bufferlen) to Visual Basic .NET.
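
    For reference, the textbook shape of this interop, as a hedged sketch of the question's own declaration rather than a tested drop-in: a writable char* buffer marshals as a ByVal StringBuilder, the in/out length stays ByRef, and the Declare needs an explicit return type:

        Declare Ansi Function Thing Lib "Thing1.dll" _
            (ByVal Command As String, _
             ByVal Buffer As System.Text.StringBuilder, _
             ByRef BufSize As Integer) As Integer

        Shared Function dllCall(ByVal Command As String, ByRef Results As String) As Integer
            Dim Buffer As New System.Text.StringBuilder(65536)
            Dim bufsz As Integer = 65536
            Dim retCode As Integer = Thing(Command, Buffer, bufsz)
            Results = Buffer.ToString()   ' the conversion the compiler was complaining about
            Return retCode
        End Function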


  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory and the high number of (de)allocations are a performance bottleneck, so I need a mostly STL-compatible map and set class which can use a contiguous block of memory (or multiple blocks) for its internal objects. It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own, I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere?

    Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance; these global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work, as the types will be shared between instances but their pool should not be. In many cases a hash map may be an option.
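
    Boost does carry an implementation along these lines: the flat_map/flat_set family (sorted-vector containers, originally in Boost.Interprocess and later in Boost.Container) stores elements contiguously and exposes reserve(). A sketch of the usage:

        #include <boost/container/flat_map.hpp>
        #include <boost/container/flat_set.hpp>

        int main()
        {
            boost::container::flat_map<int, double> prices;
            prices.reserve(1024);          // one contiguous allocation up front
            prices[42] = 99.5;             // ordered, STL-compatible interface

            boost::container::flat_set<int> ids;
            ids.reserve(1024);
            ids.insert(7);
            return 0;
        }

    The trade-off is that inserts and erases are O(n) because elements shift within the vector, so these containers fit workloads that are lookup-heavy or bulk-loaded.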


  • How to make a mapped field inherited from a superclass transient in JPA?

    - by Russ Hayward
    I have a legacy schema that cannot be changed. I am using a base class for the common features, and it contains an embedded object. A field that is normally mapped in the embedded object needs to be part of the persistence id for only one (of many) subclasses. I have made a new id class that includes it, but then I get an error that the field is mapped twice. Here is some example code, much simplified to maintain the sanity of the reader:

        @MappedSuperclass
        class BaseClass {
            @Embedded
            private Data data;
        }

        @Entity
        class SubClass extends BaseClass {
            @EmbeddedId
            private SubClassId id;
        }

        @Embeddable
        class Data {
            private int location;
            private String name;
        }

        @Embeddable
        class SubClassId {
            private int thingy;
            private int location;
        }

    I have tried @AttributeOverride, but I can only get it to rename the field. I have also tried setting it to updatable = false, insertable = false, but this did not seem to work when used in the @AttributeOverride annotation. (See the answer below for the solution to this issue.)

    I realise I could change the base class, but I really do not want to split up the embedded object to separate out the shared field, as it would make the surrounding code more complex and require some ugly wrapping code. I could also redesign the whole system for this corner case, but I would really rather not. I am using Hibernate as my JPA provider.


  • How to unit test this simple ASP.NET MVC controller

    - by Frank Schwieterman
    Let's say I have a simple controller for ASP.NET MVC that I want to test. I want to test that a controller action (Foo, in this case) simply returns a link to another action (Bar, in this case). How would you test this (either the first or the second link)? My implementation has the same link twice; one passes the URL through ViewData. This seems more testable to me, as I can check the ViewData collection returned from Foo(). Even this way, though, I don't know how to validate the URL itself without taking dependencies on routing.

    The controller:

        public class TestController : Controller
        {
            public ActionResult Foo()
            {
                ViewData["Link2"] = Url.Action("Bar");
                return View("Foo");
            }

            public ActionResult Bar()
            {
                return View("Bar");
            }
        }

    The "Foo" view:

        <%@ Page Title="" Language="C#" Inherits="System.Web.Mvc.ViewPage"
                 MasterPageFile="~/Views/Shared/Site.Master"%>
        <asp:Content ContentPlaceHolderID="MainContent" runat="server">
            <%= Html.ActionLink("link 1", "Bar") %>
            <a href="<%= ViewData["Link2"]%>">link 2</a>
        </asp:Content>
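
    A hedged sketch of what such a test could look like, assuming NUnit and Moq (both arbitrary choices) and the default MVC route. Since Foo() calls Url.Action, the test has to supply a UrlHelper with a stubbed HttpContext and a route table before invoking the action:

        using System.Web;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class TestControllerTests
        {
            [Test]
            public void Foo_Returns_Foo_View_With_Link_To_Bar()
            {
                var routes = new RouteCollection();
                routes.MapRoute("Default", "{controller}/{action}/{id}",
                                new { controller = "Home", action = "Index", id = "" });

                var httpContext = new Mock<HttpContextBase>();
                httpContext.Setup(c => c.Request.ApplicationPath).Returns("/");
                httpContext.Setup(c => c.Response.ApplyAppPathModifier(It.IsAny<string>()))
                           .Returns<string>(s => s);

                var routeData = new RouteData();
                routeData.Values["controller"] = "Test";   // Url.Action needs the current controller

                var controller = new TestController();
                controller.Url = new UrlHelper(
                    new RequestContext(httpContext.Object, routeData), routes);

                var result = (ViewResult)controller.Foo();

                Assert.AreEqual("Foo", result.ViewName);
                Assert.AreEqual("/Test/Bar", result.ViewData["Link2"]);
            }
        }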


  • Web-based data modelling and management tool

    - by pixeldude
    Is there a web-based tool available where I am able to:

    - define data models (like in a database admin tool)
    - fill in data (in custom web forms, not too generic) with basic features like completion
    - import data from CSV or Excel sheets
    - export data to CSV or SQL
    - create snapshots of my data models (versions, diffs, etc.)
    - share my data models
    - discuss/collaborate with other people about my data models

    Well, I could develop something like this in PHP or with Ruby or whatever, but this is such a common task that the application support could be a lot better, and a ready-made tool would be language- and database-independent. It would help to maintain data models in different versions, and you could share your data models with others, extend them with your team members, etc. There is a website called FreeBase which allows you to define a data-entity model and fill in data, and which also has export features, but I need to define my own data model with my own granularity and structure, and it should not be shared in public if I don't want it to be. How do you solve problems like this yourself?


  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc., in one directory in SVN. We're migrating to Mercurial, and I would like to take this opportunity to reorganize our code into several repositories, so that cloning for branching has less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history?

    Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
        ---- Core
        ---- Core.Tests
        ---- Database
        ---- Database.Tests
        ---- CMS
        ---- CMS.Tests
        ---- Product1.Domain
        ---- Product1.Stresstester
        ---- Product1.Web
        ---- Product1.Web.Tests
        ---- Product2.Domain
        ---- Product2.Stresstester
        ---- Product2.Web
        ---- Product2.Web.Tests
        ==== Product1.sln
        ==== Product2.sln

    All of those are folders containing VS projects, except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each product (which would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which would contain Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests.

    So, it's easy to do this by just running hg init in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
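
    hg convert (the same extension used for the SVN conversion) can also split a Mercurial repository while keeping history, via a filemap. A sketch for extracting Core into its own repo (file names are illustrative); repeating it per project gives each new repo the slice of history that touched it, and the parent repos can then reference them from an .hgsub file:

        # core.filemap -- keep only Core/ and make it the root of the new repo
        include Core
        rename Core .

        $ hg convert --filemap core.filemap OurPlatform Core-repo

        # Product1/.hgsub (subrepo references; paths illustrative)
        Core = ../Core-repo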


  • Design suggestions for creating document management structure using hidden shares.

    - by focus.nz
    I need to add some document-management functionality to my software. Documents will be grouped by company name and project name. The folders need to be accessed by the application using the id numbers of clients/projects, but also easily browsed by the end user using Windows Explorer. Clients and projects will be stored in a database. I am thinking of having the software create the folders using the friendly name, plus a hidden share with the id number for the software to use when accessing the files. The folder structure would be something like this:

        Company 1 (Company-1234$)
            Project 101 (Project-101$)
            Project 102 (Project-102$)
            Project 103 (Project-103$)
        Company 2 (Company-5678$)
            Project 201 (Project-201$)
            Project 202 (Project-202$)
            Project 203 (Project-203$)

    So in the example above there would be a company called "Company 1" with an id of "1234". When browsing the folders using Windows Explorer, the user would see \\ServerName\Documents\Company1, and you could also access the same folder from \\ServerName\Documents\Company-1234$. By using the hidden share, if the company name changes or the folder is renamed for some reason, the link in the application doesn't break, because the application uses the hidden share based on the id, which never changes.

    Will having hundreds (maybe thousands) of hidden shares on a server cause a huge performance hit? Does anyone have any suggestions or alternatives for providing this feature?

