Search Results

Search found 34131 results on 1366 pages for 'project documentation'.

Page 405 of 1366

  • StructureMap Wiring - Sanity Check Please

    - by Steve Ward
    Hi - I'm new to IoC and StructureMap. I have an n-level application and am looking at how to set up the wirings (ForRequestedType ...), and I just want to check with people with more experience that this is the best way of doing it. I don't want my UI application object to reference my persistence layer directly, so I am not able to wire everything up in the UI project. I now have it working by defining a Registry class in each project which wires up the types in that project as needed. Each layer registers its own types and also calls into the assembly below and looks for registries, so that all types are registered through the hierarchy. E.g. I have UI, Service, Domain, and Persistence libraries. In my service layer the registry looks like:

        Scan(x =>
        {
            x.Assembly("MyPersistenceProject");
            x.LookForRegistries();
        });
        ForRequestedType<IService>().TheDefault.Is.OfConcreteType<MyService>();

    Is this a recommended way of doing this in a setup such as this? Are there better ways, and what are the advantages/disadvantages of these approaches in this case?
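
    For reference, a minimal sketch of how the top of such a hierarchy might be bootstrapped against the StructureMap 2.x API the question uses; ObjectFactory.Initialize and the scanning calls are real StructureMap members, but the assembly name is illustrative:

        // UI project bootstrap: scan only the layer directly below and let
        // LookForRegistries() cascade registration down the stack, as the
        // poster describes.
        using StructureMap;

        public static class Bootstrapper
        {
            public static void Wire()
            {
                ObjectFactory.Initialize(init =>
                    init.Scan(scan =>
                    {
                        scan.Assembly("MyServiceProject"); // layer below the UI
                        scan.LookForRegistries();          // picks up each layer's Registry
                    }));
            }
        }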

  • Accessing HttpRequest from Global.asax via a page

    - by Polymorphix
    I'm trying to get a property (ImpersonatePersonId) from a page in Global.asax, but I get an HttpException saying 'Request is not available in this context'. I've been searching for documentation on where in the pipeline the request is accessible, but as far as I can see all Microsoft can produce is one-liners like "PostRequestHandlerExecute: Occurs when the ASP.NET event handler finishes execution.", which really doesn't give me much... I've tried placing the call in both PreRequestHandlerExecute and PostRequestHandlerExecute, but with the same result. I wonder if anyone with experience in this would be so kind as to tell me where the Request object is available. My code from Global.asax is below:

        ICanImpersonate page = HttpContext.Current.Handler as ICanImpersonate;
        ImpersonatedUser impersonatePerson = page != null ? page.ImpersonatePersonId : null;
        Response.Filter = new TagRewriter(Response, new TagProcessor(Context, impersonatePerson).Process);

    What I want to do is rewrite some HTML based on some request parameters.
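
    A sketch of one arrangement that avoids the exception: keep the code inside a per-request event and go through HttpContext.Current rather than the application's Request property, which throws outside the request lifecycle (most notoriously in Application_Start under IIS 7 integrated mode). ICanImpersonate, ImpersonatedUser, TagRewriter and TagProcessor are the poster's own types:

        protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;
            if (context == null || context.Handler == null)
                return; // e.g. static-file requests never get a page handler

            ICanImpersonate page = context.Handler as ICanImpersonate;
            ImpersonatedUser impersonatePerson =
                page != null ? page.ImpersonatePersonId : null;

            context.Response.Filter = new TagRewriter(
                context.Response,
                new TagProcessor(context, impersonatePerson).Process);
        }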

  • Are there any Python reference counting/garbage collection gotchas when dealing with C code?

    - by Jason Baker
    Just for the sheer heck of it, I've decided to create a Scheme binding to libpython so you can embed Python in Scheme programs. I'm already able to call into Python's C API, but I haven't really thought about memory management. The way mzscheme's FFI works is that I can call a function, and if that function returns a pointer to a PyObject, then I can have it automatically increment the reference count. Then, I can register a finalizer that will decrement the reference count when the Scheme object gets garbage collected. I've looked at the documentation for reference counting, and don't see any problems with this at first glance (although it may be sub-optimal in some cases). Are there any gotchas I'm missing? Also, I'm having trouble making heads or tails of the cyclic garbage collector documentation. What things will I need to bear in mind here? In particular, how do I make Python aware that I have a reference to something so it doesn't collect it while I'm still using it?
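
    A sketch of the ownership convention described above, in Python C API terms; the wrapper names are illustrative, but Py_XINCREF/Py_XDECREF are the real macros. It also answers the last question: holding a strong reference is exactly how you tell Python, including the cyclic collector, not to free an object you are still using.

        #include <Python.h>

        /* Called when a PyObject* crosses into Scheme: take a strong reference.
         * Beware: functions returning *borrowed* references must be INCREF'd too. */
        PyObject *scheme_take_ref(PyObject *obj)
        {
            Py_XINCREF(obj);   /* NULL-safe; object now survives Python-side releases */
            return obj;
        }

        /* Registered as the mzscheme finalizer: release the reference.
         * Must run with the GIL held, since it may trigger deallocation. */
        void scheme_release_ref(PyObject *obj)
        {
            Py_XDECREF(obj);
        }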

  • AnkhSVN: Cannot check out subsolution due to existing "versioned" folder

    - by lostiniceland
    Hello everyone. I have been using Subversion for quite some time for Java development, and I have set up a repository on my local NAS. Since I have an MSDN subscription via my company, I recently installed Visual Studio 2010 to do a small project with .NET. Following some "best practices", my project folder looks like the following:

        MySolution
            main.sln
            Services
                services.sln
                Service A
                    files
                Service A Test
                    files
            View
                project files
            Persistence
                persistence.sln
                PersistenceXml
                    files
                PersistenceXml Test
                    files
                PersistenceDB
                    files
                PersistenceDB Test
                    files

    The idea is that main.sln only contains the projects for the application, meaning no test projects. The subsolutions contain the project(s) and their corresponding test projects. I was able to put all those projects under version control with AnkhSVN, so I have the same structure there in my trunk. Committing changes was also no problem. Now I would like to check this out on another machine. I was able to check out main.sln, which downloaded everything inside that solution. It skipped services.sln, persistence.sln and all the test projects. Up to this point everything is fine. Now, here comes the problem: when I am trying to check out a subsolution (e.g. services.sln) I get an error; I think it was UnsupportedOperation. I guess this happens because AnkhSVN is trying to download the folder Service A again and create its hidden .svn folder, which is already present. The only workaround I can think of by now is installing TortoiseSVN and checking out the whole thing at once. It would be nicer, though, to have everything from within VS. Does anyone know how I can solve this? Is another client the only solution?

  • No Carbon Human Interface Toolbox in OS X 64-bit binaries?

    - by yairchu
    I get the impression that the Carbon Human Interface Toolbox does not work in 64-bit binaries. Apple's documentation says: "The Carbon Help Manager is not available to 64-bit applications. ... The Control Manager is not available to 64-bit applications. ... The Data Browser is not available to 64-bit applications." I just want to verify that there is no workaround for this, and that this is simply the case. And if so, why doesn't Apple's documentation simply state it as such?

  • Stream/string/bytearray transformations in Python 3

    - by Craig McQueen
    Python 3 cleans up Python's handling of Unicode strings. I assume that as part of this effort, the codecs in Python 3 have become more restrictive, according to the Python 3 documentation compared to the Python 2 documentation. For example, codecs that conceptually convert a bytestream to a different form of bytestream have been removed: base64_codec, bz2_codec, hex_codec. And codecs that conceptually convert Unicode to a different form of Unicode have also been removed (in Python 2 it actually went between Unicode and bytestream, but conceptually it's really Unicode to Unicode, I reckon): rot_13. My main question is: what is the "right way" in Python 3 to do what these removed codecs used to do? They're not codecs in the strict sense, but "transformations". But the interface and implementation would be very similar to codecs. I don't care about rot_13, but I'm interested to know what would be the "best way" to implement a transformation of line-ending styles (Unix line endings vs Windows line endings), which should really be a Unicode-to-Unicode transformation done before encoding to a byte stream, especially when UTF-16 is being used, as discussed in this other SO question.
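
    A sketch of the standard-library replacements, assuming Python 3.2+ (where the bytes-to-bytes and str-to-str transform codecs returned to the registry via codecs.encode/decode); the dedicated modules work on any Python 3 version:

        import base64, binascii, bz2, codecs

        data = b"hello"
        base64.b64encode(data)             # replaces base64_codec
        bz2.compress(data)                 # replaces bz2_codec
        binascii.hexlify(data)             # replaces hex_codec
        codecs.encode("hello", "rot_13")   # str-to-str, back in Python 3.2+

        # Line endings as a pure str-to-str transformation, done before
        # encoding so it is also correct for UTF-16 output:
        text = "one\r\ntwo\r\n"
        text.replace("\r\n", "\n").encode("utf-16")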

  • Are there any desktop WYSIWYG editors for MediaWiki available?

    - by Eye of Hell
    Hello. MediaWiki is very good, but for programming tasks editing it via the web is not very handy, since the WYSIWYG support is very limited, and pressing 'edit' + 'publish' on every small change and then waiting for the page to load is kind of annoying. I have seen a lot of desktop (personal) wikis that are free from such problems. The best example is WikidPad, which has a usage pattern of 'focus, edit wiki in place, minimize'. This is very handy for programming work, where you need to make small changes to the wiki and documentation during development, and documentation is written much more often than it is read :). But all such desktop wikis are personal - they don't have any wiki sharing (or only marginal support for it). So, is there perhaps a desktop application that can connect to MediaWiki and allows viewing and editing it via a rich WYSIWYG editor? Any hints are welcome.

  • Get an array of structures from a native DLL to a C# application

    - by PaulH
    I have a C# .NET 2.0 CF project where I need to invoke a method in a native C++ DLL. This native method returns an array of type TableEntry. At the time the native method is called, I do not know how large the array will be. How can I get the table from the native DLL to the C# project? Below is effectively what I have now.

        // in C# .NET 2.0 CF project
        [StructLayout(LayoutKind.Sequential)]
        public struct TableEntry
        {
            [MarshalAs(UnmanagedType.LPWStr)]
            public string description;
            public int item;
            public int another_item;
            public IntPtr some_data;
        }

        [DllImport("MyDll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Auto)]
        public static extern bool GetTable(ref TableEntry[] table);

        SomeFunction()
        {
            TableEntry[] table = null;
            bool success = GetTable( ref table );
            // at this point, the table is empty
        }

        // In native C++ DLL
        std::vector< TABLE_ENTRY > global_dll_table;

        extern "C" __declspec(dllexport) bool GetTable( TABLE_ENTRY* table )
        {
            table = &global_dll_table.front();
            return true;
        }

    Thanks, PaulH
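
    A sketch of one marshaling pattern that handles an unknown length: have the native export hand back a pointer and a count, and walk it with Marshal.PtrToStructure on the managed side. The signatures below are assumptions, not the poster's actual exports, and the native table must stay alive while the managed side reads it:

        // Native side (sketch): expose the vector's buffer and its length.
        //   extern "C" __declspec(dllexport)
        //   bool GetTable(TABLE_ENTRY** table, int* count)
        //   { *table = &global_dll_table.front();
        //     *count = (int)global_dll_table.size(); return true; }

        using System.Runtime.InteropServices;

        [DllImport("MyDll.dll")]
        static extern bool GetTable(out IntPtr table, out int count);

        static TableEntry[] ReadTable()
        {
            IntPtr ptr;
            int count;
            if (!GetTable(out ptr, out count))
                return new TableEntry[0];

            TableEntry[] entries = new TableEntry[count];
            int size = Marshal.SizeOf(typeof(TableEntry));
            for (int i = 0; i < count; i++)
            {
                // advance by element size and marshal one struct at a time
                IntPtr item = new IntPtr(ptr.ToInt64() + (long)i * size);
                entries[i] = (TableEntry)Marshal.PtrToStructure(item, typeof(TableEntry));
            }
            return entries;
        }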

  • How can I sign a Windows Mobile application for internal use?

    - by AR
    I'm developing a Windows Mobile application for internal company use, using the Windows Mobile 6 Professional SDK. Same old story: I've developed and tested on the emulator and all is well, but as soon as I deploy to a device I get an UnauthorizedAccessException when writing files or creating directories. I'm aware that an application installed to a device needs to be signed, but I'm running into roadblocks at every turn: Using the project properties 'Devices' window I select 'Sign the project output with this certificate' and choose one of the sample certificates from the SDK. This results in a build error: "The signer's certificate is not valid for signing" when running SignTool. If I try to run SignTool.exe from the command line, I get an error telling me to run SignTool.exe from a location in the system's PATH. I can't use the 'Signing' tab in the project properties to create a test certificate - it is greyed out (presumably for Windows Mobile projects?). If at all possible, I would like to avoid having to go through VeriSign or the like to get a Mobile2Market certificate. If I have to go that route for the final version that's fine, but I need to at least be able to test the app on real devices. Any advice would be most welcome!
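
    One possible route, sketched from the standard desktop signing tools: generate a self-signed test certificate, sign the executable with it, and provision the .cer onto the device's certificate store so the signature is trusted. Names and paths are illustrative, and whether your SDK ships these exact tool versions is an assumption to verify:

        rem Create a self-signed test certificate (makecert ships with the SDKs)
        makecert -r -pe -n "CN=InternalTest" -sv internal.pvk internal.cer

        rem Bundle key + certificate into a PFX, then sign the binary
        pvk2pfx -pvk internal.pvk -spc internal.cer -pfx internal.pfx
        signtool sign /f internal.pfx MyMobileApp.exe

        rem internal.cer must also be installed on the device itself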

  • Visual Studio erroneous errors when building a website?

    - by Curtis White
    Visual Studio 2008 shows a lot of erroneous errors in the error list when building a website (not a web project). These errors are usually corrected (removed) when I rebuild the site a couple of times, but they cost me time. Is there any way to hide the erroneous errors? Update: I've decided to look into this to see if I could reproduce it. This is the exact behavior I am seeing: using the website model, I type some invalid syntax on a page. The error list fills up with errors. I correct the error and the error list does not update. I build the project and the error list still shows the errors, but the build reports that it completed. I build the project a second time and the error list is cleared. My question is: is there any way to make the error list clear on the first build? I thought it might have something to do with page build vs. website build, but it seems to make no difference. I am not using any third-party DLLs on this website.

  • Tab completion and doc view while editing rc.lua

    - by Berxue
    I saw that there is a Lua plugin for Eclipse, and there is a docs page on the awesome main page (api_doc), plus all the .lua files in /usr/share/awesome/lib. So I thought it must be possible to create a Library or Execution Environment so that one has tab completion and doc view. So I tried making my own Execution Environment: I wrote the standard .rockspec file, downloaded the documentation, made an offline version of it and put it in the docs/ folder, zipped the files and folders in /usr/share/awesome/lib, zipped it all up, and tried it out... and it failed. When I try to view the documentation for a .lua file I get "Note: This element has no attached documentation." Questions: Am I totally wrong in my approach (because I have the feeling I am)? Is there any way to edit rc.lua with tab completion and doc view?

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so the test project definitely depends on it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since it shares the caller's memory space, whereas communication between separate executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and then the projects that use the SDK be compiled to executables?

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using precompiled headers I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

  • How to merge/crosslink Javadoc?

    - by Tom Wheeler
    If you have the standard Javadoc for a few different projects, how can you process them to create a single unified set of documentation in which everything is cross-linked? Ideally, the result would be similar to the documentation for the various modules in the NetBeans Platform: http://bits.netbeans.org/dev/javadoc/index.html but I've looked at their build scripts and they're predicated on you building everything from source. I'm looking for something which could also handle linking in Javadoc for third-party libraries, so I'd imagine it would need to be a post-processing operation. I can't be the first person to ever want this. Any ideas?
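
    A sketch of the usual mechanism, in case it helps: the stock javadoc tool cross-links against already-generated documentation through -link/-linkoffline, which only need the target's package-list file - so third-party Javadoc can be linked without rebuilding it from source. Paths and package names here are illustrative:

        javadoc -d docs/combined \
                -sourcepath projectA/src:projectB/src \
                -subpackages com.example \
                -link https://docs.oracle.com/javase/8/docs/api/ \
                -linkoffline http://thirdparty.example/api ./thirdparty-docs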

  • System.Interactive: Difference between Memoize() and MemoizeAll()?

    - by Joel Mueller
    In System.Interactive.dll (v1.0.2521.0) from Reactive Extensions, EnumerableEx has both a Memoize method and a MemoizeAll method. The API documentation is identical for both of them: Creates an enumerable that enumerates the original enumerable only once and caches its results. However, these methods are clearly not identical. If I use Memoize, my enumerable has values the first time I enumerate it, and seems to be empty the second time. If I use MemoizeAll then I get the behavior I would expect from the description of either method - I can enumerate the result as many times as I want and get the same results each time, but the source is only enumerated once. Can anyone tell me what the intended difference between these methods is? What is the use-case for Memoize? It seems like a fairly useless method with really confusing documentation.
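
    A sketch of the difference as it shows up in code, matching the behavior described above and assuming the System.Interactive API named in the question (Do is its side-effect operator): Memoize only buffers items until the active enumerators have consumed them, while MemoizeAll retains the full history for enumerators created later:

        // requires System.Interactive.dll (EnumerableEx extensions)
        var source = Enumerable.Range(1, 3).Do(x => Console.WriteLine("produced " + x));

        var once = source.Memoize();
        foreach (var x in once) Console.Write(x);  // produces and prints 1 2 3
        foreach (var x in once) Console.Write(x);  // prints nothing: items were released

        var all = source.MemoizeAll();
        foreach (var x in all) Console.Write(x);   // produces and prints 1 2 3
        foreach (var x in all) Console.Write(x);   // prints 1 2 3 again, from the cache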

  • How to create an Automation Add-In formula/function and Excel Add-In (VSTO) buttons for it together?

    - by ticky
    OK, let me explain it a little better. Here is one example of how to create a formula/function: http://blogs.msdn.com/b/eric_carter/archive/2004/12/01/273127.aspx?PageIndex=1#comments I implemented something like that, and I even added values in the registry so that this Automation Add-In doesn't have to be added manually in Excel, but automatically. I created a setup project for this project and it works GREAT. Then, after some time, I wanted to create buttons in Excel for the functions that I use. Those are custom functions, using some web services. I created an Excel Add-In and added a ribbon with buttons - one button = one custom function. I can publish this project and create a VSTO installer, so this way I can install the Excel ribbon buttons in a custom group of mine. Now I have two installations: the first for the Automation Add-In and the second for the Excel Add-In. How can I connect them? I tried to include the VSTO in the setup - something like this: [I WILL ADD IT LATER] When I install it, it works great, it installs both parts. But when I install on my friend's computer, it doesn't show the ribbon buttons. What could be the problem? If there is some other way to integrate these two, I would be very grateful! Thanks! Tijana

  • Apache HttpClient making multipart form post

    - by Russ
    I'm pretty green to HttpClient, and I'm finding the lack of (and/or blatantly incorrect) documentation extremely frustrating. I'm trying to implement the following post (listed below) with Apache HttpClient, but have no idea how to actually do it. I'm going to bury myself in documentation for the next week, but perhaps more experienced HttpClient coders could get me an answer sooner. Post:

        Content-Type: multipart/form-data; boundary=---------------------------1294919323195
        Content-Length: 502

        -----------------------------1294919323195
        Content-Disposition: form-data; name="number"

        5555555555
        -----------------------------1294919323195
        Content-Disposition: form-data; name="clip"

        rickroll
        -----------------------------1294919323195
        Content-Disposition: form-data; name="upload_file"; filename=""
        Content-Type: application/octet-stream

        -----------------------------1294919323195
        Content-Disposition: form-data; name="tos"

        agree
        -----------------------------1294919323195--
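
    A sketch of how this capture might be reproduced with HttpClient 4.x's mime module (httpmime); the part names come from the post above, while the URL and file name are illustrative:

        import java.io.File;
        import org.apache.http.client.methods.HttpPost;
        import org.apache.http.entity.mime.MultipartEntity;
        import org.apache.http.entity.mime.content.FileBody;
        import org.apache.http.entity.mime.content.StringBody;
        import org.apache.http.impl.client.DefaultHttpClient;

        public class MultipartPost {
            public static void main(String[] args) throws Exception {
                HttpPost post = new HttpPost("http://example.com/upload");

                // MultipartEntity generates the boundary and headers itself
                MultipartEntity entity = new MultipartEntity();
                entity.addPart("number", new StringBody("5555555555"));
                entity.addPart("clip", new StringBody("rickroll"));
                entity.addPart("upload_file", new FileBody(new File("clip.bin")));
                entity.addPart("tos", new StringBody("agree"));

                post.setEntity(entity);
                new DefaultHttpClient().execute(post);
            }
        }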

  • J2EE: Default values for custom tag attributes

    - by Nick
    So according to Sun's J2EE documentation (http://docs.sun.com/app/docs/doc/819-3669/bnani?l=en&a=view), "If a tag attribute is not required, a tag handler should provide a default value." My question is: how in the hell do I define a default value per the documentation's description? Here's the code:

        <%@ attribute name="visible" required="false" type="java.lang.Boolean" %>
        <c:if test="${visible}">
            My Tag Contents Here
        </c:if>

    Obviously, this tag won't compile because it's lacking the tag directive and the core library import. My point is that I want the "visible" attribute to default to TRUE. The "tag attribute is not required," so the "tag handler should provide a default value." I want to provide a default value, so what am I missing? Any help is greatly appreciated.
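
    A sketch of one way to default the attribute inside the tag file itself, using JSTL and an EL ternary to treat an unset value as true; the surrounding directives are filled in as assumptions about the poster's tag file:

        <%@ tag body-content="scriptless" %>
        <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
        <%@ attribute name="visible" required="false" type="java.lang.Boolean" %>

        <%-- No explicit "default" syntax exists here, so supply one by hand --%>
        <c:set var="visible" value="${empty visible ? true : visible}" />
        <c:if test="${visible}">
            My Tag Contents Here
        </c:if>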

  • Is MS Reporting Services suitable for stand-alone reports?

    - by JMarsch
    Hello all: I work for an ISV. Our product can use both SQL Server and Oracle as its back-end server. It includes a number of reports (currently in Crystal). We are investigating moving to Microsoft Reporting Services, but I'm beginning to think that it's a bad idea. We want our reports to look and feel as though they are a part of our application, and we cannot require SQL Server (the customer can choose Oracle). Although I see that Reporting Services supports a stand-alone mode (RDLC), the boundary between what requires SQL Server and what doesn't looks extremely ambiguous. (For example, the stand-alone report builder appears to require SQL Server, and most of the documentation appears to be part of SQL Server's documentation.) It looks to me like if I want to keep my application DB-agnostic, I had better steer clear of Reporting Services. Have I missed the boat here?

  • Is the REST support in Spring 3's MVC Framework production quality yet?

    - by glenjohnson
    Hi all, since Spring 3 was released in December last year, I have been trying out the new REST features in the MVC framework for a small commercial project involving a few RESTful web services which consume XML and return XML views using JiBX. I plan to use either Hibernate or JDBC templates for the data persistence. As a Spring 2.0 developer, I have found Spring 3's (and 2.5's) new annotation-driven way of doing things quite a paradigm shift, and I have personally found some of the new MVC annotation features difficult to get up to speed with for non-trivial applications - as such, I am often having to dig for information in forums and blogs that is not apparent from the reference guide or from the various Spring 3 REST examples on the web. For deadline-driven, production-quality and mission-critical applications implementing a RESTful architecture, should I be holding off from Spring 3 and rather be using mature JSR 311 (JAX-RS) compliant frameworks like Restlet or Jersey for the REST layer of my code (together with Spring 2 / 2.5 to tie things together)? I had no problems using Restlet 1.x in a previous project and it was quite easy to get up to speed with (no magic tricks behind the scenes), but when starting my current project it initially looked like the new REST support in Spring 3's MVC framework would make life easier. Do any of you out there have any advice to give on this? Does anyone know of any commercial / production-quality projects using, or having successfully delivered with, the new REST support in Spring 3's MVC framework? Many thanks, Glen

  • AnyCPU/x86/x64 for a C# application and its C++/CLI dependency

    - by Soonts
    I'm a Windows developer using Microsoft Visual Studio 2008 SP1 on a 64-bit developer machine. The software I'm currently working on is a managed .exe written in C#. Unfortunately, I was unable to solve the whole problem solely in C#, so I also developed a small managed DLL in C++/CLI. Both projects are in the same solution. My C# .exe build target is "Any CPU". When my C++ DLL build target is "x86", the DLL is not loaded. As far as I understood when I googled, the reason is that C++/CLI, unlike other .NET languages, compiles to native code, not managed code. I switched the C++ DLL build target to x64, and everything works now. However, AFAIK everything will stop working as soon as a client installs my product on a 32-bit OS. I have to support Windows Vista and 7, both the 32- and 64-bit versions of each. I don't want to fall back to 32 bits. Those 250 lines of C++ code in my DLL are only 2% of my codebase, and that DLL is only used in several places, so in the typical usage scenario it's not even loaded. My DLL implements two COM objects with ATL, so I can't use the "/clr:safe" project setting. Is there a way to configure the solution and the projects so that the C# project builds an "Any CPU" version, the C++ project builds both 32-bit and 64-bit versions, and then at runtime, when the managed .exe is starting up, it uses either the 32-bit or the 64-bit DLL depending on the OS? Or maybe there's some better solution I'm not aware of? Thanks in advance!
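
    One established pattern, sketched under the assumption that the C++/CLI project is built twice into x86/ and x64/ subfolders next to the exe: hook AssemblyResolve in the AnyCPU exe and pick the right copy by pointer size. The assembly name is illustrative, and no type from the helper may be touched before the hook is registered:

        using System;
        using System.IO;
        using System.Reflection;

        static class Program
        {
            static void Main()
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, e) =>
                {
                    if (!e.Name.StartsWith("MyCppCliHelper"))
                        return null;
                    string arch = IntPtr.Size == 8 ? "x64" : "x86"; // 8 bytes => 64-bit
                    string path = Path.Combine(
                        AppDomain.CurrentDomain.BaseDirectory,
                        Path.Combine(arch, "MyCppCliHelper.dll"));
                    return Assembly.LoadFile(path);
                };
                Run(); // all helper usage lives outside Main, after the hook
            }

            static void Run() { /* application entry point proper */ }
        }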

  • Documenting preprocessor defines in Doxygen

    - by Fire Lancer
    Is it possible to document preprocessor defines in Doxygen? I expected to be able to do it just like a variable or function; however, the Doxygen output appears to have "lost" the documentation for the define, and does not contain the define itself either. I tried the following:

        /** My preprocessor macro. */
        #define TEST_DEFINE(x) (x*x)

    and:

        /** @def TEST_DEFINE
         *  My preprocessor macro.
         */
        #define TEST_DEFINE(x) (x*x)

    I also tried putting them within a group (I tried defgroup, addtogroup and ingroup) rather than just at file scope, but that had no effect either (although other items in the group were documented as intended). I looked through the various Doxygen options, but couldn't see anything that would enable (or prevent) the documentation of defines.
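
    For what it's worth, a sketch of the usual culprit: unless EXTRACT_ALL is enabled, Doxygen drops file-scope members (defines included) from any file that doesn't itself carry a \file documentation block, so adding one typically brings the macro back:

        /** @file test_defines.h
         *  Required so Doxygen emits this file's file-scope members.
         */

        /** My preprocessor macro; squares its argument. */
        #define TEST_DEFINE(x) (x*x)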

  • Help me: my DataSet XSD content is gone

    - by Mustafa Magdy
    I'm working on an ERP project using Visual Studio 2008 SP1. I have a typed dataset that contained a lot of DataTables and TableAdapters; the .Designer.cs file was 8 MB. Suddenly, when I tried to open it in the Visual Studio designer, the following is all that was left:

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="erpDataSet" targetNamespace="http://tempuri.org/erpDataSet1.xsd"
            xmlns:mstns="http://tempuri.org/erpDataSet1.xsd"
            xmlns="http://tempuri.org/erpDataSet1.xsd"
            xmlns:xs="http://www.w3.org/2001/XMLSchema"
            xmlns:msdata="urn:schemas-microsoft-com:xml-msdata"
            xmlns:msprop="urn:schemas-microsoft-com:xml-msprop"
            attributeFormDefault="qualified" elementFormDefault="qualified">
          <xs:annotation>
            <xs:appinfo source="urn:schemas-microsoft-com:xml-msdatasource">
              <DataSource DefaultConnectionIndex="0" FunctionsComponentName="QueriesTableAdapter"
                  Modifier="AutoLayout, AnsiClass, Class, Public"
                  SchemaSerializationMode="IncludeSchema"
                  xmlns="urn:schemas-microsoft-com:xml-msdatasource">
                <Connections>
                  <Connection AppSettingsObjectName="Settings"
                      AppSettingsPropertyName="erpConnectionString"
                      IsAppSettingsProperty="true" Modifier="Assembly"
                      Name="erpConnectionString (Settings)" ParameterPrefix="@"
                      PropertyReference="ApplicationSettings.Sbic.Pro

    My XSD file content is gone :( :( :( I don't understand why, or how I can recover it. I did make one change that may or may not be the reason: my connection string was in the settings (app.config); I removed it and added it to the resources of another project that is referenced by the main project. What can I do? Please help me.

  • Gendarme Rules Customisation

    - by Apogee
    Does anyone know the correct way to explicitly specify which rules Gendarme will use, or which rules to exclude? I'm not having a lot of joy searching the Mono documentation for the answer. What I'm trying to do is to specify the rules one by one in the Gendarme rules.xml file, like this:

        <rules include="AvoidAssemblyVersionMismatchRule" from="Gendarme.Rules.BadPractice.dll"/>

    Doing this, I'm hoping we can then switch off the rules we don't care about. The problem is, after specifying all the rules in this way, I'm getting a different number of defects detected compared with when I use the default method Gendarme provides, which is of the form:

        <rules include="*" from="Gendarme.Rules.BadPractice.dll"/>
        <rules include="*" from="OTHER DLL NAMES"/>

    Has anyone done this before? Or can anyone point me in the direction of some Gendarme rules usage documentation?
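
    If it helps as a starting point, a sketch of the exclusion-based approach; whether your Gendarme build honours an exclude attribute on the rules element is an assumption to verify against your version, and the rule name is illustrative:

        <gendarme>
          <ruleset name="default">
            <!-- include everything, then opt out of specific rules (assumed syntax) -->
            <rules include="*" from="Gendarme.Rules.BadPractice.dll"
                   exclude="AvoidAssemblyVersionMismatchRule" />
          </ruleset>
        </gendarme>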

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch (git fetch), then I switch to the chat branch:

        git checkout --track -b chat origin/chat

    and I even do a pull (git pull), just to make sure everything is up to date, and my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   app/controllers/chat_controller.rb
        #       modified:   app/views/dashboard/index.html.erb
        #       modified:   app/views/dashboard/layout.js.erb
        #       modified:   app/views/layouts/dashboard.html.erb
        #       deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #       deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #       modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       .loadpath
        #       .project
        #       config/database.yml
        #       config/environments/development.yml
        #       config/environments/production.yml
        #       config/environments/test.yml
        #       log/
        no changes added to commit (use "git add" and/or "git commit -a")
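
    Reading the status output above, the usual explanation for this symptom is that the edits were never staged, so the commit (and therefore the push) didn't contain them. A sketch of the commands on the machine where the changes were made; the branch name comes from the question:

        git checkout chat
        git add -A                  # stage the modified and deleted files
        git commit -m "chat work"
        git push origin chat

        # then, on the server clone:
        git checkout chat
        git pull origin chat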
