Search Results

Search found 27332 results on 1094 pages for 'visual studio 2010 beta 2'.


  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using the JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, otherwise the same response). Apparently I need to dictate the exact Content-Type header for Exchange to be happy. Is there a way for me to do this without being forced to manually rebuild the dependency? I currently rely on published Maven artifacts and would like to continue doing so if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard http/https client from the Sun JRE5 or JRE6.


  • "The breakpoint will not currently be hit. The source code is different from the original version."

    - by David
    Hi everyone, I'm really hoping someone can help me out with this one. When debugging in Visual Studio, sometimes I add a breakpoint but it's hollow, and VS says "The breakpoint will not currently be hit. The source code is different from the original version." Obviously this prevents me from being able to debug. What on earth does the message mean? What original version? If I've just opened up the solution and not made any changes whatsoever to the code, how can there be an 'original version'? The appearance of this problem seems totally arbitrary. It's just Visual Studio going 'Na na na na na, I'm not going to debug for you today'. Can anyone give any advice? Ta, David


  • Debugger does not break when debugging PowerShell console

    - by Adam Driscoll
    I'm developing a binary PowerShell module. I have a post-build event that copies the module into the 'modules' directory in my "Documents\WindowsPowerShell" folder. I then have the project set to launch PowerShell.exe. My module is loaded via Import-Module and off I go. The problem is that my breakpoints are never hit and the debugger does not break on exceptions. If I run PowerShell outside of Visual Studio and then attach the debugger to the process, I can break just fine. The other strange thing is that my breakpoints are not hollow; typically they would be if different source versions were loaded. I'm running Visual Studio 2010 on a Win 7 box. My module currently targets .NET 3.5. I've tried running both the x64 and x86 versions of PS with no luck.
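
    One workaround that can help in this kind of setup is to force the debugger to attach from inside the binary module itself. The sketch below is not from the original post and the Test-Break cmdlet name is hypothetical; Debugger.Launch() makes Visual Studio attach to the PowerShell process that actually loaded the module, after which breakpoints normally bind.

        using System.Diagnostics;
        using System.Management.Automation;

        namespace DebugSketch
        {
            // Hypothetical cmdlet used only to illustrate forcing a debugger attach
            // from inside a binary module: Import-Module the assembly, then run Test-Break.
            [Cmdlet(VerbsDiagnostic.Test, "Break")]
            public class TestBreakCommand : PSCmdlet
            {
                protected override void ProcessRecord()
                {
                    if (!Debugger.IsAttached)
                    {
                        Debugger.Launch();   // shows the "choose a debugger" prompt
                    }
                    Debugger.Break();        // breaks here once a debugger is attached
                    WriteObject("Debugger attached: " + Debugger.IsAttached);
                }
            }
        }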


  • Custom Build Step Paths Between x86 and x64 in Visual Studio

    - by Bob Somers
    For reference, I'm using Visual Studio 2010. I have a custom build step defined as follows:

        if exist "$(TargetDir)"server.dll copy "$(TargetDir)"server.dll "c:\program files (x86)\myapp\server.dll"

    This works great on my desktop, which is running 64-bit Windows. However, when I build on my laptop, c:\Program Files (x86)\ doesn't exist because it's running 32-bit Windows. I'd like to put in something that will work between both editions of Windows, since the project files are under version control and it's a real pain to change the paths every time I work on my laptop. If this were a *nix environment I'd just create a symlink and be done with it. Any ideas?


  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up the workflow for multiple projects at work, and I was wondering what it would be like if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Supposing a pair of applications, studio and player, and dependent libraries core, graph and network, the dependencies are as follows:

        core is standalone
        graph depends on core (sub-module at ./libs/core)
        network depends on core (sub-module at ./libs/core)
        studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
        player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the rest. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, this gives me a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and the standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

        Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
        Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
        Introduce an extra dependency: studio on core.

    Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode. The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?


  • Error while uploading file method in Client Object Model Sharepoint 2010

    - by user1481570
    I get an error while uploading a file through the Client Object Model in SharePoint 2010. The code compiles with no errors, but when the file is uploaded I get the following at execution time: {"Value does not fall within the expected range."} {System.Collections.Generic.SynchronizedReadOnlyCollection}. I have a method which takes care of the upload:

        public void Upload_Click(string documentPath, byte[] documentStream)
        {
            String sharePointSite = "http://cvgwinbasd003:28838/sites/test04";
            String documentLibraryUrl = sharePointSite + "/" + documentPath.Replace('\\', '/');

            // Get the document list
            List documentsList = clientContext.Web.Lists.GetByTitle("Doc1");
            var fileCreationInformation = new FileCreationInformation();
            // Assign the content bytes, i.e. documentStream
            fileCreationInformation.Content = documentStream;
            // Allow overwrite of the document
            fileCreationInformation.Overwrite = true;
            // Upload URL
            fileCreationInformation.Url = documentLibraryUrl;
            Microsoft.SharePoint.Client.File uploadFile =
                documentsList.RootFolder.Files.Add(fileCreationInformation);
            //uploadFile.ListItemAllFields.Update();
            clientContext.ExecuteQuery();
        }

    In the MVC 3.0 application, I have defined the following controller action to invoke the upload method:

        public ActionResult ProcessSubmit(IEnumerable<HttpPostedFileBase> attachments)
        {
            System.IO.Stream uploadFileStream = null;
            byte[] uploadFileBytes;
            int fileLength = 0;
            foreach (HttpPostedFileBase fileUpload in attachments)
            {
                uploadFileStream = fileUpload.InputStream;
                fileLength = fileUpload.ContentLength;
            }
            uploadFileBytes = new byte[fileLength];
            uploadFileStream.Read(uploadFileBytes, 0, fileLength);
            using (DocManagementService.DocMgmtClient doc = new DocMgmtClient())
            {
                doc.Upload_Click("Doc1/Doc2/Doc2.1/", uploadFileBytes);
            }
            return RedirectToAction("SyncUploadResult");
        }

    Please help me locate the error.
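
    For comparison (a sketch, not a diagnosis of the error above), the 2010 CSOM also offers Microsoft.SharePoint.Client.File.SaveBinaryDirect, which streams a file to a server-relative URL instead of going through FileCreationInformation; the site URL and target path below are placeholders.

        using System.IO;
        using Microsoft.SharePoint.Client;

        class UploadSketch
        {
            static void Upload(byte[] content)
            {
                // Placeholder site URL; adjust to the real site collection.
                using (var ctx = new ClientContext("http://cvgwinbasd003:28838/sites/test04"))
                using (var stream = new MemoryStream(content))
                {
                    // SaveBinaryDirect expects a server-relative URL to the target file
                    // (placeholder path shown), not an absolute http:// URL.
                    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
                        ctx, "/sites/test04/Doc1/Doc2/Doc2.1/file.docx", stream, true);
                }
            }
        }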


  • Upload An Excel File in Classic ASP On Windows 2003 x64 Using Office 2010 Drivers

    - by alphadogg
    So, we are migrating an old web app from a 32-bit server to a newer 64-bit server. The app is basically a Classic ASP app. The pool is set to run in 64-bit and cannot be set to 32-bit due to other components. However, this breaks the old usage of the Jet drivers and the subsequent parsing of Excel files. After some research, I downloaded the 64-bit version of the new 2010 Office System Driver Beta and installed it. Presumably, this allows one to open and read Excel and CSV files. Here's the snippet of code that errors out; I think I followed the lean guidelines on the download page:

        Set con = Server.CreateObject("ADODB.Connection")
        con.ConnectionString = "Provider=Microsoft.ACE.OLEDB.14.0;Data Source=" & strPath & ";Extended Properties=""Excel 14.0;"""
        con.Open

    Any ideas why? UPDATE: My apologies, I did forget the important part, the error message:

        ADODB.Connection error '800a0e7a'
        Provider cannot be found. It may not be properly installed.
        /vendor/importZipList2.asp, line 56

    I have installed, and uninstalled/reinstalled twice.


  • Password Cracking in 2010 and Beyond

    - by mttr
    I have looked a bit into cryptography and related matters during the last couple of days and am pretty confused by now. I have a question about password strength and am hoping that someone can clear up my confusion by sharing how they think through the following questions. I am becoming obsessed with these things, but need to spend my time otherwise :-) Let's assume we have an eight-character password that consists of upper- and lower-case alphabetic characters, numbers and common symbols. This means we have 96^8 ~= 7.2 quadrillion different possible passwords. As I understand it, there are at least two approaches to breaking this password. One is to try a brute-force attack where we try to guess each possible combination of characters. How many passwords can modern processors (in 2010, a Core i7 Extreme for example) guess per second (how many instructions does a single password guess take, and why)? My guess would be that it takes a modern processor on the order of years to break such a password. Another approach would consist of obtaining a hash of my password as stored by the operating system and then searching for collisions. Depending on the type of hash used, we might get the password a lot quicker than by the brute-force attack. A number of questions about this: Is the assertion in the above sentence correct? How do I think about the time it takes to find collisions for MD4, MD5, etc. hashes? Where does my Snow Leopard store my password hash and what hashing algorithm does it use? And finally, regardless of the strength of file encryption using AES-128/256, the weak link is still my en/decryption password. Even if breaking the ciphered text would take longer than the lifetime of the universe, a brute-force attack on my de/encryption password (guess password, then try to decrypt the file, try the next password...) might succeed a lot earlier than the end of the universe. Is that correct? I would be very grateful if people could have mercy on me and help me think through these probably simple questions, so that I can get back to work.
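
    As a rough illustration of the brute-force arithmetic above, the sketch below assumes a round guess rate of 10^9 per second; real rates depend heavily on the hash in use, so treat the numbers as an example, not a benchmark.

        using System;

        class BruteForceEstimate
        {
            static void Main()
            {
                double alphabet = 96;   // upper, lower, digits, common symbols
                double length = 8;
                double keySpace = Math.Pow(alphabet, length);   // 96^8 ~= 7.2e15

                // Assumed guess rate; real rates vary enormously with the hash in use.
                double guessesPerSecond = 1e9;

                double worstCaseSeconds = keySpace / guessesPerSecond;
                Console.WriteLine("Key space:    {0:E2} passwords", keySpace);
                Console.WriteLine("Worst case:   {0:N0} days", worstCaseSeconds / 86400);
                Console.WriteLine("Average case: {0:N0} days", worstCaseSeconds / 2 / 86400);
            }
        }

    At the assumed 10^9 guesses per second the worst case is roughly 83 days; at 10^7 guesses per second it is on the order of decades, which is why the answer depends so heavily on the hashing scheme.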


  • Report Builder Error (Input string was not in a correct format) when filtering data

    - by JVZ
    Issue: the user is getting an error, "The requested list could not be retrieved because the query is not valid or a connection could not be made to the data source.", while trying to filter a report using Report Builder 2.0. When you expand the details, the message is "Input string was not in a correct format". I verified that when the user clicks to see the filter values, the query is being sent to the underlying database (by using Profiler). When I build the same report [I'm a server admin] I have no issues and see all the filtered values, which leads me to believe this is a permissions issue. I believe I granted all the correct permissions, see below.

    Report Server setup:

        [Project Name]
            Data Sources
            Models
            [...other folders]

    We are using SQL Server 2005. The user belongs to a power-user domain group which has the following permissions:

        Home:                Report Builder, View Folders, Browser
        Home/[Project Name]: Browser, Report Builder, View Folders
        [Data Sources]:      Browser, Content Developer
        [Models]:            [Inherit roles from parent folder] << should be the same as the [Project Name] folder

    The report server log is showing the following:

        w3wp!library!5!04/21/2010-08:47:27:: Call to GetPermissionsAction(/).
        w3wp!library!a!04/21/2010-08:47:27:: Call to GetPropertiesAction(/, PathBased).
        w3wp!library!5!04/21/2010-08:47:27:: Call to GetSystemPermissionsAction().
        w3wp!library!a!04/21/2010-08:47:27:: Call to ListChildrenAction(/, False).
        w3wp!library!a!04/21/2010-08:47:27:: Call to GetSystemPropertiesAction().
        w3wp!library!a!04/21/2010-08:47:27:: Call to GetSystemPropertiesAction().
        w3wp!library!9!04/21/2010-08:47:40:: Call to ListModelPerspectivesAction().
        w3wp!library!9!04/21/2010-08:48:22:: Call to GetUserModelAction(//Models/, ).
        w3wp!library!5!04/21/2010-08:48:45:: Call to GetItemTypeAction(/).
        w3wp!library!9!04/21/2010-08:48:46:: Call to GetItemTypeAction(/).
        w3wp!library!5!04/21/2010-08:48:53:: Call to GetItemTypeAction(/).
        w3wp!library!a!04/21/2010-08:49:21:: Call to ListModelPerspectivesAction().
        w3wp!library!b!04/21/2010-08:53:40:: Call to GetUserModelAction(//Models/, ).
        w3wp!library!5!04/21/2010-08:54:48:: i INFO: Call to RenderFirst( '' )
        w3wp!webserver!5!04/21/2010-08:54:48:: i INFO: Processed report. Report='/', Stream=''
        w3wp!library!9!04/21/2010-08:56:42:: Call to GetItemTypeAction(/).
        w3wp!library!9!04/21/2010-08:56:42:: Call to GetItemTypeAction(/).
        w3wp!library!a!04/21/2010-08:56:50:: Call to GetItemTypeAction(/).

    Any ideas on how to resolve/troubleshoot this issue?


  • Linking IronPython to WPF

    - by DonnyD
    I just installed VS2010 and the great new IronPython Tools extension. Currently this extension doesn't yet generate event handlers in code upon double-clicking WPF visual controls. Can anyone provide, or point me to, an example of how to code WPF event handlers manually in Python? I've had no luck finding any, and I am new to Visual Studio. Upon generating a new IronPython WPF project, the auto-generated code is:

        import clr
        clr.AddReference('PresentationFramework')
        from System.Windows.Markup import XamlReader
        from System.Windows import Application
        from System.IO import FileStream, FileMode

        app = Application()
        app.Run(XamlReader.Load(FileStream('WpfApplication7.xaml', FileMode.Open)))

    and the XAML is:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="WpfApplication7" Height="300" Width="300">
            <Button>Click Me</Button>
        </Window>

    Any help would be appreciated.


  • Using the Search API with Sharepoint Foundation 2010 - 0 results

    - by MB
    I am a SharePoint newbie and am having trouble getting any search results back using the search API in SharePoint 2010 Foundation. Here are the steps I have taken so far:

        The service "SharePoint Foundation Search v4" is running and logged in as Local Service.
        Under Team Site - Site Settings - Search and Offline Availability, Indexing Site Content is enabled.
        Running the PowerShell cmdlet Get-SPSearchServiceInstance returns:

            TypeName      : SharePoint Foundation Search
            Description   : Search index file on the search server
            Id            : 91e01ce1-016e-44e0-a938-035d37613b70
            Server        : SPServer Name=V-SP2010
            Service       : SPSearchService Name=SPSearch4
            IndexLocation : C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Data\Applications
            ProxyType     : Default
            Status        : Online

    When I do a search using the search textbox on the team site I get results as I would expect. Now, when I try to duplicate the search results using the search API, I either receive an error or 0 results. Here is some sample code:

        using Microsoft.SharePoint.Search.Query;

        using (var site = new SPSite(_sharepointUrl, token))
        {
            FullTextSqlQuery fullTextSqlQuery = new FullTextSqlQuery(site)
            {
                QueryText = String.Format(
                    "SELECT Title, SiteName, Path FROM Scope() WHERE \"scope\"='All Sites' AND CONTAINS('\"{0}\"')",
                    searchPhrase),
                //QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope()", searchPhrase),
                TrimDuplicates = true,
                StartRow = 0,
                RowLimit = 200,
                ResultTypes = ResultType.RelevantResults
                //IgnoreAllNoiseQuery = false
            };

            ResultTableCollection resultTableCollection = fullTextSqlQuery.Execute();
            ResultTable result = resultTableCollection[ResultType.RelevantResults];
            DataTable tbl = new DataTable();
            tbl.Load(result, LoadOption.OverwriteChanges);
        }

    When the scope is set to All Sites I receive an error about the search scope not being available. Other searches just return 0 results. Any ideas about what I am doing wrong?
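
    As a small usage sketch (not part of the original question), the DataTable loaded above can be consumed like this once results come back; the column names match the SELECT list in the query.

        using System;
        using System.Data;

        static class ResultPrinter
        {
            // Prints the Title and Path columns of every row returned by the query.
            public static void Print(DataTable results)
            {
                foreach (DataRow row in results.Rows)
                {
                    Console.WriteLine("{0} -> {1}", row["Title"], row["Path"]);
                }
            }
        }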


  • How to install Eclipse + PHP Development Tools (PDT) + Debugger on Mac in The Year 2010

    - by aphex5
    I had a lot of trouble installing Eclipse and PDT on my system. It took two days, largely because all the tutorials I could find were out of date (written in 2008; it's 2010 now) and various steps they included were no longer necessary, broken, or irrelevant. I wanted to write my process up here so it could be improved upon (via wiki) as time goes on.

    1. Install Eclipse without the PHP plugin ("Eclipse Classic"). This will give you a complete Eclipse, which I find preferable, as the UI is more fleshed out (e.g. you get a default list of Perspectives, which helps you understand what Perspectives are).
    2. Install the PDT SDK with the Help > Install New Software menu item. You'd think you'd be done here, but if you try to run something, it'll fail complaining of not having a debugger.
    3. Install the Zend Debugger. It'll fail if you try to use the Install New Software option, as many tutorials suggest ("No repository found containing osgi.bundle.org.zend.php.debug.debugger.5.3.7.v20091116"). Instead, download it from http://www.zend.com/en/community/pdt, and manually copy the features/ and plugins/ directories into your Eclipse install (these instructions are not written anywhere).
    4. Restart Eclipse.
    5. Monkey with preferences for a while -- if you followed a previous tutorial and tried to manually add your php executable to the Eclipse prefs (/usr/bin/php), remove it (PHP > PHP Executables). Set one of the Zend Debugger executables to the default. If you've already tried to execute a .php file, remove the existing "Run" profile you (maybe weren't aware that you) created (Run > Debug Configurations...).
    6. Eclipse works! You should be able to run a .php file as a script just fine.


  • Changing the background colour of lines in the stack

    - by Mongus Pong
    I have just changed the colour scheme of my Visual Studio 2008 environment to have a dark background with light text. This is so much easier on the eyes. The only problem is lines that are on the call stack -- the lines referred to in this thread: "in visual studio some lines of code have light grey background while debugging". These lines have a bright grey background, which against my light text means I cannot read the text at all. I have been through every single colour in Tools - Options - Fonts and Colours and cannot find one that matches. How can I change the background for lines on the current call stack?


  • Any info about book "Unix Internals: The New Frontiers" by Uresh Vahalia 2nd edition (Jan 2010)

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand the code and read from here and there, but the thing is I want to make the most of my time. Reading books is best for this. From my search I found that these two books are the established books on UNIX OS internals:

        The Design and Implementation of the 4.4 BSD Operating System
        "Unix Internals: The New Frontiers" by Uresh Vahalia

    But the thing is, these books are pretty much outdated. yay!! Lucky me. "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010), has been released. I've been searching for information on this book. Sadly, Amazon says "Out of Print--Limited Availability" and I couldn't find any info regarding this book. This is the information I'm looking for:

        Table of contents
        What's new in this edition?
        Where the hell can I buy a soft copy of this book? I really cannot afford a hardcopy.
        How can I contact the author?

    I have a lot of hopes and expectations for this book. I've been waiting for its release for a long time. I've sent random mails to & & requesting a proper website for this book. I even contacted the publisher for further information, but got no replies from anyone. If you have any other books that you think will help me, please suggest them. I repeat, I want to get the maximum possible out of these 2.5 months of summer.


  • Windsor IHandlerSelector in RIA Services Visual Studio 2010 Beta2

    - by Savvas Sopiadis
    Hi everybody! I want to implement multi-tenancy using Windsor and I don't know how to handle this situation: I successfully used this technique in plain ASP.NET MVC projects and thought incorporating it in a RIA Services project would be similar. So I used IHandlerSelector, registered some components, and wrote an ASP.NET MVC view to verify it works in a plain ASP.NET MVC environment. And it did! The next step was to create a DomainService which gets an IRepository injected in the constructor. This service is hosted in the ASP.NET MVC application. And it actually... works: I can get data out of it into a Silverlight application. Sample snippet:

        public OrganizationDomainService(IRepository<Culture> cultureRepository)
        {
            this.cultureRepository = cultureRepository;
        }

    The last step is to see if it works in a multi-tenant-like way: it does not! The weird thing is this: using some lines of code and writing debug messages to a log file, I verified that the correct handler is selected! BUT this handler does not seem to be injected into the DomainService. I ALWAYS get the first handler (that's the logic in my SelectHandler). Can anybody verify this behavior? Is injection not working in RIA Services? Or am I missing something basic?? Development environment: Visual Studio 2010 Beta2. Thanks in advance
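
    For reference, a tenant-aware IHandlerSelector for Castle Windsor typically looks something like the sketch below (not code from the original post; the tenant-resolution logic and the component naming convention are invented placeholders).

        using System;
        using System.Linq;
        using Castle.MicroKernel;

        // Hypothetical selector that picks the component whose name ends with the
        // current tenant id; how the tenant is resolved is an assumption here.
        public class TenantHandlerSelector : IHandlerSelector
        {
            public bool HasOpinionAbout(string key, Type service)
            {
                // Only interfere with repository resolutions in this sketch.
                return service != null && service.Name.StartsWith("IRepository");
            }

            public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
            {
                string tenant = ResolveCurrentTenant();   // e.g. from host name or headers
                return handlers.FirstOrDefault(h => h.ComponentModel.Name.EndsWith("-" + tenant))
                       ?? handlers.First();
            }

            private static string ResolveCurrentTenant()
            {
                return "tenant1";   // placeholder
            }
        }

    The selector is registered once with container.Kernel.AddHandlerSelector(new TenantHandlerSelector()); whether it is consulted depends on the component actually being resolved through that container, which is the part worth checking in the RIA Services hosting scenario.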


  • SQl rows to columns conversion

    - by Thihara
    Hi, I have a table ClassAttendance and I'm using MSSQL 2005:

        studentID   attendanceDate   status
        1004        2010-03-17       0
        1005        2010-03-17       1
        1006        2010-03-17       0
        1007        2010-03-17       0
        1004        2010-03-19       0
        1005        2010-03-19       1
        1006        2010-03-19       0
        1007        2010-03-19       0
        1004        2010-03-20       1

    As you can see, studentID is a foreign key to a table called StudentData, and attendanceDate has an unknown number of rows. Can I get output like the one below by using a query? I need the dates in one month to be columns, and the values of the date columns will be the values in the status column. The number of date records per studentID is the same; it's the number of dates in the attendanceDate field that is unknown.

        studentID   2010-03-17   2010-03-19   2010-03-20
        1004        0            0            1
        etc.

    This is for creating a report, so I need to do it in a query. Please help if you can.


  • Error after updating to the latest version Azure SDK

    - by Mikael Johansson
    After I updated to the newest version of the Azure SDK, I have started to get this error several times each day when I press Build in Visual Studio. The only way for me to fix it at the moment is to restart Visual Studio. The error I get is: "Windows Azure Tools: Invalid access to memory location". Has anyone else got this error? And if so, what did you do to fix it? Thanks in advance! Update 2012-08-28: The same error still exists in VS2012 and the Azure 1.7 SDK. However, the frequency has gone down with VS2012.


  • Identity in .NET 4.5&ndash;Part 2: Claims Transformation in ASP.NET (Beta 1)

    - by Your DisplayName here!
    In my last post I described how every identity in .NET 4.5 is now claims-based. If you are coming from WIF you might think, great – how do I transform those claims?

    Sidebar: What is claims transformation? One of the most essential features of WIF (and .NET 4.5) is the ability to transform credentials (or tokens) to claims. During that process the "low level" token details are turned into claims. An example would be a Windows token – it contains things like the name of the user and the groups he belongs to. That information will be surfaced as claims of type Name and GroupSid. Forms users will be represented as a Name claim (all the other claims that WIF provided for FormsIdentity are gone in 4.5). The issue here is that your applications typically don't care about those low level details, but rather about "what's the purchase limit of alice". The process of turning the low level claims into application-specific ones is called claims transformation. In pre-claims times this would have been done by a combination of Forms Authentication extensibility, role manager and maybe ASP.NET profile. With claims transformation all your identity gathering code is in one place (and the outcome can be cached in a single place as opposed to multiple ones).

    The structural class to do claims transformation is called ClaimsAuthenticationManager. This class has two purposes – first, looking at the incoming (low level) principal and making sure all required information about the user is present. This is your first chance to reject a request. And second – modeling identity information in a way that is relevant for the application (see also here). This class gets called (when present) during the pipeline when using WS-Federation. But not when using the standard .NET principals. I am not sure why – maybe because it is beta 1. Anyhow, a number of people asked me about it, and the following is a little HTTP module that brings that feature back in 4.5.

        public class ClaimsTransformationHttpModule : IHttpModule
        {
            public void Dispose()
            { }

            public void Init(HttpApplication context)
            {
                context.PostAuthenticateRequest += Context_PostAuthenticateRequest;
            }

            void Context_PostAuthenticateRequest(object sender, EventArgs e)
            {
                var context = ((HttpApplication)sender).Context;

                // no need to call transformation if session already exists
                if (FederatedAuthentication.SessionAuthenticationModule != null &&
                    FederatedAuthentication.SessionAuthenticationModule.ContainsSessionTokenCookie(context.Request.Cookies))
                {
                    return;
                }

                var transformer = FederatedAuthentication.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager;
                if (transformer != null)
                {
                    var transformedPrincipal = transformer.Authenticate(context.Request.RawUrl, context.User as ClaimsPrincipal);

                    context.User = transformedPrincipal;
                    Thread.CurrentPrincipal = transformedPrincipal;
                }
            }
        }

    HTH
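
    For the other half of the picture, here is a minimal sketch of a custom ClaimsAuthenticationManager (an illustration, not code from the post; the purchase-limit claim type and its value are invented):

        using System.Security.Claims;

        public class MyClaimsTransformer : ClaimsAuthenticationManager
        {
            public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
            {
                if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
                {
                    // first chance to reject (or pass through) an unauthenticated request
                    return base.Authenticate(resourceName, incomingPrincipal);
                }

                // build an application-specific identity from the low level claims
                var identity = new ClaimsIdentity("ApplicationIdentity");
                identity.AddClaim(new Claim(ClaimTypes.Name, incomingPrincipal.Identity.Name));
                identity.AddClaim(new Claim("http://myapp/claims/purchaselimit", "500"));

                return new ClaimsPrincipal(identity);
            }
        }

    This is the class that the HTTP module above retrieves via IdentityConfiguration.ClaimsAuthenticationManager; it is typically registered in configuration.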


  • php date list for specified duration

    - by user301584
    Hi, are there any PHP classes or functions which will give us all the days in a specified duration? For example, if I want a list of dates from 25/03/2010 (25th March 2010) to 15/05/2010 (15th May 2010), it would give me:

        25/03/2010
        26/03/2010
        27/03/2010
        ....
        ....
        ....
        14/05/2010
        15/05/2010

    Thanks a lot for any help!


  • How can I create different DLLs in one project?

    - by jaloplo
    I have a question and I don't know if it can be solved. I have one C# project in Visual Studio 2005 and I want to create different DLL names depending on a preprocessor constant. What I have at the moment is the preprocessor constant, two snk files and two assembly GUIDs. I also created two configurations (Debug and Debug Preprocessor) and they compile perfectly using the appropriate snk and GUID.

        #if PREPROCESSOR_CONSTANT
        [assembly: AssemblyTitle("MyLibraryConstant")]
        [assembly: AssemblyProduct("MyLibraryConstant")]
        #else
        [assembly: AssemblyTitle("MyLibrary")]
        [assembly: AssemblyProduct("MyLibrary")]
        #endif

    Now, I have to put the two assemblies into the GAC. The first assembly is added without problems but the second isn't. What can I do to create two or more different assemblies from one Visual Studio project? Is it possible that I forgot to include a line in "AssemblyInfo.cs" to change the DLL name depending on the preprocessor constant?
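
    As a sketch of the pattern described above (the key file names and GUID values are placeholders, not taken from the question), the two snk files and GUIDs can be switched with the same constant:

        using System.Reflection;
        using System.Runtime.InteropServices;

        #if PREPROCESSOR_CONSTANT
        [assembly: AssemblyTitle("MyLibraryConstant")]
        [assembly: AssemblyKeyFile("MyLibraryConstant.snk")]         // placeholder file name
        [assembly: Guid("11111111-1111-1111-1111-111111111111")]     // placeholder GUID
        #else
        [assembly: AssemblyTitle("MyLibrary")]
        [assembly: AssemblyKeyFile("MyLibrary.snk")]                 // placeholder file name
        [assembly: Guid("22222222-2222-2222-2222-222222222222")]     // placeholder GUID
        #endif

    Note that these attributes only change assembly metadata and identity; the physical DLL file name comes from the project's assembly name setting, which lives in the project file rather than in AssemblyInfo.cs.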


  • Web Service appears as website instead of developer web server

    - by stocherilac
    I recently reimaged my PC and re-grabbed one of our projects from SourceSafe. In our solution we have a web service that normally runs on a server; however, we can also build the web service on localhost for debugging. Now, whenever I grab the project from SourceSafe, it opens the web service as a website instead of using the development web server. This is causing a variety of issues; specifically, I am no longer able to specify which port I would like that web service to use. As a result I cannot connect to our database through my local web service. How can I change the project in my solution that contains the web service from a website to one that uses the development web server? MS Visual Studio 2005. MS Visual SourceSafe 2005. MS SQL Server 2000.


  • Problem passing a reference as a named parameter to a variadic function

    - by Michael Mrozek
    I'm having problems in Visual Studio 2003 with the following:

        void foo(const char*& str, ...)
        {
            va_list args;
            va_start(args, str);

            const char* foo;
            while((foo = va_arg(args, const char*)) != NULL)
            {
                printf("%s\n", foo);
            }
        }

    When I call it:

        const char* one = "one";
        foo(one, "two", "three", NULL);

    I get: Access violation reading location 0xcccccccc on the printf() line -- va_arg() returned 0xcccccccc. I finally discovered it's the first parameter being a reference that breaks it -- if I make it a normal char* everything is fine. It doesn't seem to matter what the type is; being a reference causes it to fail at runtime. Is this a known problem with VS2003, or is there some way in which that's legal behavior? It doesn't happen in GCC; I haven't tested with newer Visual Studios to see if the behavior goes away.


  • How do I ignore the UTF-8 Byte Order Marker in String comparisons?

    - by Skrud
    I'm having a problem comparing strings in a Unit Test in C# 4.0 using Visual Studio 2010. This same test case works properly in Visual Studio 2008 (with C# 3.5). Here's the relevant code snippet:

        byte[] rawData = GetData();
        string data = Encoding.UTF8.GetString(rawData);
        Assert.AreEqual("Constant", data, false, CultureInfo.InvariantCulture);

    While debugging this test, the data string appears to the naked eye to contain exactly the same string as the literal. When I called data.ToCharArray(), I noticed that the first character of the string data is the value 65279, which is the UTF-8 Byte Order Marker. What I don't understand is why Encoding.UTF8.GetString() keeps this byte around. How do I get Encoding.UTF8.GetString() to not put the Byte Order Marker in the resulting string?
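
    One way to deal with this (a sketch, not from the original question; GetData() is the poster's method and is assumed to return raw UTF-8 bytes) is to skip the UTF-8 preamble before decoding:

        using System;
        using System.Linq;
        using System.Text;

        static class BomHelper
        {
            // Decodes UTF-8 bytes, skipping a leading byte order mark (EF BB BF) if present.
            public static string GetStringWithoutBom(byte[] rawData)
            {
                byte[] preamble = Encoding.UTF8.GetPreamble();
                int offset = rawData.Length >= preamble.Length &&
                             rawData.Take(preamble.Length).SequenceEqual(preamble)
                             ? preamble.Length : 0;
                return Encoding.UTF8.GetString(rawData, offset, rawData.Length - offset);
            }
        }

    Trimming '\uFEFF' from the start of the decoded string has the same effect, since 65279 (0xFEFF) is exactly the decoded byte order mark.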


  • SSIS Custom Control Task Debugging UI in BIDS and VS

    - by zeencat
    I've created an SSIS custom task in C# and I'm currently developing the UI. I was wondering if there is a better way of debugging the UI than compiling the project, copying the DLLs into the appropriate DTS folder, opening the test package within BIDS and then attaching the process to Visual Studio. That part doesn't bother me, but once I've tested the UI and made changes to it within Visual Studio, I've got to recompile the DLLs and then repeat the entire process. I've got to close BIDS and VS because they don't release the DLLs before I have to start the entire process over again. Does anyone have any tips to speed up this process? It's just so frustrating having to do this every time.


  • RTTI Dynamic array TValue Delphi 2010

    - by user558126
    Hello, I have a question. I am a newbie with the Run Time Type Information in Delphi 2010. I need to set the length of a dynamic array stored in a TValue. You can see the code:

        type
          TMyArray = array of integer;

          TMyClass = class
          published
            function Do: TMyArray;
          end;

        function TMyClass.Do: TMyArray;
        var
          i: Integer;
        begin
          SetLength(Result, 5);
          for i := 0 to 4 do
            Result[i] := 3;
        end;

        ...
        var
          y: TValue;
          Param: array of TValue;
        ...
          y := Methods[i].Invoke(Obj, Param);
          // Delphi gives me a DynArray type kind; this works, and Param works for any function.
          if Methods[i].ReturnType.TypeKind = tkDynArray then // this works...
          begin
            // I want to set the length of y to 10000 -- I don't know how to write this.
          end;

    I don't like Generics.Collections.

