Search Results

Search found 26384 results on 1056 pages for 'visual studio 2008 sp1'.

Page 621/1056

  • Sharing configuration settings between Windows Azure roles

    - by theo.spears
    If you are working on a medium-to-large Windows Azure project it is likely to involve more than one role, for example separate web and worker roles. Unfortunately, although all the Windows Azure configuration settings are stored in a single cscfg file, there is no way to share configuration settings between multiple roles. This means you have to duplicate common settings like connection strings across all your roles. There is an open Connect issue about this topic, but Microsoft have not said when they will fix it. In the meantime I've put together a cunning workaround (read: dirty hack) that creates a fake role containing your shared configuration settings and copies them to all roles as part of the build process. Here's how you set it up:
    1. Download the zip file attached to this post, and unzip it into the folder containing your Azure project (not your solution folder).
    2. Edit your csdef and cscfg files to include the placeholder project.
    ServiceDefinition.csdef:
        <?xml version="1.0" encoding="utf-8"?>
        <ServiceDefinition name="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
          <WorkerRole name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" />
            </ConfigurationSettings>
          </WorkerRole>
          <WorkerRole name="MyWorker">
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WorkerRole>
          <WebRole name="MyWeb">
            <Sites>
              <Site name="Web">
                <Bindings>
                  <Binding name="WebEndpoint" endpointName="WebEndpoint" />
                </Bindings>
              </Site>
            </Sites>
            <ConfigurationSettings>
            </ConfigurationSettings>
          </WebRole>
        </ServiceDefinition>
    ServiceConfiguration.cscfg:
        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="AzureSpendNotifier" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
          <Role name="GLOBAL">
            <ConfigurationSettings>
              <Setting name="ExampleSetting" value="Hello World" />
            </ConfigurationSettings>
            <Instances count="1" />
          </Role>
          <Role name="MyWorker">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
          <Role name="MyWeb">
            <Instances count="1" />
            <ConfigurationSettings>
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>
    It is important that all your roles contain a ConfigurationSettings entry in both the cscfg and csdef files, even if it is empty; otherwise the shared configuration settings will not be inserted.
    3. Open your Azure deployment (.ccproj) project in Notepad, and add the highlighted line below:
        ...
        <Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />
        <Import Project="globalsettings/globalsettings.targets" />
        </Project>
    It is important that you add this below the Microsoft.CloudService.targets import line, as it replaces some of the rules defined in that file. Visual Studio will prompt you to reload the project; say yes. At this point you will have a new Azure role called 'GLOBAL' with settings you can edit through the Visual Studio properties panel as normal. This role will never be deployed, but any settings you add to it will be copied to all your other roles when deployed or tested locally within Visual Studio.
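    Once the copy step has run, each role reads the shared value exactly as it would read one of its own settings. A minimal C# sketch, assuming the classic Windows Azure SDK's RoleEnvironment API (Microsoft.WindowsAzure.ServiceRuntime) and the "ExampleSetting" placeholder used above:

        using Microsoft.WindowsAzure.ServiceRuntime;

        public static class SharedSettings
        {
            // Illustrative sketch: reads a setting that the globalsettings copy step
            // has pushed into this role's ConfigurationSettings section at build time.
            public static string ExampleSetting
            {
                get { return RoleEnvironment.GetConfigurationSettingValue("ExampleSetting"); }
            }
        }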

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don’t learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I’m going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT. The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project a folder structure was provided with “Schema Objects” and “Scripts” in the root and a series of subfolders for each schema. Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects will get created in the root – it is at your discretion where they get moved to. After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway so it didn’t bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files and I have tried to mimic the old way as much as possible by creating two folders in the root, one for all schema objects and another for Pre/Post deployment scripts. This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + “.sql”, and often people like to have an extra identifier in the filename to indicate the object type. The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don’t take kindly to files being moved around or renamed because they recognise the renamed/moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS) and, while it pains me to say it (as I am no great fan of TFS’s version control system), it has proved invaluable when dealing with the SSDT problems that I outlined above because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: if you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves. I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily, and if that’s the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was stretching a Windows 2008 R2 Hyper-V  Failover Cluster across sites.  With this ability, you could actually implement a Hyper-V cluster where you could migrate or even Live Migrate VMs across sites.   With this area’s propensity for Hurricanes, this will be a very popular topic for me over the next few months. While this technology is possible today, it’s also very complicated and can be very expensive to implement.    First your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate.  This means you can’t use a Layer 3 routed connection like MPLS.  It has to be a Metro Ethernet connection or "Dark Fiber”.  Dark Fiber is unused Fiber already in the ground that can be leased from  various providers. Both of these connections would allow you to trunk layer 2 across your WAN.  Cisco does have the ability to trunk layer 2 across a routed connection by muxing the traffic but this is only available in their Nexus product line which has a very steep price tag.   If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster in which case Live Migration won’t be possible.  However you can still failover VMs to the remote site with some planning and manual intervention.  The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs.  This also has ramifications with DNS and Name resolution to control your down time.  DHCP with Reservations for your VMs is the preferred method to achieve the IP changes as this will automate that part of the process.   Secondly, you will have to have  a mechanism to replicate your storage across both sites.  Many SAN vendors natively support hardware based synchronous and asynchronous replication.  Some even support cluster shared volumes which were introduced in 2008 R2.   If your SANs do not support this natively, there are alternative file based replication products either software based like Double Take or hardware appliance like EMC.  Be sure to check with your vendor on the support of Disk majority if you’re replicating your quorum disk between SANs.   The last consideration is the ability to maintain quorum for your cluster.  If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness.  This will affect your design as a 3 node cluster with 1 node at the remote site and FSW at the production site would not have the ability to maintain quorum if the production site was lost. MS best practice for this would be to implement an even node cluster with 2 nodes at  each site and the FSW at a third site.   And there you have it.  While some considerations and research goes into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementations of “warm” DR sites.

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It Webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first.
    Uninstalling
    I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through:
    - Microsoft Visual Studio Beta 2
    - Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one)
    - Microsoft .NET Framework Client Profile
    - Microsoft Silverlight 4 Tools for Visual Studio 2010
    - WCF RIA Services Preview for Visual Studio 2010
    While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDKs associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment. NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it.
    Installing
    OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC, better than the Silverlight 4 Information Page.
    - Visual Studio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++...I don't really have a need for that).
    - Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services.
    - WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway.
    - Expression Blend 4 beta - Only if you plan to use Blend, which I do.
    - Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development.
    Wrap Up
    Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • DATEFROMPARTS

    - by jamiet
    I recently overheard a remark by Greg Low in which he said something akin to "the most interesting parts of a new SQL Server release are the myriad of small things that are in there that make a developer's life easier" (I'm paraphrasing because I can't remember the actual quote, but it was something like that). The new DATEFROMPARTS function is a classic example of that. It simply takes three integer parameters and builds a date out of them (if you have used DateSerial in Reporting Services then you'll understand). Take the following code, which generates the first and last day of some given years:

        SELECT 2008 AS Yr INTO #Years
        UNION ALL SELECT 2009
        UNION ALL SELECT 2010
        UNION ALL SELECT 2011
        UNION ALL SELECT 2012

        SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
               [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
        FROM   #Years y

    That gives the results we want, but the code is pretty gnarly with those CONVERTs in there and, worse, if the character string is constructed in a certain way then it could fail due to localisation. Check this out:

        SET LANGUAGE french;
        SELECT dt, Month_Name = DATENAME(mm,dt)
        FROM   (
               SELECT dt = CONVERT(DATETIME,CONVERT(CHAR(4),y.[Yr]) + N'-01-02')
               FROM   #Years y
               ) d;
        SET LANGUAGE us_english;
        SELECT dt, Month_Name = DATENAME(mm,dt)
        FROM   (
               SELECT dt = CONVERT(DATETIME,CONVERT(CHAR(4),y.[Yr]) + N'-01-02')
               FROM   #Years y
               ) d;

    Notice how the datetime has been converted differently based on the language setting. Under French, the string "2012-01-02" gets interpreted as 1st February, whereas under us_english the same string is interpreted as 2nd January. Instead of all this CONVERTing nastiness we have DATEFROMPARTS:

        SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr],1,1),
               [LastDayOfYear]  = DATEFROMPARTS(y.[Yr],12,31)
        FROM   #Years y

    How much nicer is that? The bad news of course is that you have to upgrade to SQL Server 2012 or migrate to SQL Azure if you want to use it, as is the way of the world! Don't forget that if you want to try this code out on SQL Azure right this second, for free, you can do so by connecting up to AdventureWorks On Azure. You don't even need to have SSMS handy - a browser that runs Silverlight will do just fine. Simply head to https://mhknbn2kdz.database.windows.net/ and use the following credentials: Database: AdventureWorks2012, User: sqlfamily, Password: sqlf@m1ly. One caveat: SELECT INTO doesn't work on SQL Azure, so you'll have to use this instead:

        DECLARE @y TABLE ( [Yr] INT);
        INSERT @y([Yr])
        SELECT 2008 AS Yr UNION ALL SELECT 2009 UNION ALL SELECT 2010 UNION ALL SELECT 2011 UNION ALL SELECT 2012;

        SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr],1,1),
               [LastDayOfYear]  = DATEFROMPARTS(y.[Yr],12,31)
        FROM @y y;

        SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
               [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
        FROM @y y;

    @Jamiet

    Read the article

  • How to Draw Lines on the Screen

    - by Geertjan
    I've seen occasional questions on mailing lists about how to use the NetBeans Visual Library to draw lines, e.g., to make graphs or diagrams of various kinds by drawing on the screen. So, rather than drag/drop causing widgets to be added, you'd want widgets to be added on mouse clicks, and you'd want to be able to connect those widgets together somehow. Via the code below, you'll be able to click on the screen, which causes a dot to appear. When you have multiple dots, you can hold down the Ctrl key and connect them together. A guiding line appears to help you position the dots exactly in line with each other. When you go to File | Print, you'll be able to preview and print the diagram you've created. A picture that speaks 1000 words: Here's the code:

        public final class PlotterTopComponent extends TopComponent {

            private final Scene scene;
            private final LayerWidget baseLayer;
            private final LayerWidget connectionLayer;
            private final LayerWidget interactionLayer;

            public PlotterTopComponent() {
                initComponents();
                setName(Bundle.CTL_PlotterTopComponent());
                setToolTipText(Bundle.HINT_PlotterTopComponent());
                setLayout(new BorderLayout());
                this.scene = new Scene();
                this.baseLayer = new LayerWidget(scene);
                this.interactionLayer = new LayerWidget(scene);
                this.connectionLayer = new LayerWidget(scene);
                scene.getActions().addAction(new SceneCreateAction());
                scene.addChild(baseLayer);
                scene.addChild(interactionLayer);
                scene.addChild(connectionLayer);
                add(scene.createView(), BorderLayout.CENTER);
                putClientProperty("print.printable", true);
            }

            private class SceneCreateAction extends WidgetAction.Adapter {
                @Override
                public WidgetAction.State mousePressed(Widget widget, WidgetAction.WidgetMouseEvent event) {
                    if (event.getClickCount() == 1) {
                        if (event.getButton() == MouseEvent.BUTTON1 || event.getButton() == MouseEvent.BUTTON2) {
                            baseLayer.addChild(new BlackDotWidget(scene, widget, event));
                            repaint();
                            return WidgetAction.State.CONSUMED;
                        }
                    }
                    return WidgetAction.State.REJECTED;
                }
            }

            private class BlackDotWidget extends ImageWidget {
                public BlackDotWidget(Scene scene, Widget widget, WidgetAction.WidgetMouseEvent event) {
                    super(scene);
                    setImage(ImageUtilities.loadImage("org/netbeans/plotter/blackdot.gif"));
                    setPreferredLocation(widget.convertLocalToScene(event.getPoint()));
                    getActions().addAction(
                            ActionFactory.createExtendedConnectAction(
                                    connectionLayer, new BlackDotConnectProvider()));
                    getActions().addAction(
                            ActionFactory.createAlignWithMoveAction(
                                    baseLayer, interactionLayer,
                                    ActionFactory.createDefaultAlignWithMoveDecorator()));
                }
            }

            private class BlackDotConnectProvider implements ConnectProvider {

                @Override
                public boolean isSourceWidget(Widget source) {
                    return source instanceof BlackDotWidget && source != null ? true : false;
                }

                @Override
                public ConnectorState isTargetWidget(Widget src, Widget trg) {
                    return src != trg && trg instanceof BlackDotWidget ? ConnectorState.ACCEPT : ConnectorState.REJECT;
                }

                @Override
                public boolean hasCustomTargetWidgetResolver(Scene arg0) {
                    return false;
                }

                @Override
                public Widget resolveTargetWidget(Scene arg0, Point arg1) {
                    return null;
                }

                @Override
                public void createConnection(Widget source, Widget target) {
                    ConnectionWidget conn = new ConnectionWidget(scene);
                    conn.setTargetAnchor(AnchorFactory.createCircularAnchor(target, 10));
                    conn.setSourceAnchor(AnchorFactory.createCircularAnchor(source, 10));
                    connectionLayer.addChild(conn);
                }
            }

            ...
            ...
            ...
Note: The code above was written based on the Visual Library tutorials on the NetBeans Platform Learning Trail, in particular via the "ConnectScene" sample in the "test.connect" package, which is part of the very long list of Visual Library samples referred to in the Visual Library tutorials on the NetBeans Platform Learning Trail. The next steps are to add a reconnect action and an action to delete a dot by double-clicking on it. Would be interesting to change the connecting line so that the length of the line were to be shown, i.e., as you draw a line from one dot to another, you'd see a constantly changing number representing the current distance of the connecting line. Also, once lines are connected to form a rectangle, would be cool to be able to write something within that rectangle. Then one could really create diagrams, which would be pretty cool.

    Read the article

  • Parsing T-SQL – The easy way

    - by Dave Ballantyne
    Every once in a while, I hit an issue that would require me to interrogate/parse some T-SQL code. Normally, I would shy away from this and attempt to solve the problem in some other way. I have written parsers in the past using LEX and YACC, and as much fun and awesomeness as that path is, I couldn't justify the time it would take. However, this week I have been faced with just such an issue, and at the back of my mind I can remember reading through the SQL Server 2012 feature pack and seeing something called “Microsoft SQL Server 2012 Transact-SQL Language Service”. This is described there as: “The SQL Server Transact-SQL Language Service is a component based on the .NET Framework which provides parsing validation and IntelliSense services for Transact-SQL for SQL Server 2012, SQL Server 2008 R2, and SQL Server 2008.” Sounds like just what I was after. Documentation is very scant on this, so don't take what follows as best practice or best use, just a practice and a use. Knowing roughly what I was looking for, I found the relevant assembly in the GAC, which is simply named 'Microsoft.SqlServer.Management.SqlParser'. You won't find much in terms of documentation if you do a web search, but you will find the MSDN documentation that lists the members and methods etc. The “Scanner” class sounded the most appropriate for my needs, as it is described as “Scans Transact-SQL searching for individual units of code or tokens.” After a bit of poking around, the code I ended up with was something like:

        [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null
        $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions
        $ParseOptions.BatchSeparator = 'GO'
        $Parser = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions)
        $Sql = "Create Procedure MyProc as Select top(10) * from dbo.Table"
        $Parser.SetSource($Sql,0)
        $Token = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET
        $Start = 0
        $End = 0
        $State = 0
        $IsEndOfBatch = $false
        $IsMatched = $false
        $IsExecAutoParamHelp = $false
        while(($Token = $Parser.GetNext([ref]$State, [ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp)) -ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF)
        {
            try {
                ($TokenPrs = [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null
                $TokenPrs
                $Sql.Substring($Start, ($End - $Start) + 1)
            } catch {
                $TokenPrs = $null
            }
        }

    As you can see, the $Sql variable holds the SQL to be parsed; that is pushed into the $Parser object using SetSource, and then we use GetNext until the EOF token is returned. GetNext also returns the start and end character positions within the source string of the parsed text. This script's output is:

        TOKEN_CREATE     Create
        TOKEN_PROCEDURE  Procedure
        TOKEN_ID         MyProc
        TOKEN_AS         as
        TOKEN_SELECT     Select
        TOKEN_TOP        top
        TOKEN_INTEGER    10
        TOKEN_FROM       from
        TOKEN_ID         dbo
        TOKEN_TABLE      Table

    Note that the '(', ')' and '*' characters return a token type that is not present in the Microsoft.SqlServer.Management.SqlParser.Parser.Tokens enum, which causes an error that is caught in the catch block. Fun, fun, fun: simple T-SQL parsing. Hope this helps someone in the same position; let me know how you get on.
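    For anyone who would rather drive the same scanner from C#, here is a rough equivalent of the loop above. It is a sketch only: the types come from the Microsoft.SqlServer.Management.SqlParser assembly named in the post, and the exact GetNext signature is inferred from the PowerShell, so verify it against the MSDN member list before relying on it.

        using System;
        using Microsoft.SqlServer.Management.SqlParser.Parser;

        class ScannerDemo
        {
            static void Main()
            {
                // Mirror of the PowerShell above: tokenise a T-SQL batch with the Scanner class.
                var options = new ParseOptions { BatchSeparator = "GO" };
                var scanner = new Scanner(options);
                string sql = "Create Procedure MyProc as Select top(10) * from dbo.Table";
                scanner.SetSource(sql, 0);

                int state = 0, start = 0, end = 0;
                bool isMatched = false, isExecAutoParamHelp = false;
                Tokens token;
                while ((token = (Tokens)scanner.GetNext(ref state, ref start, ref end, ref isMatched, ref isExecAutoParamHelp)) != Tokens.EOF)
                {
                    // Print the token type and the slice of the source it covers.
                    Console.WriteLine("{0}\t{1}", token, sql.Substring(start, (end - start) + 1));
                }
            }
        }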

    Read the article

  • WIF-less claim extraction from ACS: JWT

    - by Elton Stoneman
    ACS support for JWT still shows as "beta", but it meets the spec and it works nicely, so it's becoming the preferred option as SWT is losing favour. (Note that currently ACS doesn’t support JWT encryption, if you want encrypted tokens you need to go SAML). In my last post I covered pulling claims from an ACS token without WIF, using the SWT format. The JWT format is a little more complex, but you can still inspect claims just with string manipulation. The incoming token from ACS is still presented in the BinarySecurityToken element of the XML payload, with a TokenType of urn:ietf:params:oauth:token-type:jwt: <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">   <t:Lifetime>     <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:39:55.337Z</wsu:Created>     <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:19:55.337Z</wsu:Expires>   </t:Lifetime>   <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">     <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">       <Address>http://localhost/x.y.z</Address>     </EndpointReference>   </wsp:AppliesTo>   <t:RequestedSecurityToken>     <wsse:BinarySecurityToken wsu:Id="_1eeb5cf4-b40b-40f2-89e0-a3343f6bd985-6A15D1EED0CDB0D8FA48C7D566232154" ValueType="urn:ietf:params:oauth:token-type:jwt" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">[ base64string ] </wsse:BinarySecurityToken>   </t:RequestedSecurityToken>   <t:TokenType>urn:ietf:params:oauth:token-type:jwt</t:TokenType>   <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>   <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> </t:RequestSecurityTokenResponse> The token as a whole needs to be base-64 decoded. The decoded value contains a header, payload and signature, dot-separated; the parts are also base-64, but they need to be decoded using a no-padding algorithm (implementation and more details in this MSDN article on validating an Exchange 2013 identity token). The values are then in JSON; the header contains the token type and the hashing algorithm: "{"typ":"JWT","alg":"HS256"}" The payload contains the same data as in the SWT, but JSON rather than querystring format: {"aud":"http://localhost/x.y.z" "iss":"https://adfstest-bhw.accesscontrol.windows.net/" "nbf":1346398795 "exp":1346404795 "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant":"2012-08-31T07:39:53.652Z" "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod":"http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows" "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname":"xyz" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress":"[email protected]" "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":"[email protected]" "identityprovider":"http://fs.svc.x.y.z.com/adfs/services/trust"} The signature is in the third part of the token. Unlike SWT which is fixed to HMAC-SHA-256, JWT can support other protocols (the one in use is specified as the "alg" value in the header). 
How to: Validate an Exchange 2013 identity token contains an implementation of a JWT parser and validator; apart from the custom base-64 decoding part, it’s very similar to SWT extraction. I've wrapped the basic SWT and JWT in a ClaimInspector.aspx page on gitHub here: SWT and JWT claim inspector. You can drop it into any ASP.Net site and set the URL to be your redirect page in ACS. Swap ACS to issue SWT or JWT, and using the same page you can inspect the claims that come out.
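    The no-padding decoding is the only fiddly part if you do it by hand. A rough C# helper along these lines (a sketch, not code taken from the MSDN article) splits the token on the dots, restores the Base64 padding and swaps the URL-safe characters before handing off to the normal Base64 decoder:

        using System;
        using System.Text;

        static class JwtParts
        {
            // Splits the decoded BinarySecurityToken value into header, payload and signature.
            public static string[] Split(string jwt)
            {
                return jwt.Split('.');
            }

            // Decodes one dot-separated segment (header or payload) that was
            // base64url-encoded without padding, returning the JSON text.
            public static string DecodeSegment(string segment)
            {
                string s = segment.Replace('-', '+').Replace('_', '/');
                switch (s.Length % 4)
                {
                    case 2: s += "=="; break;
                    case 3: s += "=";  break;
                }
                return Encoding.UTF8.GetString(Convert.FromBase64String(s));
            }
        }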

    Read the article

  • MSBuild (.NET 4.0) access problems

    - by JMP
    I'm using CruiseControl.NET as my build server (Windows Server 2008). Yesterday I upgraded my ASP.NET MVC project from VS 2008/.NET 3.5 to VS 2010/.NET 4.0. The only change I made to my ccnet.config's MSBuild task was the location of MSBuild.exe. Ever since I made that change, the build has been broken with the error: MSB4019 - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk. This file does, in fact, exist in the location specified (I solved a problem similar to this when setting up the build server for VS2008/.NET 3.5 by copying the files from my dev environment to my build environment). So I RDP'ed into the build machine, opened a command prompt, and used MSBuild to attempt to build my project. MSBuild returns the error: MSB3021 - Unable to copy file "obj\debug....dll". Access to the path 'bin....dll' is denied. Since I'm running MSBuild from the command prompt, logged in with an account that has administrative privileges, I'm assuming that MSBuild is running with the same privileges that I have. Next, I tried to copy the file that MSBuild was attempting to copy. In this case, I get the UAC dialog that makes me click the [Continue] button to complete the copy. I'd like to avoid installing Visual Studio 2010 on my build machine; can anyone suggest other fixes I might try?

    Read the article

  • Enabling Session State in SharePoint 2010?

    - by Steve Danner
    I have a web service built for SharePoint 2007 that I am trying to port to SharePoint 2010. This web service is dependent on session state to function properly, but so far I have been unable to get session state to work at all in SharePoint 2010. The web service runs as its own web application under the /_vti_bin virtual directory. I have tried all of the following with no luck:
    - Ensured the "State Service" service application is running.
    - Added the System.Web.SessionState.SessionStateModule http module to my application's web.config file.
    - Added the System.Web.SessionState.SessionStateModule http module to my SharePoint root web.config file.
    - Added <pages enableSessionState="true" /> to my application's web.config file.
    - Added <pages enableSessionState="true" /> to my root web.config file.
    Additional environment info: Visual Studio 2008 SP1, .NET 3.5 SP1, SharePoint 2010 RC, Windows Server 2008 R2, ASMX web service (not WCF). Has anyone had any luck getting a web application or web service to use session state in SharePoint 2010 yet? Thanks! Steve
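    One code-side detail worth double-checking, although it is not mentioned in the question and may already be in place: classic ASMX web methods only see session state when they opt in through EnableSession. A minimal sketch of that pattern:

        using System.Web.Services;

        public class MyService : WebService
        {
            // ASMX methods must opt in to session state explicitly; without
            // EnableSession = true, the Session property is null even when the
            // SessionStateModule and <pages enableSessionState="true"/> are configured.
            [WebMethod(EnableSession = true)]
            public string Remember(string value)
            {
                Session["LastValue"] = value;
                return (string)Session["LastValue"];
            }
        }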

    Read the article

  • Advice on database design / SQL for retrieving data with chronological order

    - by Remnant
    I am creating a database that will help keep track of which employees have been on a certain training course. I would like to get some guidance on the best way to design the database. Specifically, each employee must attend the training course each year, and my database needs to keep a history of all the dates on which they have attended the course in the past. The end user will use the software as a planning tool to help them book future course dates for employees. When they select a given employee they will see: (a) last attendance date, and (b) projected future attendance date (i.e. last attendance date + 1 calendar year). In terms of my database, any given employee may have multiple past course attendance dates:

        EmpName      AttendanceDate
        Joe Bloggs   1st Jan 2007
        Joe Bloggs   4th Jan 2008
        Joe Bloggs   3rd Jan 2009
        Joe Bloggs   8th Jan 2010

    My question is: what is the best way to set up the database to make it easy to retrieve the most recent course attendance date? In the example above, the most recent would be 8th Jan 2010. Is there a good way to use SQL to sort by date and pick the MAX date? My other idea was to add a column called 'MostRecent' and just set this to TRUE:

        EmpName      AttendanceDate   MostRecent
        Joe Bloggs   1st Jan 2007     False
        Joe Bloggs   4th Jan 2008     False
        Joe Bloggs   3rd Jan 2009     False
        Joe Bloggs   8th Jan 2010     True

    I wondered if this would simplify the SQL, i.e. SELECT Joe Bloggs WHERE MostRecent = 'TRUE'. Also, when the user updates a given employee's attendance record (i.e. with the latest attendance date) I could use SQL to: search for the employee and set the MostRecent value to FALSE, then add a new record with MostRecent set to TRUE. Would anybody recommend either method over the other? Or do you have a completely different way of solving this problem?
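    To illustrate the aggregate approach the question asks about, here is a sketch using plain ADO.NET with the hypothetical table and column names from the example (Attendance, EmpName, AttendanceDate); adjust to the real schema. The projected next date is then just this value plus one calendar year.

        using System;
        using System.Data.SqlClient;

        class LastAttendanceExample
        {
            // Returns the most recent attendance date for one employee,
            // letting the database do the work with MAX().
            static DateTime? GetLastAttendance(string connectionString, string empName)
            {
                const string sql =
                    "SELECT MAX(AttendanceDate) FROM Attendance WHERE EmpName = @EmpName";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@EmpName", empName);
                    conn.Open();
                    object result = cmd.ExecuteScalar();
                    return result == null || result == DBNull.Value
                        ? (DateTime?)null
                        : (DateTime)result;
                }
            }
        }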

    Read the article

  • No Source available

    - by Eric
    I am not sure what happened or if I did anything. Now any time I try to debug, it says "no source available" on all BCL stuff. For example, on a Debug.Print I get that message with:

        Locating source for 'f:\dd\ndp\fx\src\CompMod\System\Diagnostics\Debug.cs'. Checksum: MD5 {40 74 18 44 a8 15 28 2e 54 75 5e 40 d1 5f 6a 0}
        The file 'f:\dd\ndp\fx\src\CompMod\System\Diagnostics\Debug.cs' does not exist.
        Looking in script documents for 'f:\dd\ndp\fx\src\CompMod\System\Diagnostics\Debug.cs'...
        Looking in the projects for 'f:\dd\ndp\fx\src\CompMod\System\Diagnostics\Debug.cs'. The file was not found in a project.
        Looking in directory 'C:\Program Files\Microsoft Visual Studio 10.0\VC\crt\src\'...
        Looking in directory 'C:\Program Files\Microsoft Visual Studio 10.0\VC\atlmfc\src\mfc\'...
        Looking in directory 'C:\Program Files\Microsoft Visual Studio 10.0\VC\atlmfc\src\atl\'...
        Looking in directory 'C:\Program Files\Microsoft Visual Studio 10.0\VC\atlmfc\include\'...
        The debug source files settings for the active solution indicate that the debugger will not ask the user to find the file: f:\dd\ndp\fx\src\CompMod\System\Diagnostics\Debug.cs.
        The debugger could not locate the source file 'f:\dd\ndp\fx\src\CompMod\System\Diagnostics\Debug.cs'.

    This happens all the time now, and: 1. I don't have an F: drive. 2. "Enable .NET Framework source stepping" is unchecked. Is there some other sneaky setting to make these messages go away? Regards _Eric

    Read the article

  • "Unable To Load Client Print Control" - SSRS Printing problems again

    - by mamorgan1
    Please forgive me as my head is spinning. I have tried so many solutions to this issue that I'm almost not sure where I am at this point. At this point in time I have these issues in my Production, Test, and Dev environments. For simplicity's sake, I will just try to get it working in Dev first. Here is my setup:
    Database/Reporting Server (same server): Windows Server 2003 SP2, SQL Server 2005 SP3.
    Development Box: Windows 7, Visual Studio 2008 SP1, SQL Server 2008 SP1 (not being used in this case, but wanted to include it in case it is relevant), Internet Explorer 8.
    Details:
    * I have a custom ASP.NET application that is using ReportViewer to access reports on my Database/Reporting Server.
    * I am able to connect directly to Report Manager and print with no trouble.
    * When I view source on the page with ReportViewer, it says I am using version 9.0.30729.4402.
    * The classid of the rsclientprint.dll that keeps getting installed to my c:\windows\downloaded program files directory is {41861299-EAB2-4DCC-986C-802AE12AC499}.
    * I have tried taking the rsclientprint.cab file from my Database/Reporting Server and installing it directly on my Development Box and had no success. I made sure to unregister the previously installed DLL first.
    I feel like I have read as many solutions as I can, and so I turn to you for some assistance. Please let me know if I can provide further details that would be helpful. Thanks

    Read the article

  • Linq IQueryable variables

    - by kevinw
    Hi, I have a function that should return me a string, but what it is actually doing is bringing me back the SQL expression that I am using against the database. What have I done wrong?

        public static IQueryable XMLtoProcess(string strConnection)
        {
            Datalayer.HameserveDataContext db = new HameserveDataContext(strConnection);
            var xml = from x in db.JobImports
                      where x.Processed == false
                      select new { x.Content };
            return xml;
        }

    That is the code sample; this is what I should be getting back:

        <PMZEDITRI TRI_TXNNO="11127" TRI_TXNSEQ="1" TRI_CODE="600" TRI_SUBTYPE="1" TRI_STATUS="Busy" TRI_CRDATE="2008-02-25T00:00:00.0000000-00:00" TRI_CRTIME="54540" TRI_PRTIME="0" TRI_BATCH="" TRI_REF="" TRI_CPY="main" C1="DEPL" C2="007311856/001" C3="14:55" C4="CUB2201" C5="MR WILLIAM HOGG" C6="CS12085393" C7="CS" C8="Blocked drain" C9="Scheme: CIS Home Rescue edi tests" C10="MR WILLIAM HOGG" C11="74 CROMARTY" C12="OUSTON" C13="CHESTER LE STREET" C14="COUNTY DURHAM" C15="" C16="DH2 1JY" C17="" C18="" C19="" C20="" C21="CIS" C22="0018586965 ||" C23="BD" C24="W/DE/BD" C25="EX-DIRECTORY" C26="" C27="/" C28="CIS Home Rescue" C29="CIS Home Rescue Plus Insd" C30="Homeserve Claims Management Ltd|Upon successful completion of this repair the contractor must submit an itemised and costed Homeserve Claims Management Ltd Job Sheet." N1="79.9000" N2="68.0000" N3="11.9000" N4="0" N5="0" N6="0" D1="2008-02-25T00:00:00.0000000-00:00" T2="EX-DIRECTORY" T4="Blocked drain" TRI_SYSID="9" TRI_RETRY="3" TRI_RETRYTIME="0" />

    Can anyone help me please? Thanks
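    For context on what is happening here: a LINQ to SQL query is only translated and executed when it is enumerated, so printing or inspecting the IQueryable itself shows the generated SQL rather than the data. A sketch of materialising the query, reusing the data context and members from the snippet above and assuming Content is a string column:

        using System.Collections.Generic;
        using System.Linq;

        public static class JobImportReader
        {
            // Sketch only: same DataContext and members as in the question.
            public static List<string> GetUnprocessedContent(string strConnection)
            {
                using (var db = new Datalayer.HameserveDataContext(strConnection))
                {
                    // ToList() enumerates the query, which is what actually runs the SQL
                    // and returns the Content values instead of the query's SQL text.
                    return db.JobImports
                             .Where(x => x.Processed == false)
                             .Select(x => x.Content)
                             .ToList();
                }
            }
        }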

    Read the article

  • How to Compile Mod_Python 3.3.1 for Python 2.6 and Apache 2.2 on Windows?

    - by John
    I have no experience compiling code other than using Visual Studio's Build command. I am hoping we can create a step by step guide for compiling mod_python on windows. Please be as descriptive as possible. This is what I've done so far: Download and install python 2.6.2 Download and install apache 2.2.11 Download the most recent source code for mod_python from svn From here I'm lost to what the next step is. I've downloaded Microsoft Visual C++ 2008 Express Edition. As mentioned by Hao I've already tried the tutorial mentioned in that link. Here is the error messages I'm receiving with that tutorial. C:\mod_python\distbuild_installer.bat Could Not Find C:\mod_python\src*.obj running bdist_wininst running build running build_py creating build creating build\lib.win32-2.6 creating build\lib.win32-2.6\mod_python copying C:\mod_python\lib\python\mod_python\apache.py - build\lib.win32-2.6\mod _python copying C:\mod_python\lib\python\mod_python\cache.py - build\lib.win32-2.6\mod_ python copying C:\mod_python\lib\python\mod_python\cgihandler.py - build\lib.win32-2.6 \mod_python copying C:\mod_python\lib\python\mod_python\Cookie.py - build\lib.win32-2.6\mod _python copying C:\mod_python\lib\python\mod_python\importer.py - build\lib.win32-2.6\m od_python copying C:\mod_python\lib\python\mod_python\psp.py - build\lib.win32-2.6\mod_py thon copying C:\mod_python\lib\python\mod_python\publisher.py - build\lib.win32-2.6\ mod_python copying C:\mod_python\lib\python\mod_python\python22.py - build\lib.win32-2.6\m od_python copying C:\mod_python\lib\python\mod_python\Session.py - build\lib.win32-2.6\mo d_python copying C:\mod_python\lib\python\mod_python\testhandler.py - build\lib.win32-2. 6\mod_python copying C:\mod_python\lib\python\mod_python\util.py - build\lib.win32-2.6\mod_p ython copying C:\mod_python\lib\python\mod_python__init__.py - build\lib.win32-2.6\m od_python running build_ext building 'mod_python_so' extension creating build\temp.win32-2.6 creating build\temp.win32-2.6\Release creating build\temp.win32-2.6\Release\mod_python creating build\temp.win32-2.6\Release\mod_python\src C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W 3 /GS- /DNDEBUG -DWIN32 -DNDEBUG -D_WINDOWS -IC:\mod_python\src\include -Ic:\apa che\include -IC:\Python26\include -IC:\Python26\PC /TcC:\mod_python\src\mod_pyth on.c /Fobuild\temp.win32-2.6\Release\mod_python\src\mod_python.obj mod_python.c c:\apache\include\ap_config.h(25) : fatal error C1083: Cannot open include file: 'apr.h': No such file or directory error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' fa iled with exit status 2

    Read the article

  • Win32: What is the status of chunked encoding support in WinHttpReadData?

    - by Cheeso
    The documentation for WinHttpReadData says, regarding HTTP's chunked transfer coding: Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. When the Transfer-Encoding header is present on the WinHttp response, WinHttpReadData strips the chunking information before giving the data to the application. Can anyone decipher this? Q1 First, this text is on the page for WinHttpReadData, which is used to ... read data within an HTTP client application, specifically the response data. So what does it mean when it says Starting in Windows Vista and Windows Server 2008, WinHttp enables applications to perform chunked transfer encoding on data sent to the server. The WinHttpReadData function isn't used with data being sent to the server. It is used when reading data from the server. Consulting the doc for the WinHttpWriteData function, which is used to send data to the server as part of an HTTP request, there is no mention of the chunked transfer capability. Q2 Supposing that I figure out just what the newish chunked transfer support amounts to, how do I get that support? It says that it is new on Vista and WS2008. What happens if I write an app that runs on WS2003, and uses WinHttpReadData and it encounters a chunked response, or WinHttpWriteData, and it wants to send a chunked request? Between the lines, is this documentation saying that I need to link against the Vista-era Windows SDK, or later, in order to get the capability to do chunked encoding? Or is it really impossible on WS2003?, in other words it is the case that the app doing chunked transfer using this library must run on the OS specified? This might read like a rant, but it's not. I truly want to know.

    Read the article

  • asp.net mvc custom model binder

    - by mike
    pleas help guys, my custom model binder which has been working perfectly has starting giving me errors details below An item with the same key has already been added. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ArgumentException: An item with the same key has already been added. Source Error: Line 31: { Line 32: string key = bindingContext.ModelName; Line 33: var doc = base.BindModel(controllerContext, bindingContext) as Document; Line 34: Line 35: // DoBasicValidation(bindingContext, doc); Source File: C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Web\Binders\DocumentModelBinder.cs Line: 33 Stack Trace: [ArgumentException: An item with the same key has already been added.] System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) +51 System.Collections.Generic.Dictionary2.Insert(TKey key, TValue value, Boolean add) +7462172 System.Linq.Enumerable.ToDictionary(IEnumerable1 source, Func2 keySelector, Func2 elementSelector, IEqualityComparer1 comparer) +270 System.Linq.Enumerable.ToDictionary(IEnumerable1 source, Func2 keySelector, IEqualityComparer1 comparer) +102 System.Web.Mvc.ModelBindingContext.get_PropertyMetadata() +157 System.Web.Mvc.DefaultModelBinder.BindProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor) +158 System.Web.Mvc.DefaultModelBinder.BindProperties(ControllerContext controllerContext, ModelBindingContext bindingContext) +90 System.Web.Mvc.DefaultModelBinder.BindComplexElementalModel(ControllerContext controllerContext, ModelBindingContext bindingContext, Object model) +50 System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +1048 System.Web.Mvc.DefaultModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) +280 PitchPortal.Web.Binders.documentModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) in C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Web\Binders\DocumentModelBinder.cs:33 System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +257 System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +109 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +314 System.Web.Mvc.Controller.ExecuteCore() +105 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7 System.Web.Mvc.<c_DisplayClass8.b_4() +34 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21 System.Web.Mvc.Async.<c_DisplayClass81.b_7(IAsyncResult _) +12 System.Web.Mvc.Async.WrappedAsyncResult1.End() +59 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 any ideas guys? Thanks

    Read the article

  • running asp.net 3.5 and asp.net 2.0 in same site

    - by cori
    We're running ASP.Net 2.0 on our corporate web site, and I'd like to get it up to ASP.Net 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.Net 2.0 web project and an .Net 2.0 data access layer project which is used by the site code. Upon opening the projects in a new VS 2008 solution they seemed to be converted to .Net 3.5 with a minimum of fuss - they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly as I would expect given that .Net 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced dlls are now the 3.5 versions. What I would like to do is to update the site piecemeal; as I make modifications to a given page send the 3.5 verson of that page over to our webserver and not update the whole site at once. In testing on our dev box this approach seems to be working fine - the site code is interacting with the .Net 3.5 data access layer without difficulty, a handful of pages are running 3.5 page-behind code (by this I mean that they're running assemblies built in VS 2008 - the site is using single-page assemblies for code behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS2005. Everything looks great. Which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying is wait for m that I haven't considered?

    Read the article

  • How to solve "catastrophic failure" with 32-bit COM component in SysWOW64\cscript or wscript

    - by kcrumley
    I'm trying to run a VBScript script that uses a 7-year-old 3rd-party 32-bit COM component on Windows Server 2008 R2, with the command-line 32-bit script host, SysWOW64\cscript.exe. When I call CreateObject on the class, it appears to be successful, but the very first time I try to use a property or method (I've tried several different ones) on the object, it gives me the "catastrophic failure". I have identical results with SysWOW64\wscript.exe, except, of course, that my error message comes in a msgbox instead of the command-line window. I think this has to do specifically with the 64-bit scripting hosts because of the following: The equivalent Classic ASP script, calling the same component and using 95% of the same code, works correctly on the same server, with IIS configured to support 32-bit COM. The same VBScript works correctly on a 32-bit Windows XP machine and a 32-bit Windows Server 2003 machine. The component fails in exactly the same way on my 64-bit Windows 7 machine. My Google searches for solutions to this problem have mostly turned up a lot of different problems that were solved by putting the COM component into a toolbar in Visual Studio. Obviously, that solution doesn't apply here. My questions are: Is there a core problem that is always behind a "catastrophic failure" from a Windows scripting host calling a COM component? Is there a place in a configuration snap-in, or in the registry, where I need to make a change similar to the change I had to make to the IIS Application pool to "Enable 32-bit Applications"? Is there a general place in the Server 2008 R2 Event Viewer that I should be looking, to see if there are any more details on the failure, in case it turns out to be specific to this component? Thanks in advance.

    Read the article

  • What is the worst programming language you ever worked with? [closed]

    - by Ludwig Weinzierl
    If you have an interesting story to share, please post an answer, but do not abuse this question for bashing a language. We are programmers, and our primary tool is the programming language we use. While there is a lot of discussion about the best one, I'd like to hear your stories about the worst programming languages you ever worked with and I'd like to know exactly what annoyed you. I'd like to collect this stories partly to avoid common pitfalls while designing a language (especially a DSL) and partly to avoid quirky languages in the future in general. This question is not subjective. If a language supports only single character identifiers (see my own answer) this is bad in a non-debatable way. EDIT Some people have raised concerns that this question attracts trolls. Wading through all your answers made one thing clear. The large majority of answers is appropriate, useful and well written. UPDATE 2009-07-01 19:15 GMT The language overview is now complete, covering 103 different languages from 102 answers. I decided to be lax about what counts as a programming language and included anything reasonable. Thank you David for your comments on this. Here are all programming languages covered so far (alphabetical order, linked with answer, new entries in bold): ABAP, all 20th century languages, all drag and drop languages, all proprietary languages, APF, APL (1), AS400, Authorware, Autohotkey, BancaStar, BASIC, Bourne Shell, Brainfuck, C++, Centura Team Developer, Cobol (1), Cold Fusion, Coldfusion, CRM114, Crystal Syntax, CSS, Dataflex 2.3, DB/c DX, dbase II, DCL, Delphi IDE, Doors DXL, DOS batch (1), Excel Macro language, FileMaker, FOCUS, Forth, FORTRAN, FORTRAN 77, HTML, Illustra web blade, Informix 4th Generation Language, Informix Universal Server web blade, INTERCAL, Java, JavaScript (1), JCL (1), karol, LabTalk, Labview, Lingo, LISP, Logo, LOLCODE, LotusScript, m4, Magic II, Makefiles, MapBasic, MaxScript, Meditech Magic, MEL, mIRC Script, MS Access, MUMPS, Oberon, object extensions to C, Objective-C, OPS5, Oz, Perl (1), PHP, PL/SQL, PowerDynamo, PROGRESS 4GL, prova, PS-FOCUS, Python, Regular Expressions, RPG, RPG II, Scheme, ScriptMaker, sendmail.conf, Smalltalk, Smalltalk , SNOBOL, SpeedScript, Sybase PowerBuilder, Symbian C++, System RPL, TCL, TECO, The Visual Software Environment, Tiny praat, TransCAD, troff, uBasic, VB6 (1), VBScript (1), VDF4, Vimscript, Visual Basic (1), Visual C++, Visual Foxpro, VSE, Webspeed, XSLT The answers covering 80386 assembler, VB6 and VBScript have been removed.

    Read the article

  • Excel Plug-In Assembly Loading Problem (Access Denied)

    - by PlagueEditor
    I am developing an Excel 2003 add-in using Visual Studio 2008. My add-in loads fine; however, it loads plug-ins from other C# DLL's. I would like this to be done dynamically at run time so referencing them during development is something I would rather not do. Anyways, anytime I try to load a DLL from the Excel add-in at start up, it throws a security exception. This particular example is HTML Agility Pack. It's not a plug-in but a plug-in's dependency. But nonetheless it won't even load: {System.IO.FileLoadException: Could not load file or assembly 'HtmlAgilityPack, Version=1.4.0.0, Culture=neutral, PublicKeyToken=bd319b19eaf3b43a' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418) File name: 'HtmlAgilityPack, Version=1.4.0.0, Culture=neutral, PublicKeyToken=bd319b19eaf3b43a' ---> System.Security.Policy.PolicyException: Execution permission cannot be acquired. at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission) at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission) at System.Reflection.Assembly.nLoadFile(String path, Evidence evidence) at System.Reflection.Assembly.LoadFile(String path) at Cjack.Druid.SourcePluginManager.LoadPlugin(String filePath) in C:\Documents and Settings\Annie Tormey\My Documents\Visual Studio 2008\Projects\DruidAddin2003\Druid\SourcePluginManager.cs:line 26 } This is extremely frustrating because it runs perfectly fine for Office 2010 and as a standalone application. Thank-you to anyone who can give me an answer as to why this is happening or a solution to fix it. Thank-you for your time.

    Read the article

  • linebreak in url with Bibtex and hyperref package

    - by Tim
    Why is this item not shown properly in my bibliography? @misc{ann, abstract = {ANN is an implbmentation of nearest neighbor search.}, author = {David M. Mount and Sunil Arya}, howpublished = {\url{http://www.cs.umd.edu/~mount/ANN/}}, keywords = {knn}, posted-at = {2010-04-08 00:05:04}, priority = {2}, title = {ANN.}, url = "http://www.cs.umd.edu/~mount/ANN/", year = {2008} } @misc{Nilsson96introductionto, author = {Nilsson, Nils J.}, citeulike-article-id = {6995464}, howpublished = {\url{http://robotics.stanford.edu/people/nilsson/mlbook.html}}, keywords = {*file-import-10-04-11}, posted-at = {2010-04-11 06:52:28}, priority = {2}, title = {Introduction to Machine Learning: An Early Draft of a Proposed Textbook.}, year = {1996} } EDIT: I am using \usepackage{hyperref}, not \usepackage{url}. I don't know what changes I just made made the first item appear properly now @misc{ann, abstract = {ANN is an implbmentation of nearest neighbor search.}, author = {David M. Mount and Sunil Arya}, howpublished = {\url{http://www.cs.umd.edu/~mount/ANN/}}, keywords = {ann}, posted-at = {2010-04-08 00:05:04}, priority = {2}, title = {The \textsc{A}pproximate \textsc{N}earest \textsc{N}eighbor \textsc{S}earching \textsc{L}ibrary.}, url = "http://www.cs.umd.edu/~mount/ANN/", year = {2008} } EDIT: Since I am using hyperref package, it produces error when using url package together with it. So the two cannot work together? I would like to use hyper links inside pdf file, so I would like to use hyperref package instead of url package. I googled a bit, and try \usepackage[hyperindex,breaklinks]{hyperref}, but there is still no line break just as before. How can I do it? Is there conflict in the packages that I am now using?: \usepackage{amsmath} \usepackage{amsfonts} \usepackage[dvips]{graphicx} \usepackage{wrapfig} \graphicspath{{./figs/}} \DeclareGraphicsExtensions{.eps} \usepackage{fixltx2e} \usepackage{array} \usepackage{times} \usepackage{fancyhdr} \usepackage{multirow} \usepackage{algorithmic} \usepackage{algorithm} \usepackage{slashbox} \usepackage{multirow} \usepackage{rotating} \usepackage{longtable} \usepackage[hyperindex,breaklinks]{hyperref} \usepackage{forloop} \usepackage{lscape} \usepackage{supertabular} \usepackage{amssymb} \usepackage{amsthm}

    Read the article

  • Installed VS Express 2010 with .NET 4.0 and now .NET 3.5 setup project adds 15 dependencies

    - by Heckflosse_230
    Hi, I installed VS Express 2010 with .NET 4.0 and now a .NET 3.5 setup project in VS 2008 adds 15 dependencies (below), what is going on??? I did not change anything in the project in between installing VS 2010, VS 2008 is packagin the following files in the project: ==================== Packaging file 'Microsoft.Transactions.Bridge.dll'... Packaging file 'System.Core.dll'... Packaging file 'System.Data.DataSetExtensions.dll'... Packaging file 'System.Data.Entity.dll'... Packaging file 'System.Data.Linq.dll'... Packaging file 'System.Data.Services.Client.dll'... Packaging file 'System.Data.Services.Design.dll'... Packaging file 'System.IdentityModel.Selectors.dll'... Packaging file 'System.IdentityModel.dll'... Packaging file 'System.Runtime.Serialization.dll'... Packaging file 'System.ServiceModel.Web.dll'... Packaging file 'System.ServiceModel.dll'... Packaging file 'System.Web.Abstractions.dll'... Packaging file 'System.Web.Extensions.dll'... Packaging file 'System.Xml.Linq.dll'... ==================== I've uninstalled VS 2010 and .NET 4.0 but to no avail, same problem. Lesson learned: DON'T EXPERIMENT ON DEVELOPMENT MACHINE! Thanks, Chris

    Read the article

  • Creating generic list of instances of a class.

    - by Jim Branson
    I have several projects where I build a dictionary from a small class. I'm using C# 2008, Visual studio 2008 and .net 3.5 This is the code: namespace ReportsTest { class Junk { public static Dictionary<string, string> getPlatKeys() { Dictionary<string, string> retPlatKeys = new Dictionary<string, string>(); SqlConnection conn = new SqlConnection("Data Source=JB55LTARL;Initial Catalog=HldiReports;Integrated Security=True"); SqlDataReader Dr = null; conn.Open(); SqlCommand cmnd = new SqlCommand("SELECT Make, Series, RedesignYear, SeriesName FROM CompPlatformKeys", conn); Dr = cmnd.ExecuteReader(); while (Dr.Read()) { utypPlatKeys rec = new utypPlatKeys(Dr); retPlatKeys.Add(rec.Make + rec.Series + rec.RedesignYear, rec.SeriesName); } conn = null; Dr = null; return retPlatKeys; } } public class utypPlatKeys { public string Make { get; set; } public string Series { get; set; } public string RedesignYear { get; set; } public string SeriesName { get; set; } public utypPlatKeys(SqlDataReader dr) { this.Make = dr.GetInt16(dr.GetOrdinal("Make")).ToString("D3"); this.Series = dr.GetInt16(dr.GetOrdinal("Series")).ToString("D3"); this.RedesignYear = dr.GetInt16(dr.GetOrdinal("RedesignYear")).ToString(); this.SeriesName = dr["SeriesName"].ToString(); } } } The immediate window shows all of the entries in retPlatKeys and if you hover over retPlatKeys after loading it indicates the number of elements like this: "retPlatKeys| Count = 923 which is correct. I went to create a new project using this pattern only now the immediate window says retPlatKeys is out of scope and hovering over retPlatKeys after loading I get something like retPlatKeys|0x0000000002578900. Any help is greatly appreciated.

    Read the article

  • Custom Event - invokation list implementation considerations

    - by M.A. Hanin
    I'm looking for some pointers on implementing Custom Events in VB.NET (Visual Studio 2008, .NET 3.5). I know that "regular" (non-custom) Events are actually Delegates, so I was thinking of using Delegates when implementing a Custom Event. On the other hand, Andrew Troelsen's "Pro VB 2008 and the .NET 3.5 Platform" book uses Collection types in all his Custom Events examples, and Microsoft's sample codes match that line of thought. So my question is: what considerations should I have when choosing one design over the other? What are the pros and cons for each design? Which of these resembles the inner-implementation of "regular" events? Below is a sample code demonstrating the two designs. Public Class SomeClass Private _SomeEventListeners As EventHandler Public Custom Event SomeEvent As EventHandler AddHandler(ByVal value As EventHandler) _SomeEventListeners = [Delegate].Combine(_SomeEventListeners, value) End AddHandler RemoveHandler(ByVal value As EventHandler) _SomeEventListeners = [Delegate].Remove(_SomeEventListeners, value) End RemoveHandler RaiseEvent(ByVal sender As Object, ByVal e As System.EventArgs) _SomeEventListeners.Invoke(sender, e) End RaiseEvent End Event Private _OtherEventListeners As New List(Of EventHandler) Public Custom Event OtherEvent As EventHandler AddHandler(ByVal value As EventHandler) _OtherEventListeners.Add(value) End AddHandler RemoveHandler(ByVal value As EventHandler) _OtherEventListeners.Remove(value) End RemoveHandler RaiseEvent(ByVal sender As Object, ByVal e As System.EventArgs) For Each handler In _OtherEventListeners handler(sender, e) Next End RaiseEvent End Event End Class
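    For comparison with the first design: a plain (non-custom) event is backed by a single multicast delegate field, which is essentially what the Delegate.Combine/Remove version models, while the collection-based version re-implements the invocation list by hand. A rough C# sketch of the delegate-field pattern (not taken from the post, and ignoring thread-safety concerns):

        using System;

        public class SomeClass
        {
            // Backing delegate field: this is what the Combine/Remove design models.
            private EventHandler someEventHandlers;

            public event EventHandler SomeEvent
            {
                add { someEventHandlers = (EventHandler)Delegate.Combine(someEventHandlers, value); }
                remove { someEventHandlers = (EventHandler)Delegate.Remove(someEventHandlers, value); }
            }

            protected void RaiseSomeEvent(EventArgs e)
            {
                EventHandler handlers = someEventHandlers;
                if (handlers != null)
                {
                    // Invoking the multicast delegate calls every subscriber in order,
                    // which is what the List(Of EventHandler) loop does by hand.
                    handlers(this, e);
                }
            }
        }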

    Read the article
