Search Results


  • Install SharePoint 2013 on a two server farm

    - by sreejukg
    When SharePoint 2010 was released, I published an article on how to install SharePoint on a two-server farm. You can find that article at the link below.
    http://weblogs.asp.net/sreejukg/archive/2010/09/28/install-sharepoint-2010-in-a-farm-environment.aspx

    Now it is time for SharePoint 2013. SharePoint 2013 brings lots of improvements to the topologies, but it still supports the two-server architecture. Note that the two-server architecture is meant for small implementations with limited service applications. Refer to the link below to understand more about the SharePoint architecture:
    http://technet.microsoft.com/en-us/sharepoint/fp123594.aspx

    A two-tier farm consists of a database server and a web/application server. In this article I am going to explain how to install SharePoint in a two-server farm. I prepared two servers, both joined to a domain (SP2013Domain). On one server I installed SQL Server 2012 (server name: SP2013_DB); I am going to install SharePoint 2013 on the second server (server name: SP2013). The following domain accounts are created for the installation:

    - SQLService - the service account for SQL Server. Required: a domain user account (or local account).
    - spSetup - the account used to run SharePoint setup and the SharePoint Products Configuration Wizard. Required: a domain user account that is a member of the Administrators group on each server on which setup is run (in our case SP2013), a SQL Server login on the computer running SQL Server, and a member of the Server admin SQL Server security role.
    - spDataaccess - used to configure and manage the server farm; it is the application pool identity for the Central Administration website and runs the Microsoft SharePoint Foundation Workflow Timer Service. Required: a domain user account (other permissions will be set on this account automatically).

    The above is the minimum list of accounts needed for a SharePoint 2013 installation. You will need additional accounts for services, application pool identities for web applications, etc. Refer to the service account requirements for SharePoint at the link below.
    http://technet.microsoft.com/en-us/library/cc263445.aspx

    To install SharePoint 2013, log in to the server using the setup account (spSetup) and run the setup from the installation media. First you need to install the prerequisites; during this process the server may restart several times. The installation wizard will guide you through the installation. In the next step you need to agree to the terms and conditions as usual. Once you click Next, the installation starts immediately and the wizard reports its progress. During the installation you may receive notifications to restart the server; just click the Finish button and the system will be restarted. Once all the prerequisites are installed, you will get a success message. Click Finish to close the dialog.

    Now run the setup from the media again, and this time choose to install SharePoint Server. In the next screen enter the product key and click Continue. Agree to the terms and conditions for SharePoint 2013 and click Continue. Choose the file location as per your policies and click the Install Now button. You will see the installation progress and, once it completes, the installation-completed dialog. Make sure you select the "run products and configuration wizard" option and click Close.
    From the start screen, click Next to start the configuration wizard. You will receive a warning telling you that some services will be stopped during the installation. Select the "Create a new server farm" radio button and click Next. In the next step, enter the configuration database settings: specify the database server details and then the database access account. Here you specify the farm account (spDataaccess); the wizard will grant additional privileges to the account as needed. In the next step you specify the passphrase; note it down, as you will need it whenever you add an additional server to the farm. Next, enter the Central Administration website port and security settings; you can choose a port or just keep the one suggested by the wizard. Click Next and you will see a summary of what you have selected. Verify the selected settings; if you want to change any, click Back and change them, or click Continue to start the configuration.

    The configuration may take some time and you can view its progress. In case of any error you will get the log file; fix the error and start the configuration wizard again. Once the configuration is successful, you will see the success message. Just click Finish. Now you can browse to the Central Administration website. It is good to check the Health Analyzer to review whether there are any errors or warnings; no warnings or errors indicate a good installation.

    The two-server architecture is the smallest configuration for production environments. Small firms with fewer employees can implement SharePoint 2013 using this topology and, as the workload increases, add more servers to the farm without reconstructing everything.
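    As a quick sanity check once the wizard completes, you can enumerate the farm members through the SharePoint server object model. The following is a minimal C# sketch (my addition, not part of the original walkthrough); it assumes it is compiled against Microsoft.SharePoint.dll and run on a farm server under an account with farm access.

        using System;
        using Microsoft.SharePoint.Administration;

        // Lists every server the configuration database knows about, so you can
        // confirm the two-server topology (SP2013 and SP2013_DB) after setup.
        class FarmTopologyCheck
        {
            static void Main()
            {
                SPFarm farm = SPFarm.Local; // null if this machine has not joined a farm
                if (farm == null)
                {
                    Console.WriteLine("This server is not joined to a SharePoint farm.");
                    return;
                }

                foreach (SPServer server in farm.Servers)
                {
                    Console.WriteLine("{0} - role: {1}", server.Name, server.Role);
                }
            }
        }

    On the farm described above, both SP2013 and SP2013_DB should appear in the list.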


  • Taking advantage of Windows Azure CDN and Dynamic Pages in ASP.NET - Caching content from hosted services

    - by Shawn Cicoria
    With the updates to Windows Azure CDN announced this week [1], I wanted to help illustrate the capability with a working sample that serves up dynamic content from an ASP.NET site hosted in a WebRole. First, to get a good overview of the capability you can read the Overview of the Windows Azure CDN [2] content on MSDN.

    When you set up the ability to cache content from a hosted service, the requirement is to provide a path to your role's DNS endpoint that ends in the path "/cdn". Additionally, you then map the CDN to that service. What Windows Azure CDN does is allow you to map requests through the CDN to your host; the CDN will then make a request to your host on your client's behalf. The requirement is still that your client, and any URLs that are to be serviced through the CDN, have to use the CDN DNS name and not your host - no different from what the CDN does for blob storage. The following two URLs are samples of how the client needs to issue the requests.

    Windows Azure hosted service URL: http://myHostedService.cloudapp.net/cdn/music.aspx - for regular "dynamic" content
    Windows Azure CDN URL: http://<identifier>.vo.msecnd.net/music.aspx - for CDN "cacheable" content

    The first URL paths the request directly to your host in the Azure datacenter. The second URL paths the request through the CDN infrastructure, where the CDN will make the determination to request the content on behalf of the client from the Azure datacenter and your host on the /cdn path.

    The big advantage here is that you can apply logic to your content creation. What's important is emitting CDN-friendly headers that allow the CDN to request and re-request only when you designate, based upon its rules of "staleness" as described in the overview page.

    With IIS 7.5 (which you get when running under OS Family "2" in your service configuration) there is an underlying issue when the managed module "OutputCache" is enabled: when that module is loaded and you use HttpResponse.CachePolicy to set the HTTP headers for "max-age" with HttpCacheability "Public", you will NOT get "max-age" emitted as part of the "Cache-Control:" header. Instead, the OutputCache module removes "max-age" and just emits "public". (It works fine when cacheability is set to "private".) To work around the issue and ensure your code emits the full max-age along with the public option, you need to remove the module as follows:

    <system.webServer>
      <modules runAllManagedModulesForAllRequests="true">
        <remove name="OutputCache"/>
      </modules>
    </system.webServer>

    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(rv));

    In the attached solution, the way I approached it was to have a VirtualApplication under the root site with its own web.config. This VirtualApplication is the /cdn of the site, and when deployed to Azure as a web role it will surface as a distinct IIS application - along with a separate AppDomain.

    The CDN sample is a simple Web Forms site whose /default landing page contains three IFrames hosting:
    1. Content direct from the host @ http://xxxx.cloudapp.net/cdn
    2. Content via the CDN @ http://azxxx.vo.msecnd.net
    3. A simple list of recent requests - showing where each request came from.
    When you run the sample, the first time you hit the page both the host and the CDN will cause two initial requests to hit the host. You won't see the first requests in the list because of timing - but if you refresh, you'll see the list shows that you had two initial requests:
    1. sourced direct from the browser to the HOST
    2. sourced via the CDN

    The picture above shows the call-outs of each of those requests - green rows showing requests coming direct to the HOST, yellow showing the CDN request. The IP addresses of the green items are direct from the client, whereas the CDN's are from the CDN datacenter.

    As you refresh the page (hit Ctrl+F5 to force a full refresh and avoid "304 - Not Modified"), you'll see that the request to the HOST gets processed directly, but the request to the CDN endpoint is serviced directly from the CDN and doesn't incur any additional request back to the HOST.

    The following are the headers from the CDN response:

    (Status-Line) HTTP/1.1 200 OK
    Age: 13
    Cache-Control: public, max-age=300
    Connection: keep-alive
    Content-Length: 6212
    Content-Type: image/jpeg; charset=utf-8
    Date: Fri, 11 Mar 2011 20:47:14 GMT
    Expires: Fri, 11 Mar 2011 20:52:01 GMT
    Last-Modified: Fri, 11 Mar 2011 20:47:02 GMT
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET

    The following are the headers from the HOST response:

    (Status-Line) HTTP/1.1 200 OK
    Cache-Control: public, max-age=300
    Content-Length: 6189
    Content-Type: image/jpeg; charset=utf-8
    Date: Fri, 11 Mar 2011 20:47:15 GMT
    Last-Modified: Fri, 11 Mar 2011 20:47:02 GMT
    Server: Microsoft-IIS/7.5
    X-AspNet-Version: 4.0.30319
    X-Powered-By: ASP.NET

    You can see that with the CDN request, the countdown (Age) starts for aging the content.

    The full sample is located here: CDNSampleSite.zip
    [1] http://blogs.msdn.com/b/windowsazure/archive/2011/03/09/now-available-updated-windows-azure-sdk-and-windows-azure-management-portal.aspx
    [2] http://msdn.microsoft.com/en-us/library/ff919703.aspx
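    To make the server side concrete, here is a minimal sketch (my addition, not the code from the attached solution) of a handler you could drop into the /cdn virtual application. It emits the public max-age header discussed above; the handler name and the five-minute lifetime are illustrative assumptions.

        using System;
        using System.Web;

        // Serves dynamic content with CDN-friendly caching headers. Assumes the
        // OutputCache managed module has been removed in web.config (see above),
        // otherwise IIS 7.5 strips "max-age" from public responses.
        public class MusicHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                int cacheMinutes = 5; // how long the CDN may serve this response

                context.Response.ContentType = "text/plain";
                context.Response.Cache.SetCacheability(HttpCacheability.Public);
                context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(cacheMinutes));

                // Dynamic content: the CDN caches whatever was generated at request time.
                context.Response.Write("Generated at " + DateTime.UtcNow.ToString("o"));
            }

            public bool IsReusable { get { return true; } }
        }

    Requesting the handler through the <identifier>.vo.msecnd.net name and refreshing should then show the Age header counting up, exactly as in the header dumps above.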


  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services or SQL Server in a scenario where you access one server that then needs to access another resource - this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work, there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos.

    The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system: the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work, AD has to be able to identify who is allowed to use the token - in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? It can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about).

    To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex - well, it is, and that's why SQL Server tries to do it for you. At start-up it tries to connect to AD and set the SPN on the account it is running as. Clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, and that is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we?

    You might also note that the SPN contains the port number (this isn't a requirement any more in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance), what do you do? Every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself.

    You may also have thought: what happens if I change my service account - won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as - it could be either account - and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs.

    Reading this you will probably be thinking: oh my goodness, this is really difficult. It is. However, while investigating something else today, I found that there is an easy option: use Network Service as your service account. Network Service is a special account tied to the computer, and it appears that Network Service has the update rights in AD to set an SPN mapping for the computer account.
    This then allows the SPN mapping to work. I believe this also works for the local system account.

    To get all the SPNs in your AD, run the following (it could produce a large file, so you might want to restrict it to a specific OU or CN):

    ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

    You will read in the links below how you can have SQL register the SPN:

    How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
    Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
    Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

    Summary: The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, so you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so fallen back to this domain account approach. So, in summary, to get Kerberos to work, try using the Network Service or Local System accounts.

    For a great post from Adam Saxton of the SQL Server support team, go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx
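    If you would rather query AD programmatically than scan the ldifde dump, something along the lines of the following C# sketch lists the accounts holding a given SPN. This is my illustration, not from the post; the SPN string is a placeholder, and it assumes a domain-joined machine and a reference to System.DirectoryServices.

        using System;
        using System.DirectoryServices;

        // Finds every account that carries a particular SPN. Two or more hits
        // mean Kerberos cannot tell which account the service runs as, and
        // authentication falls back to NTLM - the duplicate problem above.
        class SpnSearch
        {
            static void Main()
            {
                string spn = "MSSQLSvc/MySQL.mydomain.com:1433"; // placeholder SPN

                using (var rootDse = new DirectoryEntry("LDAP://RootDSE"))
                {
                    string namingContext = (string)rootDse.Properties["defaultNamingContext"].Value;
                    using (var searchRoot = new DirectoryEntry("LDAP://" + namingContext))
                    using (var searcher = new DirectorySearcher(searchRoot))
                    {
                        searcher.Filter = "(servicePrincipalName=" + spn + ")";
                        searcher.PropertiesToLoad.Add("sAMAccountName");

                        foreach (SearchResult result in searcher.FindAll())
                        {
                            Console.WriteLine(result.Properties["sAMAccountName"][0]);
                        }
                    }
                }
            }
        }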


  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Organizations see the need for computing infrastructures that they can "rent" or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment:

    - Private data (do not want to send or store sensitive data off-site)
    - High dollar investment in current infrastructure
    - Applications currently running well, but which may need additional periodic capacity
    - Current applications not designed in a stateless fashion

    In these situations, a "hybrid" approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping the majority of the computing function local to the organization while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A "hybrid" architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation: There are multiple methods for implementing a hybrid architecture, on a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The "client" in this case might be a web-based codeset actually stored on another system (which acts as a client, the user's device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it's the most brittle: any change in the architecture must be reflected on each client, though this can be mitigated by using a centralized system as the client, such as in the web scenario.

    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into "services" that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities; code calls then come from a series of in-process client applications. In this pattern you move the "client" interface into the server application logic. If you do not wish to change the application itself, you can "layer" the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture. If each system offers a "service" of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.
    There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.

    Connection resiliency - Applications on-premise normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether for a centralized client or a distributed one, the code should be able to handle extended retry logic (a sketch of this appears at the end of this post).

    Authorization and access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric ACS feature to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.

    Consistency and concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.

    Resources:
    How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx
    General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
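    For the connection resiliency point, the "extended retry logic" could be as simple as the following sketch; the attempt count, the back-off curve and the choice of TimeoutException as the transient failure are all assumptions to adapt to your own failure modes.

        using System;
        using System.Threading;

        // Wraps a cross-boundary call in retries with exponential back-off.
        static class Resilient
        {
            public static T Execute<T>(Func<T> operation, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (TimeoutException)
                    {
                        if (attempt >= maxAttempts) throw;
                        // Wait 2, 4, 8... seconds before the next attempt.
                        Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                    }
                }
            }
        }

        // Usage: var orders = Resilient.Execute(() => client.GetOrders(), 5);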


  • XNA Notes 004

    - by George Clingerman
    The XNA community has been crazy busy again. It always makes me feel like such a slacker collecting all of these notes as I see the tremendous output from people all over the world; it's incredible and humbling. There are some amazingly skilled people working with XNA.

    On another note, I'm going to take a minute to get on my soapbox and say: if you are developing ANYTHING and are not using some sort of source/revision control, START IMMEDIATELY. This applies to teams of one. Projects for fun. And "I back up my hard drive" or "I use dropbox!" does NOT count as using source control. You'll be doing yourself a HUGE favor if you find one, learn to use it and integrate it into your everyday workflow. I personally use Subversion. It's hosted offsite at xp.dev.com and I use TortoiseSVN as my front end to interface with the repository. It's simple and easy to use and has saved me from myself so many times. Honestly, get set up with some type of source control immediately. If you don't understand how, grab another developer that does and have them walk you through setup and the basics of using it. Ok, I'm done. On to the notes...

    The XNA Team
    - Only 14 days left to submit XNA GS 3.1 games! http://blogs.msdn.com/b/xna/archive/2011/01/24/14-days-left-to-submit-xna-gs-3-1-games-on-app-hub.aspx
    - Shawn Hargreaves shares some great information on exception handling best practices on the XNA forums http://forums.create.msdn.com/forums/p/73333/448556.aspx#448556 http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx

    XNA MVPs
    - @CatalinZima gives us a peek at Chickens Can't Fly http://www.amusedsloth.com/games/chickens-cant-fly/
    - Screen-space deformations in XNA for WP7 from Catalin Zima http://twitter.com/CatalinZima/statuses/30313083767357440 http://www.amusedsloth.com/2011/01/screen-space-deformations-in-xna-for-windows-phone-7/

    XNA Developers
    - Going to GDC? Don't miss the XNA panel hosted by a plethora of well-known XNA community names! http://forums.create.msdn.com/forums/p/73576/448842.aspx#448842
    - MasterBlud does an interview with @Xalterax http://twitter.com/MasterBlud/statuses/28510774812999680 http://www.xboxhornet.com/wordpress/?p=7102
    - Luke Schneider of Radiangames posts about The Radiangames Style http://radiangames.com/?p=532
    - Holmade Games had a "vote for the new playable character" poll going for Hurdle Turtle this past week http://holmadegames.blogspot.com/2011/01/new-level-pack-vote-for-your-favorite.html
    - IGF v0.1.0.0 release post mortem http://indiefreaks.com/2011/01/24/v0-1-0-0-release-post-mortem/
    - James and Super Dunner post Good Morning Gato #46 and a look at the Vampire Smile box art http://www.ska-studios.com/2011/01/21/good-morning-gato-46/ http://www.ska-studios.com/2011/01/20/vampire-smiles-digital-box-art/
    - Alfredo Di Napoli creates Cow Pong using XNA and F#! http://alfredodinapoli.wordpress.com/2011/01/25/cow-pong-a-simple-xna-game-in-f/

    Xbox LIVE Indie Games
    - Signed In Podcast posts Episode #61 http://www.signedinpodcast.com/?p=559
    - Gamergeddon posts the January 23rd edition of the XBLIG Round Up http://www.gamergeddon.com/2011/01/23/xbox-indie-games-round-up-january-23rd/
    - Indie Asylum posts an Antipole review http://www.indieasylum.com/reviews/38-xblig/112-antipole.html
    - 1UpOrPoison reviews OSR Unhinged http://www.1uporpoison.com/xblig/osr-unhinged/
    - DarkstarMatryx reviews Warbirds at Work http://www.darkstarmatryx.com/?p=185
    - Review of Aban Hawkins and the 1000 Spikes http://www.armlessoctopus.com/2011/01/24/xbox-indie-review-aban-hawkins-the-1000-spikes/
    - XboxHornet reviews Corrupted http://www.xboxhornet.com/wordpress/?p=7123
    - XBLIG 2010: The Best And The Worst http://www.gamasutra.com/blogs/JamieMann/20110121/6840/
    - Xbox LIVE Arcade sales analysis - an interesting read for XBLIG developers wondering how they're doing compared to Arcade http://www.gamerbytes.com/2011/01/xbla_sales_analysis_dec_2010.php
    - Best of the Indies for January 25th http://www.thisisfakediy.co.uk/articles/games/best-of-the-indies-25th-january-2011
    - Decimation X3 appears as an arcade machine in the wild! http://twitter.com/mdoucette/statuses/29605562484260864

    XNA Game Development
    - Guiseppe De Francesco (@PinoEire) announced Torque X 4.0 CEV is now in the RC phase! http://www.garagegames.com/community/blogs/view/20779
    - DrMistry of mstargames shares his struggle (and mistakes) with learning to use the Content Pipeline http://www.mstargames.co.uk/mistryblogmain/35-genblog/181-pontent-cipeline-more-like-it.html
    - New tutorial posted: XNA 2D Basic Collision Detection with Rotation from Ioannis Panagopoulos http://www.progware.org/Blog/post/XNA-2D-Basic-Collision-Detection-with-Rotation.aspx
    - Sgt. Conker roars to life! Doing a much better (and prettier) job of collecting XNA news from around the interwebs. http://www.sgtconker.com/ http://www.sgtconker.com/2011/01/dedication-for-captain-boki/ http://www.sgtconker.com/2011/01/screen-space-deformations-in-xna-for-windows-phone-7/ http://www.sgtconker.com/2011/01/xna-4-0-light-pre-pass-2/ http://www.sgtconker.com/2011/01/indiefreaks-game-framework-0-1-0-0-released/
    - Offering a little free publicity for XBLIGs http://forums.create.msdn.com/forums/p/73465/448321.aspx#448321
    - Ben Kane writes about building loot tables from Excel using the Content Pipeline http://benkane.wordpress.com/2011/01/23/building-loot-tables-from-excel-using-the-content-pipeline/
    - Good tips on attracting a game artist AND an offer to create your cover art for FREE http://forums.create.msdn.com/forums/t/72998.aspx
    - If you're an XBLIG developer keeping your eye on places to release on the PC, you might want to watch the IndieCity blog. Seems like these guys are well on their way to constructing something worth watching. http://www.indiecity.com/blog/
    - DVMGames spotted a new crowd-funding site for indies http://twitter.com/DVMGames/statuses/29947274767372289 http://www.8bitfunding.com/
    - Transmute continues to make progress, and there's a nice dev blog to follow along here: http://forgottenstarstudios.com/blog/


  • LINQ und ArcObjects

    - by Marko Apfel
    LINQ and ArcObjects

    Motivation

    LINQ [1] (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML and many more. Like SQL, LINQ offers a declarative notation for problem solving - that is, you do not have to describe in detail how a task is to be solved, only what is to be solved. That frees the developer, on the query side, from error-prone iterator constructs.

    Ideally, of course, one would be able to use these capabilities with features in ArcObjects programming as well. A construct like the following would then be conceivable:

    var largeFeatures = from feature in features where (feature.GetValue("SHAPE_Area").ToDouble() > 3000) select feature;

    or its equivalent as a lambda expression:

    var largeFeatures = features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    For this, a suitable provider must be available that manages the corresponding iterator logic. That is easier than it looks at first glance - you only have to deliver the desired entities as IEnumerable<IFeature>. (Note: don't be surprised - I declared the methods GetValue() and ToDouble() as extension methods along the way.)

    In the background, LINQ automatically builds a state machine [2] whose execution is deferred (deferred execution) [3] - meaning that instantiation and processing only take place when entities are actually requested (foreach, Count(), ToList(), ...), even though the assignment happened somewhere else entirely. Especially when iterating through the entities multiple times, you will rub your eyes in astonishment during your first debugging sessions when the execution pointer jumps back into the iterator logic as if by magic.

    Implementation

    A very compact piece of logic for constructing an IEnumerable<IFeature> can be realized by running through an IFeatureCursor and emitting the individual features with yield. For ease of use, I put the logic into an extension method GetFeatures() for IFeatureClass:

    public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass, IQueryFilter queryFilter, RecyclingPolicy policy)
    {
        IFeatureCursor featureCursor = featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
        IFeature feature;
        while (null != (feature = featureCursor.NextFeature()))
        {
            yield return feature;
        }
        //this is skipped in unit tests with cursor-mock
        if (Marshal.IsComObject(featureCursor))
        {
            Marshal.ReleaseComObject(featureCursor);
        }
    }

    With this, you can now create the IEnumerable<IFeature> very simply:

    IEnumerable<IFeature> features = _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle);

    You have to be a little careful when using a "recycling cursor". After a deferred execution, you must not iterate over the features again in the same context; in that case only the content of the last (recycled) feature is delivered, and all features within the set are identical. The following construct would therefore be critical:

    largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() already iterates through the list once, so the cursor has already been moved through the features; the ForEach extension method then always delivers the same feature. In situations like this, no recycling cursor may be used.

    Running foreach multiple times, on the other hand, is not a problem, because the state machine is instantiated anew each time and the cursor is therefore traversed again - that is the magic mentioned above.

    Outlook

    You can now go a step further and tackle your own implementations of the IEnumerable<IFeature> interface. Only the method and the property for accessing the enumerator need to be coded. In the enumerator itself, the Reset() method triggers re-execution of the search - for this you pass, for example, an appropriate delegate into the constructor:

    new FeatureEnumerator(
        _featureClass,
        featureClass => featureClass.Search(_filter, isRecyclingCursor));

    and call it on Reset:

    public void Reset()
    {
        _featureCursor = _resetCursor(_t);
    }

    In this way, enumerators can be implemented for completely different scenarios, all of which are used identically on the client side following the pattern above. Cursors, SelectionSets and so on thereby merge into a single concept, and the reusability of code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests - a big step towards higher software quality. [4]

    Conclusion

    Nevertheless, caution is advised when using these constructs in performance-critical queries. Because a state machine is managed in the background, there is some overhead whose processing costs additional time - roughly 20 to 100 percent. Beyond that, working without recycling can also quickly become a performance gap. However, declarative LINQ code is much more elegant, less error-prone and more maintainable than manually iterating, comparing and building a result list. In my experience the code size shrinks by 75 to 90 percent on average! For that I am happy to wait a few milliseconds longer. As so often, you have to weigh maintainability against performance - though for me, maintainability is increasingly gaining priority. Usually it is not the code but the user that is the bottleneck in the process anyway.

    Demo source code: support.esri.de

    [1] Wikipedia: LINQ http://de.wikipedia.org/wiki/LINQ
    [2] Wikipedia: state machine http://de.wikipedia.org/wiki/Endlicher_Automat
    [3] Charlie Calvert's blog: LINQ and Deferred Execution http://blogs.msdn.com/b/charlie/archive/2007/12/09/deferred-execution.aspx
    [4] Clean Code Developer - yellow grade/automated unit tests http://www.clean-code-developer.de/Gelber-Grad.ashx#Automatisierte_Unit_Tests_8
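    The post uses GetValue() and ToDouble() without showing them. A minimal sketch of how such extension methods might look is below - the field lookup and the invariant-culture conversion are my assumptions, not the author's code.

    using System;
    using System.Globalization;
    using ESRI.ArcGIS.Geodatabase;

    // Hypothetical helpers assumed by the LINQ query above: GetValue()
    // resolves a field by name, ToDouble() converts the boxed value.
    public static class FeatureExtensions
    {
        public static object GetValue(this IFeature feature, string fieldName)
        {
            int index = feature.Fields.FindField(fieldName);
            if (index < 0)
            {
                throw new ArgumentException("Unknown field: " + fieldName, "fieldName");
            }
            return feature.get_Value(index);
        }

        public static double ToDouble(this object value)
        {
            return Convert.ToDouble(value, CultureInfo.InvariantCulture);
        }
    }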


  • High Jinks, Hi Jacks, Exceptional DBA Awards and PASS

    - by Rodney
    The countdown to PASS has counted down. The day after tomorrow I will board a plane, like many others, on my way for the 4th year in a row to the SQL PASS Summit. The anticipation has been excruciating, but luckily I have this little thing called a day job as a DBA that has kept me busy and not thinking too much about the event. Well, that is not exactly true, since my beautiful wife works for PASS, so we get to talk about SQL from the time we wake up until late in the evening. I would not have it any other way, and I feel very fortunate to be a part of this great event and to have been chosen as the Exceptional DBA Award judge, also for the 4th year in a row. This year I will again be tasked with presenting the award to the winner, Mr. Jeff Moden, and it will be a true honor to meet him in person, as I have read many of his articles on SSC and have attended his session at PASS previously. The speech is all ready, but one item remains, which will be a surprise to all who attend the party on Tuesday night in Seattle (see links below).

    Let's face it: Exceptional DBAs everywhere work very hard protecting our data stores, tuning queries, mentoring, saving money, installing clusters, etc., and once in a while there is time to be exceptionally non-professional and have a bit of fun.

    One incident that happened this year that falls under the High Jinks category was when my network admin asked if I could Telnet into a SQL instance and see if I could make the connection through the firewall that he had just configured. I was able to establish a connection on port 1433, and it occurred to me that it would be very interesting if I could actually run T-SQL queries via a Telnet session, much like you might do with an SMTP server. With that thought, I proceeded to demonstrate this could be possible by convincing my senior DBA, Shawn McGehee, that I was able to do so. At first he did not believe me. It shook his world view. It was inconceivable. What I had done, behind the scenes of course, was to copy and rename SQLCMD.exe to Telnet.exe and use it to connect and run a simple "SELECT * FROM sys.databases" on the SQL instance. I think if it had been anyone other than Shawn I could have extended this ruse indefinitely, but he caught on within 30 seconds. It was a fun thirty seconds though.

    On the High Jacks side of the house, which has really merged into SQL HACKS: after several years of struggling with how to connect to an untrusted domain (like in a DMZ) with a Windows account in SSMS, I finally stumbled upon a solution that does away with the requirement to use SQL authentication. While "runas" is a great command for running an application with a higher-privileged account, I had not previously been able to figure out how to connect to the remote domain with SSMS and "runas" - it never connected, and caused a login failure every time for the remote Windows domain account. Then I ran across an option for "runas": "/netonly". This option postpones the login until a connection is made, and only then passes the remote login you supply when you first launch SSMS with the "runas" command.

    So a typical shortcut would look like:

    C:\Windows\System32\runas.exe /netonly /user:remotedomain.com\rodlandrum "C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe"

    You will want to make sure the passwords are synced between the two domains, your local domain and the remote domain, otherwise you may have account lockout issues, but I have found in weeks of testing that this is a stable solution.

    Now it is time to get ready to head for Seattle. Please, if you see me (@SQLBeat) or my wife (@Karlakay22), run up and high-five me (wait.. High Jinks.. High Jacks.. High Fives.. need to change the title) or give me a big bear hug if you are strong enough to lift me off the ground. And if you actually do that, I will think you are awesome and will not embarrass you by crying out for help or complaining of a broken back or sciatic nerve damage.

    And now the links to others who have all of the details.

    First, for the MVP Deep Dives 2, in which, like John, I was lucky enough to participate this year:
    http://www.simple-talk.com/community/blogs/johnm/archive/2011/09/29/103577.aspx

    And the details of the SSC party where the Exceptional DBA of 2011, Jeff Moden, will be awarded:
    http://www.simple-talk.com/community/blogs/rebecca_amos/archive/2011/10/05/103661.aspx

    Cheers!
    Rodney


  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise - it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational-style operations such as sum, average and so forth. Also, Oracle SQL supports special functions called analytic SQL functions; in ODI 12c you can now use a specially configured aggregation component or the expression component for these.

    In database systems, an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning - that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping. For how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here - the viewlet illustrates a simple aggregation being built, and then some Oracle analytic SQL such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) built using both the aggregate component and the expression component.

    In 11g you used to just write the aggregate expression directly on the target. This made life easy in some cases, but it wasn't a very obvious gesture, plus it had other drawbacks with the ordering of transformations (aggregate before join/lookup, after set, and so forth) and with supporting analytic SQL, for example - there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator.

    The aggregate component has a few interesting aspects.

    1. First and foremost, it defines the attributes projected from it - ODI will automatically perform the grouping; all you do is define the aggregation expressions for the columns that are aggregated. In 12c you can control this automatic grouping behavior so that you get the code you desire: you can indicate that an attribute should not be included in the group by, which is what I did in the analytic SQL example using the aggregate component.

    2. The component has a few other properties of interest: it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the HAVING clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).

    3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by clause from an example in the Oracle database data warehousing guide, where the SUM aggregate function is used along with the CUBE function in the group by clause.

    The SQL I am trying to generate looks like the following, from the data warehousing guide:

    SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
           TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
    FROM sales, customers, times, channels, countries
    WHERE sales.time_id = times.time_id
      AND sales.cust_id = customers.cust_id
      AND sales.channel_id = channels.channel_id
      AND customers.country_id = countries.country_id
      AND channels.channel_desc IN ('Direct Sales', 'Internet')
      AND times.calendar_month_desc IN ('2000-09', '2000-10')
      AND countries.country_iso_code IN ('GB', 'US')
    GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters and the joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by as follows. In the above mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides a familiar design experience.

    The example below illustrates flow design (basic arbitrary ordering) - a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those having the maximum commission projected.

    Hopefully this has given you a taster of some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs, with the tool (rather than the interface designer) taking care of the realization using ODI's knowledge modules. Interested to know if a deep dive into each component would be interesting for folks. Any thoughts?


  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power it can be overkill for simple claim assertion, and in the REST world WIF doesn't have support for the latest token formats. Simple Web Token (SWT) may not be around forever, but while it's here it's a nice, easy format which you can manipulate in .NET without having to go down the WIF route.

    Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: when ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload; the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0:

    <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
      <t:Lifetime>
        <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>
        <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>
      </t:Lifetime>
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
          <Address>http://localhost/x.y.z</Address>
        </EndpointReference>
      </wsp:AppliesTo>
      <t:RequestedSecurityToken>
        <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>
      </t:RequestedSecurityToken>
      <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>
      <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
      <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
    </t:RequestSecurityTokenResponse>

    Reading the SWT is as simple as base64-decoding, then URL-decoding, the element value:

    var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);
    var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();
    var tokenBytes = Convert.FromBase64String(binaryToken.Value);
    var token = Encoding.UTF8.GetString(tokenBytes);
    var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value;

    The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, all in query-string format. Separate them on the ampersand, and you can write out the claim values in your logged-in page:

    var decoded = HttpUtility.UrlDecode(token);
    foreach (var part in decoded.Split('&'))
    {
        Response.Write("<pre>" + part + "</pre><br/>");
    }

    - which will produce something like this:

    http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z
    http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows
    http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
    http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
    http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust
    Audience=http://localhost/x.y.z
    ExpiresOn=1346402225
    Issuer=https://x-y-z.accesscontrol.windows.net/
    HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts=

    The HMAC hash lets you validate the token to ensure it hasn't been tampered with. You'll need the token signing key from ACS; then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector.

    Interestingly, ACS lets you set a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page and set that as your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided and want to quickly see the results.
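    The linked article walks through full validation; as a rough outline, a hedged sketch of just the HMAC check might look like this. It assumes token is the raw, still URL-encoded SWT string and base64Key is the token signing key configured for the relying party in ACS; a real validator would also check Issuer, Audience and ExpiresOn.

    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    static class SwtValidator
    {
        // The HMAC covers everything before the trailing "&HMACSHA256=..." pair.
        public static bool IsValid(string token, string base64Key)
        {
            const string marker = "&HMACSHA256=";
            int index = token.LastIndexOf(marker, StringComparison.Ordinal);
            if (index < 0) return false;

            string signedPart = token.Substring(0, index);
            string signature = HttpUtility.UrlDecode(token.Substring(index + marker.Length));

            using (var hmac = new HMACSHA256(Convert.FromBase64String(base64Key)))
            {
                byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(signedPart));
                return Convert.ToBase64String(hash) == signature;
            }
        }
    }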


  • Windows 7 laptop with two active network connections will not perform DNS AAAA lookup under certain conditions

    - by Jeff Loughridge
    My laptop has two network interfaces. The Ethernet interface connects directly to my provider's edge router; it obtains an IPv6 address via SLAAC, and I manually set an IPv6 DNS server. The wireless interface connects to a CPE router that doesn't understand IPv6. If the wireless interface is disabled, I can reach the IPv6 Internet with no problems using the Ethernet interface. I run into problems when both interfaces are enabled and the wireless interface gets its IPv4 DNS server via DHCP. Let's look at two scenarios.

    1. Wireless interface obtains its IPv4 DNS server via DHCP - The CPE router (192.168.0.1) sends its own address as the DNS server. In this scenario, Windows 7 will not perform AAAA lookups. The browser uses IPv4 transit to reach dual-stack web sites. I can't reach IPv6-only web sites using domain names; I can reach IPv6-enabled web sites using IPv6 literals instead of the domain name.

    2. Wireless interface is manually configured with the OpenDNS DNS server - Windows 7 performs AAAA lookups using IPv6 transit (via the Ethernet). Everything works fine.

    My dual-homed set-up is definitely not standard. Still, the behavior is very strange to me. A valid IPv6 configuration exists on my Ethernet interface. Why won't Windows attempt AAAA lookups in scenario #1? I've included the output of ipconfig /all and netstat -rn.

    C:\Program Files\Console>ipconfig /all

    Windows IP Configuration
       Host Name . . . . . . . . . . . . : jake
       Primary Dns Suffix  . . . . . . . :
       Node Type . . . . . . . . . . . . : Hybrid
       IP Routing Enabled. . . . . . . . : No
       WINS Proxy Enabled. . . . . . . . : No
       DNS Suffix Search List. . . . . . : res.openband.net

    Wireless LAN adapter Wireless Network Connection 2:
       Media State . . . . . . . . . . . : Media disconnected
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Microsoft Virtual WiFi Miniport Adapter
       Physical Address. . . . . . . . . : C0-CB-38-06-54-F9
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes

    Wireless LAN adapter Wireless Network Connection:
       Connection-specific DNS Suffix  . : res.openband.net
       Description . . . . . . . . . . . : DW1520 Wireless-N WLAN Half-Mini Card
       Physical Address. . . . . . . . . : C0-CB-38-06-54-F9
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::fc39:9293:7d01:4a75%13(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.0.105(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Lease Obtained. . . . . . . . . . : Wednesday, July 11, 2012 7:35:21 AM
       Lease Expires . . . . . . . . . . : Thursday, July 12, 2012 9:49:46 AM
       Default Gateway . . . . . . . . . : 192.168.0.1
       DHCP Server . . . . . . . . . . . : 192.168.0.1
       DHCPv6 IAID . . . . . . . . . . . : 364956472
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-80-F8-14-5C-26-0A-03-23-5C
       DNS Servers . . . . . . . . . . . : 208.67.222.222
       NetBIOS over Tcpip. . . . . . . . : Enabled

    Ethernet adapter Local Area Connection:
       Connection-specific DNS Suffix  . : res.openband.net
       Description . . . . . . . . . . . : Intel(R) 82577LM Gigabit Network Connection
       Physical Address. . . . . . . . . : 5C-26-0A-03-23-5C
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       IPv6 Address. . . . . . . . . . . : 2607:2600:1:850:c0e9:211a:fd05:4e0b(Preferred)
       Temporary IPv6 Address. . . . . . : 2607:2600:1:850:3d29:1839:62db:c4c1(Preferred)
       Link-local IPv6 Address . . . . . : fe80::c0e9:211a:fd05:4e0b%12(Preferred)
       IPv4 Address. . . . . . . . . . . : 10.52.2.51(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.254.0
       Lease Obtained. . . . . . . . . . : Monday, July 09, 2012 8:55:07 AM
       Lease Expires . . . . . . . . . . : Thursday, July 12, 2012 7:30:05 AM
       Default Gateway . . . . . . . . . : fe80::214:6aff:fe51:7f3f%12
                                           10.52.2.1
       DHCP Server . . . . . . . . . . . : 216.40.77.244
       DNS Servers . . . . . . . . . . . : 2620:0:ccc::2
                                           2620:0:ccd::2
                                           216.40.77.126
                                           216.40.77.244
       NetBIOS over Tcpip. . . . . . . . : Enabled

    Ethernet adapter VMware Network Adapter VMnet1:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet1
       Physical Address. . . . . . . . . : 00-50-56-C0-00-01
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::4c61:495b:229e:281e%14(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.40.1(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 469782614
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-80-F8-14-5C-26-0A-03-23-5C
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Enabled

    Ethernet adapter VMware Network Adapter VMnet8:
       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet8
       Physical Address. . . . . . . . . : 00-50-56-C0-00-08
       DHCP Enabled. . . . . . . . . . . : No
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::f996:61eb:8c00:45e6%15(Preferred)
       IPv4 Address. . . . . . . . . . . : 192.168.17.1(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.255.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 486559830
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-80-F8-14-5C-26-0A-03-23-5C
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Enabled

    C:\Program Files\Console>netstat -rn
    ===========================================================================
    Interface List
     17...c0 cb 38 06 54 f9 ......Microsoft Virtual WiFi Miniport Adapter
     13...c0 cb 38 06 54 f9 ......DW1520 Wireless-N WLAN Half-Mini Card
     12...5c 26 0a 03 23 5c ......Intel(R) 82577LM Gigabit Network Connection
     11...5c ac 4c f8 b8 55 ......Bluetooth Device (Personal Area Network)
     14...00 50 56 c0 00 01 ......VMware Virtual Ethernet Adapter for VMnet1
     15...00 50 56 c0 00 08 ......VMware Virtual Ethernet Adapter for VMnet8
      1...........................Software Loopback Interface 1
    ===========================================================================

    IPv4 Route Table
    ===========================================================================
    Active Routes:
    Network Destination        Netmask          Gateway       Interface  Metric
    0.0.0.0          0.0.0.0          10.52.2.1        10.52.2.51       10
    0.0.0.0          0.0.0.0          192.168.0.1      192.168.0.105    100
    10.52.2.0        255.255.254.0    On-link          10.52.2.51       261
    10.52.2.51       255.255.255.255  On-link          10.52.2.51       261
    10.52.3.255      255.255.255.255  On-link          10.52.2.51       261
    127.0.0.0        255.0.0.0        On-link          127.0.0.1        306
    127.0.0.1        255.255.255.255  On-link          127.0.0.1        306
    127.255.255.255  255.255.255.255  On-link          127.0.0.1        306
    192.168.0.0      255.255.255.0    On-link          192.168.0.105    306
    192.168.0.105    255.255.255.255  On-link          192.168.0.105    306
    192.168.0.255    255.255.255.255  On-link          192.168.0.105    306
    192.168.17.0     255.255.255.0    On-link          192.168.17.1     276
    192.168.17.1     255.255.255.255  On-link          192.168.17.1     276
    192.168.17.255   255.255.255.255  On-link          192.168.17.1     276
    192.168.40.0     255.255.255.0    On-link          192.168.40.1     276
    192.168.40.1     255.255.255.255  On-link          192.168.40.1     276
    192.168.40.255   255.255.255.255  On-link          192.168.40.1     276
    224.0.0.0        240.0.0.0        On-link          127.0.0.1        306
    224.0.0.0        240.0.0.0        On-link          10.52.2.51       261
    224.0.0.0        240.0.0.0        On-link          192.168.0.105    306
    224.0.0.0        240.0.0.0        On-link          192.168.40.1     276
    224.0.0.0        240.0.0.0        On-link          192.168.17.1     276
    255.255.255.255  255.255.255.255  On-link          127.0.0.1        306
    255.255.255.255  255.255.255.255  On-link          10.52.2.51       261
    255.255.255.255  255.255.255.255  On-link          192.168.0.105    306
    255.255.255.255  255.255.255.255  On-link          192.168.40.1     276
    255.255.255.255  255.255.255.255  On-link          192.168.17.1     276
    ===========================================================================
    Persistent Routes:
      None

    IPv6 Route Table
    ===========================================================================
    Active Routes:
    If  Metric  Network Destination                           Gateway
    12  261     ::/0                                          fe80::214:6aff:fe51:7f3f
     1  306     ::1/128                                       On-link
    12  13      2607:2600:1:850::/64                          On-link
    12  261     2607:2600:1:850:3d29:1839:62db:c4c1/128       On-link
    12  261     2607:2600:1:850:c0e9:211a:fd05:4e0b/128       On-link
    12  261     fe80::/64                                     On-link
    13  281     fe80::/64                                     On-link
    14  276     fe80::/64                                     On-link
    15  276     fe80::/64                                     On-link
    14  276     fe80::4c61:495b:229e:281e/128                 On-link
    12  261     fe80::c0e9:211a:fd05:4e0b/128                 On-link
    15  276     fe80::f996:61eb:8c00:45e6/128                 On-link
    13  281     fe80::fc39:9293:7d01:4a75/128                 On-link
     1  306     ff00::/8                                      On-link
    12  261     ff00::/8                                      On-link
    13  281     ff00::/8                                      On-link
    14  276     ff00::/8                                      On-link
    15  276     ff00::/8                                      On-link
    ===========================================================================
    Persistent Routes:
      None
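    One way to observe the behavior described in scenario #1 from code (my suggestion, not part of the original question) is to ask the resolver for all records of a dual-stack name and see whether any IPv6 answers come back; the host name below is just an example.

    using System;
    using System.Net;
    using System.Net.Sockets;

    // Prints each address the resolver returns, tagged A (IPv4) or AAAA (IPv6).
    // In scenario #1 above you would expect to see no AAAA results.
    class AaaaLookupTest
    {
        static void Main()
        {
            foreach (IPAddress address in Dns.GetHostAddresses("www.google.com"))
            {
                string family = address.AddressFamily == AddressFamily.InterNetworkV6 ? "AAAA" : "A";
                Console.WriteLine("{0}: {1}", family, address);
            }
        }
    }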


  • Validating an XML schema with empty attributes

    - by AdRock
    I am having trouble validating my XML schema. I get these errors on the schema:

        113:18 s4s-elt-invalid-content.1: The content of '#AnonType_user' is invalid.
        164:17 s4s-elt-invalid-content.1: The content of '#AnonType_festival' is invalid.
        Element 'sequence' is invalid, misplaced, or occurs too often.

    Because of those two errors, I am getting loads of the same error. This is because the attribute id of the festival tag may be empty when there is no data for that festival:

        cvc-datatype-valid.1.2.1: '' is not a valid value for 'integer'.
        cvc-attribute.3: The value '' of attribute 'id' on element 'festival' is not valid with respect to its type, 'integer'.

    The lines in the schema causing the problems are:

        <xs:element name="user">
          <xs:complexType>
            <xs:attribute name="id" type="xs:integer"/>
            <xs:sequence>
              <xs:element ref="personal"/>
              <xs:element ref="account"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

        <xs:element name="festival">
          <xs:complexType>
            <xs:attribute name="id" type="xs:integer" user="optional"/>
            <xs:sequence>
              <xs:element ref="event"/>
              <xs:element ref="contact"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    This is a snippet from my XML file. One user has a festival and the other doesn't:

        <member>
          <user id="3">
            <personal>
              <name>Skye Saunders</name>
              <sex>Female</sex>
              <address1>31 Anns Court</address1>
              <address2></address2>
              <city>Cirencester</city>
              <county>Gloucestershire</county>
              <postcode>GL7 1JG</postcode>
              <telephone>01958303514</telephone>
              <mobile>07260491667</mobile>
              <email>[email protected]</email>
            </personal>
            <account>
              <username>BigUndecided</username>
              <password>ea297847f80e046ca24a8621f4068594</password>
              <userlevel>2</userlevel>
              <signupdate>2010-03-26T09:23:50</signupdate>
            </account>
          </user>
          <festival id="">
            <event>
              <eventname></eventname>
              <url></url>
              <datefrom></datefrom>
              <dateto></dateto>
              <location></location>
              <eventpostcode></eventpostcode>
              <coords>
                <lat></lat>
                <lng></lng>
              </coords>
            </event>
            <contact>
              <conname></conname>
              <conaddress1></conaddress1>
              <conaddress2></conaddress2>
              <concity></concity>
              <concounty></concounty>
              <conpostcode></conpostcode>
              <contelephone></contelephone>
              <conmobile></conmobile>
              <fax></fax>
              <conemail></conemail>
            </contact>
          </festival>
        </member>
        <member>
          <user id="4">
            <personal>
              <name>Connor Lawson</name>
              <sex>Male</sex>
              <address1>12 Ash Way</address1>
              <address2></address2>
              <city>Swindon</city>
              <county>Wiltshire</county>
              <postcode>SN3 6GS</postcode>
              <telephone>01791928119</telephone>
              <mobile>07338695664</mobile>
              <email>[email protected]</email>
            </personal>
            <account>
              <username>iTuneStinker</username>
              <password>3a1f5fda21a07bfff20c41272bae7192</password>
              <userlevel>3</userlevel>
              <signupdate>2010-03-26T09:23:50</signupdate>
            </account>
          </user>
          <festival id="1">
            <event>
              <eventname>Oxford Folk Festival</eventname>
              <url>http://www.oxfordfolkfestival.com/</url>
              <datefrom>2010-04-07</datefrom>
              <dateto>2010-04-09</dateto>
              <location>Oxford</location>
              <eventpostcode>OX19BE</eventpostcode>
              <coords>
                <lat>51.735640</lat>
                <lng>-1.276136</lng>
              </coords>
            </event>
            <contact>
              <conname>Stuart Vincent</conname>
              <conaddress1>P.O. Box 642</conaddress1>
              <conaddress2></conaddress2>
              <concity>Oxford</concity>
              <concounty>Bedfordshire</concounty>
              <conpostcode>OX13BY</conpostcode>
              <contelephone>01865 79073</contelephone>
              <conmobile></conmobile>
              <fax></fax>
              <conemail>[email protected]</conemail>
            </contact>
          </festival>
        </member>

    This is my schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
          <xs:simpleType name="postcode">
            <xs:restriction base="xs:string">
              <xs:minLength value="6"/>
              <xs:maxLength value="8"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="telephone">
            <xs:restriction base="xs:string">
              <xs:minLength value="10"/>
              <xs:maxLength value="13"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="mobile">
            <xs:restriction base="xs:string">
              <xs:minLength value="11"/>
              <xs:maxLength value="11"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="password">
            <xs:restriction base="xs:string">
              <xs:minLength value="32"/>
              <xs:maxLength value="32"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="userlevel">
            <xs:restriction base="xs:integer">
              <xs:enumeration value="1"/>
              <xs:enumeration value="2"/>
              <xs:enumeration value="3"/>
              <xs:enumeration value="4"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="county">
            <xs:restriction base="xs:string">
              <xs:enumeration value="Bedfordshire"/> <xs:enumeration value="Berkshire"/> <xs:enumeration value="Bristol"/> <xs:enumeration value="Buckinghamshire"/>
              <xs:enumeration value="Cambridgeshire"/> <xs:enumeration value="Cheshire"/> <xs:enumeration value="Cleveland"/> <xs:enumeration value="Cornwall"/>
              <xs:enumeration value="Cumberland"/> <xs:enumeration value="Derbyshire"/> <xs:enumeration value="Devon"/> <xs:enumeration value="Dorset"/>
              <xs:enumeration value="Durham"/> <xs:enumeration value="East Ridings Of Yorkshire"/> <xs:enumeration value="Essex"/> <xs:enumeration value="Gloucestershire"/>
              <xs:enumeration value="Hampshire"/> <xs:enumeration value="Herefordshire"/> <xs:enumeration value="Hertfordshire"/> <xs:enumeration value="Huntingdonshire"/>
              <xs:enumeration value="Isle Of Man"/> <xs:enumeration value="Kent"/> <xs:enumeration value="Lancashire"/> <xs:enumeration value="Leicestershire"/>
              <xs:enumeration value="Lincolnshire"/> <xs:enumeration value="London"/> <xs:enumeration value="Middlesex"/> <xs:enumeration value="Norfolk"/>
              <xs:enumeration value="North Yorkshire"/> <xs:enumeration value="Northamptonshire"/> <xs:enumeration value="Northumberland"/> <xs:enumeration value="Nottinghamshire"/>
              <xs:enumeration value="Oxfordshire"/> <xs:enumeration value="Rutland"/> <xs:enumeration value="Shropshire"/> <xs:enumeration value="Somerset"/>
              <xs:enumeration value="South Yorkshire"/> <xs:enumeration value="Staffordshire"/> <xs:enumeration value="Suffolk"/> <xs:enumeration value="Surrey"/>
              <xs:enumeration value="Sussex"/> <xs:enumeration value="Tyne and Wear"/> <xs:enumeration value="Warwickshire"/> <xs:enumeration value="West Yorkshire"/>
              <xs:enumeration value="Westmorland"/> <xs:enumeration value="Wiltshire"/> <xs:enumeration value="Wirral"/> <xs:enumeration value="Worcestershire"/>
              <xs:enumeration value="Yorkshire"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:element name="folktask">
            <xs:complexType>
              <xs:sequence>
                <xs:element minOccurs="0" maxOccurs="unbounded" ref="member"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="member">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="user" minOccurs="0" maxOccurs="unbounded"/>
                <xs:element ref="festival" minOccurs="0" maxOccurs="unbounded"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="user">
            <xs:complexType>
              <xs:attribute name="id" type="xs:integer"/>
              <xs:sequence>
                <xs:element ref="personal"/>
                <xs:element ref="account"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="personal">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="name"/>
                <xs:element ref="sex"/>
                <xs:element ref="address1"/>
                <xs:element ref="address2"/>
                <xs:element ref="city"/>
                <xs:element ref="county"/>
                <xs:element ref="postcode"/>
                <xs:element ref="telephone"/>
                <xs:element ref="mobile"/>
                <xs:element ref="email"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="name" type="xs:string"/>
          <xs:element name="sex" type="xs:string"/>
          <xs:element name="address1" type="xs:string"/>
          <xs:element name="address2" type="xs:string"/>
          <xs:element name="city" type="xs:string"/>
          <xs:element name="county" type="xs:string"/>
          <xs:element name="postcode" type="postcode"/>
          <xs:element name="telephone" type="telephone"/>
          <xs:element name="mobile" type="mobile"/>
          <xs:element name="email" type="xs:string"/>
          <xs:element name="account">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="username"/>
                <xs:element ref="password"/>
                <xs:element ref="userlevel"/>
                <xs:element ref="signupdate"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="username" type="xs:string"/>
          <xs:element name="password" type="password"/>
          <xs:element name="userlevel" type="userlevel"/>
          <xs:element name="signupdate" type="xs:dateTime"/>
          <xs:element name="festival">
            <xs:complexType>
              <xs:attribute name="id" type="xs:integer" user="optional"/>
              <xs:sequence>
                <xs:element ref="event"/>
                <xs:element ref="contact"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="event">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="eventname"/>
                <xs:element ref="url"/>
                <xs:element ref="datefrom"/>
                <xs:element ref="dateto"/>
                <xs:element ref="location"/>
                <xs:element ref="eventpostcode"/>
                <xs:element ref="coords"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="eventname" type="xs:string"/>
          <xs:element name="url" type="xs:string"/>
          <xs:element name="datefrom" type="xs:date"/>
          <xs:element name="dateto" type="xs:date"/>
          <xs:element name="location" type="xs:string"/>
          <xs:element name="eventpostcode" type="postcode"/>
          <xs:element name="coords">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="lat"/>
                <xs:element ref="lng"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="lat" type="xs:decimal"/>
          <xs:element name="lng" type="xs:decimal"/>
          <xs:element name="contact">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="conname"/>
                <xs:element ref="conaddress1"/>
                <xs:element ref="conaddress2"/>
                <xs:element ref="concity"/>
                <xs:element ref="concounty"/>
                <xs:element ref="conpostcode"/>
                <xs:element ref="contelephone"/>
                <xs:element ref="conmobile"/>
                <xs:element ref="fax"/>
                <xs:element ref="conemail"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="conname" type="xs:string"/>
          <xs:element name="conaddress1" type="xs:string"/>
          <xs:element name="conaddress2" type="xs:string"/>
          <xs:element name="concity" type="xs:string"/>
          <xs:element name="concounty" type="xs:string"/>
          <xs:element name="conpostcode" type="postcode"/>
          <xs:element name="contelephone" type="telephone"/>
          <xs:element name="conmobile" type="mobile"/>
          <xs:element name="fax" type="telephone"/>
          <xs:element name="conemail" type="xs:string"/>
        </xs:schema>
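    For reference, both s4s-elt-invalid-content.1 errors come from the same pattern: in XSD, an xs:attribute declaration must come after the content model inside xs:complexType, so each xs:sequence needs to move above its xs:attribute, and the festival attribute also says user="optional" where the XSD attribute name is use. The empty id="" would then still fail against xs:integer; a union that also accepts the empty string can absorb it. A corrected sketch of the festival declaration only (user is analogous), offered as an assumption about the intent, not as the asker's code:

        <xs:element name="festival">
          <xs:complexType>
            <xs:sequence>
              <xs:element ref="event"/>
              <xs:element ref="contact"/>
            </xs:sequence>
            <!-- attributes are declared after the content model -->
            <xs:attribute name="id" use="optional">
              <xs:simpleType>
                <!-- allow either an integer or an empty string for festivals with no data -->
                <xs:union>
                  <xs:simpleType>
                    <xs:restriction base="xs:integer"/>
                  </xs:simpleType>
                  <xs:simpleType>
                    <xs:restriction base="xs:string">
                      <xs:enumeration value=""/>
                    </xs:restriction>
                  </xs:simpleType>
                </xs:union>
              </xs:simpleType>
            </xs:attribute>
          </xs:complexType>
        </xs:element>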


  • Dojo JsonRest store and dijit.Tree

    - by user1427712
    I'm having some problems making a JsonRest store work with dijit.Tree and a ForestModel. I've tried several combinations of JsonRestStore and JSON data formats following many tips on the web, with no success. In the end, taking the example from http://blog.respondify.se/2011/09/using-dijit-tree-with-the-new-dojo-object-store/ I made up this simple page (I'm using Dojo Toolkit 1.7.2):

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
        <title>Tree Model Explorer</title>
        <script type="text/javascript">
          djConfig = {
            parseOnLoad : true,
            isDebug : true,
          }
        </script>
        <script type="text/javascript" djConfig="parseOnLoad: true" src="lib/dojo/dojo.js"></script>
        <script type="text/javascript">
          dojo.require("dojo.parser");
          dojo.require("dijit.Tree");
          dojo.require("dojo.store.JsonRest");
          dojo.require("dojo.data.ObjectStore");
          dojo.require("dijit.tree.ForestStoreModel");
          dojo.addOnLoad(function() {
            var objectStore = new dojo.store.JsonRest({
              target : "test.json",
              labelAttribute : "name",
              idAttribute : "id"
            });
            var dataStore = new dojo.data.ObjectStore({ objectStore : objectStore });
            var treeModel = new dijit.tree.ForestStoreModel({
              store : dataStore,
              deferItemLoadingUntilExpand : true,
              rootLabel : "Subjects",
              query : { "id" : "*" },
              childrenAttrs : [ "children" ]
            });
            var tree = new dijit.Tree({ model : treeModel }, 'treeNode');
            tree.startup();
          });
        </script>
        </head>
        <body>
          <div id="treeNode"></div>
        </body>
        </html>

    My REST service responds with the following JSON:

        { data: [
          { "id": "PippoId", "name": "Pippo", "children": [] },
          { "id": "PlutoId", "name": "Pluto", "children": [] },
          { "id": "PaperinoId", "name": "Paperino", "children": [] }
        ]}

    I've also tried the following response (my final intention is to use lazy loading for the tree):

        { data: [
          { "id": "PippoId", "name": "Pippo", "$ref": "author0", "children": true },
          { "id": "PlutoId", "name": "Pluto", "$ref": "author1", "children": true },
          { "id": "PaperinoId", "name": "Paperino", "$ref": "author2", "children": true }
        ]}

    Neither of the two works. I see no error message in Firebug; I simply see the root "Subjects" on the page. Thanks to anybody who could help in some way.
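    Two details of this setup are worth checking, offered as assumptions from the code above rather than from the original thread: dojo.store.JsonRest expects the response body to be a bare JSON array, not a { data: [...] } envelope, and the unquoted data key is not strict JSON in any case. A response shaped like the sketch below is what the store's query() iterates directly:

        [
          { "id": "PippoId",    "name": "Pippo",    "children": [] },
          { "id": "PlutoId",    "name": "Pluto",    "children": [] },
          { "id": "PaperinoId", "name": "Paperino", "children": [] }
        ]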


  • Cygwin in Windows 7

    - by Algorist
    Hi, I am a fan of Linux, but due to bad Intel wireless drivers on Linux I had to switch to Windows 7. I have installed Cygwin in Windows and want to configure SSH so I can connect to my laptop remotely. I googled and found this web page: http://art.csoft.net/2009/09/02/cygwin-ssh-server-and-windows-7/ I am getting the following error when running ssh-host-config:

        bala@bala-PC ~
        $ ssh-host-config yes
        *** Info: Creating default /etc/ssh_config file
        *** Query: Overwrite existing /etc/sshd_config file? (yes/no) yes
        *** Info: Creating default /etc/sshd_config file
        *** Info: Privilege separation is set to yes by default since OpenSSH 3.3.
        *** Info: However, this requires a non-privileged account called 'sshd'.
        *** Info: For more info on privilege separation read /usr/share/doc/openssh/README.privsep.
        *** Query: Should privilege separation be used? (yes/no) no
        *** Info: Updating /etc/sshd_config file
        *** Warning: The following functions require administrator privileges!
        *** Query: Do you want to install sshd as a service?
        *** Query: (Say "no" if it is already installed as a service) (yes/no) yes
        *** Query: Enter the value of CYGWIN for the daemon: []
        *** Info: On Windows Server 2003, Windows Vista, and above, the
        *** Info: SYSTEM account cannot setuid to other users -- a capability
        *** Info: sshd requires. You need to have or to create a privileged
        *** Info: account. This script will help you do so.
        *** Warning: The owner and the Administrators need
        *** Warning: to have .w. permission to /var/run.
        *** Warning: Here are the current permissions and ACLS:
        *** Warning: drwxr-xr-x 1 bala None 0 2010-01-17 22:34 /var/run
        *** Warning: # file: /var/run
        *** Warning: # owner: bala
        *** Warning: # group: None
        *** Warning: user::rwx
        *** Warning: group::r-x
        *** Warning: other:r-x
        *** Warning: mask:rwx
        *** Warning:
        *** Warning: Please change the user and/or group ownership,
        *** Warning: permissions, or ACLs of /var/run.
        *** ERROR: Problem with /var/run directory. Exiting.

    The permissions of this folder are shown as Read-only (Only applies to this folder), checked in gray. I tried to uncheck it, but when I open the properties again, the box is checked again. Is there a way to change the permissions of this folder? Thank you.
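    Two notes that may help here, stated as assumptions rather than a verified fix. First, the grayed "Read-only" checkbox on a Windows folder is cosmetic; it does not reflect actual write permission, so unchecking it cannot fix this. Second, the script is only asking that the owner and the Administrators group have write access to /var/run, which can usually be granted from an elevated Cygwin shell; a sketch (the user name comes from the listing above and will differ on another machine):

        # give the owner write access, then grant Administrators rwx via an ACL entry
        chmod u+rwx /var/run
        setfacl -m g:Administrators:rwx /var/run

        # re-run the configuration afterwards
        ssh-host-config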


  • Salesforce/PHP - Bulk Outbound message (SOAP), Time out issue

    - by Phill Pafford
    Salesforce can send up to 100 requests inside one SOAP message. While receiving this type of bulk outbound message request, my PHP script finishes executing, but SF fails to accept the ACK used to clear the message queue on the Salesforce side of things. Looking at the outbound message log (monitoring), I see all the messages in a pending state with the delivery failure reason "java.net.SocketTimeoutException: Read timed out". If my script has finished execution, why do I get this error? I have tried these methods to increase the execution time on my server, as I have no access on the Salesforce side:

        set_time_limit(0);      // in the script
        max_execution_time = 360 ; Maximum execution time of each script, in seconds
        max_input_time = 360 ;     Maximum amount of time each script may spend parsing request data
        memory_limit = 32M ;       Maximum amount of memory a script may consume

    I used the high settings just for testing. Any thoughts as to why this is failing the ACK delivery back to Salesforce? Here is some of the code. This is how I accept the incoming SOAP request and send the ACK:

        $data = 'php://input';
        $content = file_get_contents($data);
        if ($content) {
            respond('true');
        } else {
            respond('false');
        }

    The respond function:

        function respond($tf) {
            $ACK = <<<ACK
        <?xml version = "1.0" encoding = "utf-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <soapenv:Body>
                <notifications xmlns="http://soap.sforce.com/2005/09/outbound">
                    <Ack>$tf</Ack>
                </notifications>
            </soapenv:Body>
        </soapenv:Envelope>
        ACK;
            print trim($ACK);
        }

    These are in a generic script that I include into the script that uses the data for a specific workflow. I can process about 25 requests (that are in one SOAP response), but once I go over that, I get the timeout error in the Salesforce queue. For 50 requests it usually takes my PHP script 86.77 seconds. Could it be Apache? PHP? I have also tested just accepting the 100-request SOAP response and sending the ACK without any processing, and the queue clears out, so I know it's on my side of things. I show no errors in the Apache log; the script runs fine. Thanks for any insight into this, --Phill
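    The symptom (fine under ~25 notifications, timing out above that) suggests that PHP only flushes the ACK when the script ends, so Salesforce's read timeout expires while the workflow logic is still running. A sketch of one way around this, assuming the processing can safely continue after the HTTP response is closed: send the ACK first, finish the response, then do the slow work.

        <?php
        // Sketch: acknowledge immediately, then process, so Salesforce's
        // read timeout never has to wait on the per-notification work.
        ignore_user_abort(true);            // keep running after the connection closes
        set_time_limit(0);

        $content = file_get_contents('php://input');

        ob_start();
        respond($content ? 'true' : 'false');     // respond() as defined in the question
        header('Connection: close');
        header('Content-Length: ' . ob_get_length());
        ob_end_flush();
        flush();                            // the ACK is now fully on the wire
        // fastcgi_finish_request();        // cleaner alternative under PHP-FPM

        // ... the heavy per-notification processing happens here, after the ACK ...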


  • Metro UsernameToken Policy

    - by Rodney
    I created a web services client prototype using the APIs available in WebLogic 10.3. I've been told I need to use Metro 2.0 instead (it's already being used for other projects). The problem I have encountered is that the WSDL does not include any security policy information, but a UsernameToken is required for each method call. In WebLogic I was able to write my own policy XML file and instantiate my service with it (see below); however, I cannot seem to figure out how to do the same using Metro.

    Policy.xml:

        <?xml version="1.0"?>
        <wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
                    xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200512">
          <sp:SupportingTokens>
            <wsp:Policy>
              <sp:UsernameToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200512/IncludeToken/AlwaysToRecipient">
                <wsp:Policy>
                  <sp:WssUsernameToken10/>
                  <sp:HashPassword/>
                </wsp:Policy>
              </sp:UsernameToken>
            </wsp:Policy>
          </sp:SupportingTokens>
        </wsp:Policy>

    Client.java (WebLogic):

        ClientPolicyFeature cpf = new ClientPolicyFeature();
        InputStream asStream = WebServiceSoapClient.class.getResourceAsStream("Policy.xml");
        cpf.setEffectivePolicy(new InputStreamPolicySource(asStream));
        try {
            webService = new WebService(new URL("http://192.168.1.10/WebService/WebService.asmx?wsdl"),
                    new QName("http://testme.com", "WebService"));
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
        WebServiceSoap client = webService.getWebServiceSoap(new WebServiceFeature[] { cpf });
        List<CredentialProvider> credProviders = new ArrayList<CredentialProvider>();
        String username = "user";
        String password = "pass";
        CredentialProvider cp = new ClientUNTCredentialProvider(username.getBytes(), password.getBytes());
        credProviders.add(cp);
        Map<String, Object> rc = ((BindingProvider) client).getRequestContext();
        rc.put(WSSecurityContext.CREDENTIAL_PROVIDER_LIST, credProviders);
        ...

    I am able to generate my proxy classes using Metro; however, I cannot figure out how to configure it to send the UsernameToken. I have attempted several different examples from the web, which have not worked. Any help would be appreciated.
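    Since the WSDL carries no policy, one approach that stays within plain JAX-WS (and therefore also runs under Metro) is to write the wsse:Security header yourself in a SOAPHandler. A sketch with a plain-text password follows; the class name and credentials are illustrative, and reproducing <sp:HashPassword/> (digest mode) would additionally require computing the nonce/created digest per the UsernameToken profile:

        import java.util.Collections;
        import java.util.Set;
        import javax.xml.namespace.QName;
        import javax.xml.soap.*;
        import javax.xml.ws.handler.MessageContext;
        import javax.xml.ws.handler.soap.SOAPHandler;
        import javax.xml.ws.handler.soap.SOAPMessageContext;

        public class UsernameTokenHandler implements SOAPHandler<SOAPMessageContext> {
            private static final String WSSE =
                "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

            public boolean handleMessage(SOAPMessageContext ctx) {
                Boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
                if (!outbound) return true;   // only decorate outgoing requests
                try {
                    SOAPEnvelope env = ctx.getMessage().getSOAPPart().getEnvelope();
                    SOAPHeader header = env.getHeader() != null ? env.getHeader() : env.addHeader();
                    SOAPElement security = header.addChildElement("Security", "wsse", WSSE);
                    SOAPElement token = security.addChildElement("UsernameToken", "wsse");
                    token.addChildElement("Username", "wsse").addTextNode("user");
                    SOAPElement pwd = token.addChildElement("Password", "wsse");
                    pwd.setAttribute("Type",
                        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText");
                    pwd.addTextNode("pass");
                } catch (SOAPException e) {
                    throw new RuntimeException(e);
                }
                return true;
            }

            public boolean handleFault(SOAPMessageContext ctx) { return true; }
            public void close(MessageContext ctx) { }
            public Set<QName> getHeaders() { return Collections.emptySet(); }
        }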


  • How to Solve N-Queens Problem in Scheme?

    - by Philip
    Hi, I'm stuck on the extended exercise 28.2 of How to Design Programs. Here is the link to the question: http://www.htdp.org/2003-09-26/Book/curriculum-Z-H-35.html#node_chap_28 I used a vector of true or false values to represent the board instead of using a list. This is what I've got, which doesn't work:

        #lang Scheme

        (define-struct posn (i j))

        ;takes in a position in i, j form and a board and returns a natural number
        ;that represents the position in index form
        ;example for board xxx
        ;                  xxx
        ;                  xxx
        ;(0, 1) - 1
        ;(2, 1) - 7
        (define (board-ref a-posn a-board)
          (+ (* (sqrt (vector-length a-board)) (posn-i a-posn))
             (posn-j a-posn)))

        ;reverse of the above function
        ;1 - (0, 1)
        ;7 - (2, 1)
        (define (get-posn n a-board)
          (local ((define board-length (sqrt (vector-length a-board))))
            (make-posn (floor (/ n board-length))
                       (remainder n board-length))))

        ;determines if posn1 threatens posn2
        ;true if they are on the same row/column/diagonal
        (define (threatened? posn1 posn2)
          (cond ((= (posn-i posn1) (posn-i posn2)) #t)
                ((= (posn-j posn1) (posn-j posn2)) #t)
                ((= (abs (- (posn-i posn1) (posn-i posn2)))
                    (abs (- (posn-j posn1) (posn-j posn2)))) #t)
                (else #f)))

        ;returns a list of positions that are not threatened or occupied by queens
        ;basically any position with the value true
        (define (get-available-posn a-board)
          (local ((define (get-ava index)
                    (cond ((= index (vector-length a-board)) '())
                          ((vector-ref a-board index) (cons index (get-ava (add1 index))))
                          (else (get-ava (add1 index))))))
            (get-ava 0)))

        ;consume a position in the form of a natural number and a board
        ;returns a board after placing a queen on the position of the board
        (define (place n a-board)
          (local ((define (foo x)
                    (cond ((not (board-ref (get-posn x a-board) a-board)) #f)
                          ((threatened? (get-posn x a-board) (get-posn n a-board)) #f)
                          (else #t))))
            (build-vector (vector-length a-board) foo)))

        ;consume a list of positions in the form of natural numbers and a board
        ;returns a list of boards after placing queens on each of the positions on the board
        (define (place/list alop a-board)
          (cond ((empty? alop) '())
                (else (cons (place (first alop) a-board)
                            (place/list (rest alop) a-board)))))

        ;returns a possible board after placing n queens on a-board
        ;returns false if impossible
        (define (placement n a-board)
          (cond ((zero? n) a-board)
                (else (local ((define available-posn (get-available-posn a-board)))
                        (cond ((empty? available-posn) #f)
                              (else (or (placement (sub1 n) (place (first available-posn) a-board))
                                        (placement/list (sub1 n) (place/list (rest available-posn) a-board)))))))))

        ;returns a possible board after placing n queens on a list of boards
        ;returns false if all the boards are not valid
        (define (placement/list n boards)
          (cond ((empty? boards) #f)
                ((zero? n) (first boards))
                ((not (boolean? (placement n (first boards)))) (first boards))
                (else (placement/list n (rest boards)))))
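    One detail stands out before the search logic itself: board-ref computes a vector index but never reads the vector, so (not (board-ref ...)) inside place tests a number, which is never false, rather than the stored boolean. A possible repair, keeping the index arithmetic as a separate helper; this is an observation about the code above, not a complete solution to 28.2:

        ;; index arithmetic only
        (define (posn->index a-posn a-board)
          (+ (* (sqrt (vector-length a-board)) (posn-i a-posn))
             (posn-j a-posn)))

        ;; board-ref now actually dereferences the board vector
        (define (board-ref a-posn a-board)
          (vector-ref a-board (posn->index a-posn a-board)))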


  • Calling a .NET web service (WSE 3.0, WS-Security) from JAXWS-RI

    - by elduff
    I'm writing a JAXWS-RI client that must call a .NET web service that is using WS-Security. The service's WSDL does not contain any WS-Security info, but I have an example SOAP message from the service's authors and know that I must include wsse:Security headers, including X.509 tokens. I've been researching, and I've seen examples of folks calling this type of web service from Axis and CXF (in conjunction with Rampart and/or WSS4J), but nothing about using plain JAXWS-RI itself. However, I'm (unfortunately) constrained to using JAXWS-RI by my gov't client. Does anyone have any examples/documentation of doing this from JAXWS-RI? I need to ultimately generate a SOAP header that looks something like the one below; this is a sample soap:Header from a .NET client written by the service's authors. (Note: I've put the 'VALUE_HERE' string in places where I need to provide my own values.)

        <soapenv:Envelope xmlns:iri="http://EOIR/IRIES"
                          xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:xenc="http://www.w3.org/2001/04/xmlenc#">
          <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <xenc:EncryptedKey Id="VALUE_HERE">
                <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p"/>
                <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                  <wsse:SecurityTokenReference>
                    <wsse:KeyIdentifier EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                                        ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3">
                      VALUE_HERE
                    </wsse:KeyIdentifier>
                  </wsse:SecurityTokenReference>
                </ds:KeyInfo>
                <xenc:CipherData>
                  <xenc:CipherValue>VALUE_HERE</xenc:CipherValue>
                </xenc:CipherData>
                <xenc:ReferenceList>
                  <xenc:DataReference URI="#EncDataId-8"/>
                </xenc:ReferenceList>
              </xenc:EncryptedKey>
            </wsse:Security>
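    Since JAXWS-RI honors the standard handler framework, a SOAPHandler like the UsernameToken sketch shown earlier in this digest can be extended to build the wsse:Security/X.509 structure with SAAJ and then attached to the generated port. A sketch of the attachment; the service and port names are hypothetical stand-ins for the wsimport output, and SecurityHeaderHandler is a custom SOAPHandler you would write:

        import java.util.List;
        import javax.xml.ws.BindingProvider;
        import javax.xml.ws.handler.Handler;

        // ...
        MyService service = new MyService();                      // hypothetical wsimport artifacts
        MyServicePort port = service.getMyServicePort();

        BindingProvider bp = (BindingProvider) port;
        List<Handler> chain = bp.getBinding().getHandlerChain();  // returns a copy
        chain.add(new SecurityHeaderHandler());                   // custom SOAPHandler writing wsse:Security
        bp.getBinding().setHandlerChain(chain);                   // the modified copy must be set back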


  • Problem serializing complex data using WCF

    - by Gustavo Paulillo
    Scenario: a WCF client app calling a (Java) web service operation which requires a complex object as a parameter. I already got the metadata. Problem: the operation has some required fields, one of which is an enum. The SOAP message that is sent does not contain the field shown below (from the generated metadata); I'm using WCF diagnostics and the Windows Service Trace Viewer:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")]
        [System.SerializableAttribute()]
        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.ComponentModel.DesignerCategoryAttribute("code")]
        [System.Xml.Serialization.XmlTypeAttribute(TypeName="Consult-Filter", Namespace="http://webserviceX.org/")]
        public partial class ConsFilter : object, System.ComponentModel.INotifyPropertyChanged
        {
            private PersonType customerTypeField;

    The property:

        [System.Xml.Serialization.XmlElementAttribute("customer-type", Form=System.Xml.Schema.XmlSchemaForm.Unqualified, Order=1)]
        public PersonType customerType
        {
            get { return this.customerTypeField; }
            set
            {
                this.customerTypeField = value;
                this.RaisePropertyChanged("customerType");
            }
        }

    The enum:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "2.0.50727.3082")]
        [System.SerializableAttribute()]
        [System.Xml.Serialization.XmlTypeAttribute(TypeName="Person-Type", Namespace="http://webserviceX.org/")]
        public enum PersonType
        {
            /// <remarks/>
            F,
            /// <remarks/>
            J,
        }

    The trace log:

        <MessageLogTraceRecord>
          <HttpRequest xmlns="http://schemas.microsoft.com/2004/06/ServiceModel/Management/MessageTrace">
            <Method>POST</Method>
            <QueryString></QueryString>
            <WebHeaders>
              <VsDebuggerCausalityData>data</VsDebuggerCausalityData>
            </WebHeaders>
          </HttpRequest>
          <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
            <s:Header>
              <Action s:mustUnderstand="1" xmlns="http://schemas.microsoft.com/ws/2005/05/addressing/none"></Action>
              <ActivityId CorrelationId="correlationId" xmlns="http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics">activityId</ActivityId>
            </s:Header>
            <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
              <filter xmlns="http://webserviceX.org/">
                <product-code xmlns="">116</product-code>
                <customer-doc xmlns="">777777777</customer-doc>
              </filter>
            </s:Body>
          </s:Envelope>
        </MessageLogTraceRecord>
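    One pattern worth checking, stated as an assumption about how XmlSerializer-based proxies usually behave: for a value-type member mapped from an optional element, the code generator emits a companion bool property named customerTypeSpecified, and XmlSerializer silently omits the element unless that flag is set. If the generated ConsFilter has such a member, the client code would need:

        var filter = new ConsFilter();
        filter.customerType = PersonType.F;    // pick the enum value
        filter.customerTypeSpecified = true;   // hypothetical flag; without it XmlSerializer drops the element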


  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code, we wish to check that correct usage of the second-level cache is being employed by our code. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that:

        public class QueryCounter : IDisposable
        {
            CountToContextItemsAppender _appender;

            public int QueryCount
            {
                get { return _appender.Count; }
            }

            public void Dispose()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                logger.RemoveAppender(_appender);
            }

            public static QueryCounter Start()
            {
                var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger;
                lock (logger)
                {
                    foreach (IAppender existingAppender in logger.Appenders)
                    {
                        if (existingAppender is CountToContextItemsAppender)
                        {
                            var countAppender = (CountToContextItemsAppender) existingAppender;
                            countAppender.Reset();
                            return new QueryCounter { _appender = (CountToContextItemsAppender) existingAppender };
                        }
                    }
                    var newAppender = new CountToContextItemsAppender();
                    logger.AddAppender(newAppender);
                    logger.Level = Level.Debug;
                    logger.Additivity = false;
                    return new QueryCounter { _appender = newAppender };
                }
            }

            public class CountToContextItemsAppender : IAppender
            {
                int _count;

                public int Count
                {
                    get { return _count; }
                }

                public void Close() { }

                public void DoAppend(LoggingEvent loggingEvent)
                {
                    if (string.Empty.Equals(loggingEvent.MessageObject)) return;
                    _count++;
                }

                public string Name { get; set; }

                public void Reset() { _count = 0; }
            }
        }

    With intended usage:

        using (var counter = QueryCounter.Start())
        {
            // ... do something
            Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations
        }

    But it always returns 0 for the query count. No SQL statements are being logged. However, if I make use of NHibernate Profiler and invoke this in my test case:

        NHibernateProfiler.Initialize();

    where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc., then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging NHibernate SQL. Does anyone have any pointers on what else I need to do to get SQL logging output from NHibernate?
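    A plausible explanation, offered as an assumption rather than a verified diagnosis: NHProf's Initialize() works because it configures log4net programmatically, and if the test process never configures log4net at all, appenders added to the "NHibernate.SQL" logger receive nothing. A sketch of the likely missing bootstrap, run once before QueryCounter.Start():

        // Initialize the log4net repository; without some form of Configure()
        // log4net stays unconfigured and delivers no events to appenders.
        log4net.Config.BasicConfigurator.Configure();

        // Ensure the SQL logger is enabled at DEBUG (Start() also does this).
        var sqlLogger = (log4net.Repository.Hierarchy.Logger)
            log4net.LogManager.GetLogger("NHibernate.SQL").Logger;
        sqlLogger.Level = log4net.Core.Level.Debug;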


  • Default encoding type for wsHttp binding

    - by user102533
    My understanding was that the default encoding for wsHttp binding is text. However, when I use Fiddler to see the SOAP message, a part of it looks like this:

        <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://www.w3.org/2005/08/addressing" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <s:Header>
            <a:Action s:mustUnderstand="1" u:Id="_2">http://tempuri.org/Services/MyContract/GetDataResponse</a:Action>
            <a:RelatesTo u:Id="_3">urn:uuid:503c5525-f585-4ecd-ac09-24db78526952</a:RelatesTo>
            <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <u:Timestamp u:Id="uuid-8935f789-fbb7-4c69-9f67-7708373088c5-22">
                <u:Created>2010-03-08T19:15:50.852Z</u:Created>
                <u:Expires>2010-03-08T19:20:50.852Z</u:Expires>
              </u:Timestamp>
              <c:DerivedKeyToken u:Id="uuid-8935f789-fbb7-4c69-9f67-7708373088c5-18" xmlns:c="http://schemas.xmlsoap.org/ws/2005/02/sc">
                <o:SecurityTokenReference>
                  <o:Reference URI="urn:uuid:b2cbfe07-8093-4f44-8a06-f8b062291643" ValueType="http://schemas.xmlsoap.org/ws/2005/02/sc/sct"/>
                </o:SecurityTokenReference>
                <c:Offset>0</c:Offset>
                <c:Length>24</c:Length>
                <c:Nonce>afOoDygRG7BW+q8+makVIA==</c:Nonce>
              </c:DerivedKeyToken>
              <c:DerivedKeyToken u:Id="uuid-8935f789-fbb7-4c69-9f67-7708373088c5-19" xmlns:c="http://schemas.xmlsoap.org/ws/2005/02/sc">
                <o:SecurityTokenReference>
                  <o:Reference URI="urn:uuid:b2cbfe07-8093-4f44-8a06-f8b062291643" ValueType="http://schemas.xmlsoap.org/ws/2005/02/sc/sct"/>
                </o:SecurityTokenReference>
                <c:Nonce>l4rFsdYKLJTK4tgUWrSBRw==</c:Nonce>
              </c:DerivedKeyToken>
              <e:ReferenceList xmlns:e="http://www.w3.org/2001/04/xmlenc#">
                <e:DataReference URI="#_1"/>
                <e:DataReference URI="#_4"/>
              </e:ReferenceList>
              <e:EncryptedData Id="_4" Type="http://www.w3.org/2001/04/xmlenc#Element" xmlns:e="http://www.w3.org/2001/04/xmlenc#">
                <e:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc"/>
                <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
                  <o:SecurityTokenReference>
                    <o:Reference ValueType="http://schemas.xmlsoap.org/ws/2005/02/sc/dk" URI="#uuid-8935f789-fbb7-4c69-9f67-7708373088c5-19"/>
                  </o:SecurityTokenReference>
                </KeyInfo>
                <e:CipherData>
                  <e:CipherValue>dW8B7wGV9tYaxM5ADzY6UuEgB5TFzdy4BZjOtF0NEbHyNevCIAVHMoyA69U4oUjQHMJD5nHS0N4tnJqfJkYellKlpFZcwqruJ1J/TFx9uwLFFAwZ+dSfkDqgKu/1MFzVSY8eyeYKmbPbVEYOHr0lhw3+7wn5NQr3yxvCjlucTAdklIhD72YnVlSVapOW3zgysGt5hStyj+bmIz5hLGyyv6If4HzWjUiru8V3iMM/ss1I+i9sJOD013kr4zaaA937CN9+/aZ2wbDXnYj31UX49uE/vvt9Tl+c4SiydbiX7tp1eNSTx9Ms5O64gb3aUmHEAYOJ19XCrr756ssFZtaE7QOAoPQkFbx9zXy0mb9j1YoPQNG+JAcrN0yoRN1klhccmY+csfYXdq7YBB/KS+u2WnUjQ7SlNFy5qIPxuy5y0Jyedr2diPKLi0gUi+cK49BLQtG/XEShtxFaeMy7zZTrQADxww7kEkhvtmAlmyRbz3oGc+

    This doesn't look like text encoding to me (shouldn't text encoding send data in readable form)? What am I missing? Also, how do I set up binary encoding for wsHttp binding?
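    Two separate things are going on in this capture, stated as an explanation rather than from the original thread: the message encoding is indeed text, but wsHttpBinding defaults to message-level security, so the body and parts of the header are encrypted before the text encoder ever sees them. Turning message security off (or moving it to the transport) makes the XML readable again; and binary encoding is not an option on wsHttpBinding itself, it requires composing a customBinding. A config sketch:

        <bindings>
          <!-- readable text messages: drop message security -->
          <wsHttpBinding>
            <binding name="plainText">
              <security mode="None" />
            </binding>
          </wsHttpBinding>

          <!-- binary encoding over HTTP requires a customBinding -->
          <customBinding>
            <binding name="binaryOverHttp">
              <binaryMessageEncoding />
              <httpTransport />
            </binding>
          </customBinding>
        </bindings>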


  • git: setting a single tracking remote from a public repo.

    - by Gauthier
    I am confused with remote branches. My local repo:

        (local) ---A---B---C-master

    My remote repo (called int):

        (int) ---A---B---C---D---E-master

    What I want to do is to set up the local repo's master branch to follow that of int. Local repo:

        (local) ---A---B---C---D---E-master-remotes/int/master

    So that when int changes to:

        (int) ---A---B---C---D---E---F-master

    I can run git pull from the local repo's master and get:

        (local) ---A---B---C---D---E---F-master-remotes/int/master

    Here's what I have tried:

    1. git fetch int gets me all the branches of int as remote branches. This can get messy, since int might have hundreds of branches.
    2. git fetch int master gets me the commits, but no ref to them, only FETCH_HEAD. No remote branch either.
    3. git fetch int master:new_master works, but I don't want a new name every time I update, and no remote branch is set up.
    4. git pull int master does what I want, but there is still no remote branch set up. I feel that it is OK to do so (that's the best I have now), but I read here and there that with the remote set up, git pull alone is enough.
    5. git branch --track new_master int/master, as per http://www.gitready.com/beginner/2009/03/09/remote-tracking-branches.html. I get "not a valid object name: int/master".
    6. git remote -v does show me that int is defined and points at the correct location (step 1 worked).
    7. git fetch int master:int/master. Well, int/master is created, but it is no remote.

    What I miss is the int/master branch, which is precisely what I want to get. So to summarize, I've tried some stuff with no luck. I would expect step 2 to give me the remote branch to master in the repo int. The solution I use now is option 3. I read somewhere that you could change some config file by hand, but isn't that a bit cumbersome?

    The "cumbersome" way of editing the config file did work:

        [branch "master"]
            remote = int
            merge = master

    It can be done from the command line:

        $ git config branch.master.remote int
        $ git config branch.master.merge master

    Any reason why option 2 above wouldn't do that automatically? Even in that case, git pull fetches all branches from the remote.
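    For later readers: the configuration the poster ends up writing by hand can also be produced by giving the remote a single-branch fetch refspec, so git fetch int creates and updates only int/master instead of hundreds of remote branches. A sketch, assuming a git of roughly that era or newer (newer versions would use git branch --set-upstream-to=int/master master for the last two lines):

        # fetch only master from int, storing it as the remote-tracking branch int/master
        git config remote.int.fetch "+refs/heads/master:refs/remotes/int/master"
        git fetch int

        # make the local master track it
        git config branch.master.remote int
        git config branch.master.merge refs/heads/master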


  • Sync services not actually syncing

    - by Paul Mrozowski
    I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008, and picked the tables I wanted to sync. I selected SQL Server tracking. It modified the database for change tracking and created a local copy (SDF) of the data. I need two-way syncing, so I created a partial class for the sync agent and added code into OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've walked through with the debugger and this code runs. Then I created another partial class for the cache server sync provider and added an event handler into OnInitialized() to hook into the ApplyChangeFailed event. This code also works OK; my code runs when there is a conflict. Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync:

        var agent = new FSEMobileCacheSyncAgent();
        var syncStats = agent.Synchronize();

    syncStats seems to show the count of the number of changes I made on the server and shows that they were applied. However, when I open the local SDF file, none of the changes are there. I basically followed the instructions I found here: http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx and here: http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something, but I'm just not seeing it right now. I thought this might be because I appeared to be using version 1 of Sync Services, so I removed the references to the Microsoft.Synchronization.* assemblies, installed Sync Framework 2.0 and added the new version of the assemblies to the project. That hasn't made any difference. Ideas?

    Edit: I wanted to enable tracing to see if I could track this down, but the only way to do that is through a WinForms app, since it requires entries in the app.config file (my original project was a class library). I created a WinForms project and recreated everything, and suddenly everything is working. So apparently this requires a WinForms project for some reason? This isn't really how I planned on using this. I had hoped to kick off syncing through another non-.NET application and provide the UI there so the experience was a bit more seamless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.
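    A plausible explanation for the "WinForms fixed it" mystery, offered as an assumption rather than from the thread: .NET reads configuration from the host process's app.config, never from a class library's own config file, so config-driven behavior only kicks in once a real executable with its own config hosts the code. Sync Services tracing, for instance, is enabled by the documented SyncTracer switch in the host exe's configuration; a sketch:

        <configuration>
          <system.diagnostics>
            <switches>
              <!-- 0 = off, 1 = error, 2 = warning, 3 = info, 4 = verbose -->
              <add name="SyncTracer" value="3" />
            </switches>
            <trace autoflush="true">
              <listeners>
                <add name="syncListener"
                     type="System.Diagnostics.TextWriterTraceListener"
                     initializeData="c:\temp\sync.log" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>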


  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed, it effectively replaces .NET 4.0 on the machine: .NET 4.0 gets overwritten by a new version of .NET 4.5 which, according to Microsoft, is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst.

    What does 'Replacement' mean?

    When you install .NET 4.5, your .NET 4.0 assemblies in the \Windows\Microsoft.NET\Framework\v4.0.30319 folder are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example). The following screen shot demonstrates system.dll on my test machine running .NET 4.5 (left) and my production laptop running stock .NET 4.0 (right). Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That's not all. If you actually query the runtime version when .NET 4.5 is installed with Environment.Version, you still get:

        4.0.30319

    If you open the properties of the System.dll assembly in .NET 4.5, you'll also see that the file version is left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey, to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5; to the application they appear to be the same runtime version. And that is what Microsoft intends here: .NET 4.5 is intended as an in-place upgrade.

    Compile to 4.5, run on 4.0 – not quite!

    You can compile an application for .NET 4.5 and run it on the 4.0 runtime – that is, until you hit a new feature that doesn't exist on 4.0, at which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0, but has a few of the new features of .NET 4.5 like async/await buried deep in the bowels of the application where they only fire occasionally. .NET will happily start your application and run everything 4.0 fine, until it hits that 4.5 code – and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5, of course, and that should work without much fanfare.

    Different than .NET 3.0/3.5

    Note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5, but you are in effect referencing the same set of assemblies for both, regardless of which version you use.

    What's different is the compiler used to compile and link your code: compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications.

    Good news – Bad news

    Microsoft is trying hard to experiment with every possible permutation of releasing new versions of the .NET framework, apparently. No two updates have been the same. Clearly, updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky, even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime: most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions, which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update, and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break, so applications can stay running through an update; I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that code to work on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.

    Resources

    http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx
    http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
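    Since the post's central complaint is that Environment.Version cannot distinguish the two runtimes, a pragmatic workaround (my sketch, not from the original post) is to probe for a type that only exists in the 4.5 assemblies:

        using System;

        static class RuntimeProbe
        {
            // System.Reflection.ReflectionContext shipped in .NET 4.5's mscorlib;
            // on a plain .NET 4.0 install this lookup returns null.
            public static bool IsNet45OrNewer()
            {
                return Type.GetType("System.Reflection.ReflectionContext", throwOnError: false) != null;
            }
        }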


  • Customize Team Build 2010 – Part 13: Get control over the Build Output

    In the series, the following parts have been published:

    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application

    In part 8, I explained how you can add informational messages, warnings or errors to the build output. If you want to add other lines of text to the build output, you need to do more. This post will show you how you can add extra steps, additional information and hyperlinks to the build output.

    UPDATE 13-12-2010: Thanks to Jason Pricket, it is now also possible to not show every activity in the build log. This is really useful when you are doing for-loops in your template. To see how you can do that, check out Jason's blog: http://blogs.msdn.com/b/jpricket/archive/2010/12/09/tfs-2010-making-your-build-log-less-noisy.aspx

    Add a hyperlink to the end of the build output

    Let's start with a simple example of how you can adjust the build output. In this case, we are going to add to the end of the build output a hyperlink the user can click to, for example, start the deployment to the test environment. (In part 4 you can find information on how to create a custom activity.) To add information to the build output, you need the BuildDetail. This value is a variable in your XAML and is thus easily transferable to your custom activity. Besides the BuildDetail, the user also has to specify the text and the URL that have to be added to the end of the build output. The following code segment shows you how you can achieve this:

        [BuildActivity(HostEnvironmentOption.All)]
        public sealed class AddHyperlinkToBuildOutput : CodeActivity
        {
            [RequiredArgument]
            public InArgument<IBuildDetail> BuildDetail { get; set; }

            [RequiredArgument]
            public InArgument<string> DisplayText { get; set; }

            [RequiredArgument]
            public InArgument<string> Url { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                // Obtain the runtime value of the input arguments
                IBuildDetail buildDetail = context.GetValue(this.BuildDetail);
                string displayText = context.GetValue(this.DisplayText);
                string url = context.GetValue(this.Url);

                // Add the hyperlink
                buildDetail.Information.AddExternalLink(displayText, new Uri(url));
                buildDetail.Information.Save();
            }
        }

    If you add this activity somewhere in your build process template (within the scope Run on Agent), you will get the hyperlink at the end of the build output.

    Add a line of text to the build output

    The next challenge is to add this kind of output not only to the end of the build output, but at the step that is currently executing. To be able to do this, you need the current node in the build output. The following code shows you how you can achieve this. First you need to get the current activity tracking, which you can get with the following line of code:

        IActivityTracking currentTracking = context.GetExtension<IBuildLoggingExtension>().GetActivityTracking(context);

    Then you can create a new node, set its type to the activity tracking node's type (so copy it from the current node) and do nice things with the node:

        IBuildInformationNode childNode = currentTracking.Node.Children.CreateNode();
        childNode.Type = currentTracking.Node.Type;
        childNode.Fields.Add("DisplayText", "This text is displayed.");

    You can also add a build step to display progress:

        IBuildStep buildStep = childNode.Children.AddBuildStep("Custom Build Step", "This is my custom build step");
        buildStep.FinishTime = DateTime.Now.AddSeconds(10);
        buildStep.Status = BuildStepStatus.Succeeded;

    Or you can add a hyperlink to the node:

        childNode.Children.AddExternalLink("My link", new Uri("http://www.ewaldhofman.nl"));

    When you combine this together, you get the result in the build output. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.


  • FTP Publishing with the new Windows Azure Release

    - by Harish Ranganathan
    There is a good chance you might have stumbled upon the new Windows Azure release that we made on June 6th. Scott Guthrie's post summarizes the new features well. One of my favorite features is Windows Azure Websites and the ability to publish files to Azure using your FTP client.

    Windows Azure Websites offers low-cost (free up to 10 websites) web hosting where you can deploy any website that can run on IIS 7.0, quickly. The earlier releases of the Azure SDKs and the Azure platform supported .NET 3.5 and above for running your applications. This was a constraint for many, since a lot of ASP.NET 2.0 applications were built over time, and many of you were skeptical about migrating them to .NET 4 simply to put them on Azure. Windows Azure Websites offers the flexibility of running any IIS 7.0-supported .NET version, which means you can run .NET 1.1, 2.0, 3.5 and .NET 4. Not just that! You can also run classic ASP applications. Windows Azure Websites also doesn't require you to go through the complexity of adding the cloud project template and then publishing the configuration files. Let's walk through Websites and FTP publishing step by step.

    I downloaded the Club Website Starter Kit from http://www.asp.net/downloads/starter-kits/club It also requires a database, so I downloaded the SQL scripts and created a SQL Server database called Club. This installs a Web Site project template. Note that I am running Windows 8 Release Preview and Visual Studio 2012 RC. After installing the template, select File – New – Website and don't forget to choose the framework version as .NET 2.0. You can see the "Club Website Starter Kit"; once you select it, the website gets created. You will encounter a warning indicating that the Club Website Starter Kit uses SQL Express and that the recommended database is LocalDB Express. Click OK to continue. Once the website is created, open up the web.config and locate the "ClubSiteDB" connection string. By default, it points to a SQL Express database. Instead, configure it to use your local SQL Server. Also, open up Global.asax and comment out the following line (there seems to be an issue in the code that doesn't create the role):

        if (!Roles.RoleExists("Administrators")) Roles.CreateRole("Administrators");

    After that, hit CTRL+F5 and you should be able to see the website running. So, now we have the Club starter kit site up and running locally.

    Moving to Azure

    Visit http://manage.windowsazure.com/ and sign up for a trial account. This allows you to host up to 10 websites for free, along with a host of other benefits. The free websites can be extended to a year without any charge. Once you have signed up, sign in to the portal using the Live ID used for sign-up. After signing in, you will be presented with the "All Items" listing page, which lists websites, cloud services, databases etc. If this is the first time, you won't find anything there yet.

    Click on the "Websites" link in the left menu, then click on "New" at the bottom; a dialog appears. In it, select Website, click on "Quick Create", specify "MyFirstDemo" in the URL textbox and click the "Create Web Site" link below. It should take a few seconds to create the website. Once the website is created, click on the listing and it opens the dashboard. Since we haven't done anything yet, there won't be any statistics.

    Click on the "Download publish profile" link at the bottom right. This file has the FTP publishing settings. Also, if you scroll down, you can see the FTP URL for this site. It should typically start with ftp://waws-xxxx-xxx-xxxx In the downloaded publish profile file, you can also find the FTP URL. Pick the following from this file: publishUrl (the second one, the one that appears after publishMethod="FTP") and the userName and userPWD that follow it.

    We now have everything required to publish the files. But since the Club starter kit uses a database, we need to have the database running on SQL Azure. Go back to the main menu and click on "New" at the bottom, but this time select "SQL Database" and provide "Club" as the database name for "Quick Create". If this is the first time, a server will be created; otherwise, it picks up the existing server name.

    Once the database is created, you can use the SQL Azure Migration Wizard http://sqlazuremw.codeplex.com/ and provide the credentials to connect to the local database and then to the SQL Azure database for migrating the "Club" database. The migration wizard UI hasn't changed much and is the same as explained in one of my earlier posts http://geekswithblogs.net/ranganh/archive/2009/09/29/taking-your-northwind-database-to-sql-azure-and-binding-it.aspx

    Once the database is migrated, come back to the main screen and click on the database in the Azure management portal. It opens up the dashboard of the database. Click on "Show connection strings" and a list of connection string formats pops up. Choose the ADO.NET connection string, edit the password to the one you provided when creating the database server in the Azure portal, and paste it into the config file of the Club starter kit website. Just to reiterate, the connection string key is ClubSiteDB. Try running the website once to ensure that the application, though running locally, can connect to the SQL database running on Azure. Once you are able to run the website successfully, we are all set to do the FTP publishing.

    Download your favorite FTP tool; I use http://filezilla-project.org/ In the Host textbox, paste the FTP URL that you picked up from the publish profile file, and also paste the username and password. Click on "QuickConnect". If everything is fine, you will be connected to the remote server and can see the wwwroot folder of the website running in Azure. Make sure that in the "Local site" pane on the left you choose the path to the folder of your website. Open up the website folder on the left so that it lists all the files and folders inside, select all of them, and click "Upload", or simply drag and drop all the files to the root folder listed above. Once the publishing is done, you should be able to hit the site URL that you can find on the dashboard page of the website. In our case, it would be http://MyFirstDemo.azurewebsites.net

    That's it, we have now done FTP publishing in Azure, and that too with a .NET 2.0 website running on Azure. Cheers !!!
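    For reference, the edited ClubSiteDB entry in web.config would look roughly like the sketch below; the server name, user and password are placeholders for the values from the portal's "Show connection strings" dialog:

        <connectionStrings>
          <add name="ClubSiteDB"
               connectionString="Server=tcp:yourserver.database.windows.net,1433;Database=Club;User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;"
               providerName="System.Data.SqlClient" />
        </connectionStrings>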

