Search Results

Search found 35149 results on 1406 pages for 'yield return'.


  • Customize autoindent settings in VIMRC file

    - by Shane Reustle
    I have autoindent enabled in my .vimrc file but have run into an annoying bug/feature. For example, when I'm tabbed in 3 times and I hit return, the new line is also tabbed in 3 times. When I hit enter again, that new line is also indented 3 times, as it should be. The problem occurs when I go back up to the previous line (the first of the 2 new lines): Vim automatically removes the whitespace because it saw it as an empty line. Is there a way to disable this from happening? I'd like to be able to go back to coding like this:

        function test(){
        <return>
        <return>
        }
        <up>
        <right>

    Thanks!
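
    A commonly suggested workaround (a sketch from general Vim lore, not from the question itself) is to keep the new line technically non-empty while inserting, by typing a throwaway character and deleting it again. As .vimrc mappings:

        " Preserve autoindent whitespace on otherwise-empty lines by
        " inserting a dummy character and immediately deleting it.
        inoremap <CR> <CR>x<BS>
        nnoremap o ox<BS>
        nnoremap O Ox<BS>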

    Read the article

  • Hiding a HTTP Auth-Realm by sending 404 to non-known IPs?

    - by zhenech
    I have an Apache (2.2) serving a web-app on example.com. That web-app has a debug-page reachable via example.com/debug. /debug is currently protected with HTTP basic auth. As there is only a very small user-base who has access to the debug-page, I would like to hide it based on IP address and return 404 to clients not accessing from our VPN. Serving a 404 based on IP address alone is easy and is described in http://serverfault.com/a/13071. But as soon as I add authentication, the users see a 401 instead of a 404. Basically, what I need is:

        if ($REMOTE_ADDR ~ 10.11.12.*):
            do_basic_auth (aka return 401)
        else:
            return 404
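
    One way to get that shape in Apache 2.2 (a sketch, not from the question; the VPN range, realm and password file path are placeholders) is to short-circuit non-VPN clients with mod_rewrite, which runs before the authentication phase:

        # Anyone outside the VPN range gets a 404 before auth is ever attempted
        RewriteEngine On
        RewriteCond %{REMOTE_ADDR} !^10\.11\.12\.
        RewriteRule ^/debug - [R=404,L]

        # VPN clients fall through to the existing basic auth
        <Location /debug>
            AuthType Basic
            AuthName "debug"
            AuthUserFile /etc/apache2/htpasswd.debug
            Require valid-user
        </Location>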

    Read the article

  • Escape not idempotent in zsh's vi emulation?

    - by user1063042
    I have zsh configured to use vi keybindings. I've noticed some unexpected behavior with "escape". In vim (I haven't tested vanilla vi), if I hit escape twice, I can hit 'i' once to return to insert mode. In zsh, if I happen to hit escape twice, hitting 'i' won't return me to insert mode; I have to hit it twice. Another example of this comes up in navigation. If I hit escape once, I can use '^' and '$' as expected. But if I've accidentally hit escape twice (or more), they fail to work until I return to insert mode and escape again. Do I somehow have zsh configured incorrectly, or is this just a known difference in zsh's vi emulation?

    Read the article

  • Refining an AutoHotkey script

    - by roy2012
    The purpose of this script is: the first two rows of hotkeys should always be in effect, while the remaining hotkeys should work only when no text input is active. In other words, when a text cursor is blinking anywhere on the screen, waiting for text or digits, pressing z, x, a, s or q should just produce the normal, original letters. How can I do that?

        Rwin::^space
        AppsKey::^w
        CapsLock::MButton
        z::PgUp
        x::PgDn

        *a up::send {shift up}{ctrl up}{LButton up}
        *a::
        GetKeyState, LButtonState, LButton
        if LButtonState = U
        send {shift down}{ctrl down}{LButton down}
        return

        *s up::send {shift up}{ctrl up}{RButton up}
        *s::
        GetKeyState, RButtonState, RButton
        if RButtonState = U
        send {shift down}{ctrl down}{RButton down}
        return

        *q up::send {shift up}{ctrl up}{MButton up}
        *q::
        GetKeyState, MButtonState, MButton
        if MButtonState = U
        send {shift down}{ctrl down}{MButton down}
        return

    Read the article

  • Exclude pings from apache error logs (run from PHP exec)

    - by fooraide
    Now, for a number of reasons I need to ping several hosts on a regular basis for a dashboard display. I use this PHP function to do it:

        function PingHost($strIpAddr)
        {
            exec(escapeshellcmd('ping -q -W 1 -c 1 ' . $strIpAddr), $dataresult, $returnvar);
            if (substr($dataresult[4], 0, 3) == "rtt") {
                // We got a ping result, let's parse it.
                $arr = explode("/", $dataresult[4]);
                return ereg_replace(" ms", "", $arr[4]);
            } elseif (substr($dataresult[3], 35, 16) == "100% packet loss") {
                // Host is down!
                return "Down";
            } elseif ($returnvar == "2") {
                return "No DNS";
            }
        }

    The problem is that whenever there is an unknown host, I will get an error logged to my apache error log (/var/log/apache/error.log). How would I go about disabling logs for this particular function? Disabling logs in the vhost is not an option since logs for that vhost are relevant, just not the pings. Thanks,
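
    Since the noise comes from ping writing to stderr, which the web server captures into its error log, one low-impact fix (a sketch, not from the question) is to discard stderr inside the command itself:

        // Redirect ping's stderr to /dev/null so unknown-host complaints
        // never reach Apache's error log.
        exec(escapeshellcmd('ping -q -W 1 -c 1 ' . $strIpAddr) . ' 2>/dev/null', $dataresult, $returnvar);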

    Read the article

  • how to google a symbol keyword like "$?"

    - by ZhengZhiren
    I saw a trick in a book: in a Linux shell, we can use $? to get the return value of the last command. For example, if we run a command and it exits normally, the return value is 0, and echoing $? will print 0 on the screen. I want to google this kind of usage, so I have to type the two symbols $? into the search box, but the search engine just returns nothing to me. I have looked at the Google help page but still can't find a solution. So my question is: how can I search with this kind of keyword? Or if you can give me some advice on the usage of $? or that sort of thing, that would also be appreciated.
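
    For reference, a minimal shell session showing the behaviour being described (my illustration, not from the question):

        $ true          # a command that exits normally
        $ echo $?
        0
        $ false         # a command that exits with a failure status
        $ echo $?
        1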

    Read the article

  • BIND unstable with DLZ+MySQL on Ubuntu 9.10, any ideas?

    - by Chris
    My BIND server keeps dropping out and I can't work out why. Here is some info from the syslog that I think pertains to the failure(s):

        Apr 22 21:12:17 dnsdebug named[6613]: mysql driver unable to return result set for lookup query
        Apr 22 21:12:17 dnsdebug kernel: [285552.573949] type=1503 audit(1271963537.759:53): operation="open" pid=6618 parent=1 profile="/usr/sbin/named" requested_mask="::rw" denied_mask="::rw" fsuid=107 ouid=0 name="/dev/tty"
        Apr 22 21:12:17 dnsdebug named[6613]: mysql driver unable to return result set for lookup query
        Apr 22 21:13:17 dnsdebug named[6613]: last message repeated 7 times

    Any ideas? MySQL segfaulted occasionally, but that seems to no longer be an issue. It's the 64-bit version of Ubuntu, too. Sometimes it will return records just fine; other times it appears to just randomly go down.

    Read the article

  • nginx not returning 304 on cached content

    - by Don H
    I'm using nginx as a reverse proxy with an Apache back-end handling some PHP files. The files return the right expiry headers and proxy_cache does a good job of caching them, but I've noticed that the cached content returns a 200 on every refresh, when it might be more efficient to return a 304 for the cached files. The files in question are generated by PHP; the URLs do not have .php in them, as they've been prettified. Any idea why nginx might not be returning 304 on repeated visits to cached PHP output? To clarify: it's using proxy_cache for caching dynamic PHP pages (not static HTML pages generated by PHP). I'm setting Expires headers in the PHP file to time() + 24 hours. With that in mind, I was hoping nginx would then be able to return 304s on its cached versions during that 24-hour window.
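
    One thing worth checking: a 304 needs a validator (Last-Modified or ETag) for the client's conditional request to match against, and an Expires header alone doesn't provide one. A sketch of supplying a validator from the PHP back-end ($contentTimestamp is a placeholder for whatever timestamp the page is generated from):

        // Emit a Last-Modified validator alongside the Expires header,
        // and answer conditional requests with 304 at the application level.
        $lastModified = gmdate('D, d M Y H:i:s', $contentTimestamp) . ' GMT';
        header('Last-Modified: ' . $lastModified);
        header('Expires: ' . gmdate('D, d M Y H:i:s', time() + 86400) . ' GMT');

        if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
            strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $contentTimestamp) {
            header('HTTP/1.1 304 Not Modified');
            exit;
        }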

    Read the article

  • count the number of times a substring is found within a date range in excel

    - by ckr
    I have a spreadsheet that contains test data: column A has the test name and column B contains the test date. I want to count the number of times that the string "Rerun" is found within a certain date range. For example:

        A                B
        test1            11/2/2012
        test2            11/7/2012
        test1_Rerun_1    11/10/2012
        test2_Rerun_1    11/16/2012

    I am doing a weekly report, so I want to show how many tests had to be rerun in a particular week. So in the above example:

        week ending 11/2/12 would return 0 (look for dates > 10/26/12 and <= 11/2/12 with substring "Rerun")
        week ending 11/9/12 would return 0 (look for dates > 11/2/12 and <= 11/9/12 with substring "Rerun")
        week ending 11/16/12 would return 2 (look for dates > 11/9/12 and <= 11/16/12 with substring "Rerun")
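
    One way to express that count (a sketch; it assumes the week-ending date sits in cell D1 and the data lives in columns A and B) is COUNTIFS with a wildcard match on the name column and a bounded date range:

        =COUNTIFS(A:A, "*Rerun*", B:B, ">" & (D1 - 7), B:B, "<=" & D1)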

    Read the article

  • nginx: handling 404 with error_page

    - by ytw
    Originally, I had something like this in the nginx.conf file:

        location ^~ /test_api {
            types {
                application/json json;
            }
            root /usr/local/www/data;
            rewrite "/test_api/(.*)" /api_response/test_api_$1.json break;
            error_page 404 /api_response/unknown_request.json;
        }

    When a requested resource is not found locally, unknown_request.json (the default response) is returned correctly. Then I had to change the rewrite to point to a remote server, as follows:

        rewrite "/test_api/(.*)" $scheme://www.somedomain.com/test_api_$1 break;

    It doesn't return unknown_request.json (the default response) anymore, even though the remote server returns a 404. Is there a way to continue to return unknown_request.json to the client when the remote server returns a 404, assuming the remote server can't be changed to return unknown_request.json? Thanks very much.
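
    Worth noting: a rewrite to an absolute URL makes nginx answer with an external redirect, so the remote server's 404 goes straight to the client and never passes back through nginx. A sketch of an alternative (my illustration, not from the question; the upstream host is taken from the question) that keeps nginx in the response path so error_page can substitute its own body:

        location ^~ /test_api/ {
            # Proxy instead of redirecting, so nginx sees the upstream status code
            proxy_pass http://www.somedomain.com/test_api_;

            # Let error_page handle upstream responses with status >= 300
            proxy_intercept_errors on;
            error_page 404 /api_response/unknown_request.json;
        }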

    Read the article

  • Silverlight Commands Hacks: Passing EventArgs as CommandParameter to DelegateCommand triggered by EventTrigger

    - by brainbox
    Today I've tried to find a way to pass EventArgs as the CommandParameter to a DelegateCommand triggered by an EventTrigger. By reverse engineering the default InvokeCommandAction, I found that the Blend team just ignores the event args. To resolve this issue I have created my own action for triggering delegate commands.

        public sealed class InvokeDelegateCommandAction : TriggerAction<DependencyObject>
        {
            public static readonly DependencyProperty CommandParameterProperty =
                DependencyProperty.Register("CommandParameter", typeof(object), typeof(InvokeDelegateCommandAction), null);

            public static readonly DependencyProperty CommandProperty = DependencyProperty.Register(
                "Command", typeof(ICommand), typeof(InvokeDelegateCommandAction), null);

            public static readonly DependencyProperty InvokeParameterProperty = DependencyProperty.Register(
                "InvokeParameter", typeof(object), typeof(InvokeDelegateCommandAction), null);

            private string commandName;

            public object InvokeParameter
            {
                get { return this.GetValue(InvokeParameterProperty); }
                set { this.SetValue(InvokeParameterProperty, value); }
            }

            public ICommand Command
            {
                get { return (ICommand)this.GetValue(CommandProperty); }
                set { this.SetValue(CommandProperty, value); }
            }

            public string CommandName
            {
                get { return this.commandName; }
                set
                {
                    if (this.CommandName != value)
                    {
                        this.commandName = value;
                    }
                }
            }

            public object CommandParameter
            {
                get { return this.GetValue(CommandParameterProperty); }
                set { this.SetValue(CommandParameterProperty, value); }
            }

            protected override void Invoke(object parameter)
            {
                this.InvokeParameter = parameter;

                if (this.AssociatedObject != null)
                {
                    ICommand command = this.ResolveCommand();
                    if ((command != null) && command.CanExecute(this.CommandParameter))
                    {
                        command.Execute(this.CommandParameter);
                    }
                }
            }

            private ICommand ResolveCommand()
            {
                ICommand command = null;
                if (this.Command != null)
                {
                    return this.Command;
                }

                var frameworkElement = this.AssociatedObject as FrameworkElement;
                if (frameworkElement != null)
                {
                    object dataContext = frameworkElement.DataContext;
                    if (dataContext != null)
                    {
                        PropertyInfo commandPropertyInfo = dataContext
                            .GetType()
                            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
                            .FirstOrDefault(
                                p =>
                                typeof(ICommand).IsAssignableFrom(p.PropertyType) &&
                                string.Equals(p.Name, this.CommandName, StringComparison.Ordinal)
                            );

                        if (commandPropertyInfo != null)
                        {
                            command = (ICommand)commandPropertyInfo.GetValue(dataContext, null);
                        }
                    }
                }
                return command;
            }
        }

    Example:

        <ComboBox>
            <ComboBoxItem Content="Foo option 1" />
            <ComboBoxItem Content="Foo option 2" />
            <ComboBoxItem Content="Foo option 3" />
            <Interactivity:Interaction.Triggers>
                <Interactivity:EventTrigger EventName="SelectionChanged" >
                    <Presentation:InvokeDelegateCommandAction
                        Command="{Binding SubmitFormCommand}"
                        CommandParameter="{Binding RelativeSource={RelativeSource Self}, Path=InvokeParameter}" />
                </Interactivity:EventTrigger>
            </Interactivity:Interaction.Triggers>
        </ComboBox>

    BTW: InvokeCommandAction's CommandName property tries to find the command among the properties of the view. That is very strange, because in the MVVM pattern the command should be on the viewmodel supplied to the DataContext.

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is, that code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected into any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context.

        public class SamlAuthenticationInterceptor : RequestInterceptor
        {
            SecurityTokenHandlerCollection handlers;

            public SamlAuthenticationInterceptor()
                : base(false)
            {
                this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
            }

            public override void ProcessRequest(ref RequestContext requestContext)
            {
                SecurityToken token = ExtractCredentials(requestContext.RequestMessage);
                if (token != null)
                {
                    ClaimsIdentityCollection claims = handlers.ValidateToken(token);
                    var principal = new ClaimsPrincipal(claims);
                    InitializeSecurityContext(requestContext.RequestMessage, principal);
                }
                else
                {
                    DenyAccess(ref requestContext);
                }
            }

            private void DenyAccess(ref RequestContext requestContext)
            {
                Message reply = Message.CreateMessage(MessageVersion.None, null);
                HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
                responseProperty.Headers.Add("WWW-Authenticate", String.Format("Basic realm=\"{0}\"", ""));
                reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
                requestContext.Reply(reply);
                requestContext = null;
            }

            private SecurityToken ExtractCredentials(Message requestMessage)
            {
                HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
                string authHeader = request.Headers["Authorization"];
                if (authHeader != null && authHeader.Contains("<saml"))
                {
                    XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
                    var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
                    SecurityToken token = col.ReadToken(xmlReader);
                    return token;
                }
                return null;
            }

            private void InitializeSecurityContext(Message request, IPrincipal principal)
            {
                List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
                policies.Add(new PrincipalAuthorizationPolicy(principal));
                ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());
                if (request.Properties.Security != null)
                {
                    request.Properties.Security.ServiceSecurityContext = securityContext;
                }
                else
                {
                    request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
                }
            }

            class PrincipalAuthorizationPolicy : IAuthorizationPolicy
            {
                string id = Guid.NewGuid().ToString();
                IPrincipal user;

                public PrincipalAuthorizationPolicy(IPrincipal user)
                {
                    this.user = user;
                }

                public ClaimSet Issuer
                {
                    get { return ClaimSet.System; }
                }

                public string Id
                {
                    get { return this.id; }
                }

                public bool Evaluate(EvaluationContext evaluationContext, ref object state)
                {
                    evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
                    evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
                    evaluationContext.Properties["Principal"] = user;
                    return true;
                }
            }
        }

    A WCF Data Service, as any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file:

        <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"
                        Factory="AppServiceHostFactory" %>

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.ServiceModel.Web;

        class AppServiceHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
                result.Interceptors.Add(new SamlAuthenticationInterceptor());
                return result;
            }
        }

    WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one, as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit). Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using any HTTP client library for consuming the data service, that's easy to do: you only need to include the SAML token as part of the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event, "SendingRequest", that any client application can leverage to include custom code that modifies the HTTP request before it is sent to the service. So you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token as part of the "Authorization" header:

        public static class DataServiceContextExtensions
        {
            public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
            {
                string address = string.Format(STSAddressFormat, baseStsAddress, realm);
                string token = NegotiateSecurityToken(address);
                context.SendingRequest += (source, args) =>
                {
                    args.RequestHeaders.Add("Authorization", token);
                };
            }

            private static string NegotiateSecurityToken(string address)
            {
            }
        }

    I left the NegotiateSecurityToken method empty for this extension, as it depends pretty much on how you are negotiating tokens from an existing STS. In case you want an end-to-end REST solution that involves an HTTP endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.

    Read the article

  • In HLSL pixel shader, why is SV_POSITION different to other semantics?

    - by tina nyaa
    In my HLSL pixel shader, SV_POSITION seems to have different values from any other semantic I use. I don't understand why this is. Can you please explain it? For example, I am using a triangle with the following coordinates:

        (0.0f, 0.5f)
        (0.5f, -0.5f)
        (-0.5f, -0.5f)

    The w and z values are 0 and 1, respectively. This is the shader:

        struct VS_IN
        {
            float4 pos : POSITION;
        };

        struct PS_IN
        {
            float4 pos : SV_POSITION;
            float4 k : LOLIMASEMANTIC;
        };

        PS_IN VS( VS_IN input )
        {
            PS_IN output = (PS_IN)0;
            output.pos = input.pos;
            output.k = input.pos;
            return output;
        }

        float4 PS( PS_IN input ) : SV_Target
        {
            // screenshot 1
            return input.pos;
            // screenshot 2
            return input.k;
        }

        technique10 Render
        {
            pass P0
            {
                SetGeometryShader( 0 );
                SetVertexShader( CompileShader( vs_4_0, VS() ) );
                SetPixelShader( CompileShader( ps_4_0, PS() ) );
            }
        }

    Screenshot 1: http://i.stack.imgur.com/rutGU.png
    Screenshot 2: http://i.stack.imgur.com/NStug.png

    (Sorry, I'm not allowed to post images until I have a lot of 'reputation')

    When I use the first statement (the result is screenshot 1), the one that uses the SV_POSITION semantic, the result is completely unexpected and is yellow, whereas using any other semantic (screenshot 2) will produce the expected result. Why is this?

    Read the article

  • Need help transforming DirectX 9 skybox hlsl shader to DirectX 11

    - by J2V
    I am in the middle of implementing a skybox in my game. I have been following this tutorial: http://rbwhitaker.wikidot.com/skyboxes-2. I am using MonoGame as a framework, and in order to support both Windows and Windows 8 Metro I need to compile the shader with pixel and vertex shader model 4:

        compile vs_4_0_level_9_1
        compile ps_4_0_level_9_1

    However, some of the HLSL syntax has been updated with DX10 and DX11. I need to update this HLSL code:

        float4x4 World;
        float4x4 View;
        float4x4 Projection;
        float3 CameraPosition;

        Texture SkyBoxTexture;
        samplerCUBE SkyBoxSampler = sampler_state
        {
            texture = <SkyBoxTexture>;
            magfilter = LINEAR;
            minfilter = LINEAR;
            mipfilter = LINEAR;
            AddressU = Mirror;
            AddressV = Mirror;
        };

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float3 TextureCoordinate : TEXCOORD0;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            float4 worldPosition = mul(input.Position, World);
            float4 viewPosition = mul(worldPosition, View);
            output.Position = mul(viewPosition, Projection);
            float4 VertexPosition = mul(input.Position, World);
            output.TextureCoordinate = VertexPosition - CameraPosition;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            return texCUBE(SkyBoxSampler, normalize(input.TextureCoordinate));
        }

        technique Skybox
        {
            pass Pass1
            {
                VertexShader = compile vs_2_0 VertexShaderFunction();
                PixelShader = compile ps_2_0 PixelShaderFunction();
            }
        }

    I guess I need to change Texture into TextureCube, change the sampler, swap texCUBE() with TextureCube.Sample(), and change the pixel shader's return semantic to SV_Target0. I'm very new to shader languages and any help is appreciated!
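
    For reference, a sketch of what those guesses look like in shader model 4 syntax (my illustration, not verified against the asker's MonoGame project; identifiers are kept from the original effect):

        TextureCube SkyBoxTexture;

        SamplerState SkyBoxSampler
        {
            Filter = MIN_MAG_MIP_LINEAR;
            AddressU = Mirror;
            AddressV = Mirror;
        };

        float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target0
        {
            // TextureCube.Sample(sampler, coord) replaces texCUBE(sampler, coord)
            return SkyBoxTexture.Sample(SkyBoxSampler, normalize(input.TextureCoordinate));
        }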

    Read the article

  • Twitter gem - undefined method `stringify_keys'

    - by Piet
    Have you been getting the following errors when running the Twitter gem lately?

        /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `send': undefined method `stringify_keys' for # (NoMethodError)
        from /usr/local/lib/ruby/gems/1.8/gems/httparty-0.4.3/lib/httparty/response.rb:15:in `method_missing'
        from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:131:in `deep_update'
        from /usr/local/lib/ruby/gems/1.8/gems/mash-0.0.3/lib/mash.rb:50:in `initialize'
        from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `new'
        from /usr/local/lib/ruby/gems/1.8/gems/twitter-0.6.13/lib/twitter/search.rb:101:in `fetch'
        from test.rb:26

    It's because Twitter has been sending back plain-text errors that are treated as a string instead of JSON and can't be properly 'Mashed' by the Twitter gem. Also check http://github.com/jnunemaker/twitter/issues#issue/6. Without diving into the bowels of the Twitter gem or HTTParty, you could 'begin...rescue' this error and try again in 5 minutes. I fixed it by overriding the offending code to return nil and checking for a nil response as follows:

        module Twitter
          class Search
            def fetch(force=false)
              if @fetch.nil? || force
                query = @query.dup
                query[:q] = query[:q].join(' ')
                query[:format] = 'json' # This line is the hack and whole reason we're monkey-patching at all.
                response = self.class.get('http://search.twitter.com/search', :query => query, :format => :json)
                # Our patch: response should be a Hash. If it isn't, return nil.
                return nil if response.class != Hash
                @fetch = Mash.new(response)
              end
              @fetch
            end
          end
        end

    (adapted from http://github.com/jnunemaker/twitter/issues#issue/9)

    If you have a better solution: speak up!

    Read the article

  • Reusable VS clean code - where's the balance?

    - by Radek Šimko
    Let's say I have a data model for blog posts and have two use-cases of that model: getting all blog posts, and getting only the blog posts written by a specific author. There are basically two ways I can realize that.

    1st model:

        class Articles
        {
            public function getPosts()
            {
                return $this->connection->find()
                    ->sort(array('creation_time' => -1));
            }

            public function getPostsByAuthor( $authorUid )
            {
                return $this->connection->find(array('author_uid' => $authorUid))
                    ->sort(array('creation_time' => -1));
            }
        }

    1st usage (presenter/controller):

        if ( $_GET['author_uid'] ) {
            $posts = $articles->getPostsByAuthor($_GET['author_uid']);
        } else {
            $posts = $articles->getPosts();
        }

    2nd one:

        class Articles
        {
            public function getPosts( $authorUid = NULL )
            {
                $query = array();
                if( $authorUid !== NULL ) {
                    $query = array('author_uid' => $authorUid);
                }
                return $this->connection->find($query)
                    ->sort(array('creation_time' => -1));
            }
        }

    2nd usage (presenter/controller):

        $posts = $articles->getPosts( $_GET['author_uid'] );

    To sum up the (dis)advantages:

        1) cleaner code
        2) more reusable code

    Which one do you think is better and why? Is there any kind of compromise between those two?

    Read the article

  • Integrate Bing Search API into ASP.Net application

    - by sreejukg
    A couple of months back, I wrote an article about how to integrate the Bing search engine (API 2.0) with an ASP.Net website. You can refer to the article here: http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx

    Things are changing rapidly in the tech world, and Bing has also changed! The Bing Search API 2.0 will work until August 1, 2012; after that it will not return results. Shocked? Don't worry: the API has moved to the Windows Azure Marketplace, where you can sign up and continue using it, and there is a free version available based on your usage. In this article, I am going to explain how you can integrate the new Bing API, available in the Windows Azure Marketplace, with your website. You can access the Windows Azure Marketplace from the link below:

        https://datamarket.azure.com/

    There are lots of applications available for you to subscribe to and use, and Bing is one of them. You can find the new Bing Search API here:

        https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44

    To get access to the Bing Search API, you first need to register an account with the Windows Azure Marketplace. Sign in to the Windows Azure Marketplace site using your Windows Live account; you will see the sign in button at the top right of the page. Clicking on the sign in button will take you to the Windows Live ID authentication page, where you can enter a Windows Live ID to log in. Once logged in, you will see the registration page for the Windows Azure Marketplace. You can agree or disagree to the email address usage by Microsoft; I believe selecting the check box means you will get email about what is happening in the Windows Azure Marketplace. Click on the continue button once you are done. On the next page, you must accept the terms of use; it is not optional. Scroll down the page, select the I agree checkbox, and click on the Register button. Now you are a registered member of the Windows Azure Marketplace and can subscribe to data applications.

    In order to use the Bing API in your application, you must obtain your Account Key. The previous version of Bing required an API key; the current version uses an Account Key instead. Once you are logged in to the Windows Azure Marketplace, you can see "My Account" in the top menu. From the My Account section, you can manage your subscriptions and Account Keys; Account Keys will be used by your applications to access the subscriptions from the marketplace. Click on the My Account link, then Account Keys in the left menu, and then add an account key or use the default Account Key available. Creating an account key is a very simple process, and you can also remove the account keys you create if necessary.

    The next step is to subscribe to the Bing Search API. At this moment, Bing offers 2 APIs for search:

        1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44
        2. Bing Search API – Web Results only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65

    The difference is that the latter gives you only web results, whereas with the former you can specify the source type, such as image, video, web, news, etc.
    Carefully choose the API based on your application requirements. In this article, I am going to use the Web Results Only API, but the steps will be similar for both. Go to the API page (https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65); you can see the subscription options on the right side, and at the bottom of the page you can see the free option. Since I am going to use the free option, just click the Sign Up link for it, select the I agree check box, and click the Sign Up button. You will get a receipt page that details your subscription. Now you are ready to use the Bing Search API – Web Results.

    The next step is to integrate the API into your ASP.Net application. If you go to the Search API page (as well as the receipt page), you can see a .Net C# Class Library link. Click on the link and you will get a code file named "BingSearchContainer.cs". In the following sections I am going to demonstrate the use of the Bing Search API from an ASP.Net application.

    Create an empty ASP.Net web application and add the downloaded code file ("BingSearchContainer.cs") to the project: right click your project in Solution Explorer, choose Add -> Existing Item, browse to the download location, select "BingSearchContainer.cs" and add it to the project. To build the code file you need to add a reference to the System.Data.Services.Client library, which you can find in the .Net tab when you select Add -> Reference. The project should now build without any errors.

    Add an ASP.Net page to the project. I have included a text box, a button and a GridView on the page; the idea is to search for the text entered and display the results in the GridView. The button click event handler is shown further below. Run the project, enter some text in the text box and click the Search button, and you will see the results coming from Bing. I entered the text "Microsoft" in the textbox, clicked the button, and got the matching results back.

    Searching Specific Websites

    If you want to search a particular website, you pass the site URL with site:<site url name>, and if you have more sites, use a pipe (|). E.g. the following search query

        site:microsoft.com | site:adobe.com design

    will search for the word design and return results from Microsoft.com and Adobe.com. See the sample code that searches only Microsoft.com for the text entered in the above sample:

        var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text, null, null, null, null, null, null);

    Paging the results returned by the API

    By default the Bing API will return 100 results based on your query. The default code file that you downloaded from Bing doesn't include any option for paging, but you can modify the downloaded code to support it. The Bing API supports two parameters: $top (the number of results to return) and $skip (the number of records to skip). So if you want the 3rd page of results with a page size of 10, you need to pass $top = 10 and $skip = 20.

    Open BingSearchContainer.cs in the editor. You can see the Web method in it:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options)

    In the method signature, I have added two more parameters:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options, int resultCount, int pageNo)

    and in the method body, you need to pass the parameters to the query variable:

        query = query.AddQueryOption("$top", resultCount);
        query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount);
        return query;

    Note that I didn't perform any validation, but you need to check conditions such as resultCount and pageNo being greater than or equal to 1. If the parameters are not valid, the Bing Search API will throw an error.

    Now see the following code in the SearchPage.aspx.cs file:

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var bingContainer = new Bing.BingSearchContainer(new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));
            // replace this value with your account key
            var accountKey = "your key";
            // the next line configures the bingContainer to use your credentials.
            bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);
            var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text, null, null, null, null, null, null, 3, 2);
            lstResults.DataSource = webResults;
            lstResults.DataBind();
        }

    This call returns 3 results starting from the second page (by skipping the first 3 results).

    Bing provides complete integration to its offerings. When you develop search-based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.Net application is a simple process, and without investing much time you can develop a good search-based application. Make sure you read the terms of use before designing the application and decide which API usage is suitable for you.

    Further reading:

        BING API Migration Guide: http://go.microsoft.com/fwlink/?LinkID=248077
        Bing API FAQ: http://go.microsoft.com/fwlink/?LinkID=252146
        Bing API Schema Guide: http://go.microsoft.com/fwlink/?LinkID=252151

    Read the article

  • How to Use USER_DEFINED Activity in OWB Process Flow

    - by Jinggen He
    Process Flow is a very important component of Oracle Warehouse Builder. With Process Flow, we can create and control the ETL process by setting all kinds of activities in a well-constructed flow. In Oracle Warehouse Builder 11gR2 there are 28 kinds of activities, which fall into three categories: Control activities, OWB-specific activities and Utility activities. For more information about Process Flow activities, please refer to the OWB online documentation. Most of those activities are pre-defined for some specific use. For example, the Mapping activity allows execution of an OWB mapping in a Process Flow, and the FTP activity allows an interaction between the local host and a remote FTP server. Besides those activities for specific purposes, the User Defined activity enables you to incorporate into a Process Flow an activity that is not defined within Warehouse Builder. So the User Defined activity brings flexibility and extensibility to Process Flow. In this article, we will take a tour of using the User Defined activity. Let's start.

    Enable execution of User Defined activity

    Let's start this section by creating a very simple Process Flow, which contains a Start activity, a User Defined activity and an End Success activity. Leave all parameters of the USER_DEFINED activity unchanged, except that we enter /tmp/test.sh into the Value column of the COMMAND parameter. Then let's create the shell script test.sh in the /tmp directory. Here is the content of /tmp/test.sh (this article demonstrates a scenario on a Linux system, and /tmp/test.sh is a Bash shell script):

        echo Hello World! > /tmp/test.txt

    Note: don't forget to grant the execution privilege on /tmp/test.sh to the OS Oracle user. For simplicity, we just use the following command:

        chmod +x /tmp/test.sh

    OK, it's so simple that we've almost done it. Now deploy the Process Flow and run it. On a newly installed OWB, we will come across an error saying "RPE-02248: For security reasons, activity operator Shell has been disabled by the DBA". That's because, by default, the User Defined activity is DISABLED. The configuration for this can be found in <ORACLE_HOME>/owb/bin/admin/Runtime.properties:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=DISABLED

    The property can be set to three different values: NATIVE_JAVA, SCHEDULER and DISABLED. NATIVE_JAVA uses the Java 'Runtime.exec' interface; SCHEDULER uses a DBMS Scheduler external job submitted by the Control Center repository owner, which is executed by the default operating system user configured by the DBA; DISABLED prevents execution via these operators. We enable the execution of the User Defined activity by setting:

        property.RuntimePlatform.0.NativeExecution.Shell.security_constraint=NATIVE_JAVA

    Restart the Control Center service for the change to take effect:

        cd <ORACLE_HOME>/owb/rtp/sql
        sqlplus OWBSYS/<password of OWBSYS> @stop_service.sql
        sqlplus OWBSYS/<password of OWBSYS> @start_service.sql

    Then run the Process Flow again. We will see that the Process Flow completes successfully, and the execution of /tmp/test.sh generated a file /tmp/test.txt containing the line "Hello World!".

    Pass parameters to User Defined Activity

    The Process Flow created in the above section has a drawback: the User Defined activity doesn't accept any information from OWB, nor does it give any meaningful results back to OWB. That's to say, it lacks interaction. Maybe sometimes such a Process Flow can fulfill the business requirement, but most of the time we need the User Defined activity to execute according to information produced by a prior step. In this section, we will see how to pass parameters to the User Defined activity and on into the to-be-executed shell script.

    First, let's see how to pass parameters to the script. The User Defined activity has an input parameter named PARAMETER_LIST. This is a list of parameters that will be passed to the command. Parameters are separated from one another by a token. The token is taken as the first character of the PARAMETER_LIST string, and the string must also end in that token. Warehouse Builder recommends the '?' character, but any character can be used. For example, to pass 'abc', 'def' and 'ghi' you can use any of the following equivalents:

        ?abc?def?ghi?
        !abc!def!ghi!
        |abc|def|ghi|

    If the token character or '\' needs to be included as part of the parameter, it must be preceded with '\' (for example '\\'). If '\' is the token character, then '/' becomes the escape character.

    Let's configure the PARAMETER_LIST parameter accordingly and modify the shell script /tmp/test.sh as below:

        echo $1 is saying hello to $2! > /tmp/test.txt

    Re-deploy the Process Flow and run it. We will see that the generated /tmp/test.txt contains the following line:

        Bob is saying hello to Alice!

    In the example above, the parameters passed into the shell script are static. This case is not very useful, because instead of passing parameters we could directly write their values in the shell script. To make the case more meaningful, we can pass two dynamic parameters, obtained from the previous activity, to the shell script. Prepare the Process Flow as follows: the Mapping activity MAPPING_1 has two output parameters, FROM_USER and TO_USER, and the User Defined activity has two input parameters, FROM_USER and TO_USER. All four parameters are of String type. Additionally, the Process Flow has two string variables: VARIABLE_FOR_FROM_USER and VARIABLE_FOR_TO_USER. Through VARIABLE_FOR_FROM_USER, the input parameter FROM_USER of USER_DEFINED gets its value from the output parameter FROM_USER of MAPPING_1; we achieve this by binding both parameters to VARIABLE_FOR_FROM_USER. In the same way, through VARIABLE_FOR_TO_USER, the input parameter TO_USER of USER_DEFINED gets its value from the output parameter TO_USER of MAPPING_1. Also, we need to change the PARAMETER_LIST of the User Defined activity to reference these parameters. Now the shell script gets its input from the Mapping activity dynamically. Deploy the Process Flow and all of its necessary dependees, then run the Process Flow. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    'USER B' and 'USER A' are the two outputs of the Mapping execution.

    Write the shell script within Oracle Warehouse Builder

    In the previous section, the shell script is located in the /tmp directory. But sometimes, when the shell script is small, or for the sake of maintaining consistency, you may want to keep the shell script inside Oracle Warehouse Builder. We can achieve this by configuring three parameters of the User Defined activity properly:

        COMMAND: set the path of the interpreter by which the shell script will be interpreted.
        PARAMETER_LIST: set it blank.
        SCRIPT: enter the shell script content.

    Note that on Linux the shell script content is passed into the interpreter as standard input at runtime. To actually pass parameters into the script, we can utilize variable substitutions: ${FROM_USER} will be replaced by the value of the FROM_USER input parameter of the User Defined activity, and likewise for the ${TO_USER} symbol. Besides the custom substitution variables, OWB also provides some system pre-defined substitution variables; you can refer to the online documentation for those. Deploy the Process Flow and run it. We see that the generated /tmp/test.txt contains the following line:

        USER B is saying hello to USER A!

    Leverage the return value of User Defined activity

    All of the previous sections connect the User Defined activity to END_SUCCESS with an unconditional transition. But what should we do if we want different subsequent activities for different shell script execution results?

    1. The simplest way is to add three simple-conditioned outgoing transitions to the User Defined activity. To simplify the scenario, we connect the User Defined activity to three End activities. Basically, if the shell script ends successfully, the whole Process Flow ends at END_SUCCESS; otherwise, it ends at END_ERROR (in our case, ending at END_WARNING seldom happens). In the real world, we can add more complex and meaningful subsequent business logic.

    2. Or we can utilize complex conditions to work with different results of the User Defined activity. Previously, our script only had this line:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt

    We can add more logic to it and return different values accordingly:

        echo ${FROM_USER} is saying hello to ${TO_USER}! > /tmp/test.txt
        if CONDITION_1 ; then
            ......
            exit 0
        fi
        if CONDITION_2 ; then
            ......
            exit 2
        fi
        if CONDITION_3 ; then
            ......
            exit 3
        fi

    After that we can leverage the result by checking RESULT_CODE in the condition expressions of those outgoing transitions. Let's suppose we have a Process Flow where the User Defined activity fans out to several further processes (SUB_PROCESS_n stands for the different downstream processes). We can set a complex condition for the transition from USER_DEFINED to SUB_PROCESS_1, and the other transitions can be set in the same way. Note that, in our shell script, we return 0, 2 and 3, but not 1: on a Linux system, if the shell script comes across a system error like an IO error, the return value will be 1, so we may want to handle that value explicitly.

    Summary

    Let's summarize what has been discussed in this article:

        How to create a Process Flow with a User Defined activity in it
        How to pass parameters from the prior activity to the User Defined activity and finally into the shell script
        How to write the shell script within Oracle Warehouse Builder
        How to do variable substitutions
        How to let the User Defined activity return different values, and in what ways we can leverage them

    Read the article

  • What code smell best describes this code?

    - by Paul Stovell
    Suppose you have this code in a class:

        private DataContext _context;

        public Customer[] GetCustomers()
        {
            GetContext();
            return _context.Customers.ToArray();
        }

        public Order[] GetOrders()
        {
            GetContext();
            return _context.Orders.ToArray();
        }

        // For the sake of this example, a new DataContext is *required*
        // for every public method call
        private void GetContext()
        {
            if (_context != null)
            {
                _context.Dispose();
            }
            _context = new DataContext();
        }

    This code isn't thread-safe: if two calls to GetOrders/GetCustomers are made at the same time from different threads, they may end up using the same context, or the context could be disposed while being used. Even if this bug didn't exist, however, it still "smells" like bad code. A much better design would be for GetContext to always return a new instance of DataContext, to get rid of the private field, and to dispose of the instance when done. Changing from an inappropriate private field to a local variable feels like a better solution. I've looked over the code smell lists and can't find one that describes this. In the past I've thought of it as temporal coupling, but the Wikipedia description suggests that's not the term:

        Temporal coupling
        When two actions are bundled together into one module just because they happen to occur at the same time.

    This page discusses temporal coupling, but the example is the public API of a class, while my question is about the internal design. Does this smell have a name? Or is it simply "buggy code"?
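
    For concreteness, a minimal sketch of the refactor described above (a local context per call instead of the shared field; my illustration, not the poster's code):

        public Customer[] GetCustomers()
        {
            // A fresh context per call, disposed deterministically when done.
            using (var context = new DataContext())
            {
                return context.Customers.ToArray();
            }
        }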

    Read the article

  • Windows Azure AppFabric: ServiceBus Queue WPF Sample

    - by xamlnotes
    The latest version of the AppFabric ServiceBus now has support for queues and topics. Today I will show you a bit about using queues and also talk about some of the best practices in using them. If you are just getting started, you can check out this site for more info on Windows Azure. One of the first things I thought of when Azure was announced was how we handle fault tolerance. Web sites hosted in Azure are not much of an issue unless they are using SQL Azure, and then you must account for potential fault or latency issues. Today I want to talk a bit about ServiceBus and how to handle fault tolerance, along with the housekeeping you have to take care of, such as connecting to the ServiceBus. To demonstrate some of the things you can do, let me walk through this sample WPF app that I am posting for you to download. To start off, the application is going to need things like the service namespace, issuer details and so forth to make everything work. To facilitate this I created settings in the WPF app for all of these items, then mapped a static class to them and set the values when the program loads, like so:

        StaticElements.ServiceNamespace = Convert.ToString(Properties.Settings.Default["ServiceNamespace"]);
        StaticElements.IssuerName = Convert.ToString(Properties.Settings.Default["IssuerName"]);
        StaticElements.IssuerKey = Convert.ToString(Properties.Settings.Default["IssuerKey"]);
        StaticElements.QueueName = Convert.ToString(Properties.Settings.Default["QueueName"]);

    Now I can get to each of these elements, plus some other common values or instances, directly from the StaticElements class. Now, let's look at the application. The blue graphic represents the queue we are going to use. After items are added and the queue stats are updated, you can see how the queue has grown. To add an item to the queue, click the Add Order button, which displays a dialog; after you fill in the form and press OK, the order is published to the ServiceBus queue and the form closes. The application also allows you to read the queued items by clicking the Process Orders button: the form shows the queued items in a list, and the queue graphic disappears, as the queue is now empty. In real practice we would normally use a Windows Service or some other automated process to subscribe to the queue and pull items from it. I created a class named ServiceBusQueueHelper that has the core queue features we need. There are three public methods:

        GetOrCreateQueue - Gets an instance of the queue description if the queue exists; if not, it creates the queue and returns a description instance.
        SendMessageToQueue - Takes an order instance and sends it to the queue. The call to the queue is wrapped in the ExecuteAction method from the Transient Fault Handling Framework, which handles all the retry logic for the queue send process.
        GetOrderFromQueue - Grabs an order from the queue and returns a typed order. It also marks the message complete so the queue can remove it.

    Now let's turn to the WPF window code (MainWindow.xaml.cs). The constructor contains the 4 lines shown above to set up the static variables and to perform other initialization tasks.
    The next few lines set up certain features we need for the ServiceBus:

        TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(StaticElements.IssuerName, StaticElements.IssuerKey);
        Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", StaticElements.ServiceNamespace, string.Empty);
        StaticElements.CurrentNamespaceManager = new NamespaceManager(serviceUri, credentials);
        StaticElements.CurrentMessagingFactory = MessagingFactory.Create(serviceUri, credentials);

    The next two lines update the queue name label and set the timer to 20 seconds:

        QueueNameLabel.Content = StaticElements.QueueName;
        _timer.Interval = TimeSpan.FromSeconds(20);

    Next I call UpdateQueueStats to initialize the UI for the queue:

        UpdateQueueStats();
        _timer.Tick += new EventHandler(delegate(object s, EventArgs a)
        {
            UpdateQueueStats();
        });
        _timer.Start();

    The UpdateQueueStats method is shown below. You can see that it uses the GetOrCreateQueue method mentioned earlier to grab the queue description, so it can read the MessageCount property:

        private void UpdateQueueStats()
        {
            _queueDescription = _serviceBusQueueHelper.GetOrCreateQueue();
            QueueCountLabel.Content = "(" + _queueDescription.MessageCount + ")";
            long count = _queueDescription.MessageCount;
            long queueWidth = count * 20;
            QueueRectangle.Width = queueWidth;
            QueueTickCount += 1;
            TickCountlabel.Content = QueueTickCount.ToString();
        }

    The ReadQueueItemsButton_Click event handler calls the GetOrderFromQueue method and adds the order to the listbox. In the SendQueueMessageController, you can see the SendMessage method that sends an order to the queue. It's pretty simple: it just creates a new CustomerOrderEntity instance, fills it, and passes it to SendMessageToQueue. As you can see, all of our interaction with the queue is done through the helper class (ServiceBusQueueHelper).

    Now let's dig into the helper class. First, before you create anything like this, download the Transient Fault Handling Framework. Microsoft provides it for free, along with the C# source. There's a great article that shows how to use this framework with ServiceBus. The entire ServiceBusQueueHelper class is included in Listing 1. Notice the using statements for TransientFaultHandling:

        using Microsoft.AzureCAT.Samples.TransientFaultHandling;
        using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus;

    The SendMessageToQueue method in Listing 1 shows how to use the async send features of ServiceBus wrapped in the Transient Fault Handling Framework. It is not much different from plain old ServiceBus calls, but it sure makes it easy to have the fault tolerance added almost for free. GetOrderFromQueue uses the standard synchronous methods to access the queue; the best practices article walks through using the async approach for a receive operation as well. Notice that this method makes a call to Receive to get the message and then calls GetBody to get a new strongly typed instance of CustomerOrderEntity to return.
Listing 1 using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.AzureCAT.Samples.TransientFaultHandling; using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus; using Microsoft.ServiceBus; using Microsoft.ServiceBus.Messaging; using System.Xml.Serialization; using System.Diagnostics; namespace WPFServicebusPublishSubscribeSample {     class ServiceBusQueueHelper     {         RetryPolicy currentPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);         QueueClient currentQueueClient;         public QueueDescription GetOrCreateQueue()         {                        QueueDescription queue = null;             bool createNew = false;             try             {                 // First, let's see if a queue with the specified name already exists.                 queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });                 createNew = (queue == null);             }             catch (MessagingEntityNotFoundException)             {                 // Looks like the queue does not exist. We should create a new one.                 createNew = true;             }             // If a queue with the specified name doesn't exist, it will be auto-created.             if (createNew)             {                 try                 {                     var newqueue = new QueueDescription(StaticElements.QueueName);                     queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.CreateQueue(newqueue); });                 }                 catch (MessagingEntityAlreadyExistsException)                 {                     // A queue under the same name was already created by someone else,                     // perhaps by another instance. Let's just use it.                     queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });                 }             }             currentQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName);             return queue;         }         public void SendMessageToQueue(CustomerOrderEntity Order)         {             BrokeredMessage msg = null;             GetOrCreateQueue();             // Use a retry policy to execute the Send action in an asynchronous and reliable fashion.             currentPolicy.ExecuteAction             (                 (cb) =>                 {                     // A new BrokeredMessage instance must be created each time we send it. Reusing the original BrokeredMessage instance may not                     // work as the state of its BodyStream cannot be guaranteed to be readable from the beginning.                     msg = new BrokeredMessage(Order);                     // Send the event asynchronously.                     currentQueueClient.BeginSend(msg, cb, null);                 },                 (ar) =>                 {                     try                     {                         // Complete the asynchronous operation.                         // This may throw an exception that will be handled internally by the retry policy.                         
                            currentQueueClient.EndSend(ar);
                        }
                        finally
                        {
                            // Ensure that any resources allocated by a BrokeredMessage instance are released.
                            if (msg != null)
                            {
                                msg.Dispose();
                                msg = null;
                            }
                        }
                    },
                    (ex) =>
                    {
                        // Always dispose the BrokeredMessage instance even if the send
                        // operation has completed unsuccessfully.
                        if (msg != null)
                        {
                            msg.Dispose();
                            msg = null;
                        }

                        // Always log exceptions.
                        Trace.TraceError(ex.Message);
                    }
                );
            }

            public CustomerOrderEntity GetOrderFromQueue()
            {
                CustomerOrderEntity Order = new CustomerOrderEntity();
                QueueClient myQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName, ReceiveMode.PeekLock);
                BrokeredMessage message;
                ServiceBusQueueHelper serviceBusQueueHelper = new ServiceBusQueueHelper();
                QueueDescription queueDescription;

                queueDescription = serviceBusQueueHelper.GetOrCreateQueue();
                if (queueDescription.MessageCount > 0)
                {
                    message = myQueueClient.Receive(TimeSpan.FromSeconds(90));
                    if (message != null)
                    {
                        try
                        {
                            Order = message.GetBody<CustomerOrderEntity>();
                            message.Complete();
                        }
                        catch (Exception)
                        {
                            // Rethrow without resetting the stack trace ("throw ex" would lose it).
                            throw;
                        }
                    }
                    else
                    {
                        throw new Exception("Did not receive the messages");
                    }
                }
                return Order;
            }
        }
    }

I will post a link to the downloadable demo in a separate post soon.
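To round out the UI side, here is a minimal sketch of the ReadQueueItemsButton_Click handler described earlier; the ListBox name is an assumption, not taken from the demo:

    // Hedged sketch: OrdersListBox is a hypothetical control name.
    private void ReadQueueItemsButton_Click(object sender, RoutedEventArgs e)
    {
        CustomerOrderEntity order = _serviceBusQueueHelper.GetOrderFromQueue();
        OrdersListBox.Items.Add(order);   // add the received order to the listbox
        UpdateQueueStats();               // refresh the message count display
    }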

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful? This, I truly think, is one of the best uses of extension methods. I began discussing extension methods in my last post (which you can find here), where I expounded upon what I thought were some rules of thumb for using extension methods correctly. As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over.

Oh sure, you could take an open source project, download the source, and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone. And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories.

This is the perfect place to use an extension method! And the best part is, you and your development team don't need to change anything: simply add a using for the namespace the extensions are in.

So let's consider this example. I love log4net! Of all the logging libraries I've played with, it is, to me, one of the most flexible and configurable, and it performs great. But this isn't about log4net (well, not directly). So why would I want to add functionality? Well, it's missing one thing I really want in the ILog interface: the ability to specify the logging level at runtime.

For example, let's say I declare my ILog instance like so:

    using log4net;

    public class LoggingTest
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));
        ...
    }

If you don't know log4net, the details aren't important; this just shows that the field _log is the logger I have gotten from log4net. Now that I have it, I can log to it like so:

    _log.Debug("This is the lowest level of logging and just for debugging output.");
    _log.Info("This is an informational message.  Usual normal operation events.");
    _log.Warn("This is a warning, something suspect but not necessarily wrong.");
    _log.Error("This is an error, some sort of processing problem has happened.");
    _log.Fatal("Fatals usually indicate the program is dying hideously.");

And there are many flavors of each of these to log using string formatting, to log exceptions, and so on. But one thing there isn't: the ability to easily choose the logging level at runtime. Notice that the logging levels above are chosen at compile time. Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface. And yes, there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly, and then you have to manually build a LogEvent and... well, it gets messy. I want something simple and sexy so I can say:

    _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");

Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them. The ILog interface satisfies 99% of my needs.
In fact, for most application logging, yes, you do always know the level you will be logging at; but when writing a utility class, you may not always know what level your user wants.

I'll tell you, one of my favorite things is to write reusable components. If I had my druthers I'd write framework libraries and shared components all day! And being able to easily log at a runtime-chosen level is a big need for me. After all, if I want my code to really be reusable, I shouldn't force a user to deal with the logging level I choose.

One of my favorite uses for this is in Interceptors. I'll describe Interceptors in my next post, along with some of my favorites; for now, just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing its signature. At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern.

So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time exceeded a certain threshold. For instance, maybe if your database calls take more than 5,000 ms you want to log a warning, or if a web method call takes over 1,000 ms you want to log an informational message. This would be an excellent use of logging at a generic level.

So here was my personal wish list of requirements for my task:

    1. Be able to determine if a runtime-specified logging level is enabled.
    2. Be able to log generically at a runtime-specified logging level.
    3. Have the same look and feel as the existing Debug, Info, Warn, Error, and Fatal calls.

Having the ability to determine at runtime whether logging for a level is on is also important, so you don't spend time building a potentially expensive logging message if that level is off. Consider an Interceptor that may log parameters on entrance to the method: if you choose to log those parameters at DEBUG level and DEBUG is not on, you don't want to spend the time serializing those parameters.

Now, mine may not be the most elegant solution, but it performs really well since the enum I provide uses contiguous values. While it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL, which is VERY performant, O(1); but even if it doesn't, it's still so fast you'd never need to worry about it.

So first, I need a way to let users pass in logging levels. Sure, log4net has a Level class, but it's a class with static members, it provides way too many options compared to the ILog interface itself, and it wouldn't perform as well in my level check, so I define an enum like the one below.

    namespace Shared.Logging.Extensions
    {
        // enum to specify available logging levels.
        public enum LoggingLevel
        {
            Debug,
            Informational,
            Warning,
            Error,
            Fatal
        }
    }

Now, once I have this, writing the extension methods I need is trivial. Once again, I would typically /// comment fully, but I'm eliminating that for blogging brevity:

    namespace Shared.Logging.Extensions
    {
        // the extension methods to add functionality to the ILog interface
        public static class LogExtensions
        {
            // Determines if logging is enabled at a given level.
            public static bool IsLogEnabled(this ILog logger, LoggingLevel level)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        return logger.IsDebugEnabled;
                    case LoggingLevel.Informational:
                        return logger.IsInfoEnabled;
                    case LoggingLevel.Warning:
                        return logger.IsWarnEnabled;
                    case LoggingLevel.Error:
                        return logger.IsErrorEnabled;
                    case LoggingLevel.Fatal:
                        return logger.IsFatalEnabled;
                }

                return false;
            }

            // Logs a simple message - uses same signature except adds LoggingLevel
            public static void Log(this ILog logger, LoggingLevel level, object message)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message);
                        break;
                }
            }

            // Logs a message and exception to the log at specified level.
            public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message, exception);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message, exception);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message, exception);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message, exception);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message, exception);
                        break;
                }
            }

            // Logs a formatted message to the log at the specified level.
            public static void LogFormat(this ILog logger, LoggingLevel level, string format,
                                         params object[] args)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.DebugFormat(format, args);
                        break;
                    case LoggingLevel.Informational:
                        logger.InfoFormat(format, args);
                        break;
                    case LoggingLevel.Warning:
                        logger.WarnFormat(format, args);
                        break;
                    case LoggingLevel.Error:
                        logger.ErrorFormat(format, args);
                        break;
                    case LoggingLevel.Fatal:
                        logger.FatalFormat(format, args);
                        break;
                }
            }
        }
    }

So there it is! I didn't have to modify the log4net source code, so if a new version comes out I can just add the new assembly with no changes. I didn't have to subclass and worry about developers not calling my subclass instead of the original. I simply provide the extension methods, and it's as if the long-lost extension methods were always a part of the ILog interface!

Consider a very contrived example using the original interface:

    // using the original ILog interface
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = System.Diagnostics.Stopwatch.StartNew();

            // do DB magic

            // this is hard-coded to warn; if you want to change the level at runtime, tough luck!
            if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)
            {
                _log.WarnFormat("Statement {0} took too long to execute.", statement);
            }
            ...
        }
    }

Now consider this alternate version, where the logging level is a property of the class:

    // using the new extension methods on ILog
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // allow logging level to be specified by user of class instead
        public LoggingLevel ThresholdLogLevel { get; set; }

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = System.Diagnostics.Stopwatch.StartNew();

            // do DB magic

            // now the level is whatever the user of the class chose at runtime
            if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))
            {
                _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",
                    statement);
            }
            ...
        }
    }

Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.
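Until then, here is a rough sketch (mine, not the author's upcoming interceptor) of how these extensions enable a reusable timing wrapper whose log level is chosen by the caller:

    using System;
    using System.Diagnostics;
    using log4net;
    using Shared.Logging.Extensions;

    // Rough sketch: a timing wrapper that logs at a caller-chosen level.
    public static class TimedCall
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(TimedCall));

        public static T Execute<T>(Func<T> call, long thresholdMs, LoggingLevel level, string name)
        {
            var timer = Stopwatch.StartNew();
            try
            {
                return call();
            }
            finally
            {
                // Only pay for the formatted message if the chosen level is enabled.
                if (timer.ElapsedMilliseconds > thresholdMs && _log.IsLogEnabled(level))
                {
                    _log.LogFormat(level, "{0} took {1} ms.", name, timer.ElapsedMilliseconds);
                }
            }
        }
    }

A caller could then write, for example, TimedCall.Execute(() => RunQuery(sql), 5000, LoggingLevel.Warning, "RunQuery"), where RunQuery is a hypothetical method being timed.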

    Read the article

  • Are `break` and `continue` bad programming practices?

    - by Mikhail
My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration:

    function verify(object) {
        if (object->value < 0) return false;
        if (object->value > object->max_value) return false;
        if (object->name == "") return false;
        ...
    }

The point here is that the function first checks that the conditions are correct, then executes the actual functionality. IMO the same applies to loops:

    while (primary_condition) {
        if (loop_count > 1000) break;
        if (time_exect > 3600) break;
        if (this->data == "undefined") continue;
        if (this->skip == true) continue;
        ...
    }

I think this makes it easier to read and debug, but I also don't see a downside. Please comment.
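For contrast, here is a quick sketch (in C#, with names assumed for illustration) of what the same loop tends to look like when break and continue are avoided; the extra nesting and flag variables it forces are the usual argument for the guard-clause style above:

    // Sketch: the same guard conditions expressed without break/continue.
    // Every extra condition adds another level of nesting or a flag variable.
    bool keepGoing = true;
    while (primaryCondition && keepGoing)
    {
        if (loopCount > 1000 || timeExec > 3600)
        {
            keepGoing = false;  // stands in for "break"
        }
        else if (data != "undefined" && !skip)
        {
            // ... actual loop body, now buried two levels deep
        }
    }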

    Read the article

  • Escaping Generics with T4 Templates

    - by Gavin Stevens
I've been doing some work with T4 templates lately and ran into an issue I couldn't find an answer to anywhere. I finally figured it out, so I thought I'd share the solution.

I was trying to generate a code class with a T4 template which used generics. The end result is a method like:

    public IEnumerator GetEnumerator()
    {
        return new TableEnumerator<Table>(_page);
    }

The related section of the T4 template looks like this:

    public IEnumerator GetEnumerator()
    {
        return new TableEnumerator<#=renderClass.Name#>(_page);
    }

But this, of course, is missing the generic syntax for < and >, which T4 complains about because < and > are reserved. Using syntax like <#<#><#=renderClass.Name#><#=<#> won't work because the TextTransformation engine chokes on it, resulting in:

    Error 2 The number of opening brackets ('<#') does not match the number of closing brackets ('#>')

Even trying to escape the characters won't work:

    <#\<#><#=renderClass.Name#><#\<#>

This results in:

    Error 4 A Statement cannot appear after the first class feature in the template. Only boilerplate, expressions and other class features are allowed after the first class feature block.

The final solution declares a few strings to represent the literals, like this:

    <#+
    void RenderCollectionEnumerator(RenderCollection renderClass)
    {
        string open = "<";
        string close = ">";
    #>
        public partial class <#=renderClass.Name#> : IEnumerable
        {
            private readonly PageBase _page;

            public <#=renderClass.Name#>(PageBase page)
            {
                _page = page;
            }

            public IEnumerator GetEnumerator()
            {
                return new TableEnumerator<#=open#><#=renderClass.Name#><#=close#>(_page);
            }
        }
    <#+
    }
    #>

This works, and everyone is happy, resulting in an automatically generated class enumerator, complete with generics! Hopefully this will save someone some time :)
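A hedged aside of my own, not from the original post: since a T4 expression block simply emits the result of a C# expression, the angle brackets can likely also live inside a string literal within a single expression block, avoiding the helper variables:

    public IEnumerator GetEnumerator()
    {
        // "<" and ">" are inside a C# string literal, so the T4 parser never sees them as syntax.
        return new TableEnumerator<#= "<" + renderClass.Name + ">" #>(_page);
    }

I have not verified this against the author's template, so treat it as an alternative to try rather than a guaranteed fix.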

    Read the article

  • A Gentle .NET touch to Unix Touch

    - by lavanyadeepak
The Unix world has an elegant utility called 'touch' which updates the timestamp of the file whose path is passed to it as an argument. Unfortunately, Windows has no quick, direct equivalent out of the box. However, just a few lines of C# can fill this gap and rejuvenate any file in the file system, subject to its access ACLs, with the current timestamp.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;

    namespace LavanyaDeepak.Utilities
    {
        class Touch
        {
            static void Main(string[] args)
            {
                if (args.Length < 1)
                {
                    Console.WriteLine("Please specify the path of the file to operate upon.");
                    return;
                }

                if (!File.Exists(args[0]))
                {
                    try
                    {
                        // File.Exists returns false for directories, so check whether
                        // the path points at a directory before reporting "not found".
                        FileAttributes objFileAttributes = File.GetAttributes(args[0]);
                        if ((objFileAttributes & FileAttributes.Directory) == FileAttributes.Directory)
                        {
                            Console.WriteLine("The input was not a regular file.");
                            return;
                        }
                    }
                    catch { }

                    Console.WriteLine("The file does not seem to exist.");
                    return;
                }

                try
                {
                    File.SetLastWriteTime(args[0], DateTime.Now);
                    Console.WriteLine("The touch completed successfully.");
                }
                catch (UnauthorizedAccessException exUnauthException)
                {
                    Console.WriteLine("Unable to touch file. Access is denied. The security manager responded: " + exUnauthException.Message);
                }
                catch (IOException exFileAccessException)
                {
                    Console.WriteLine("Unable to touch file. The IO interface failed to complete the request and responded: " + exFileAccessException.Message);
                }
                catch (Exception exGenericException)
                {
                    Console.WriteLine("Unable to touch file. An internal error occurred. The details are: " + exGenericException.Message);
                }
            }
        }
    }
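One behavioral gap worth noting: the real Unix touch also creates the file when it does not exist, whereas the utility above reports an error. A minimal sketch of adding that behavior (my suggestion, not part of the original code) could replace the not-found branch:

    // Sketch: mimic Unix touch's create-if-missing behavior (assumed extension).
    if (!File.Exists(args[0]))
    {
        using (File.Create(args[0])) { }  // create an empty file and release the handle
        Console.WriteLine("The file did not exist and was created.");
        return;
    }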

    Read the article

  • Problem loading shaders with slimdx

    - by Levi
I'm attempting to load an FX file in SlimDX. I've got this exact FX file loading and compiling fine with XNA 4.0, but I'm getting errors with SlimDX. Here's my code to load it:

    using SlimDX.Direct3D11;
    using SlimDX.D3DCompiler;

    public static Effect LoadFXShader(string path)
    {
        Effect shader;
        using (var bytecode = ShaderBytecode.CompileFromFile(path, null, "fx_2_0", ShaderFlags.None, EffectFlags.None))
            shader = new Effect(Devices.GPU.GraphicsDevice, bytecode);
        return shader;
    }

Here's the shader:

    #define TEXTURE_TILE_SIZE 16

    struct VertexToPixel
    {
        float4 Position : POSITION;
        float2 TextureCoords : TEXCOORD1;
    };

    struct PixelToFrame
    {
        float4 Color : COLOR0;
    };

    //------- Constants --------
    float4x4 xView;
    float4x4 xProjection;
    float4x4 xWorld;
    float4x4 preViewProjection;
    //float random;

    //------- Texture Samplers --------
    Texture TextureAtlas;

    sampler TextureSampler = sampler_state
    {
        texture = <TextureAtlas>;
        magfilter = Point;
        minfilter = point;
        mipfilter = linear;
        AddressU = mirror;
        AddressV = mirror;
    };

    //------- Technique: Textured --------
    VertexToPixel TexturedVS(byte4 inPos : POSITION, float2 inTexCoords : TEXCOORD0)
    {
        inPos.w = 1;
        VertexToPixel Output = (VertexToPixel)0;

        float4x4 preViewProjection = mul(xView, xProjection);
        float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);

        Output.Position = mul(inPos, preWorldViewProjection);
        Output.TextureCoords = inTexCoords / TEXTURE_TILE_SIZE;

        return Output;
    }

    PixelToFrame TexturedPS(VertexToPixel PSIn)
    {
        PixelToFrame Output = (PixelToFrame)0;
        Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);
        if (Output.Color.a != 1)
            clip(-1);
        return Output;
    }

    technique Textured
    {
        pass Pass0
        {
            VertexShader = compile vs_2_0 TexturedVS();
            PixelShader = compile ps_2_0 TexturedPS();
        }
    }

Now this exact shader works fine in XNA, but in SlimDX I get the error:

    ChunkDefault.fx(28,27): error X3000: unrecognized identifier 'byte4'

    Read the article
