Search Results

Search found 55276 results on 2212 pages for 'eicar test string'.


  • C# 5 Async, Part 1: Simplifying Asynchrony – That for which we await

    - by Reed
    Today’s announcement at PDC of the future directions C# is taking excites me greatly.  The new Visual Studio Async CTP is amazing.  Asynchronous code – code which frustrates and demoralizes even the most advanced developers – is taking a huge leap forward in terms of usability.  This is handled by building on the Task functionality in .NET 4, as well as the addition of two new keywords to the C# language: async and await.

    The core of the new asynchronous functionality is built upon three key features.  The first is the Task functionality in .NET 4, based on Task and Task<TResult>.  While Task was intended to be the primary means of asynchronous programming with .NET 4, the .NET Framework was still based mainly on the Asynchronous Programming Model and the Event-based Asynchronous Pattern.  The .NET Framework added functionality and guidance for wrapping existing APIs into a Task-based API, but the framework itself didn’t really adopt Task or Task<TResult> in any meaningful way.  The CTP shows that, going forward, this is changing.  One of the three key new features coming in C# is actually a .NET Framework feature: nearly every asynchronous API in the .NET Framework has been wrapped into new, Task-based method calls.  In the CTP, this is done via an external assembly (AsyncCtpLibrary.dll) which uses extension methods to wrap the existing APIs.  Going forward, however, this will be handled directly within the Framework, which will have a unifying effect throughout it.  This is the first building block of the new features for asynchronous programming: going forward, all asynchronous operations will work via a method that returns Task or Task<TResult>.

    The second key feature is the new async contextual keyword being added to the language.  The async keyword is used to declare an asynchronous function: a method that returns void, a Task, or a Task<T>.  Inside the asynchronous function, there must be at least one await expression.  await is the third feature – a new C# keyword used to automatically take a series of statements and break them up to potentially use discontinuous evaluation.  This is done by using await on any expression that evaluates to a Task or Task<T>.

    For example, suppose we want to download a web page as a string.  There is a new method added to WebClient: Task<string> WebClient.DownloadStringTaskAsync(Uri).  Since this returns a Task<string>, we can use it within an asynchronous function.  Suppose, for example, that we wanted to do something similar to my asynchronous Task example – download a web page asynchronously, check whether it supports XHTML 1.0, then report the result into a TextBox.
    This could be done like so:

        private async void button1_Click(object sender, RoutedEventArgs e)
        {
            string url = "http://reedcopsey.com";
            string content = await new WebClient().DownloadStringTaskAsync(url);
            this.textBox1.Text = string.Format(
                "Page {0} supports XHTML 1.0: {1}",
                url,
                content.Contains("XHTML 1.0"));
        }

    Let’s walk through what’s happening here, step by step.  By adding the async contextual keyword to the method definition, we are able to use the await keyword on our WebClient.DownloadStringTaskAsync method call.  When the user clicks this button, the new method (Task<string> WebClient.DownloadStringTaskAsync(string)) is called, which returns a Task<string>.  By adding the await keyword, the runtime will call this method that returns Task<string>, and execution will return to the caller at this point.  This means that our UI is not blocked while the web page is downloaded.  Instead, the UI thread will “await” at this point and let the WebClient do its thing asynchronously.  When the WebClient finishes downloading the string, the user interface’s synchronization context will automatically be used to “pick up” where it left off, and the Task<string> returned from DownloadStringTaskAsync is automatically unwrapped and set into the content variable.  At this point, we can use that value and set our text box content.

    There are a couple of key points here: asynchronous functions are declared with the async keyword, and they contain one or more await expressions.  In addition to the obvious benefits of shorter, simpler code, there are some subtle but tremendous benefits in this approach.  When the execution of this asynchronous function continues after the first await statement, the initial synchronization context is used to continue the execution of the function.  That means that we don’t have to explicitly marshal the call that sets textBox1.Text back to the UI thread – it’s handled automatically by the language and framework!  Exception handling around asynchronous method calls also just works.

    I’d recommend every C# developer take a look at the documentation on the new Asynchronous Programming for C# and Visual Basic page, download the Visual Studio Async CTP, and try it out.
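    A quick illustration of that last point: the following is a minimal sketch (not from the original post; the handler name and URL are made up) showing a plain try/catch wrapped around an awaited download:

        private async void button2_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                // Any exception thrown during the asynchronous download is
                // re-thrown here when the task is awaited.
                string content = await new WebClient()
                    .DownloadStringTaskAsync("http://example.com/might-fail");
                this.textBox1.Text = content;
            }
            catch (WebException ex)
            {
                // The catch block resumes on the original synchronization
                // context, so touching the UI directly is safe.
                this.textBox1.Text = "Download failed: " + ex.Message;
            }
        }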

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping urlencoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate, here's a simple example. If you have a Web API method like this:

        [HttpGet]
        public HttpResponseMessage Authenticate(string username, string password)
        { … }

    and then hit it with a URL like this:

        http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit

    it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead, like this:

        [HttpPost]
        public HttpResponseMessage Authenticate(string username, string password)
        { … }

    and hit it with a POST HTTP request like this:

        POST http://localhost:88/samples/authenticate HTTP/1.1
        Host: localhost:88
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Content-type: application/x-www-form-urlencoded
        Content-Length: 30

        Username=ricks&Password=sekrit

    you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null, so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses asking why I'd want to do this in the first place – after all, HTML form submissions are the domain of MVC and not Web API – which is a valid point. However, the more common use case is using POST variables with AJAX calls. The following is quite common for passing simple values:

        $.post(url, { Username: "Rick", Password: "sekrit" }, function(result) {…});

    but alas, that doesn't work.

    How ASP.NET Web API handles Content Bodies

    Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content – the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for deserializing the content into whatever value the method requires. The restriction comes from the async nature of Web API, where the request data is read only once, inside the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data:

        Model Binding – object property mapping to bind POST values
        FormDataCollection – collection of POST keys/values

    ModelBinding POST Values – Binding POST data to Object Properties

    The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter.
    Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature – and a familiar one from MVC – that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: you have to provide a strongly typed object that can receive the data, and this object has to map the inbound data. To rewrite the example above to use ModelBinding, I have to create a class that maps the properties I need as parameters:

        public class LoginData
        {
            public string Username { get; set; }
            public string Password { get; set; }
        }

    and then accept the data like this in the API method:

        [HttpPost]
        public HttpResponseMessage Authenticate(LoginData login)
        {
            string username = login.Username;
            string password = login.Password;
            …
        }

    This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this:

        POST http://localhost:88/samples/authenticate HTTP/1.1
        Host: localhost:88
        Accept: application/json
        Content-type: application/json
        Content-Length: 40

        {"Username":"ricks","Password":"sekrit"}

    it works as well, and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model Binding, and in fact it's a common practice to use (view) model objects for inputs coming back from the client and to map them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. You don't always have to pass structured data; sometimes there are just a couple of simple values that need to be sent back. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't, you can often end up with a plethora of 'message objects' that serve no further purpose than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding, so the following would still work:

        [HttpPost]
        public HttpResponseMessage Authenticate(LoginData login, string loginDomain)

    but only the object will be bound to POST data. As long as loginDomain comes from the query string or route data, this will work.

    Collecting POST values with FormDataCollection

    Another, more dynamic approach to handling POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general); you then read the values out individually by querying each key:

        [HttpPost]
        public HttpResponseMessage Authenticate(FormDataCollection form)
        {
            var username = form.Get("Username");
            var password = form.Get("Password");
            …
        }

    The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such a setup, as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, and it is especially easy to deal with for string parameters.
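    On that testing point, here is a minimal hedged sketch (not from the original article; the test names are made up) of seeding a FormDataCollection in an MSTest unit test, using the FormDataCollection constructor that parses a urlencoded string:

        using System.Net.Http.Formatting;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class AuthenticateTests
        {
            [TestMethod]
            public void Authenticate_Reads_Posted_Values()
            {
                // Seed the collection from a urlencoded string that mirrors
                // the raw POST body the controller method would receive.
                var form = new FormDataCollection("Username=ricks&Password=sekrit");

                Assert.AreEqual("ricks", form.Get("Username"));
                Assert.AreEqual("sekrit", form.Get("Password"));
            }
        }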
    It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values.

    What about [FromBody]?

    Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature, you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content (a short sketch of this appears below). For more info on how [FromBody] works and several related issues with how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding. I'm not really sure where the Web API team thought [FromBody] would be a good fit, other than as a down-and-dirty way to send a full string buffer.

    Extending Web API to make multiple POST vars work? Don't think so

    Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad, because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and the Web API community, I didn't hear anything that even suggests that this is possible. From what I could find, I'd say it's not possible, primarily because Web API's routing engine does not account for the POST variable mapping. This means [HttpPost] methods with urlencoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work, there's not much that can be done. If somebody has an idea how this could be accomplished, I would love to hear about it.

    Do we really need multi-value POST mapping?

    I think that POST value mapping is a feature one would expect any API tool to have. If you look at common APIs out there, like Flickr and Google Maps, they all work with POST data. POST data is very prominent, much more so than JSON inputs, so supporting as many options as possible would seem crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes, as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? – with no clear answer to this question.
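    Returning to the [FromBody] behavior described above, here is a minimal hedged sketch (not from the original article; the action name is made up) of binding a single simple-type value from the POST body inside an ApiController; Request.CreateResponse and HttpStatusCode assume the usual System.Net.Http and System.Net namespaces:

        [HttpPost]
        public HttpResponseMessage PostRawValue([FromBody] string value)
        {
            // With Content-Type: application/x-www-form-urlencoded, the request
            // body must literally be "=sekrit" (a bare '=' prefix with no key
            // name) for 'value' to receive "sekrit".
            return Request.CreateResponse(HttpStatusCode.OK, value);
        }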
    I hope that for V.next of Web API, Microsoft will consider this a feature that's worth adding.

    Related Articles
        Passing multiple POST parameters to Web API Controller Methods
        Mike Stall's post: How Web API does Parameter Binding
        Where does ASP.NET Web API Fit?

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api

    Read the article

  • Why do I start at privilege level 1 when logging into a Cisco ASA 5510?

    - by Alain O'Dea
    I have created a test user that is set to privilege 15 in the config:

        username test password **************** encrypted privilege 15

    When I log in to the ASA 5510, I am at privilege 1 according to sh curpriv:

        login as: test
        [email protected]'s password:
        Type help or '?' for a list of available commands.
        asa> sh curpriv
        Username : test
        Current privilege level : 1
        Current Mode/s : P_UNPR

    Attempting enable fails even though I know I have the correct enable password:

        asa> en
        Password: *************************
        Password: *************************
        Password: *************************
        Access denied.

    Logging in from unprivileged mode puts me at privilege 15, and I can do as I please:

        asa> login
        Username : test
        Password: *************************
        asa> sh curpriv
        Current privilege level : 15
        Current Mode/s : P_PRIV
        asa>

    The only thing I can trace this to is a configuration change I made where I removed a VPN user we no longer needed. Why do I start at privilege level 1 when logging into a Cisco ASA 5510?

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this.

    Unit Testing Entity Framework Dependent Code using TypeMock Isolator
    by Muhammad Mosa

    Introduction

    Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies, such as a physical database connection to insert, update, delete or retrieve your data. However, when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult.

    Faking Entity Framework

    As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include:

        Not being able to fake calls to non-virtual methods
        Not being able to fake sealed classes
        Not being able to fake LINQ to Entities queries (replacing database calls with in-memory collection calls)

    There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I'm going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator.

    Mocking Entity Framework with MoQ

    One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing just any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations, along with a provider connection string. Assuming we are going to do that, we'll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext:

        public override void Add(Blog blog)
        {
            if (BlogContext.Blogs.Any(b => b.Name == blog.Name))
            {
                throw new InvalidOperationException("Blog with same name already exists!");
            }
            BlogContext.AddToBlogs(blog);
        }

    The method does a very simple check that the name of the new Blog entity instance doesn't already exist. This is done through the simple LINQ query above. If the blog doesn't already exist, it simply adds it to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists()
        {
            //(1) We shouldn't depend on configuration when doing unit tests!
            //    But it's a workaround to fake the ObjectContext.
            string connectionString = ConfigurationManager
                                        .ConnectionStrings["MyBlogConnString"]
                                        .ConnectionString;

            //(2) Arrange: fake ObjectContext
            var fakeContext = new Mock<MyBlogContext>(connectionString);

            //(3) The next line will pass, as the ObjectContext can now be
            //    faked with a proper connection string.
            var repo = new BlogRepository(fakeContext.Object);

            //(4) Create a fake ObjectQuery<Blog>, used to substitute the
            //    MyBlogContext.Blogs property.
            var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object);

            //(5) Arrange: set expectations.
            //    The next line will throw an exception from MoQ:
            //    System.ArgumentException: Invalid setup on a non-overridable member
            fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);
            fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

            //Act
            repo.Add(new Blog { Name = "NewBlog" });
        }

    This test method is checking that the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists. At (1) a connection string is initialized from the configuration file, to retrieve the full connection string. At (2) a fake ObjectContext is created. The ObjectContext here is MyBlogContext, and it is being faked with MoQ using:

        var fakeContext = new Mock<MyBlogContext>(connectionString);

    At (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext (MyBlogContext), and so the fake context is passed to the constructor:

        var repo = new BlogRepository(fakeContext.Object);

    At (4) a fake instance of ObjectQuery<Blog> is created to use as a substitute for the MyBlogContext.Blogs property, as we will see in (5). At (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created at (4). When you run this test it will fail, with MoQ throwing an exception because of this line:

        fakeContext.SetupGet(c => c.Blogs).Returns(fakeObjectQuery.Object);

    This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And assuming it were virtual, or you managed to make it virtual, the test would then fail at the following line, throwing the same exception:

        fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true);

    This time the test fails because the Any extension method is not virtual/overridable. You won't be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries.
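    As an aside, here is a minimal, self-contained hedged sketch (not from the original post; the class names are made up, and it assumes the Moq package) of the virtual-member rule behind the failures above:

        using System;
        using Moq;

        public class Widget
        {
            // Virtual member: Moq can override it in a generated subclass.
            public virtual string Name { get { return "real"; } }

            // Non-virtual member: Moq cannot intercept it, so setting it up
            // throws ArgumentException: Invalid setup on a non-overridable member.
            public string Version { get { return "1.0"; } }
        }

        class Demo
        {
            static void Main()
            {
                var mock = new Mock<Widget>();
                mock.SetupGet(w => w.Name).Returns("fake");       // works
                Console.WriteLine(mock.Object.Name);              // prints "fake"

                // mock.SetupGet(w => w.Version).Returns("2.0");  // would throw
            }
        }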
    Now let's see how replacing MoQ with TypeMock Isolator can help.

    Mocking Entity Framework with TypeMock Isolator

    The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
        {
            //(1) Create a fake in-memory collection of blogs
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };

            //(2) Create the fake context
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();

            //(3) Set up the expected call to MyBlogContext.Blogs through the fake context
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());

            //(4) Create a new blog with a name that already exists in the
            //    fake in-memory collection from (1)
            var blog = new Blog { Name = "FakeBlog" };

            //(5) Instantiate the BlogRepository (class under test)
            var repo = new BlogRepository(fakeContext);

            //(6) Act by adding the newly created blog
            repo.Add(blog);
        }

    When running the above test method it will pass, as the Add method of BlogRepository is going to throw an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. At (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created at (2). The fake result is just an in-memory collection declared and initialized at (1). Finally, at (6), we act by calling the Add method of BlogRepository, passing a new Blog instance that has a name that already exists in the fake in-memory collection set up at (1). As expected, the test passes because it throws the expected exception declared on top of the test method – InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease.

    Conclusion

    We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks, take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post.

    Muhammad Mosa
    http://mosesofegypt.net/
    http://twitter.com/mosessaur

    Screencast of unit testing Entity Framework

    Related Links
        GuestPost: Introduction to Mocking
        GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Does Hotspot Shield hide my activity from my ISP?

    - by test
    Can Hotspot Shield make your activities invisible to your ISP? Or can they still see what you're downloading if they so choose? Here's the text from the product description: Hotspot Shield protects your entire web surfing session; securing your connection at both your home Internet network & Public Internet networks (both wired and wireless). Hotspot Shield protects your identity by ensuring that all web transactions (shopping, filling out forms, downloads) are secured through HTTPS. Hotspot Shield also makes you private online making your identity invisible to third party websites and ISP’s. I'm just not sure what it means by "invisible to third-party websites and ISPs" and if that means the ISP can still see what I'm doing.

    Read the article

  • make IIS 7.5 cache static content files over different pages

    - by Achilles
    On Windows 2008 R2, using DNS and IIS, I've established my development test server; i.e. I'll have a web application that I can browse at http://test.dev. I've moved all the static content files, like images, js files and css files, into another application which is visible at http://cdn.test.dev. test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load js, css and images.

    When I first load "~/" of test.dev, all files will load with a response code of 200; when I press F5 in Firefox, all files, except "~/default.aspx", will load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200 code; if I browse another URL like "~/pages/" in test.dev, all of those static files will reload with a 200 code... Is this normal or am I doing something wrong?

    Actually, I'm looking for a behavior like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and I want the client's browser to use this jquery.js file, from cache, in all other pages of test.dev. Is this possible?

    This is the web.config file I have in the root directory of cdn.test.dev:

        <configuration>
          <system.webServer>
            <caching>
              <profiles>
                <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              </profiles>
            </caching>
            <httpProtocol allowKeepAlive="true">
              <customHeaders>
                <add name="Cache-Control" value="public, max-age=31536000" />
              </customHeaders>
            </httpProtocol>
            <validation validateIntegratedModeConfiguration="false" />
            <modules runAllManagedModulesForAllRequests="true">
              <remove name="RadUploadModule" />
              <remove name="RadCompression" />
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
            </modules>
            <handlers>
              <remove name="ChartImage_axd" />
              <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
              <remove name="Telerik_Web_UI_DialogHandler_aspx" />
              <remove name="Telerik_RadUploadProgressHandler_ashx" />
              <remove name="Telerik_Web_UI_WebResource_axd" />
              <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="10485760" />
              </requestFiltering>
            </security>
            <staticContent>
              <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
            </staticContent>
          </system.webServer>
          <appSettings />
          <system.web>
            <compilation debug="false" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
              </controls>
            </pages>
            <httpHandlers>
              <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
              <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
            </httpHandlers>
            <httpModules>
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
            </httpModules>
            <httpRuntime maxRequestLength="10240" />
          </system.web>
        </configuration>

    And this is the resulting response header for http://cdn.test.dev/css/global.css:

        Cache-Control: private,public, max-age=31536000
        Content-Type: text/css
        Content-Encoding: gzip
        Expires: Wed, 01 Jan 2020 00:00:00 GMT
        Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
        Accept-Ranges: bytes
        Etag: "0454eca04dcb1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 06 Sep 2010 14:57:08 GMT
        Content-Length: 4495

    Read the article

  • Ubuntu 11.10, using wget/curl fails with ssl

    - by Greg Spiers
    Note: see edit 3 for the solution.

    On a completely new install of Ubuntu I'm getting the following errors when using wget:

        wget https://test.sagepay.com
        --2012-03-27 12:55:12-- https://test.sagepay.com/
        Resolving test.sagepay.com... 195.170.169.8
        Connecting to test.sagepay.com|195.170.169.8|:443... connected.
        ERROR: cannot verify test.sagepay.com's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA':
        Unable to locally verify the issuer's authority.
        To connect to test.sagepay.com insecurely, use `--no-check-certificate'.

    I've tried installing ca-certificates and configuring the CA certs, and they all appear to be set up in /etc/ssl/certs. The same issue exists for cURL:

        curl https://test.sagepay.com
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    which leads me to believe it's something wrong with OpenSSL server-wide. wget and curl both work correctly locally on OS X, and I have confirmed with a few people that it's working on their servers, so I suspect it's nothing to do with the server I'm attempting to connect to. Any ideas or suggestions on things to try to narrow it down? Thank you.

    Edit

    As requested, verbose output from curl:

        curl -Iv https://test.sagepay.com
        * About to connect() to test.sagepay.com port 443 (#0)
        * Trying 195.170.169.8... connected
        * Connected to test.sagepay.com (195.170.169.8) port 443 (#0)
        * successfully set certificate verify locations:
        * CAfile: none
          CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

    Edit 2

    Using the hash from your comment I see this:

        ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al 7651b327.0
        lrwxrwxrwx 1 root root 59 2012-03-27 12:48 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority.pem
        ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al Verisign_Class_3_Public_Primary_Certification_Authority.pem
        lrwxrwxrwx 1 root root 94 2012-01-18 07:21 Verisign_Class_3_Public_Primary_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        -rw-r--r-- 1 root root 834 2011-09-28 14:53 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        ubuntu@srv-tf6sq:/etc/ssl/certs$ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt
        -----BEGIN CERTIFICATE-----
        MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG
        A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz
        cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2
        MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV
        BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt
        YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN
        ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE
        BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is
        I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G
        CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i
        2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ
        2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ
        -----END CERTIFICATE-----

    But doing the steps myself I end up with a different hash:

        strace -o /tmp/foo.out curl -Iv https://test.sagepay.com
        grep ssl /tmp/foo.out
        open("/lib/x86_64-linux-gnu/libssl.so.1.0.0", O_RDONLY) = 3
        stat("/etc/ssl/certs/415660c1.0", {st_mode=S_IFREG|0644, st_size=834, ...}) = 0
        open("/etc/ssl/certs/415660c1.0", O_RDONLY) = 4
        stat("/etc/ssl/certs/415660c1.1", 0x7fff7dab07b0) = -1 ENOENT (No such file or directory)

        readlink -f /etc/ssl/certs/415660c1.0
        /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt

        more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt

    (the same certificate as shown above). Any other ideas? Thank you for the help so far :)

    Edit 3

    So it turns out that installing the ca-certificates package didn't install the one I needed. I found this post about certificates being presented out of order. This seems to be the case with my request to sagepay. The solution ended up being to install another CA certificate from Verisign. I'm not sure why this fixes the out-of-order issue, but I suspect the out-of-order issue really isn't a problem at all and it was in fact because I was missing a certificate all along. The additional certificate is available in that post, but I didn't want to blindly trust it. I've looked at the list of CA certificates from cURL's site and it is listed there, so I do trust it. The certificate:

        Verisign Class 3 Public Primary Certification Authority
        =======================================================
        -----BEGIN CERTIFICATE-----
        MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkGA1UEBhMCVVMx
        FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmltYXJ5
        IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVow
        XzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAz
        IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUA
        A4GNADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhEBarsAx94
        f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/isI19wKTakyYbnsZogy1Ol
        hec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0GCSqGSIb3DQEBAgUAA4GBALtMEivPLCYA
        TxQT3ab7/AoRhIzzKBxnki98tsX63/Dolbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59Ah
        WM1pF+NEHJwZRDmJXNycAA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2Omuf
        Tqj/ZA1k
        -----END CERTIFICATE-----

    I put this in a file at:

        /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    I then modified /etc/ca-certificates.conf and added the following line at the end:

        curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    After that I ran the command:

        sudo update-ca-certificates

    Looking into the /etc/ssl/certs directory, I see it correctly linked:

        ls -al | grep cURL
        lrwxrwxrwx 1 root root 69 2012-03-27 16:03 415660c1.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
        lrwxrwxrwx 1 root root 69 2012-03-27 16:03 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem
        lrwxrwxrwx 1 root root 101 2012-03-27 16:03 Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem -> /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt

    And everything works!

        curl -I https://test.sagepay.com
        HTTP/1.1 200 OK...

    Read the article

  • To ref or not to ref

    - by nmarun
    So the question is: what is the point of passing a reference type along with the ref keyword? I have an Employee class as below:

        1: public class Employee
        2: {
        3:     public string FirstName { get; set; }
        4:     public string LastName { get; set; }
        5:
        6:     public override string ToString()
        7:     {
        8:         return string.Format("{0}-{1}", FirstName, LastName);
        9:     }
        10: }

    In my calling class, I say:

        1: class Program
        2: {
        3:     static void Main()
        4:     {
        5:         Employee employee = new Employee
        6:         {
        7:             FirstName = "John",
        8:             LastName = "Doe"
        9:         };
        10:         Console.WriteLine(employee);
        11:         CallSomeMethod(employee);
        12:         Console.WriteLine(employee);
        13:     }
        14:
        15:     private static void CallSomeMethod(Employee employee)
        16:     {
        17:         employee.FirstName = "Smith";
        18:         employee.LastName = "Doe";
        19:     }
        20: }

    After having a look at the code, you'll probably say: well, an instance of a class gets passed as a reference, so any changes to the instance inside CallSomeMethod actually modify the original object. Hence the output will be 'John-Doe' on the first call and 'Smith-Doe' on the second. And you're right. So the question is: what's the use of passing this Employee parameter as a ref?

        1: class Program
        2: {
        3:     static void Main()
        4:     {
        5:         Employee employee = new Employee
        6:         {
        7:             FirstName = "John",
        8:             LastName = "Doe"
        9:         };
        10:         Console.WriteLine(employee);
        11:         CallSomeMethod(ref employee);
        12:         Console.WriteLine(employee);
        13:     }
        14:
        15:     private static void CallSomeMethod(ref Employee employee)
        16:     {
        17:         employee.FirstName = "Smith";
        18:         employee.LastName = "Doe";
        19:     }
        20: }

    The output is still the same. OK, so is there really a need to pass a reference type using the ref keyword? I'll remove the 'ref' keyword and make one more change to the CallSomeMethod method:

        1: class Program
        2: {
        3:     static void Main()
        4:     {
        5:         Employee employee = new Employee
        6:         {
        7:             FirstName = "John",
        8:             LastName = "Doe"
        9:         };
        10:         Console.WriteLine(employee);
        11:         CallSomeMethod(employee);
        12:         Console.WriteLine(employee);
        13:     }
        14:
        15:     private static void CallSomeMethod(Employee employee)
        16:     {
        17:         employee = new Employee
        18:         {
        19:             FirstName = "Smith",
        20:             LastName = "John"
        21:         };
        22:     }
        23: }

    In line 17 you'll see I've 'new'd up the incoming Employee parameter and then set its properties to new values. The output tells me that the original instance of the Employee class does not change. Huh? But an instance of a class gets passed by reference, so why did the values not change on the original instance, and how do I keep the two instances in sync at all times? Aah, now here's the answer. In order to keep the objects in sync, you pass them using the 'ref' keyword:

        1: class Program
        2: {
        3:     static void Main()
        4:     {
        5:         Employee employee = new Employee
        6:         {
        7:             FirstName = "John",
        8:             LastName = "Doe"
        9:         };
        10:         Console.WriteLine(employee);
        11:         CallSomeMethod(ref employee);
        12:         Console.WriteLine(employee);
        13:     }
        14:
        15:     private static void CallSomeMethod(ref Employee employee)
        16:     {
        17:         employee = new Employee
        18:         {
        19:             FirstName = "Smith",
        20:             LastName = "John"
        21:         };
        22:     }
        23: }

    Voila! Now, to prove it beyond doubt, I said, let me try with another reference type: string.

        1: class Program
        2: {
        3:     static void Main()
        4:     {
        5:         string name = "abc";
        6:         Console.WriteLine(name);
        7:         CallSomeMethod(ref name);
        8:         Console.WriteLine(name);
        9:     }
        10:
        11:     private static void CallSomeMethod(ref string name)
        12:     {
        13:         name = "def";
        14:     }
        15: }

    The output was as expected: first 'abc' and then 'def', which proves the 'ref' keyword works here as well. Now, what if I remove the 'ref' keyword? The output should still be the same as above, right, since string is a reference type?

        1: class Program
        2: {
        3:     static void Main()
        4:     {
        5:         string name = "abc";
        6:         Console.WriteLine(name);
        7:         CallSomeMethod(name);
        8:         Console.WriteLine(name);
        9:     }
        10:
        11:     private static void CallSomeMethod(string name)
        12:     {
        13:         name = "def";
        14:     }
        15: }

    Wrong: the output shows 'abc' printed twice. Wait a minute... how could this be? This is because string is an immutable type. Any time you "modify" an instance of a string, a new instance is allocated and the variable is pointed at it; the original instance is untouched. The effect is similar to 'new'ing up the Employee instance inside CallSomeMethod in the absence of the 'ref' keyword.

    Verdict: the ref keyword came to the rescue and saved the planet... again!
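    As a footnote to the immutability point, here is a minimal sketch (not from the original article) contrasting string with a mutable reference type such as StringBuilder; no ref keyword is needed for the mutation to be visible to the caller:

        using System;
        using System.Text;

        class MutabilityDemo
        {
            static void Main()
            {
                var sb = new StringBuilder("abc");
                Mutate(sb);
                // Prints "abcdef": the method mutated the one object that
                // both variables reference, so no ref keyword was required.
                Console.WriteLine(sb);
            }

            static void Mutate(StringBuilder sb)
            {
                sb.Append("def"); // mutates the shared instance in place
            }
        }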

    Read the article

  • Forbidden access on Apache in Mac Lion

    - by Luis Berrocal
    I'm trying to configure Apache to work with Symfony on my MacBook Pro. I have installed OS X Lion.

    1. I uncommented the line Include /private/etc/apache2/extra/httpd-vhosts.conf in /etc/apache2/httpd.conf.

    2. I configured Apache by editing /private/etc/apache2/extra/httpd-vhosts.conf and adding the following:

        NameVirtualHost *:80
        <VirtualHost *.80>
            ServerName localhost
            DocumentRoot "/Library/WebServer/Documents"
        </VirtualHost>
        <VirtualHost *:80>
            DocumentRoot "/Users/luiscberrocal/Documents/dev/lion_test/web"
            ServerName lion.localhost
            <Directory "/Users/luiscberrocal/Documents/dev/lion_test/web">
                Options Indexes FollowSymlinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    3. I added the following to /private/etc/hosts:

        127.0.0.1 lion.localhost

    Now when I access http://localhost/test.php I get the following message:

        Forbidden
        You don't have permission to access /test.php on this server.
        Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch Server at localhost Port 80

    I already tried:

        chmod 777 test.php
        chmod +x test.php

    I get the same message if I try to access http://lion.localhost/. I opened /var/log/apache2/error_log and this is what I found relevant:

        [Sat Dec 31 09:37:49 2011] [notice] Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch configured -- resuming normal operations
        [Sat Dec 31 09:37:53 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 09:37:55 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 09:38:13 2011] [notice] caught SIGTERM, shutting down
        [Sat Dec 31 09:38:13 2011] [error] (EAI 8)nodename nor servname provided, or not known: Could not resolve host name *.80 -- ignoring!
        httpd: Could not reliably determine the server's fully qualified domain name, using Luis-Berrocals-MacBook-Pro.local for ServerName
        [Sat Dec 31 09:38:14 2011] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Sat Dec 31 09:38:14 2011] [warn] mod_bonjour: Cannot stat template index file '/System/Library/User Template/English.lproj/Sites/index.html'.
        [Sat Dec 31 09:38:14 2011] [notice] Digest: generating secret for digest authentication ...
        [Sat Dec 31 09:38:14 2011] [notice] Digest: done
        [Sat Dec 31 09:38:14 2011] [notice] Apache/2.2.20 (Unix) DAV/2 PHP/5.3.6 with Suhosin-Patch configured -- resuming normal operations
        [Sat Dec 31 09:38:18 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 09:38:19 2011] [error] [client ::1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 10:18:09 2011] [error] [client 127.0.0.1] (13)Permission denied: access to /test.php denied
        [Sat Dec 31 10:18:15 2011] [error] [client 127.0.0.1] (13)Permission denied: access to / denied

    I can't figure out what I'm doing wrong.

    Read the article

  • CodePlex Daily Summary for Sunday, October 06, 2013

    Popular Releases

    - Media Companion: Media Companion MC3.580b: Fixed IMDB Actor names and Actor Roles, empty <actor> entries in movie nfo, and actor scraping during initial movie scrape. Revision History
    - Event-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag&drop of an existing component (flow, step) from the component list to the chart. Duplicate names are automatically recognized and resolved. By the color of the dragged component you can see what kind of component (flow or step) is currently dragged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparer
    - Pulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...
    - MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Bug fixes: added more tile blocks (Clouds, crimstone); added items (binoculars, rope, Pirahna Gun); added ores (Lead, Tin); chests now work, I broke them yesterday. Known issues: I am having trouble with new background walls, so you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.
    - VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added option to log in as Guest. NEW: Added menu option to delete a forum account. NEW: Added support for ImageTeam.org links. FIXED: Fixed ripping of http://forum.babeunion.com forums
    - SimpleExcelReportMaker: Serm 0.03: Source code and sample. .NET Framework 3.5, AnyCPU compile.
    - Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.
    - fastJSON: v2.0.22: Added .NET 3.5 project; now compiling to 'output' directory; added signed assembly; version numbers will stay at 2.0.0.0 for drop-in compatibility; file version will reflect the build number; bug fix deserializing to dictionaries instead of dataset when type is not defined
    - Responsive SharePoint: Bootstrap 3 for SharePoint 2013 - Alpha 0.1: NOTE - This is an alpha version, there are bound to be issues. Please help us solve them by contributing in our Discussion. Publishing - The source for Twitter Bootstrap 3.0.0 integrated into SharePoint 2013 for a site with Publishing enabled. Non-Publishing - A master page and branding assets for Twitter Bootstrap 3.0.0 integrated into SharePoint 2013 without Publishing enabled. PageLayoutSampleContent - Sample content for included page l...
    - C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite 1.0.0: This release contains the following changes from the previous release: removed the tests that were testing Microsoft-specific behavior not part of the open specification. The test suite now contains two folders of test cases, named 'Tests' and 'TestsWithProp'. The sets of tests under these two folders are identical except for one difference: the test cases under 'TestsWithProp' make use of 'properties' (which the compiler being tested should handle as mentioned in the open ...
    - ASP.NET dhtmxGantt Class: dhtmlxGantt2.vb class: This is the latest class based on work performed. For more information, read the project description and get the source files from dhtmlx.com
    - ExpressiveDataGenerators: Alpha 2: Fix several bugs, more tests
    - QuickTestsFramework: 1.0.0: First release with stable API.
    - VS Tiny Extension for TortoiseGit: 0.1c: Icons revised. Push button disappeared when the IDE loads the menu instead of the toolbar. Detected and prevented double loading. About box deprecated. Next version will have major improvements. NEW: Visual Studio 2013 support!
    - BlackJumboDog: Ver5.9.6 (2013.09.30)
    - PayBox payment gateway provider for NB_Store: NB_Store_Gateway_01.00.02_PayBox: PayBox DNN module install
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. Added -nosize switch to turn off the size and gzip calculations done after minification. Removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...
    - AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated), Version 7.1005. September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important update: this release has been updated to fix three issues: Up...
    - WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW API plugged in OK. Thanks
    - Visual Log Parser: VisualLogParser: Portable Visual Log Parser for .NET 4.0

    New Projects

    - Basic4Android (B4A) Charting Framework: dhtlmxCharts, GoogleCharts etc: Basic4Android (B4A) mobile charting framework.
    - Client Meeting Tool: This site facilitates users in creating and scheduling meetings for an event.
    - FoodScan: This app focuses on implementing a diet monitoring application for Malaysian overweight and obese adolescents using AR techniques on Windows Phone 8.
    - Hello Team foundation server: Try to use Team Foundation Server and compare it with Git
    - KDG C# Password Generator: C# password generator developed by KDG.
    - KDG's C# Password Generator: C# password generator that uses Random to create strong passwords based on user input.
    - Meta: Meta is the EECS 111 programming language at Northwestern University. Meta is a dynamically-typed Scheme-like language built on .NET. This is its home.
    - Monoscript: Allows using Mono and C# for scripting on Unix. Source files are automatically compiled and executed. Caching is employed to avoid recompiling unchanged files.
    - mtdsharp: Developed by Chris Hyndman, Alec KC, Lu Huang and Merrill Huang for CS 196 at the University of Illinois.
    - Planr.me: Planr is a time management website currently in the early development stage
    - Project Hermes: This project is currently closed-door and under core development. The project description and other works will be published soon.
    - Remindme for Windows Phone 8: Simple, open source Pocket client for Windows Phone 8
    - Remindme for WinRT: Simple, open source Pocket client for WinRT and Windows 8.
    - SQL Server Periodic Table with Molecules: This is a SQL Server database intended to be used by students and researchers for chemistry and physics projects.
    - Tesseract: The Tesseract Project aims to easily display and rotate 4-dimensional objects in 3D
    - Test Case Manager: A Windows application which extends Microsoft Test Manager. Features: one-click search, test case export, better test case reader, extended edit mode
    - Test Project for Assignment 1: This is a test project
    - Torah File: Torah File is a project that allows you to use the Torah, Bible and Mishneh on the computer, by type of the programming languages that will be able to use the To
    - USAePay nopCommerce Payment Plugin: A simple plugin for nopCommerce to use the USAePay SOAP API interface for processing credit cards.
    - Veterinaria Dr Leo: This is our project for the Calidad y Pruebas de Software (Software Quality and Testing) 2013-2 course. Arevalo Ticlla, Susan. Chalán Malca, Elvis. Cruzado Asencio, Gustavo.
    - Visual Studio Test Extensions: The Visual Studio Test Extensions provide extensions and tools for the Visual Studio MSTest engine. It allows executing unit tests in a separate AppDomain.
    - WSAAD7COM1052: Central repository for 7COM1052 - Web Scripting & Application Development (COM)
    - wscc2013online: This is a project related to Web Application Development at the University of Hertfordshire

    Read the article

  • TFS 2010 Build: Dealing with the API restriction error

    - by Jakob Ehn
    Recently I’ve come across this error a couple of times when running builds that execute unit tests using test containers: API restriction: The assembly 'file:///C:\Builds\<path>\myassembly.dll' has already loaded from a different location. It cannot be loaded from a new location within the same appdomain. Every time I’ve got this error, the project has been a web application, and the path to the assembly points down to the _PublishedWebsites directory that is created beneath the Binaries folder during a team build. The error description really says it all (although it is slightly cryptic): when using test containers, MSTest needs to load all assemblies and see if they contain any unit tests. During this search, it finds ‘myassembly.dll’ in two different locations. First it is found directly beneath the Binaries folder, and then it is also found beneath the _PublishedWebsites\Project\bin folder. The reason is that the default setting for test containers in a TFS 2010 build definition is **\*test*.dll. This pattern means that MSTest will search recursively for all assemblies beneath the Binaries folder, and during the search it will find MyAssembly.dll twice. The solution is simple: set the Test assembly file specification property to *test*.dll instead, which disables the recursive search, as illustrated below.
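    To see why the default pattern matches twice (the paths below are illustrative, not taken from the original build log):

        Binaries\MyAssembly.Test.dll                                    <- found once
        Binaries\_PublishedWebsites\Project\bin\MyAssembly.Test.dll     <- found again by **\*test*.dll

    With the non-recursive *test*.dll specification, only the top-level copy is picked up.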

    Read the article

  • setting documentroot in apache

    - by fusion
    I've set the DocumentRoot in httpd.conf as: DocumentRoot "C:\Users\user1\Documents\WebProjects". If the files are located directly in WebProjects, they work; however, if I create a subfolder (a project) in WebProjects and access it via the browser, it doesn't load. For example, if I create a folder 'test' in WebProjects with a PHP file called test.php and request localhost/test/test.php, it won't work and the server reports that the file was not found. But if I put all the files in WebProjects itself, i.e. test.php in WebProjects, it works (localhost/test.php). This makes my WebProjects folder look very cluttered, with files from different projects strewn around, and it isn't practical either. I'm new to Apache and would like to know how to set the document root such that I can access and load all the projects/folders in WebProjects.
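    A minimal sketch of the httpd.conf directives that typically make subfolders servable — the <Directory> block is an assumption about the poster's setup, not taken from the question (forward slashes avoid escaping issues on Windows; Require all granted is Apache 2.4 syntax, while 2.2 uses Order allow,deny / Allow from all):

        DocumentRoot "C:/Users/user1/Documents/WebProjects"
        <Directory "C:/Users/user1/Documents/WebProjects">
            # let Apache descend into subfolders such as WebProjects/test
            Options Indexes FollowSymLinks
            AllowOverride All
            Require all granted
        </Directory>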

    Read the article

  • C#/.NET Little Wonders: Fun With Enum Methods

    - by James Michael Hare
    Once again let's dive into the Little Wonders of .NET, those small things in the .NET languages and BCL classes that make development easier by increasing readability, maintainability, and/or performance. So probably every one of us has used an enumerated type at one time or another in a C# program.  The enumerated types we create are a great way to represent that a value can be one of a set of discrete values (or a combination of those values in the case of bit flags). But the power of enum types goes far beyond simple assignment and comparison; there are many methods in the Enum class (that all enum types “inherit” from) that can give you even more power when dealing with them. IsDefined() – check if a given value exists in the enum Are you reading a value for an enum from a data source, but are unsure if it is actually a valid value or not?  Casting won’t tell you this, and Parse() isn’t guaranteed to balk either if you give it an int or a combination of flags.  So what can we do? Let’s assume we have a small enum like this for result codes we want to return back from our business logic layer: 1: public enum ResultCode 2: { 3: Success, 4: Warning, 5: Error 6: } In this enum, Success will be zero (unless given another value explicitly), Warning will be one, and Error will be two. So what happens if we have code like this where perhaps we’re getting the result code from another data source (could be database, could be web service, etc)? 1: public ResultCode PerformAction() 2: { 3: // set up and call some method that returns an int. 4: int result = ResultCodeFromDataSource(); 5:  6: // this will succeed even if result is < 0 or > 2. 7: return (ResultCode) result; 8: } So what happens if result is –1 or 4?  Well, the cast does not fail, so what we end up with would be an instance of a ResultCode that would have a value that’s outside of the bounds of the enum constants we defined. This means if you had a block of code like: 1: switch (result) 2: { 3: case ResultCode.Success: 4: // do success stuff 5: break; 6:  7: case ResultCode.Warning: 8: // do warning stuff 9: break; 10:  11: case ResultCode.Error: 12: // do error stuff 13: break; 14: } you would hit none of these blocks (which is a good argument for always having a default in a switch, by the way). So what can you do?  Well, there is a handy static method called IsDefined() on the Enum class which will tell you if an enum value is defined.  1: public ResultCode PerformAction() 2: { 3: int result = ResultCodeFromDataSource(); 4:  5: if (!Enum.IsDefined(typeof(ResultCode), result)) 6: { 7: throw new InvalidOperationException("Enum out of range."); 8: } 9:  10: return (ResultCode) result; 11: } In fact, this is often recommended after you Parse() or cast a value to an enum as there are ways for values to get past these methods that may not be defined. If you don’t like the syntax of passing in the type of the enum, you could clean it up a bit by creating an extension method instead that would allow you to call IsDefined() off any instance of the enum: 1: public static class EnumExtensions 2: { 3: // helper method that tells you if an enum value is defined for it's enumeration 4: public static bool IsDefined(this Enum value) 5: { 6: return Enum.IsDefined(value.GetType(), value); 7: } 8: }   HasFlag() – an easier way to see if a bit (or bits) are set Most of us who came from the land of C programming have had to deal extensively with bit flags many times in our lives.  
    As such, using bit flags may be almost second nature (for a quick refresher on bit flags in enum types see one of my old posts here). However, in higher-level languages like C#, the need to manipulate individual bit flags is somewhat diminished, and the code to check for bit flag enum values may be obvious to an advanced developer but cryptic to a novice developer. For example, let’s say you have an enum for a messaging platform that contains bit flags: 1: // usually, we pluralize flags enum type names 2: [Flags] 3: public enum MessagingOptions 4: { 5: None = 0, 6: Buffered = 0x01, 7: Persistent = 0x02, 8: Durable = 0x04, 9: Broadcast = 0x08 10: } We can combine these bit flags using the bitwise OR operator (the ‘|’ pipe character): 1: // combine bit flags using 2: var myMessenger = new Messenger(MessagingOptions.Buffered | MessagingOptions.Broadcast); Now, if we wanted to check the flags, we’d have to test them using the bitwise AND operator (the ‘&’ character): 1: if ((options & MessagingOptions.Buffered) == MessagingOptions.Buffered) 2: { 3: // do code to set up buffering... 4: // ... 5: } While the ‘|’ for combining flags is easy enough to read for advanced developers, the ‘&’ test tends to be easy for novice developers to get wrong.  First of all you have to AND the flag combination with the value, and then typically you should test against the flag combination itself (and not just for a non-zero)!  This is because the flag combination you are testing with may combine multiple bits, in which case if only one bit is set, the result will be non-zero but not necessarily all desired bits! Thank goodness in .NET 4.0 they gave us the HasFlag() method.  This method can be called from an enum instance to test to see if a flag is set, and best of all you can avoid writing the bitwise logic yourself.  Not to mention it will be more readable to a novice developer as well: 1: if (options.HasFlag(MessagingOptions.Buffered)) 2: { 3: // do code to set up buffering... 4: // ... 5: } It is much more concise and unambiguous, thus increasing your maintainability and readability. It would be nice to have a corresponding SetFlag() method, but unfortunately generic types don’t allow you to specialize on Enum, which makes it a bit more difficult.  It can be done but you have to do some conversions to numeric and then back to the enum which makes it less of a payoff than having the HasFlag() method.  But if you want to create it for symmetry, it would look something like this: 1: public static T SetFlag<T>(this Enum value, T flags) 2: { 3: if (!value.GetType().IsEquivalentTo(typeof(T))) 4: { 5: throw new ArgumentException("Enum value and flags types don't match."); 6: } 7:  8: // yes this is ugly, but unfortunately we need to use an intermediate boxing cast 9: return (T)Enum.ToObject(typeof (T), Convert.ToUInt64(value) | Convert.ToUInt64(flags)); 10: } Note that since the enum types are value types, we need to assign the result to something (much like string.Trim()).  Also, you could chain several SetFlag() operations together or create one that takes a variable arg list if desired. Parse() and ToString() – transitioning from string to enum and back Sometimes, you may want to be able to parse an enum from a string or convert it to a string - Enum has methods built in to let you do this.  Now, many may already know this, but may not appreciate how much power is in these two methods. 
    For example, if you want to parse a string as an enum, it’s easy and works just like you’d expect from the numeric types: 1: string optionsString = "Persistent"; 2:  3: // can use Enum.Parse, which throws if finds something it doesn't like... 4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result == MessagingOptions.Persistent) 7: { 8: Console.WriteLine("It worked!"); 9: } Note that Enum.Parse() will throw if it finds a value it doesn’t like.  But the values it likes are fairly flexible!  You can pass in a single value, or a comma separated list of values for flags and it will parse them all and set all bits: 1: // for string values, can have one, or comma separated. 2: string optionsString = "Persistent, Buffered"; 3:  4: var result = (MessagingOptions)Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked!"); 9: } Or you can pass in a string containing a number that represents a single value or combination of values to set: 1: // 3 is the combination of Buffered (0x01) and Persistent (0x02) 2: var optionsString = "3"; 3:  4: var result = (MessagingOptions) Enum.Parse(typeof (MessagingOptions), optionsString); 5:  6: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 7: { 8: Console.WriteLine("It worked again!"); 9: } And, if you really aren’t sure if the parse will work, and don’t want to handle an exception, you can use TryParse() instead: 1: string optionsString = "Persistent, Buffered"; 2: MessagingOptions result; 3:  4: // try parse returns true if successful, and takes an out parm for the result 5: if (Enum.TryParse(optionsString, out result)) 6: { 7: if (result.HasFlag(MessagingOptions.Persistent) && result.HasFlag(MessagingOptions.Buffered)) 8: { 9: Console.WriteLine("It worked!"); 10: } 11: } So we covered parsing a string to an enum, what about reversing that and converting an enum to a string?  The ToString() method is the obvious and most basic choice for most of us, but did you know you can pass a format string for enum types that dictates how they are written as a string?: 1: MessagingOptions value = MessagingOptions.Buffered | MessagingOptions.Persistent; 2:  3: // general format, which is the default, 4: Console.WriteLine("Default : " + value); 5: Console.WriteLine("G (default): " + value.ToString("G")); 6:  7: // Flags format, even if type does not have Flags attribute. 8: Console.WriteLine("F (flags) : " + value.ToString("F")); 9:  10: // integer format, value as number. 11: Console.WriteLine("D (num) : " + value.ToString("D")); 12:  13: // hex format, value as hex 14: Console.WriteLine("X (hex) : " + value.ToString("X")); Which displays: 1: Default : Buffered, Persistent 2: G (default): Buffered, Persistent 3: F (flags) : Buffered, Persistent 4: D (num) : 3 5: X (hex) : 00000003 Now, you may not really see a difference here between G and F because I used a [Flags] enum; the difference is that the “F” option treats the enum as if it were flags even if the [Flags] attribute is not present.  Let’s take a non-flags enum like the ResultCode used earlier: 1: // yes, we can do this even if it is not a [Flags] enum. 
    2: ResultCode value = ResultCode.Warning | ResultCode.Error; And if we run that through the same formats again we get: 1: Default : 3 2: G (default): 3 3: F (flags) : Warning, Error 4: D (num) : 3 5: X (hex) : 00000003 Notice that since we had multiple values combined in an enum not marked [Flags], the G and default formats gave us a number instead of a value name.  This is because the value was not a valid single-value constant of the enum.  However, using the F flags format string, it broke out the value into its component flags even though it wasn’t marked [Flags]. So, if you want to get an enum to display appropriately for whether or not it has the [Flags] attribute, use G, which is the default.  If you always want it to attempt to break down the flags, use F.  For numeric output, obviously D or X are the best choice depending on whether you want decimal or hex. Summary Hopefully, you learned a couple of new tricks with using the Enum class today!  I’ll add more little wonders as I think of them and thanks for all the invaluable input!   Technorati Tags: C#,.NET,Little Wonders,Enum,BlackRabbitCoder
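    A small helper in the spirit of the IsDefined() advice above — it combines TryParse() with a definedness check so out-of-range numeric strings are rejected. This is a sketch, not code from the original post, and it is only appropriate for non-[Flags] enums, since valid flag combinations are often not individually defined:

        public static bool TryParseStrict<TEnum>(string input, out TEnum value) where TEnum : struct
        {
            // TryParse accepts numeric strings outside the defined constants,
            // so we follow it with IsDefined to reject those.
            return Enum.TryParse(input, out value) && Enum.IsDefined(typeof(TEnum), value);
        }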

    Read the article

  • Oracle 64-bit assembly throws BadImageFormatException when running unit tests

    - by pjohnson
    We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message: Test method MyProject.Test.SomeTest threw exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format. I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings, and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run. This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add. I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property, I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET. If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.

    Read the article

  • Python 3.4 adds re.fullmatch()

    - by Jan Goyvaerts
    Python 3.4 does not bring any changes to its regular expression syntax compared to previous 3.x releases. It does add one new function to the re module called fullmatch(). This function takes a regular expression and a subject string as its parameters. It returns a match object (which is truthy) if the regular expression can match the string entirely, and None (which is falsy) if the string cannot be matched or can only be matched partially. This is useful when using a regular expression to validate user input. Do note that fullmatch() will return a match if the subject string is the empty string and the regular expression can find zero-length matches: a zero-length match of a zero-length string is a complete match. So if you want to check whether the user entered a sequence of digits, use \d+ rather than \d* as the regex.
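    A quick illustration, assuming Python 3.4+ (bool() converts the match-object-or-None result into the pass/fail answer described above):

        import re

        print(bool(re.fullmatch(r'\d+', '12345')))  # True  - the whole string is digits
        print(bool(re.fullmatch(r'\d+', '123a5')))  # False - a partial match is not enough
        print(bool(re.fullmatch(r'\d*', '')))       # True  - zero-length match of the empty string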

    Read the article

  • Custom Content Pipeline with Automatic Serialization Load Error

    - by Direweasel
    I'm running into this error: Error loading "desert". Cannot find type TiledLib.MapContent, TiledLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null. at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.InstantiateTypeReader(String readerTypeName, ContentReader contentReader, ContentTypeReader& reader) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(String readerTypeName, ContentReader contentReader, List`1& newTypeReaders) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader() at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]() at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action`1 recordDisposableObject) at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName) at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51 at Microsoft.Xna.Framework.Game.Initialize() at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39 at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun) at Microsoft.Xna.Framework.Game.Run() at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15 This happens when trying to run the game. This is a basic demo to try and utilize a separate project library called TiledLib. I have four projects overall: TiledLib (C# Class Library) TiledTest (Windows Game) TiledTestContent (Content) TMX CP Ext (Content Pipeline Extension Library) TiledLib contains MapContent, which is throwing the error; however, I believe this may just be a generic error with a deeper root problem. TMX CP Ext contains one file: MapProcessor.cs using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Content.Pipeline; using Microsoft.Xna.Framework.Content.Pipeline.Graphics; using Microsoft.Xna.Framework.Content.Pipeline.Processors; using Microsoft.Xna.Framework.Content; using TiledLib; namespace TMX_CP_Ext { // Each tile has a texture, source rect, and sprite effects. [ContentSerializerRuntimeType("TiledTest.Tile, TiledTest")] public class DemoMapTileContent { public ExternalReference<Texture2DContent> Texture; public Rectangle SourceRectangle; public SpriteEffects SpriteEffects; } // For each layer, we store the size of the layer and the tiles. [ContentSerializerRuntimeType("TiledTest.Layer, TiledTest")] public class DemoMapLayerContent { public int Width; public int Height; public DemoMapTileContent[] Tiles; } // For the map itself, we just store the size, tile size, and a list of layers. 
[ContentSerializerRuntimeType("TiledTest.Map, TiledTest")] public class DemoMapContent { public int TileWidth; public int TileHeight; public List<DemoMapLayerContent> Layers = new List<DemoMapLayerContent>(); } [ContentProcessor(DisplayName = "TMX Processor - TiledLib")] public class MapProcessor : ContentProcessor<MapContent, DemoMapContent> { public override DemoMapContent Process(MapContent input, ContentProcessorContext context) { // build the textures TiledHelpers.BuildTileSetTextures(input, context); // generate source rectangles TiledHelpers.GenerateTileSourceRectangles(input); // now build our output, first by just copying over some data DemoMapContent output = new DemoMapContent { TileWidth = input.TileWidth, TileHeight = input.TileHeight }; // iterate all the layers of the input foreach (LayerContent layer in input.Layers) { // we only care about tile layers in our demo TileLayerContent tlc = layer as TileLayerContent; if (tlc != null) { // create the new layer DemoMapLayerContent outLayer = new DemoMapLayerContent { Width = tlc.Width, Height = tlc.Height, }; // we need to build up our tile list now outLayer.Tiles = new DemoMapTileContent[tlc.Data.Length]; for (int i = 0; i < tlc.Data.Length; i++) { // get the ID of the tile uint tileID = tlc.Data[i]; // use that to get the actual index as well as the SpriteEffects int tileIndex; SpriteEffects spriteEffects; TiledHelpers.DecodeTileID(tileID, out tileIndex, out spriteEffects); // figure out which tile set has this tile index in it and grab // the texture reference and source rectangle. ExternalReference<Texture2DContent> textureContent = null; Rectangle sourceRect = new Rectangle(); // iterate all the tile sets foreach (var tileSet in input.TileSets) { // if our tile index is in this set if (tileIndex - tileSet.FirstId < tileSet.Tiles.Count) { // store the texture content and source rectangle textureContent = tileSet.Texture; sourceRect = tileSet.Tiles[(int)(tileIndex - tileSet.FirstId)].Source; // and break out of the foreach loop break; } } // now insert the tile into our output outLayer.Tiles[i] = new DemoMapTileContent { Texture = textureContent, SourceRectangle = sourceRect, SpriteEffects = spriteEffects }; } // add the layer to our output output.Layers.Add(outLayer); } } // return the output object. because we have ContentSerializerRuntimeType attributes on our // objects, we don't need a ContentTypeWriter and can just use the automatic serialization. 
    return output; } } } TiledLib contains a large number of files including MapContent.cs using System; using System.Collections.Generic; using System.Globalization; using System.Xml; using Microsoft.Xna.Framework.Content.Pipeline; namespace TiledLib { public enum Orientation : byte { Orthogonal, Isometric, } public class MapContent { public string Filename; public string Directory; public string Version = string.Empty; public Orientation Orientation; public int Width; public int Height; public int TileWidth; public int TileHeight; public PropertyCollection Properties = new PropertyCollection(); public List<TileSetContent> TileSets = new List<TileSetContent>(); public List<LayerContent> Layers = new List<LayerContent>(); public MapContent(XmlDocument document, ContentImporterContext context) { XmlNode mapNode = document["map"]; Version = mapNode.Attributes["version"].Value; Orientation = (Orientation)Enum.Parse(typeof(Orientation), mapNode.Attributes["orientation"].Value, true); Width = int.Parse(mapNode.Attributes["width"].Value, CultureInfo.InvariantCulture); Height = int.Parse(mapNode.Attributes["height"].Value, CultureInfo.InvariantCulture); TileWidth = int.Parse(mapNode.Attributes["tilewidth"].Value, CultureInfo.InvariantCulture); TileHeight = int.Parse(mapNode.Attributes["tileheight"].Value, CultureInfo.InvariantCulture); XmlNode propertiesNode = document.SelectSingleNode("map/properties"); if (propertiesNode != null) { Properties = new PropertyCollection(propertiesNode, context); } foreach (XmlNode tileSet in document.SelectNodes("map/tileset")) { if (tileSet.Attributes["source"] != null) { TileSets.Add(new ExternalTileSetContent(tileSet, context)); } else { TileSets.Add(new TileSetContent(tileSet, context)); } } foreach (XmlNode layerNode in document.SelectNodes("map/layer|map/objectgroup")) { LayerContent layerContent; if (layerNode.Name == "layer") { layerContent = new TileLayerContent(layerNode, context); } else if (layerNode.Name == "objectgroup") { layerContent = new MapObjectLayerContent(layerNode, context); } else { throw new Exception("Unknown layer name: " + layerNode.Name); } // Layer names need to be unique for our lookup system, but Tiled // doesn't require unique names. string layerName = layerContent.Name; int duplicateCount = 2; // if a layer already has the same name... if (Layers.Find(l => l.Name == layerName) != null) { // figure out a layer name that does work do { layerName = string.Format("{0}{1}", layerContent.Name, duplicateCount); duplicateCount++; } while (Layers.Find(l => l.Name == layerName) != null); // log a warning for the user to see context.Logger.LogWarning(string.Empty, new ContentIdentity(), "Renaming layer \"{1}\" to \"{2}\" to make a unique name.", layerContent.Type, layerContent.Name, layerName); // save that name layerContent.Name = layerName; } Layers.Add(layerContent); } } } } I'm lost as to why this is failing. Thoughts? -- EDIT -- After playing with it a bit, I would think it has something to do with referencing the projects. I'm already referencing the TiledLib within my main windows project (TiledTest). However, this doesn't seem to make a difference. I can place the dll generated from the TiledLib project into the debug folder of TiledTest, and this causes it to generate a different error: Error loading "desert". Cannot find ContentTypeReader for Microsoft.Xna.Framework.Content.Pipeline.ExternalReference`1[Microsoft.Xna.Framework.Content.Pipeline.Graphics.Texture2DContent]. 
    at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.GetTypeReader(Type targetType) at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper..ctor(ContentTypeReaderManager manager, FieldInfo fieldInfo, PropertyInfo propertyInfo, Type memberType, Boolean canWrite) at Microsoft.Xna.Framework.Content.ReflectiveReaderMemberHelper.TryCreate(ContentTypeReaderManager manager, Type declaringType, FieldInfo fieldInfo) at Microsoft.Xna.Framework.Content.ReflectiveReader`1.Initialize(ContentTypeReaderManager manager) at Microsoft.Xna.Framework.Content.ContentTypeReaderManager.ReadTypeManifest(Int32 typeCount, ContentReader contentReader) at Microsoft.Xna.Framework.Content.ContentReader.ReadHeader() at Microsoft.Xna.Framework.Content.ContentReader.ReadAsset[T]() at Microsoft.Xna.Framework.Content.ContentManager.ReadAsset[T](String assetName, Action`1 recordDisposableObject) at Microsoft.Xna.Framework.Content.ContentManager.Load[T](String assetName) at TiledTest.Game1.LoadContent() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 51 at Microsoft.Xna.Framework.Game.Initialize() at TiledTest.Game1.Initialize() in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Game1.cs:line 39 at Microsoft.Xna.Framework.Game.RunGame(Boolean useBlockingRun) at Microsoft.Xna.Framework.Game.Run() at TiledTest.Program.Main(String[] args) in C:\My Documents\Dropbox\Visual Studio Projects\TiledTest\TiledTest\TiledTest\Program.cs:line 15 This is all incredibly frustrating as the demo doesn't appear to have any special linking properties. The TiledLib I am utilizing is from Nick Gravelyn, and can be found here: https://bitbucket.org/nickgravelyn/tiledlib. The demo it comes with works fine, and yet in recreating it I always run into this error.

    Read the article

  • How to pad number with leading zero with C#

    - by Jalpesh P. Vadgama
    Recently I was working on a project where I needed to format a number with a given number of leading zeros.  After some R&D I found a clean way to apply this leading-zero format. I needed to pad numbers to a 5-digit format, as shown in the following table: 1-> 00001 20->00020 300->00300 4000->04000 50000->50000 So in the above example you can see that 1 becomes 00001, 20 becomes 00020, and so on. To display an integer value in this decimal format I applied the integer's ToString(String) method, passing “Dn” as the value of the format parameter, where n represents the minimum length of the string. So if we pass D5, values are padded to 5 digits. Let’s create a simple console application and see how it works. Following is the code: using System; namespace LeadingZero { class Program { static void Main(string[] args) { int a = 1; int b = 20; int c = 300; int d = 4000; int e = 50000; Console.WriteLine(string.Format("{0}------>{1}",a,a.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", b, b.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", c, c.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", d, d.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", e, e.ToString("D5"))); Console.ReadKey(); } } } As you can see in the above code, I used the string.Format function to display the value of each integer alongside the result of its ToString("D5") call. Now let’s run the console application; the output is as expected. Here you can see the integer values are converted into exactly the output we require. That’s it — very easy, written in a nice clean way without any extra code or loops. Hope you liked it. Stay tuned for more. Till then, happy programming.
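    For completeness, a couple of equivalent ways to get the same padding — these use standard .NET formatting APIs and are not from the original article:

        int a = 20;
        Console.WriteLine(a.ToString("D5"));             // 00020 - standard numeric format, as above
        Console.WriteLine(a.ToString("00000"));          // 00020 - custom format, one '0' placeholder per digit
        Console.WriteLine(a.ToString().PadLeft(5, '0')); // 00020 - plain string padding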

    Read the article

  • avconv and ffmpeg - drawtext filter text_w evaluates as 0 in Ubuntu Precise

    - by Chris White
    I'm attempting to draw text onto a video using either the avconv or ffmpeg commands. When specifying x= for where on the final video to place the text, the 'text_w' value evaluates to 0 rather than the width of the rendered text, as it should. I'm using Ubuntu 12.04 with avconv version 0.8.3-4:0.8.3-0ubuntu0.12.04.1 and ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1. Example command: avconv -i test.mov -vf "drawtext=fontfile='/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf':text='test text':x=text_w:y=50:fontsize=24:fontcolor=black" texted.mov This command causes the text to be printed as if x were set to 0. What I'd really like to be able to do is center the text horizontally using something like this: avconv -i test.mov -vf "drawtext=fontfile='/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf':text='test text':x=(main_w-text_w)/2:y=50:fontsize=24:fontcolor=black" texted.mov Attempting the same with ffmpeg ends with the same result: ffmpeg -i test.mov -vf "drawtext=fontfile='/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf':text='test text':x=(main_w-text_w)/2:y=50:fontsize=24:fontcolor=black" texted.mov
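    For reference, drawtext also has short aliases for these variables (w for the input width, tw for the rendered text width), and on builds newer than the 0.8.3 packages above the centering expression evaluates as expected — a sketch, assuming an upgraded ffmpeg:

        ffmpeg -i test.mov -vf "drawtext=fontfile='/usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf':text='test text':x=(w-tw)/2:y=50:fontsize=24:fontcolor=black" texted.mov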

    Read the article

  • UndoRedo on Nodes

    - by Geertjan
    When a change is made to the property in the Properties Window, below, the undo/redo functionality becomes enabled: When undo/redo are invoked, e.g., via the buttons in the toolbar, the display name of the node changes accordingly. The only problem I have is that the buttons only become enabled when the Person Window is selected, not when the Properties Window is selected, which would be desirable. Here's the Person object: public class Person implements PropertyChangeListener {     private String name;     public static final String PROP_NAME = "name";     public Person(String name) {         this.name = name;     }     public String getName() {         return name;     }     public void setName(String name) {         String oldName = this.name;         this.name = name;         propertyChangeSupport.firePropertyChange(PROP_NAME, oldName, name);     }     private transient final PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this);     public void addPropertyChangeListener(PropertyChangeListener listener) {         propertyChangeSupport.addPropertyChangeListener(listener);     }     public void removePropertyChangeListener(PropertyChangeListener listener) {         propertyChangeSupport.removePropertyChangeListener(listener);     }     @Override     public void propertyChange(PropertyChangeEvent evt) {         propertyChangeSupport.firePropertyChange(evt);     } } And here's the Node with UndoRedo enablement: public class PersonNode extends AbstractNode implements UndoRedo.Provider, PropertyChangeListener {     private UndoRedo.Manager manager = new UndoRedo.Manager();     private boolean undoRedoEvent;     public PersonNode(Person person) {         super(Children.LEAF, Lookups.singleton(person));         person.addPropertyChangeListener(this);         setDisplayName(person.getName());     }     @Override     protected Sheet createSheet() {         Sheet sheet = Sheet.createDefault();         Sheet.Set set = Sheet.createPropertiesSet();         set.put(new NameProperty(getLookup().lookup(Person.class)));         sheet.put(set);         return sheet;     }     @Override     public void propertyChange(PropertyChangeEvent evt) {         if (evt.getPropertyName().equals(Person.PROP_NAME)) {             firePropertyChange(evt.getPropertyName(), evt.getOldValue(), evt.getNewValue());         }     }     public void fireUndoableEvent(String property, Person source, Object oldValue, Object newValue) {         manager.addEdit(new MyAbstractUndoableEdit(source, oldValue, newValue));     }     @Override     public UndoRedo getUndoRedo() {         return manager;     }     @Override     public String getDisplayName() {         Person p = getLookup().lookup(Person.class);         if (p != null) {             return p.getName();         }         return super.getDisplayName();     }     private class NameProperty extends PropertySupport.ReadWrite<String> {         private Person p;         public NameProperty(Person p) {             super("name", String.class, "Name", "Name of Person");             this.p = p;         }         @Override         public String getValue() throws IllegalAccessException, InvocationTargetException {             return p.getName();         }         @Override         public void setValue(String newValue) throws IllegalAccessException, IllegalArgumentException, InvocationTargetException {             String oldValue = p.getName();             p.setName(newValue);             if (!undoRedoEvent) {                 fireUndoableEvent("name", p, 
oldValue, newValue);                 fireDisplayNameChange(oldValue, newValue);             }         }     }     class MyAbstractUndoableEdit extends AbstractUndoableEdit {         private final String oldValue;         private final String newValue;         private final Person source;         private MyAbstractUndoableEdit(Person source, Object oldValue, Object newValue) {             this.oldValue = oldValue.toString();             this.newValue = newValue.toString();             this.source = source;         }         @Override         public boolean canRedo() {             return true;         }         @Override         public boolean canUndo() {             return true;         }         @Override         public void undo() throws CannotUndoException {             undoRedoEvent = true;             source.setName(oldValue.toString());             fireDisplayNameChange(oldValue, newValue);             undoRedoEvent = false;         }         @Override         public void redo() throws CannotUndoException {             undoRedoEvent = true;             source.setName(newValue.toString());             fireDisplayNameChange(oldValue, newValue);             undoRedoEvent = false;         }     } } Does anyone out there know how to have the Undo/Redo functionality enabled when the Properties Window is selected?

    Read the article

  • Collaboration using github and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch to the master branch in order to publish new features to our production servers, some features that may not have been ready will be applied to the production servers. So we're considering having each developer work on a feature/topic branch where each of them work on their own features and when it's ready, merge it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features. While this is not a huge issue as they can test it on their local machine, the only problem I foresee is if we want to test callbacks from third-party services like sendgrid (where you specify a url for sendgrid to update you on the status of emails sent out). How to handle this problem? Note: We're not advanced git users. We use the Github app for MacOSX and Windows to commit our work.
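    A minimal sketch of the feature-branch flow described above (branch names are illustrative, not from the original post):

        git checkout -b feature/sendgrid-callbacks development   # branch off the shared dev branch
        # ... commit feature work ...
        git checkout development
        git merge --no-ff feature/sendgrid-callbacks             # fold it back in once it's ready
        git push origin development                              # the test server pulls from here

    For the third-party callback problem, a common workaround is to expose the developer's local machine through a tunnel (for example, a tool like ngrok) so the callback URL can reach the feature before it lands on the shared test server.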

    Read the article

  • Dynamic Filtering

    - by Ricardo Peres
    Continuing my previous posts on dynamic LINQ, now it's time for dynamic filtering. For now, I'll focus on string matching. There are four standard operators for string matching, which NHibernate, Entity Framework and LINQ to SQL all recognize: Equals Contains StartsWith EndsWith So, if we want to apply filtering by one of these operators on a string property, we can use this code: public enum MatchType { StartsWith = 0, EndsWith = 1, Contains = 2, Equals = 3 } public static List<T> Filter<T>(IEnumerable<T> enumerable, String propertyName, String filter, MatchType matchType) { return (Filter(enumerable, typeof(T), propertyName, filter, matchType) as List<T>); } public static IList Filter(IEnumerable enumerable, Type elementType, String propertyName, String filter, MatchType matchType) { MethodInfo asQueryableMethod = typeof(Queryable).GetMethods(BindingFlags.Static | BindingFlags.Public).Where(m => (m.Name == "AsQueryable") && (m.ContainsGenericParameters == false)).Single(); IQueryable query = (enumerable is IQueryable) ? (enumerable as IQueryable) : asQueryableMethod.Invoke(null, new Object [] { enumerable }) as IQueryable; MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Where").ToArray() [ 0 ].MakeGenericMethod(elementType); MethodInfo matchMethod = typeof(String).GetMethod ( (matchType == MatchType.StartsWith) ? "StartsWith" : (matchType == MatchType.EndsWith) ? "EndsWith" : (matchType == MatchType.Contains) ? "Contains" : "Equals", new Type [] { typeof(String) } ); PropertyInfo displayProperty = elementType.GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance); MemberExpression member = Expression.MakeMemberAccess(Expression.Parameter(elementType, "n"), displayProperty); MethodCallExpression call = Expression.Call(member, matchMethod, Expression.Constant(filter)); LambdaExpression where = Expression.Lambda(call, member.Expression as ParameterExpression); query = whereMethod.Invoke(null, new Object [] { query, where }) as IQueryable; MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(elementType); IList list = toListMethod.Invoke(null, new Object [] { query }) as IList; return (list); } var list = new [] { new { A = "aa" }, new { A = "aabb" }, new { A = "ccaa" }, new { A = "ddaadd" } }; var contains = Filter(list, "A", "aa", MatchType.Contains); var endsWith = Filter(list, "A", "aa", MatchType.EndsWith); var startsWith = Filter(list, "A", "aa", MatchType.StartsWith); var equals = Filter(list, "A", "aa", MatchType.Equals); Perhaps I'll write some more posts on this subject in the near future.

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A “large” environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and Repository. Part 2 covered recommended changes for the WebLogic server. Part 3 covers general configuration recommendations and a few known issues. The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1]. Configuration Recommendations Configure E-Mail Notifications for EM-related Alerts In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components. Recommendation: Create a new incident rule for monitoring all EM components and set up the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email or forward to an event connector. To set up the incident rule set, follow the steps below. Note that each individual rule in the rule set can have different actions configured. 1.  To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called “Incident management Ruleset for all targets” and then click on the Actions drop down list and select “Create Like Rule Set…” 2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select “All targets of types” and select “OMS and Repository” from the drop down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.) 3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit as seen below 4. Modify the following rules: a. Incident creation Rule for metric alerts i. Leave the Type set as is but change the Severity to add Warning by clicking on the drop down list and selecting “Warning”. Click Next. ii.  Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. b. Incident creation Rule for target unreachable. i.   Leave the Type set as is but change the Target type to add OMS and Repository by clicking on the drop down list and selecting “OMS and Repository”. Click Next. ii.  
    Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next. iii. Leave the Name and description the same and click Next. iv. Click Continue on the Review page. 5.  Modify the actions for any other rule as required and be sure to click the “Save” push button to save the rule set or all changes will be lost. Configure Out-of-Band Notifications for EM Agent Out-of-Band notifications act as a backup when there’s a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues: • Repository Database down • All OMS are down • Repository side collection job that is broken or has an invalid schedule • Notification job that is broken or has an invalid schedule Recommendation: To set up Out-of-Band Notifications, refer to the MOS note “How To Setup Out Of Bound Email Notification In 12c” (Doc ID 1472854.1) Modify the Performance Test for the EM Console Service The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is a GET but for performance reasons it should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down, then the EM Console Service will show down as this test will fail. Recommendation: Modify the HTTP Method for the EM Console Service test, and the URL if required, following the detailed steps below. 1.  To modify the performance test, click on Targets / Services. From the list of services, click on the EM Console Service. 2. On the EM Console Service page, click on the Test Performance tab. 3.  At the bottom of the page, click on the Web Transaction test called EM Console Service Test 4.  Click on the Service Tests and Beacons breadcrumb near the top of the page. 5.  Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button. 6.  Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button 7. Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multi-OMSes have been implemented. Check for Known Issues Job Purge Repository Job is Shown as Down This issue occurs after upgrading EM from 12c to 12cR2. On the Repository page under Setup -> Manage Cloud Control -> Repository, the job called “Job Purge” is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job. Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. 
    To remove this error, execute the commands below: $ repvfy verify core -test 2 -fix To confirm that the issue is resolved, execute $ repvfy verify core -test 2 It can also be verified by refreshing the Job Service page in EM and checking the status of the job; it should now be Up. Configure the Listener Targets in EM with the Listener Password (where required) EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and rdbms homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (ex. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener’s log file. This means that the listener targets in EM also need to have this password set. Not doing so will cause many TNS incidents (TNS-1190) to appear in the listener log file. Recommendation: Set a listener password and include it in the configuration of the listener targets in EM. For steps on setting the listener passwords, see MOS notes 260986.1 and 427422.1

    Read the article

  • objectClass in openldap.

    - by garden air
    I am working with OpenLDAP on my Linux box (CentOS) for testing. I created a base file so we can discuss objectClass functionality and the impact of leaving it out. I wrote objectClass twice, i.e. top and domain. What does that mean? Is the second one derived from the first objectClass, like a parent-child relation? [root@srv1 openldap]# vim base.ldif base.ldif dn: dc=test,dc=local dc: test objectClass: top objectClass: domain Now I add two OUs, Sales and Marketing, without adding objectClass: top to either: dn: ou=Sales,dc=test,dc=local ou: Sales objectClass:organizationalUnit dn: ou=Marketing,dc=test,dc=local ou: Marketing objectClass: organizationalUnit The confusion is: should I use both the parent objectClass and the child objectClass? If we don't add it, what is the impact on the structure? In the following I use both objectClass top and organizationalUnit: dn: ou=Sales,dc=test,dc=local ou: Sales objectClass: top objectClass:organizationalUnit dn: ou=Marketing,dc=test,dc=local ou: Marketing objectClass: top objectClass: organizationalUnit Please guide me: which one is correct? thanks garden

    Read the article

  • C#/.NET Little Wonders: Static Char Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Often times in our code we deal with the bigger classes and types in the BCL, and occasionally forget that there are some nice methods on the primitive types as well.  Today we will discuss some of the handy static methods that exist on the char (the C# alias of System.Char) type. The Background I was examining a piece of code this week where I saw the following: 1: // need to get the 5th (offset 4) character in upper case 2: var type = symbol.Substring(4, 1).ToUpper(); 3:  4: // test to see if the type is P 5: if (type == "P") 6: { 7: // ... do something with P type... 8: } Is there really any error in this code?  No, but it still struck me wrong because it is allocating two very short-lived throw-away strings, just to store and manipulate a single char: The call to Substring() generates a new string of length 1 The call to ToUpper() generates a new upper-case version of the string from Step 1. In my mind this is similar to using ToUpper() to do a case-insensitive compare: it isn’t wrong, it’s just much heavier than it needs to be (for more info on case-insensitive compares, see #2 in 5 More Little Wonders). One of my favorite books is C++ Coding Standards: 101 Rules, Guidelines, and Best Practices by Sutter and Alexandrescu.  True, it’s about C++ standards, but there’s also some great general programming advice in there, including two rules I love:         8. Don’t Optimize Prematurely         9. Don’t Pessimize Prematurely We all know what #8 means: don’t optimize when there is no immediate need, especially at the expense of readability and maintainability.  I firmly believe this and in the axiom: it’s easier to make correct code fast than to make fast code correct.  Optimizing code to the point that it becomes difficult to maintain often gains little and gives you little bang for the buck. But what about #9?  Well, for that they state: “All other things being equal, notably code complexity and readability, certain efficient design patterns and coding idioms should just flow naturally from your fingertips and are no harder to write than the pessimized alternatives. This is not premature optimization; it is avoiding gratuitous pessimization.” Or, if I may paraphrase: “where it doesn’t increase the code complexity and readability, prefer the more efficient option”. The example code above was one of those times I feel where we are violating a tacit C# coding idiom: avoid creating unnecessary temporary strings.  The code creates temporary strings to hold one char, which is just unnecessary.  I think the original coder thought he had to do this because ToUpper() is an instance method on string but not on char.  What he didn’t know, however, is that ToUpper() does exist on char, it’s just a static method instead (though you could write an extension method to make it look instance-ish). This leads me (in a long-winded way) to my Little Wonders for the day… Static Methods of System.Char So let’s look at some of these handy, and often overlooked, static methods on the char type: IsDigit(), IsLetter(), IsLetterOrDigit(), IsPunctuation(), IsWhiteSpace() Methods to tell you whether a char (or position in a string) belongs to a category of characters. 
    IsLower(), IsUpper() Methods that check if a char (or position in a string) is lower or upper case ToLower(), ToUpper() Methods that convert a single char to the lower or upper equivalent. For example, if you wanted to see if a string contained any lower case characters, you could do the following: 1: if (symbol.Any(c => char.IsLower(c))) 2: { 3: // ... 4: } Which, incidentally, we could use a method group to shorten the expression to: 1: if (symbol.Any(char.IsLower)) 2: { 3: // ... 4: } Or, if you wanted to verify that all of the characters in a string are digits: 1: if (symbol.All(char.IsDigit)) 2: { 3: // ... 4: } Also, for the IsXxx() methods, there are overloads that take either a char, or a string and an index; this means that these two calls are logically identical: 1: // check given a character 2: if (char.IsUpper(symbol[0])) { ... } 3:  4: // check given a string and index 5: if (char.IsUpper(symbol, 0)) { ... } Obviously, if you just have a char, then you’d just use the first form.  But if you have a string you can use either form equally well. As a side note, care should be taken when examining all the available static methods on the System.Char type, as some seem to be redundant but actually have very different purposes.  For example, there are IsDigit() and IsNumber() methods, which sound the same on the surface, but give you different results. IsDigit() returns true if it is a base-10 digit character (‘0’, ‘1’, … ‘9’) whereas IsNumber() returns true if it’s any numeric character including the characters for ½, ¼, etc. Summary To come full circle back to our opening example, I would have preferred the code be written like this: 1: // grab 5th char and take upper case version of it 2: var type = char.ToUpper(symbol[4]); 3:  4: if (type == 'P') 5: { 6: // ... do something with P type... 7: } Not only is it just as readable (if not more so), but it performs over 3x faster on my machine:    1,000,000 iterations of char method took: 30 ms, 0.000050 ms/item.    1,000,000 iterations of string method took: 101 ms, 0.000101 ms/item. It’s not only immediately faster because we don’t allocate temporary strings, but as an added bonus there’s less garbage to collect later as well.  To me this qualifies as a case where we are using a common C# performance idiom (don’t create unnecessary temporary strings) to make our code better. Technorati Tags: C#,CSharp,.NET,Little Wonders,char,string

    Read the article

  • Cannot connect puppet agent to puppet master

    - by u123
    I have installed puppet 3.3.1 on a Debian 7 machine (test-puppet-master) and the puppet agent on another Debian 7 machine (test-puppet-agent/192.11.80.246) acting as a client. I start the master with: puppet master --verbose --no-daemonize And I start the agent with: puppet agent --server=test-puppet-master --no-daemonize --verbose The agent reports Notice: Did not receive certificate, and the master gives the following output: Notice: Starting Puppet master version 3.3.1 Error: Could not resolve 192.11.80.246: no name for 192.11.80.246 Info: Inserting default '~ ^/catalog/([^/]+)$' (auth true) ACL Info: Inserting default '~ ^/node/([^/]+)$' (auth true) ACL Info: Inserting default '/file' (auth ) ACL Info: Inserting default '/certificate_revocation_list/ca' (auth true) ACL Info: Inserting default '~ ^/report/([^/]+)$' (auth true) ACL Info: Inserting default '/certificate/ca' (auth any) ACL Info: Inserting default '/certificate/' (auth any) ACL Info: Inserting default '/certificate_request' (auth any) ACL Info: Inserting default '/status' (auth true) ACL Info: Not Found: Could not find certificate test-puppet-agent Error: Could not resolve 192.11.80.246: no name for 192.11.80.246 Info: Not Found: Could not find certificate test-puppet-agent Error: Could not resolve 192.11.80.246: no name for 192.11.80.246 Info: Not Found: Could not find certificate test-puppet-agent Any ideas why the agent cannot connect?
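    The master's errors are failed reverse lookups of the agent's IP, so one minimal fix — assuming DNS can't be changed — is a hosts entry on the master (IP and hostname taken from the output above):

        # /etc/hosts on test-puppet-master
        192.11.80.246   test-puppet-agent

    The repeated "Could not find certificate test-puppet-agent" lines are normal until the agent's CSR is signed; once the agent can submit one, it can be signed on the master with puppet cert sign test-puppet-agent.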

    Read the article

< Previous Page | 339 340 341 342 343 344 345 346 347 348 349 350  | Next Page >