Search Results

Search found 8293 results on 332 pages for '80x24 console'.

  • My Ubuntu with Unity not loading after last reboot

    - by Abonec
    I have an Asus U36SD, and after the last reboot my Ubuntu 11.10 won't start up. I usually suspend the notebook by closing the lid, but today I rebooted it and it is not starting up. Booting proceeds normally up to the login screen, but if I move the mouse cursor the screen immediately switches to a console (without any error; just the normal startup messages) and then back to the login screen. I can type my password and the boot continues, but after a few moments it again switches back to a dark console and then back to the login screen. I can load recovery mode, but if I touch the cursor (with a mouse or the internal notebook touchpad) it again switches back to the console and to the login screen. If I use only the keyboard, it works fine. Where can I see detailed log information about my problem?
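
    A few standard places to start looking, assuming the default LightDM/X stack that shipped with 11.10: /var/log/Xorg.0.log (X server and input-driver errors), ~/.xsession-errors (errors from the user session), /var/log/lightdm/ (the display manager's own logs), and the output of dmesg for kernel-level input or graphics messages.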

    Read the article

  • Is a class holding a static reference to itself an anti-pattern in Prism?

    - by Michael Riva
    I have an application and my design approach looks like this:

    class Manager
    {
        public int State;

        static Manager _instance = null;
        public static Manager Instance
        {
            get { return _instance; }
            set
            {
                if (_instance == value) return;
                _instance = value;
            }
        }

        public Manager()
        {
            State = 0;
            Instance = this;
        }
    }

    class Module1
    {
        public void GetState()
        {
            Console.WriteLine(Manager.Instance.State);
        }
    }

    class Module2
    {
        public void GetState()
        {
            Console.WriteLine(Manager.Instance.State);
        }
    }

    class Module3
    {
        public void GetState()
        {
            Console.WriteLine(Manager.Instance.State);
        }
    }

    The Manager class is already registered in the Bootstrapper like this:

    protected override void ConfigureContainer()
    {
        base.ConfigureContainer();
        Container.RegisterType<Manager>(new ContainerControlledLifetimeManager());
    }

    protected override void InitializeModules()
    {
        Manager man = Container.Resolve<Manager>();
    }

    The question is: do I need to keep a static reference to my manager object in its own field to be able to reach its state? Or is this an anti-pattern, or bad for performance?
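
    Since Manager is registered with ContainerControlledLifetimeManager, the Unity container already treats it as a singleton, so the static self-reference is redundant. One way to avoid it, sketched under the assumption that the modules are themselves resolved through the same container, is plain constructor injection:

    class Module1
    {
        private readonly Manager _manager;

        // Unity supplies the shared Manager instance when resolving Module1
        public Module1(Manager manager)
        {
            _manager = manager;
        }

        public void GetState()
        {
            Console.WriteLine(_manager.State);
        }
    }

    // var module = Container.Resolve<Module1>(); // receives the singleton Manager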

    Read the article

  • How to get output from upstart jobs when logged in via SSH?

    - by Binarus
    Hi, at the moment I am trying to learn upstart and can't get around a basic problem. To monitor what my job definitions are doing, I would like to see text output from the jobs. That does not seem to be possible when I am logged on via SSH. Currently, I am having this problem with Natty 11.04, but I am convinced that it is a more common one. Probably I just don't know about some important, yet very basic, fact. A simple job file I use (filename /etc/init/test.conf):

    description "test"
    start on test
    console owner
    kill timeout 5
    task
    script
        /bin/echo Gotcha...
    end script

    My goal is to see the text "Gotcha..." when doing "initctl emit test" or "initctl start test". But that does not work. What I have tried so far:

    "console output" instead of "console owner"
    "exec /bin/echo Gotcha..." instead of script...end script

    I am grateful for any advice. Thank you very much, Binarus
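
    One workaround sometimes suggested (an assumption, not from the post): with "console owner" or "console output" the job's output goes to the system console, not to the SSH session's tty, so pipe the output into syslog instead and read it there:

    script
        /bin/echo Gotcha... | logger -t test
    end script

    # then, after "initctl start test":
    # grep test /var/log/syslog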

    Read the article

  • C# : Parsing information out of a path

    - by mbcrump
    If you have a path, for example \\MyServer\MyFolderA\MyFolderB\MyFile.jpg, and you need to parse out the file name, the directory name, or the parent directory's name, you should use the FileInfo class instead of a regex or a string split. See the example code below:

    using System;
    using System.IO;

    class Test
    {
        static void Main(string[] args)
        {
            string file = @"\\MyServer\MyFolderA\MyFolderB\MyFile.jpg";
            FileInfo fi = new FileInfo(file);
            Console.WriteLine(fi.Name);                  // Prints MyFile.jpg
            Console.WriteLine(fi.Directory.Name);        // Prints MyFolderB
            Console.WriteLine(fi.Directory.Parent.Name); // Prints MyFolderA
        }
    }
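
    For simple cases the static System.IO.Path helpers avoid allocating a FileInfo at all; a quick sketch using the same path:

    Console.WriteLine(Path.GetFileName(file));                 // MyFile.jpg
    Console.WriteLine(Path.GetFileNameWithoutExtension(file)); // MyFile
    Console.WriteLine(Path.GetExtension(file));                // .jpg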

    Read the article

  • Is JScript dying? If so, where should I go? [closed]

    - by David Is Not Here
    I recently poked around Google for a little bit, looking for information about coding JScript. It's very sparse, which surprised me -- it took a link to a link to find Microsoft's own reference, which appears to omit most if not all references to console-based scripting that extends past JavaScript. I'm working with the console here, not a webpage, so input and output seem very different from what Microsoft explains. If JScript is dying (and it appears to be so), where do I go from here? VBScript? My options are limited because the computers I'm using this on are carefully patrolled for new software. JScript's similarity to JavaScript was the biggest reason I had chosen it for porting over some of my prior work. I'm specifically looking for, at best, a console scripting language that doesn't need any extra software on Windows XP or higher, that supports at least standard input, output, pause, and file manipulation, and little else.
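
    For reference, console-mode JScript under Windows Script Host (cscript.exe, which ships with XP and later) does cover the basics listed; a minimal sketch, with the file name purely illustrative:

    // hello.js -- run with: cscript //nologo hello.js
    WScript.StdOut.Write("What's your name? ");
    var name = WScript.StdIn.ReadLine();        // standard input
    WScript.Echo("Hello, " + name);             // standard output
    WScript.Sleep(1000);                        // pause for one second

    var fso = new ActiveXObject("Scripting.FileSystemObject"); // file manipulation
    var f = fso.CreateTextFile("out.txt", true);
    f.WriteLine("Hello, " + name);
    f.Close();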

    Read the article

  • Given an integer, determine if it is a Palindrome in less than O(n) [on hold]

    - by user134235
    There is an O(n) solution below to the problem of determining whether an integer is a palindrome, where n is the number of digits. Is it possible to solve this problem in O(log n) or better?

    static void IsPalindrome(int number)
    {
        int originalNum = number;
        int reverse = 0;
        int remainder = 0;
        if (number > 0)
        {
            while (number > 0)
            {
                remainder = number % 10;
                reverse = reverse * 10 + remainder;
                number = number / 10;
            }
            if (originalNum == reverse)
                Console.WriteLine("It is a Palindrome");
            else
                Console.WriteLine("It is not a Palindrome");
        }
        else
            Console.WriteLine("Enter Number Again");
    }
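
    Every digit can change the answer, so any correct algorithm must examine all of them; Θ(n) in the digit count is a lower bound. What can be done is to halve the constant factor by reversing only the lower half of the digits and comparing the halves, as in this sketch:

    static bool IsPalindrome(int number)
    {
        if (number < 0) return false;
        if (number != 0 && number % 10 == 0) return false; // a trailing zero can't mirror a leading digit

        int reversed = 0;
        while (number > reversed)
        {
            reversed = reversed * 10 + number % 10;
            number /= 10;
        }
        // Even digit count: halves match exactly; odd: drop the middle digit.
        return number == reversed || number == reversed / 10;
    }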

    Read the article

  • Problem changing my XenServer password

    - by Michlaou
    I am trying to change the root password on my XenServer 6.0. I followed these steps:

    enter menu.c32 at the boot: prompt
    select xe-serial and press Tab
    add "single" before the 2nd set of triple hyphens and press Enter

    I then have this:

    mboot.c32 /boot/xen.gz com1=115200,8n1 console=com1,vga mem=1024G dom0_max_vcpus=4 dom0_mem=752M lowmem_emergency_pool=1M crashkernel=64M@32M single --- /boot/vmlinuz-2.6-xen root=LABEL=root-rodraxar ro console=tty0 xencons=hvc console=hvc0 --- /boot/initrd-2.6-xen.img

    Output scrolls on the screen and it stops at:

    ext3-fs: mounted filesystem with ordered data mode.

    Can you help me?

    Read the article

  • Force screen size when testing embedded DOS app in Windows 7 command window

    - by tomlogic
    I'm doing some embedded DOS development with OpenWatcom (a great Windows-hosted compiler for targeting 16-bit DOS applications). The target hardware has a 24x16 character screen (that supposedly emulates CGA to some degree), and I'm trying to get the CMD.EXE window on my Windows 7 machine to stay at a fixed 24x16 without any scroll bars. I've used both the window properties and MODE CON: COLS=24 LINES=16 to get the screen size that I wanted, but as soon as my application uses an INT 10h BIOS call to clear the screen, the mode jumps back to 80x24. Here's what I'm using to clear the screen:

    void cls(void)
    {
        // Clear screen and reset cursor position to (0,0)
        union REGS regs;
        regs.w.cx = 0;          // Upper left (row 0, column 0)
        regs.w.dx = 0x0F17;     // Lower right: row 15, column 23 (last cell of the 24x16 screen)
        regs.h.bh = 7;          // Blanked lines' attribute (white text on black)
        regs.w.ax = 0x0600;     // AH=06 = scroll up, AL=00 to clear
        int86( 0x10, &regs, &regs );
    }

    Any ideas? I can still do my testing at 80x24 (or 80x25), but it doesn't entirely behave like the 24x16 mode.

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization, but it can't do anything with untyped data coming in from JavaScript, and its overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, XML to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer, and it is now pulled in as a NuGet dependency into Web API projects, which is great.

    Dynamic JSON Parsing

    One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is, without mapping the JSON captured directly into a .NET type as the DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back, before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types.

    Why Dynamic JSON?

    So, why dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services, it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to me (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store them on the business entities that needed to receive the data - no need to map the entire Maps API structure.
    Getting JSON.NET

    The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json from the Package Manager Console, or by using Manage NuGet Packages in your project References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/

    Creating JSON on the fly with JObject and JArray

    Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken-derived JSON.NET objects. The most common JToken-derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and the songs, and JArray for the actual collection of songs:

    [TestMethod]
    public void JObjectOutputTest()
    {
        // strongly typed instance
        var jsonObject = new JObject();

        // you can explicitly add values here using the class interface
        jsonObject.Add("Entered", DateTime.Now);

        // or cast to dynamic to dynamically add/read properties
        dynamic album = jsonObject;

        album.AlbumName = "Dirty Deeds Done Dirt Cheap";
        album.Artist = "AC/DC";
        album.YearReleased = 1976;

        album.Songs = new JArray() as dynamic;

        dynamic song = new JObject();
        song.SongName = "Dirty Deeds Done Dirt Cheap";
        song.SongLength = "4:11";
        album.Songs.Add(song);

        song = new JObject();
        song.SongName = "Love at First Feel";
        song.SongLength = "3:10";
        album.Songs.Add(song);

        Console.WriteLine(album.ToString());
    }

    This produces a complete JSON structure:

    {
      "Entered": "2012-08-18T13:26:37.7137482-10:00",
      "AlbumName": "Dirty Deeds Done Dirt Cheap",
      "Artist": "AC/DC",
      "YearReleased": 1976,
      "Songs": [
        {
          "SongName": "Dirty Deeds Done Dirt Cheap",
          "SongLength": "4:11"
        },
        {
          "SongName": "Love at First Feel",
          "SongLength": "3:10"
        }
      ]
    }

    Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override them with the JsonSerializerSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather, the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime, without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface implemented in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create new JObject or JArray instances, cast them to dynamic, and then add properties and items to them.
    Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:

    // strongly typed instance
    var jsonObject = new JObject();

    // you can explicitly add values here
    jsonObject.Add("Entered", DateTime.Now);

    // expando style instance: you can just 'use' properties
    dynamic album = jsonObject;
    album.AlbumName = "Dirty Deeds Done Dirt Cheap";

    JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily:

    foreach (var item in jsonObject)
    {
        Console.WriteLine(item.Key + " " + item.Value.ToString());
    }

    The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you have used it before, you're already familiar with how the dynamic interfaces to the JSON objects work.

    Importing JSON with JObject.Parse() and JArray.Parse()

    The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:

    public void JValueParsingTest()
    {
        var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",
                            ""Entered"":""2012-03-16T00:03:33.245-10:00""}";

        dynamic json = JValue.Parse(jsonString);

        // values require casting
        string name = json.Name;
        string company = json.Company;
        DateTime entered = json.Entered;

        Assert.AreEqual(name, "Rick");
        Assert.AreEqual(company, "West Wind");
    }

    The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic, I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken, and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way that dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name ) in the actual method call. The JSON structure can be much more complex than this simple example.
    Here's another example of an array of albums serialized to JSON and then parsed through with JArray.Parse():

    [TestMethod]
    public void JsonArrayParsingTest()
    {
        var jsonString = @"[
        {
            ""Id"": ""b3ec4e5c"",
            ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"",
            ""Artist"": ""AC/DC"",
            ""YearReleased"": 1976,
            ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"",
            ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"",
            ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"",
            ""Songs"": [
                { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" },
                { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" },
                { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" }
            ]
        },
        {
            ""Id"": ""7b919432"",
            ""AlbumName"": ""End of the Silence"",
            ""Artist"": ""Henry Rollins Band"",
            ""YearReleased"": 1992,
            ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"",
            ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"",
            ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"",
            ""Songs"": [
                { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" },
                { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" }
            ]
        }
        ]";

        JArray jsonVal = JArray.Parse(jsonString) as JArray;
        dynamic albums = jsonVal;

        foreach (dynamic album in albums)
        {
            Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")");
            foreach (dynamic song in album.Songs)
            {
                Console.WriteLine("\t" + song.SongName);
            }
        }

        Console.WriteLine(albums[0].AlbumName);
        Console.WriteLine(albums[0].Songs[1].SongName);
    }

    JObject and JArray in ASP.NET Web API

    Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first:

    [HttpPost]
    public JObject PostAlbumJObject(JObject jAlbum)
    {
        // dynamic input from inbound JSON
        dynamic album = jAlbum;

        // create a new JSON object to write out
        dynamic newAlbum = new JObject();

        // Create properties on the new instance
        // with values from the first
        newAlbum.AlbumName = album.AlbumName + " New";
        newAlbum.NewProperty = "something new";
        newAlbum.Songs = new JArray();

        foreach (dynamic song in album.Songs)
        {
            song.SongName = song.SongName + " New";
            newAlbum.Songs.Add(song);
        }

        return newAlbum;
    }

    The raw POST request to the server looks something like this:

    POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1
    User-Agent: Fiddler
    Content-type: application/json
    Host: localhost
    Content-Length: 88

    {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]}

    and the output that comes back looks like this:

    {
      "AlbumName": "Dirty Deeds New",
      "NewProperty": "something new",
      "Songs": [
        {
          "SongName": "Problem Child New"
        },
        {
          "SongName": "Squealer New"
        }
      ]
    }

    The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
    When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format, which is nice.

    Dynamic to Strong Type Mapping

    You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the two-album jsonString shown earlier, the code below takes an array of albums, picks out a single album, and casts that album to a static Album instance.

    [TestMethod]
    public void JsonParseToStrongTypeTest()
    {
        JArray albums = JArray.Parse(jsonString) as JArray;

        // pick out one album
        JObject jalbum = albums[0] as JObject;

        // Copy to a static Album instance
        Album album = jalbum.ToObject<Album>();

        Assert.IsNotNull(album);
        Assert.AreEqual(album.AlbumName, jalbum.Value<string>("AlbumName"));
        Assert.IsTrue(album.Songs.Count > 0);
    }

    This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static.

    Strongly typed JSON Parsing

    With all this talk of dynamic, let's not forget that JSON.NET of course also does strongly typed serialization, which is drop-dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET:

    [TestMethod]
    public void StronglyTypedSerializationTest()
    {
        // Create an album instance to serialize
        var album = new Album()
        {
            AlbumName = "Dirty Deeds Done Dirt Cheap",
            Artist = "AC/DC",
            Entered = DateTime.Now,
            YearReleased = 1976,
            Songs = new List<Song>()
            {
                new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" },
                new Song() { SongName = "Love at First Feel", SongLength = "3:10" }
            }
        };

        // serialize to string
        string json2 = JsonConvert.SerializeObject(album, Formatting.Indented);
        Console.WriteLine(json2);

        // make sure we can deserialize it back
        var album2 = JsonConvert.DeserializeObject<Album>(json2);

        Assert.IsNotNull(album2);
        Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap");
        Assert.IsTrue(album2.Songs.Count == 2);
    }

    JsonConvert is a high-level static class that wraps lower-level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with IntelliSense, and that's good because there's not a lot in the way of documentation that's actually useful.

    Summary

    JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
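
    As a footnote to the JsonSerializer mention above, a minimal stream-based sketch (the file path is illustrative, and the Album type is the one from the samples):

    var serializer = new JsonSerializer { Formatting = Formatting.Indented };

    // Serialize to a file via a JsonTextWriter
    using (var sw = new StreamWriter(@"c:\temp\album.json"))
    using (var writer = new JsonTextWriter(sw))
    {
        serializer.Serialize(writer, album);
    }

    // Deserialize back from the same file via a JsonTextReader
    using (var sr = new StreamReader(@"c:\temp\album.json"))
    using (var reader = new JsonTextReader(sr))
    {
        var album2 = serializer.Deserialize<Album>(reader);
    }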
    Resources: Sample Test Project, http://json.codeplex.com/

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in .NET, Web API, AJAX

    Read the article

  • Exploring packages in code

    In my previous post Searching for tasks with code you can see how to explore the control flow side of packages, drilling down through containers, tasks, and event handlers, but it didn't cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package. I took the sample Task Search application I'd written previously, and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself, e.g. Package – MyPackage. The sample package we used last time showed nested objects as well as an event handler: an OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool would look like this:

    PackageObjects v1.0.0.0 (1.0.0.26627)
    Copyright (C) 2009 Konesans Ltd
    Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExecSQLForSearch.dtsx
    Package - EventsAndContainersWithExecSQLForSearch
    For Loop - FOR Counter Loop
    Task - SQL In Counter Loop
    Sequence Container - SEQ For Each Loop Wrapper
    For Each Loop - FEL Simple Loop
    Task - SQL In FEL
    Task - SQL On Pre Execute for FEL SQL Task
    Sequence Container - SEQ Top Level
    Sequence Container - SEQ Nested Lvl 1
    Sequence Container - SEQ Nested Lvl 2
    Task - SQL In Nested Lvl 2
    Task - SQL In Nested Lvl 1 #1
    Task - SQL In Nested Lvl 1 #2
    Connection Manager – LocalHost

    The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task to see if it is a Data Flow task. For connections you just examine the package's Connections collection, as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself.

    // Load the package file
    Application application = new Application();
    using (Package package = application.LoadPackage(filename, null))
    {
        // Write out the package name
        Console.WriteLine("Package - {0}", package.Name);

        ... More ...

        // Look at the connections
        ProcessConnections(package.Connections);
    }

    private static void ProcessConnections(Connections connections)
    {
        foreach (ConnectionManager connectionManager in connections)
        {
            Console.WriteLine("Connection Manager - {0}", connectionManager.Name);
        }
    }

    What we didn't see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task, and if so we can then loop through all of the components inside the data flow.

    private static void ProcessTaskHost(TaskHost taskHost)
    {
        if (taskHost == null)
        {
            return;
        }

        Console.WriteLine("Task - {0}", taskHost.Name);

        // Check if the task is a Data Flow task
        MainPipe pipeline = taskHost.InnerObject as MainPipe;
        if (pipeline != null)
        {
            ProcessPipeline(pipeline);
        }
    }

    private static void ProcessPipeline(MainPipe pipeline)
    {
        foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection)
        {
            Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name);

            // If you wish to make changes to the component then you should really use the managed wrapper.
            // CManagedComponentWrapper wrapper = componentMetadata.Instantiate();
            // wrapper.SetComponentProperty("PropertyName", "Value");
        }
    }

    Hopefully you can see how we get a reference to the Data Flow task, and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you wanted to know more about a component, you could look at the ObjectType or ComponentClassID properties. After that it gets a bit harder, and you should get a reference to the wrapper object as the comment suggests and start using its properties, just like you would in the create-packages samples; see our Code Development category for some of these examples. Download Sample code project PackageObjects.zip (5KB)

    Read the article

  • How do I set the title of Terminal.app with the fish shell?

    - by lorin
    I'm trying the fish shell in Mac OS X, installed using MacPorts. I'd like the title of my Terminal window to be my current directory. Currently, the title just says Terminal - fish - 80x24. According to the fish documentation, the default fish_title function should provide this behavior. It doesn't do the right thing in Terminal.app, although it does work with iTerm. Defining my own fish_title function doesn't fix the problem. Has anybody been able to get this to work?
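
    For reference, a user-defined override would look like the following (a sketch; fish picks it up from ~/.config/fish/config.fish or ~/.config/fish/functions/fish_title.fish):

    function fish_title
        echo (pwd)
    end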

    Read the article

  • Asynchrony in C# 5: Dataflow Async Logger Sample

    - by javarg
    Check out these (very simple) code examples for TPL Dataflow. Suppose you are developing an async logger to register application events to different sinks or log writers. The logger architecture would be as follows: note how blocks can be composed to achieve the desired behavior. The BufferBlock<T> is the pool of log entries to be processed, whereas the linked ActionBlock<TInput> blocks represent the log writers or sinks. This composition allows only one ActionBlock to consume each entry at a time. Implementation code would be something similar to this (add a reference to System.Threading.Tasks.Dataflow.dll in %User Documents%\Microsoft Visual Studio Async CTP\Documentation):

    TPL Dataflow Logger

    var bufferBlock = new BufferBlock<Tuple<LogLevel, string>>();

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    Note the filter applied to each link (in this case, the logging level selects the writer used). We can specify message filters using Predicate functions on each link. Now, the previous sample is useless for a logger, since logging levels are not exclusive (thus, several writers could be used to process a single message). Let's use a BroadcastBlock<T> instead of a BufferBlock<T>.

    Broadcast Logger

    var bufferBlock = new BroadcastBlock<Tuple<LogLevel, string>>(
        e => new Tuple<LogLevel, string>(e.Item1, e.Item2));

    ActionBlock<Tuple<LogLevel, string>> infoLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Info: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> errorLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("Error: {0}", e.Item2));

    ActionBlock<Tuple<LogLevel, string>> allLogger =
        new ActionBlock<Tuple<LogLevel, string>>(
            e => Console.WriteLine("All: {0}", e.Item2));

    bufferBlock.LinkTo(infoLogger, e => (e.Item1 & LogLevel.Info) != LogLevel.None);
    bufferBlock.LinkTo(errorLogger, e => (e.Item1 & LogLevel.Error) != LogLevel.None);
    bufferBlock.LinkTo(allLogger, e => (e.Item1 & LogLevel.All) != LogLevel.None);

    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Info, "info message"));
    bufferBlock.Post(new Tuple<LogLevel, string>(LogLevel.Error, "error message"));

    As this block copies the message to all its outputs, we need to define the copy function in the block constructor. In this case we create a new Tuple, but you can always use the identity function if passing the same reference to every output. Try both scenarios and compare the results.
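
    The samples assume a flags-style LogLevel enum that the post never shows; a definition that would make the filters above compile could look like this (an assumption, not part of the original post):

    [Flags]
    public enum LogLevel
    {
        None = 0,
        Info = 1,
        Error = 2,
        All = Info | Error
    }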

    Read the article

  • Bacula Windows client cannot connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network the machines can see each other, share files, and ping one another. On the server itself I can run bconsole and the director responds, but when I run bconsole or bat on the Windows client the director does not respond. Here are my configuration files:

    bacula-dir.conf: # # Default Bacula Director Configuration file # # The only thing that MUST be changed is to add one or more # file or directory names in the Include directive of the # FileSet resource. # # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid # # You might also want to change the default email address # from root to your address. See the "mail" and "operator" # directives in the Messages resource. # Director { # define myself Name = nima-desktop-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 1 Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" # Console password Messages = Daemon DirAddress = 127.0.0.1 # DirAddress = 72.16.208.1 } JobDefs { Name = "DefaultJob" Type = Backup Level = Incremental Client = nima-desktop-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultJob" } #Job { # Name = "BackupClient2" # Client = nima-desktop2-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=nima-desktop-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /nonexistant/path/to/file/archive/dir/bacula-restores } # job for vmware windows host Job { Name = "nimaxp-fd" Type = Backup Client = nimaxp-fd FileSet = "nimaxp-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # job for vmware windows host Job { Name = "arg-michael-fd" Type = Backup Client = nimaxp-fd FileSet = "arg-michael-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } # # Put your list of files here, preceded by 'File =', one per line # or include an external list with: # # File = <file-name # # Note: / backs up everything on the root partition. # if you have other partitions such as /usr or /home # you will probably want to add them too.
# # By default this is defined to point to the Bacula binary # directory to give a reasonable FileSet to backup to # disk storage during initial testing. # File = /usr/sbin } # # If you backup the root directory, the following two excluded # files can be useful # Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "nimaxp-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # List of files to be backed up FileSet { Name = "arg-michael-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # # When to do the backups, full backup on first sunday of the month, # differential (i.e. incremental since full) every other sunday, # and incremental backups other days Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = nima-desktop-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = nimaxp-fd Address = nimaxp FDPort = 9102 Catalog = MyCatalog Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = arg-michael-fd Address = 192.168.0.61 FDPort = 9102 Catalog = MyCatalog Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = nima-desktop2-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" Device = FileStorage Media Type = File } # Definition of DDS tape storage device #Storage { # Name = DDS-4 # Do not use "localhost" here # Address = localhost # N.B. 
Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # password for Storage daemon # Device = DDS-4 # must be same as Device in Storage daemon # Media Type = DDS-4 # must be same as MediaType in Storage daemon # Autochanger = yes # enable for autochanger device #} # Definition of 8mm tape storage device #Storage { # Name = "8mmDrive" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "Exabyte 8mm" # MediaType = "8mm" #} # Definition of DVD storage device #Storage { # Name = "DVD" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "DVD Writer" # MediaType = "DVD" #} # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; dbuser = ""; dbpassword = "" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard # # NOTE! If you send to two email or more email addresses, you will need # to replace the %r in the from field (-f part) with a single valid # email address in both the mailcommand and the operatorcommand. # What this does is, it sets the email address that emails would display # in the FROM field, which is by default the same email as they're being # sent to. However, if you send email to more than one address, then # you'll have to set the FROM address manually, to a single address. # for example, a '[email protected]', is better since that tends to # tell (most) people that its coming from an automated source. # mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = nima-desktop-mon Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c" CommandACL = status, .status } bacula-fd.conf on client: # # Default Bacula File Daemon Configuration file # # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32 # # There is not much to change here except perhaps the # File daemon Name # # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = nimaxp-fd FDport = 9102 # where we listen for the director WorkingDirectory = "C:\\Program Files\\Bacula\\working" Pid Directory = "C:\\Program Files\\Bacula\\working" # Plugin Directory = "C:\\Program Files\\Bacula\\plugins" Maximum Concurrent Jobs = 10 } # # List Directors who are permitted to contact this File daemon # Director { Name = Nima-desktop-dir Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = nimaxp-mon Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3" Monitor = yes } # Send all messages except skipped files back to Director Messages { Name = Standard director = Nima-desktop = all, !skipped, !restored } I have checked my firewall and disabled the firewall but it doesn't work.
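
    Two things worth checking in the configs above (assumptions to verify, not a confirmed diagnosis): the director has DirAddress = 127.0.0.1, which makes it listen on the loopback interface only, so no remote client can reach port 9101; commenting that line out or binding the LAN address, then testing with "telnet <server> 9101" from the Windows machine, would be a reasonable first step. Also note the case difference between Name = nima-desktop-dir in bacula-dir.conf and Name = Nima-desktop-dir in the client's Director resource; Bacula resource names are case-sensitive, so the two should match exactly.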

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    Simple and safe data connections.

    This guide is for someone wanting to use the latest ODP.NET quickly without reading the official documentation. This guide will get you up and running in about 15 minutes. I reviewed the referral traffic to my simple Setting up ODP.NET with Win7 x64 post and noticed most people were searching for one of the following terms:

    "how to use odp.net with vs"
    "setup connection odp.net"
    "query db using odp and vs"

    While my article provided links and a sample tnsnames.ora file, it really didn't tell you how to use it. I'm hoping that this brief tutorial will help. So before we get started, you will need the following:

    Download the following from Oracle and install it: www.oracle.com/technology/software/tech/dotnet/utilsoft.html. It is the first one on the page.
    Visual Studio 2008 or 2010.

    It should be noted that the System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore, as it has been deprecated. The latest version, which is what we are using, is Oracle.DataAccess.Client. First things first, add a reference to Oracle.DataAccess.Client after you install ODP.NET.

    Copy and paste the following C# code into your project, replace the relevant info including the query string, and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

    using System;
    using System.Data;
    using Oracle.DataAccess.Client;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    // Set up the data source
                    string oradb = "Data Source=(DESCRIPTION ="
                                 + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                 + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME))) ;"
                                 + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                    // Open connection to Oracle - this could be moved outside the try.
                    OracleConnection conn = new OracleConnection(oradb);
                    conn.Open();

                    // Create cmd and use parameters to prevent SQL injection attacks.
                    OracleCommand cmd = new OracleCommand();
                    cmd.Connection = conn;
                    cmd.CommandText = "select username from table where username = :username";

                    string username = "USER"; // the value to look up (added here; it was undefined in the original listing)

                    OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                    p1.Value = username;
                    cmd.Parameters.Add(p1);

                    cmd.CommandType = CommandType.Text;

                    OracleDataReader dr = cmd.ExecuteReader();
                    dr.Read();

                    // Contains the value of the data row
                    Console.WriteLine(dr["username"].ToString());

                    // Dispose of the objects.
                    dr.Dispose();
                    cmd.Dispose();
                    conn.Dispose();
                }
                catch (OracleException ex) // Catches only Oracle errors
                {
                    switch (ex.Number)
                    {
                        case 1:
                            Console.WriteLine("Error attempting to insert duplicate data.");
                            break;
                        case 12545:
                            Console.WriteLine("The database is unavailable.");
                            break;
                        default:
                            Console.WriteLine(ex.Message.ToString());
                            break;
                    }
                }
                catch (Exception ex) // Catches any error not previously caught
                {
                    Console.WriteLine("Unidentified Error: " + ex.Message.ToString());
                }
            }
        }
    }

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble, drop me a line and I will be happy to assist. As of this writing, Oracle has announced the latest beta release of ODP.NET, 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while as it's a BETA, and I wouldn't want any production code using BETA software.
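
    As a side note, wrapping the connection, command, and reader in using blocks guarantees disposal even when an exception is thrown, instead of calling Dispose manually. A sketch of the same query (the table and variable names are illustrative):

    using (OracleConnection conn = new OracleConnection(oradb))
    using (OracleCommand cmd = new OracleCommand("select username from table where username = :username", conn))
    {
        conn.Open();
        cmd.Parameters.Add(new OracleParameter("username", OracleDbType.Varchar2) { Value = username });

        using (OracleDataReader dr = cmd.ExecuteReader())
        {
            if (dr.Read())
                Console.WriteLine(dr["username"]);
        }
    }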

    Read the article

  • With a little effort you can "SEMI"-protect your C# assemblies with obfuscation.

    - by mbcrump
    This method will not protect your assemblies from an experienced hacker. Every day we see new keygens, cracks, and serials being released that contain ways around the copy protection of small companies. This is a simple process that will make a lot of hackers quit, simply because so many others use nothing. If you were a thief, would you pick the house that has security signs and an alarm, or the one that has nothing? So to begin: obfuscation is the concealment of meaning in communication, making it confusing and harder to interpret. Let's begin by looking at the cartoon below:

    You are probably familiar with the term and probably ignored it, like most programmers ignore user security. Today, I'm going to show you reflection and a way to obfuscate your code against it. Please understand that I am aware of ways around this, but I believe some security is better than no security. In the sample program below, the code appears exactly as it does in Visual Studio. When the program runs, you get either a true or false in a console window.

    Sample Program:

    using System;
    using System.Diagnostics;
    using System.Linq;

    namespace ObfuscateMe
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine(IsProcessOpen("notepad")); // Returns True or False depending on whether you have notepad running.
                Console.ReadLine();
            }

            public static bool IsProcessOpen(string name)
            {
                return Process.GetProcesses().Any(clsProcess => clsProcess.ProcessName.Contains(name));
            }
        }
    }

    Pretend that this is a commercial application. The hacker will only have the executable and maybe a few config files, etc. After reviewing the executable, he can determine whether it was produced in .NET by examining the file in ILDASM or Red Gate's Reflector. We are going to examine the file using Red Gate's Reflector. Upon launch, we simply drag/drop the exe onto the application. We have the following for the Main method, and for the IsProcessOpen method. Without any other knowledge of how this works, the hacker could export the exe to a VS project build, or copy this code in, and our application would run.

    Using Reflector output:

    using System;
    using System.Diagnostics;
    using System.Linq;

    namespace ObfuscateMe
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine(IsProcessOpen("notepad"));
                Console.ReadLine();
            }

            public static bool IsProcessOpen(string name)
            {
                return Process.GetProcesses().Any<Process>(delegate(Process clsProcess)
                {
                    return clsProcess.ProcessName.Contains(name);
                });
            }
        }
    }

    The code is not identical, but returns the same value. At this point, with a little bit of effort, you could prevent the hacker from reverse engineering your code so quickly by using Eazfuscator.NET. Eazfuscator.NET is just one of many programs built for this. Visual Studio ships with a community version of Dotfuscator. So download and launch Eazfuscator.NET and drag/drop your executable/project into the window. It will run for a few minutes, depending on your hardware. After it finishes, open the executable in Red Gate Reflector and you will get the following for Main after obfuscation, and for the IsProcessOpen method after obfuscation. As you can see from the jumbled characters, it is not as easy as the first example.
I am aware of methods around this, but it takes more effort and unless the hacker is up for the challenge, they will just pick another program. This is also helpful if you are a consultant and make clients pay a yearly license fee. This would prevent the average software developer from jumping into your security routine after you have left. I hope this article helped someone. If you have any feedback, please leave it in the comments below.

    Read the article

  • An observation on .NET loops – foreach, for, while, do-while

    It’s very common for .NET programmers to use the “foreach” loop when iterating through collections. Following is my observation from testing a simple scenario with loops: in my test, the “for” loop was about 30% faster than “foreach”, and the “while” loop about 50% faster than “foreach”; “do-while” was slightly faster than “while”. You may wonder what difference it makes if you iterate only 1000 times in a loop. This test case is only for simple iteration. According to data structure concepts, best and worst cases depend entirely on the data we provide to the algorithm, so we cannot conclude that “foreach” is bad. All I want to say is that we need to be a little cautious even when choosing loops. For example, you might choose quicksort when you want to sort many numbers, while bubble sort may be more effective than quicksort when you want to sort very few. Take a simple scenario: a request in a simple web application fetches 10000 (10K) rows and iterates over them for some business logic. Now suppose this application is accessed by 1000 (1K) people simultaneously. In this simple scenario you end up with 10000000 (10 million, or 1 crore) iterations. Below is the test scenario, a simple console application that iterates over 100 million items:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                var sw = new Stopwatch();
                var numbers = GetSomeNumbers();

                sw.Start();
                foreach (var item in numbers) { }
                sw.Stop();
                Console.WriteLine(
                    String.Format("\"foreach\" took {0} milliseconds", sw.ElapsedMilliseconds));

                sw.Reset();
                sw.Start();
                for (int i = 0; i < numbers.Count; i++) { }
                sw.Stop();
                Console.WriteLine(
                    String.Format("\"for\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                sw.Reset();
                sw.Start();
                var it = 0;
                while (it++ < numbers.Count) { }
                sw.Stop();
                Console.WriteLine(
                    String.Format("\"while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));

                sw.Reset();
                sw.Start();
                var it2 = 0;
                do { } while (it2++ < numbers.Count);
                sw.Stop();
                Console.WriteLine(
                    String.Format("\"do-while\" loop took {0} milliseconds", sw.ElapsedMilliseconds));
            }

            #region Get me 10 crore (100 million) numbers
            private static List<int> GetSomeNumbers()
            {
                var lstNumbers = new List<int>();
                var count = 100000000;
                for (var i = 1; i <= count; i++)
                {
                    lstNumbers.Add(i);
                }
                return lstNumbers;
            }
            #endregion
        }
    }

    In the above example, I was just iterating through 100 million numbers. You can see the time taken to execute the various loops provided in .NET:

    Output

    "foreach" took 1108 milliseconds
    "for" loop took 727 milliseconds
    "while" loop took 596 milliseconds
    "do-while" loop took 594 milliseconds

    Press any key to continue . . .

    So I feel we need to be careful when choosing a looping strategy. Please comment with your thoughts.

    Read the article

  • Dependency Injection Introduction

    - by MarkPearl
    I recently was going over a great book called “Dependency Injection in .NET” by Mark Seemann. So far I have really enjoyed the book and would recommend it to anyone looking to get into DI. Today I thought I would blog about the first example Mark gives in his book to illustrate some of the benefits that DI provides. The ones he lists are: late binding, extensibility, parallel development, maintainability, and testability. To illustrate some of these benefits he gives a HelloWorld example using DI that demonstrates the basic principles. It goes something like this…

        class Program
        {
            static void Main(string[] args)
            {
                var writer = new ConsoleMessageWriter();
                var salutation = new Salutation(writer);
                salutation.Exclaim();
                Console.ReadLine();
            }
        }

        public interface IMessageWriter
        {
            void Write(string message);
        }

        public class ConsoleMessageWriter : IMessageWriter
        {
            public void Write(string message)
            {
                Console.WriteLine(message);
            }
        }

        public class Salutation
        {
            private readonly IMessageWriter _writer;

            public Salutation(IMessageWriter writer)
            {
                _writer = writer;
            }

            public void Exclaim()
            {
                _writer.Write("Hello World");
            }
        }

    If you had asked me a few years ago whether I thought this was a good approach to solving the HelloWorld problem, I would have responded “No”. How could the above be better than the following?

        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello World");
                Console.ReadLine();
            }
        }

    Today, my mindset has changed because of the pain of past projects. So often we look at a small snippet of code and make judgements, when we need to keep in mind that we will most probably be implementing these patterns in projects with hundreds of thousands of lines of code, and in projects with tests that we don't want to break; that's where the first solution outshines the latter. Let's see if the first example achieves some of the outcomes that were listed as benefits of DI.

    Could I test the first solution easily? Yes. We could write something like the following using NUnit and Rhino Mocks…

        [TestFixture]
        public class SalutationTests
        {
            [Test]
            public void ExclaimWillWriteCorrectMessageToMessageWriter()
            {
                var writerMock = MockRepository.GenerateMock<IMessageWriter>();
                var sut = new Salutation(writerMock);
                sut.Exclaim();
                writerMock.AssertWasCalled(x => x.Write("Hello World"));
            }
        }

    This would test the existing code fine. Let's say we then wanted to extend the original solution so that we had a secure message writer. We could write a class like the following…

        public class SecureMessageWriter : IMessageWriter
        {
            private readonly IMessageWriter _writer;
            private readonly string _secretPassword;

            public SecureMessageWriter(IMessageWriter writer, string secretPassword)
            {
                _writer = writer;
                _secretPassword = secretPassword;
            }

            public void Write(string message)
            {
                if (_secretPassword == "Mark")
                {
                    _writer.Write(message);
                }
                else
                {
                    _writer.Write("Unauthenticated");
                }
            }
        }

    And then extend our implementation of the program as follows…

        class Program
        {
            static void Main(string[] args)
            {
                var writer = new SecureMessageWriter(new ConsoleMessageWriter(), "Mark");
                var salutation = new Salutation(writer);
                salutation.Exclaim();
                Console.ReadLine();
            }
        }

    Our application has now been successfully extended, and yet we changed very little code. In addition, our existing tests did not break, and we would just need to add tests for the extended functionality. Would this approach allow parallel development? I am of two minds on parallel development, but with some planning ahead it would allow for it: you simply need to agree on the interface signature, and teams can then develop different sections programming to that interface. So, this was really just a quick intro to some of the basic concepts of DI that Mark introduces very successfully in his book. I am hoping to blog about this further as I continue through the book, and to cover some of the more complex implementations of containers.

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming, and I recently updated it from 11.04 to 11.10. There are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behavior, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings. I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab, and type commands in a console window while still being able to read the text on the web page under it. Now I have problems with my usual work style because of the following.

    List of "negative" changes:

    1. Alt+Tab shows just one icon for all console windows. If I wait some time it does show all windows, but I don't like to wait; I prefer to remember the order of windows and press Alt+Tab as many times as needed to switch to the right one.
    2. Alt+Shift+Tab to switch in reverse order doesn't work now.
    3. Console windows are not transparent any more.
    4. When I don't wait and switch to the grouped icon, it shows all console windows at once, so even if they were transparent I wouldn't be able to see anything below them (I can read something only from the window directly under the current one, not a few levels down).
    5. When I run a few console windows in Unity, I had 740MB used on Ubuntu 11.04, but I have 1050MB now. The question is how to get that back to 750MB or less. I really need my memory, since I use my computer to work with 1512MB of data and I try to save every 10MB possible (if it doesn't take too much of the machine's time and, more importantly, my time).
    6. When I press the Super key I get a field to type the name of the program I want to run. Now it sometimes shows this field, but nothing happens when I try to type; probably focus is not on the right field.

    I don't really mean to restore exactly the same behavior, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are ways to accomplish that.

    What have I tried:

    - I have installed CompizConfig Settings Manager.
    - I have read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it reports key-binding conflicts with "Ubuntu Unity Plugin"; Alt+Tab switching doesn't change, but Shift+Alt+Tab now works and shows all windows; and memory usage increases.
    - I have tried turning off Ubuntu Unity Plugin, but this doesn't seem the right thing to do, since it appears to turn off all menus, a lot of keystrokes, and the app launcher, which usually activates with the Super key.
    - I have found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin under Accessibility. However, I don't know if enabling it is the right thing to do (at least it increases memory usage).

    Update: everything solved but #3; see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • Using AES encryption in .NET - CryptographicException saying the padding is invalid and cannot be removed

    - by Jake Petroules
    I wrote some AES encryption code in C# and I am having trouble getting it to encrypt and decrypt properly. If I enter "test" as the passphrase and "This data must be kept secret from everyone!" as the data, I receive the following exception:

        System.Security.Cryptography.CryptographicException: Padding is invalid and cannot be removed.
           at System.Security.Cryptography.RijndaelManagedTransform.DecryptData(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount, Byte[]& outputBuffer, Int32 outputOffset, PaddingMode paddingMode, Boolean fLast)
           at System.Security.Cryptography.RijndaelManagedTransform.TransformFinalBlock(Byte[] inputBuffer, Int32 inputOffset, Int32 inputCount)
           at System.Security.Cryptography.CryptoStream.FlushFinalBlock()
           at System.Security.Cryptography.CryptoStream.Dispose(Boolean disposing)
           at System.IO.Stream.Close()
           at System.IO.Stream.Dispose()
           ...

    And if I enter something less than 16 characters, I get no output. I believe I need some special handling in the encryption since AES is a block cipher, but I'm not sure exactly what that is, and I wasn't able to find any examples on the web showing how. Here is my code:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        public static class DatabaseCrypto
        {
            public static EncryptedData Encrypt(string password, string data)
            {
                return DatabaseCrypto.Transform(true, password, data, null, null) as EncryptedData;
            }

            public static string Decrypt(string password, EncryptedData data)
            {
                return DatabaseCrypto.Transform(false, password, data.DataString, data.SaltString, data.MACString) as string;
            }

            private static object Transform(bool encrypt, string password, string data, string saltString, string macString)
            {
                using (AesManaged aes = new AesManaged())
                {
                    aes.Mode = CipherMode.CBC;
                    aes.Padding = PaddingMode.PKCS7;

                    int key_len = aes.KeySize / 8;
                    int iv_len = aes.BlockSize / 8;
                    const int salt_size = 8;
                    const int iterations = 8192;

                    byte[] salt = encrypt ? new Rfc2898DeriveBytes(string.Empty, salt_size).Salt : Convert.FromBase64String(saltString);
                    byte[] bc_key = new Rfc2898DeriveBytes("BLK" + password, salt, iterations).GetBytes(key_len);
                    byte[] iv = new Rfc2898DeriveBytes("IV" + password, salt, iterations).GetBytes(iv_len);
                    byte[] mac_key = new Rfc2898DeriveBytes("MAC" + password, salt, iterations).GetBytes(16);

                    aes.Key = bc_key;
                    aes.IV = iv;

                    byte[] rawData = encrypt ? Encoding.UTF8.GetBytes(data) : Convert.FromBase64String(data);

                    using (ICryptoTransform transform = encrypt ? aes.CreateEncryptor() : aes.CreateDecryptor())
                    using (MemoryStream memoryStream = encrypt ? new MemoryStream() : new MemoryStream(rawData))
                    using (CryptoStream cryptoStream = new CryptoStream(memoryStream, transform, encrypt ? CryptoStreamMode.Write : CryptoStreamMode.Read))
                    {
                        if (encrypt)
                        {
                            cryptoStream.Write(rawData, 0, rawData.Length);
                            return new EncryptedData(salt, mac_key, memoryStream.ToArray());
                        }
                        else
                        {
                            byte[] originalData = new byte[rawData.Length];
                            int count = cryptoStream.Read(originalData, 0, originalData.Length);
                            return Encoding.UTF8.GetString(originalData, 0, count);
                        }
                    }
                }
            }
        }

        public class EncryptedData
        {
            public EncryptedData() { }

            public EncryptedData(byte[] salt, byte[] mac, byte[] data)
            {
                this.Salt = salt;
                this.MAC = mac;
                this.Data = data;
            }

            public EncryptedData(string salt, string mac, string data)
            {
                this.SaltString = salt;
                this.MACString = mac;
                this.DataString = data;
            }

            public byte[] Salt { get; set; }
            public string SaltString
            {
                get { return Convert.ToBase64String(this.Salt); }
                set { this.Salt = Convert.FromBase64String(value); }
            }

            public byte[] MAC { get; set; }
            public string MACString
            {
                get { return Convert.ToBase64String(this.MAC); }
                set { this.MAC = Convert.FromBase64String(value); }
            }

            public byte[] Data { get; set; }
            public string DataString
            {
                get { return Convert.ToBase64String(this.Data); }
                set { this.Data = Convert.FromBase64String(value); }
            }
        }

        static void ReadTest()
        {
            Console.WriteLine("Enter password: ");
            string password = Console.ReadLine();
            using (StreamReader reader = new StreamReader("aes.cs.txt"))
            {
                EncryptedData enc = new EncryptedData();
                enc.SaltString = reader.ReadLine();
                enc.MACString = reader.ReadLine();
                enc.DataString = reader.ReadLine();
                Console.WriteLine("The decrypted data was: " + DatabaseCrypto.Decrypt(password, enc));
            }
        }

        static void WriteTest()
        {
            Console.WriteLine("Enter data: ");
            string data = Console.ReadLine();
            Console.WriteLine("Enter password: ");
            string password = Console.ReadLine();
            EncryptedData enc = DatabaseCrypto.Encrypt(password, data);
            using (StreamWriter stream = new StreamWriter("aes.cs.txt"))
            {
                stream.WriteLine(enc.SaltString);
                stream.WriteLine(enc.MACString);
                stream.WriteLine(enc.DataString);
                Console.WriteLine("The encrypted data was: " + enc.DataString);
            }
        }
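    One likely cause of both symptoms, offered as a sketch (assuming the rest of the code stays as posted), is that the encrypt branch calls memoryStream.ToArray() before the CryptoStream has written its final padded block. CryptoStream only emits the PKCS7 padding block when it is flushed or closed, which would explain the empty output for inputs under 16 characters and the padding error on decrypt:

        if (encrypt)
        {
            cryptoStream.Write(rawData, 0, rawData.Length);
            cryptoStream.FlushFinalBlock(); // force the final PKCS7-padded block out before reading the buffer
            return new EncryptedData(salt, mac_key, memoryStream.ToArray());
        }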

    Read the article

  • Is there a way to recover from an Exception in Directory.EnumerateFiles?

    - by Magnus Johansson
    In .NET 4, there's this Directory.EnumerateFiles() method with recursion that seems handy. However, if an exception occurs during the recursion, how can I recover from it and continue enumerating the rest of the files?

        try
        {
            var files = from file in Directory.EnumerateFiles(@"c:\", "*.*", SearchOption.AllDirectories)
                        select new { File = file };
            Console.WriteLine(files.Count().ToString());
        }
        catch (UnauthorizedAccessException uEx)
        {
            Console.WriteLine(uEx.Message);
        }
        catch (PathTooLongException ptlEx)
        {
            Console.WriteLine(ptlEx.Message);
        }
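    A common workaround, sketched here with illustrative names rather than anything from the question, is to walk one directory at a time so that an access-denied or too-long path only skips that directory instead of aborting the whole query:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class SafeWalker
        {
            public static IEnumerable<string> EnumerateFilesSafe(string root, string pattern)
            {
                var pending = new Queue<string>();
                pending.Enqueue(root);
                while (pending.Count > 0)
                {
                    string dir = pending.Dequeue();
                    string[] files = null;
                    try
                    {
                        // Touch each directory individually so one failure is contained.
                        files = Directory.GetFiles(dir, pattern);
                        foreach (string sub in Directory.GetDirectories(dir))
                            pending.Enqueue(sub);
                    }
                    catch (UnauthorizedAccessException) { /* skip directories we cannot read */ }
                    catch (PathTooLongException) { /* skip paths the API cannot represent */ }

                    if (files != null)
                        foreach (string file in files)
                            yield return file; // yield outside the try block, as C# requires
                }
            }
        }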

    Read the article

  • CakePHP Shell issue with my directory structure

    - by keslert
    I have structured my Cake files in the following way:

        /cakes
            /2_3
                /app
                    /Console
                        /cake.php
        /apps
            /www.website1.com
                /Console
                    /HelloShell.php
                /Controllers
                /etc...
            /www.website2.com
                /Console
                /Controllers
                /etc...

    I am getting the error "Error: Shell class HelloShell could not be found." I have added "cakes/2_3/app/Console" to my PATH, and I am running cake from inside www.website1.com.

    Current Paths:

        app:     www.website1.com
        working: apps\www.website1.com
        root:    apps
        core:    cakes\2_3\lib

    Any ideas on what I am doing wrong?

    Read the article

  • C# getting mouse coordinates from a tab page when selected

    - by Tom
    I wish to show the mouse coordinates only when I am viewing tabPage7. So far I have:

        this.tabPage7.MouseMove += new System.Windows.Forms.MouseEventHandler(this.OnMouseMove);

        protected void OnMouseMove(object sender, MouseEventArgs mouseEv)
        {
            Console.WriteLine("happening");
            Console.WriteLine(mouseEv.X.ToString());
            Console.WriteLine(mouseEv.Y.ToString());
        }

    but this doesn't appear to be doing anything. Could someone show me what I'm doing wrong, please?
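    One thing to check (the sketch below assumes the parent TabControl is named tabControl1, which the question doesn't state) is that a TabPage only raises MouseMove over its own empty surface; any child control sitting on top receives the event instead. Guarding on the selected tab, and wiring the same handler onto the child controls if needed, usually does the trick:

        this.tabPage7.MouseMove += this.OnMouseMove;

        protected void OnMouseMove(object sender, MouseEventArgs mouseEv)
        {
            // Only log while tabPage7 is actually the visible tab.
            if (this.tabControl1.SelectedTab == this.tabPage7)
            {
                Console.WriteLine("X={0}, Y={1}", mouseEv.X, mouseEv.Y);
            }
        }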

    Read the article

  • cordova 2.0.0 download file Android

    - by N0rA
    I tried to use the Cordova 2.0.0 FileTransfer().download() API and I got two consecutive errors:

    1) download error source file
    2) download error target file

    Here is my code:

        function downloadMaterial() {
            var fileTransfer = new FileTransfer();
            var serverURL = 'http://img.youm7.com/images/NewsPics/large/S2200921125437.jpg';
            var uri = encodeURI(serverURL);
            var filePath = persistent_root.fullPath + "/" + fileName;
            // filePath = file:///data/data/com.example.studyapp/S2200921125437.jpg

            var onSuccess = function (entry) {
                console.log("download complete: " + entry.fullPath);
            };

            var onError = function (error) {
                console.log("download error source " + error.source);
                console.log("download error target " + error.target);
                console.log("upload error code " + error.code);
            };

            fileTransfer.download(uri, filePath, onSuccess, onError);
        }

    Do you have any idea what I should do? Thanks in advance.

    Read the article

  • Why does IHttpAsyncHandler leak memory under load?

    - by Anton
    I have noticed that the .NET IHttpAsyncHandler (and the IHttpHandler, to a lesser degree) leak memory when subjected to concurrent web requests. In my tests, the development web server (Cassini) jumps from 6MB memory to over 100MB, and once the test is finished, none of it is reclaimed. The problem can be reproduced easily. Create a new solution (LeakyHandler) with two projects:

    - An ASP.NET web application (LeakyHandler.WebApp)
    - A console application (LeakyHandler.ConsoleApp)

    In LeakyHandler.WebApp:

    1. Create a class called TestHandler that implements IHttpAsyncHandler. In the request processing, do a brief Sleep and end the response.
    2. Add the HTTP handler to Web.config as test.ashx.

    In LeakyHandler.ConsoleApp:

    1. Generate a large number of HttpWebRequests to test.ashx and execute them asynchronously.

    As the number of HttpWebRequests (sampleSize) is increased, the memory leak is made more and more apparent.

    LeakyHandler.WebApp TestHandler.cs

        namespace LeakyHandler.WebApp
        {
            public class TestHandler : IHttpAsyncHandler
            {
                #region IHttpAsyncHandler Members

                private ProcessRequestDelegate Delegate { get; set; }
                public delegate void ProcessRequestDelegate(HttpContext context);

                public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
                {
                    Delegate = ProcessRequest;
                    return Delegate.BeginInvoke(context, cb, extraData);
                }

                public void EndProcessRequest(IAsyncResult result)
                {
                    Delegate.EndInvoke(result);
                }

                #endregion

                #region IHttpHandler Members

                public bool IsReusable
                {
                    get { return true; }
                }

                public void ProcessRequest(HttpContext context)
                {
                    Thread.Sleep(10);
                    context.Response.End();
                }

                #endregion
            }
        }

    LeakyHandler.WebApp Web.config

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="false" />
            <httpHandlers>
              <add verb="POST" path="test.ashx" type="LeakyHandler.WebApp.TestHandler" />
            </httpHandlers>
          </system.web>
        </configuration>

    LeakyHandler.ConsoleApp Program.cs

        namespace LeakyHandler.ConsoleApp
        {
            class Program
            {
                private static int sampleSize = 10000;
                private static int startedCount = 0;
                private static int completedCount = 0;

                static void Main(string[] args)
                {
                    Console.WriteLine("Press any key to start.");
                    Console.ReadKey();

                    string url = "http://localhost:3000/test.ashx";
                    for (int i = 0; i < sampleSize; i++)
                    {
                        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                        request.Method = "POST";
                        request.BeginGetResponse(GetResponseCallback, request);
                        Console.WriteLine("S: " + Interlocked.Increment(ref startedCount));
                    }
                    Console.ReadKey();
                }

                static void GetResponseCallback(IAsyncResult result)
                {
                    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse response = (HttpWebResponse)request.EndGetResponse(result);
                    try
                    {
                        using (Stream stream = response.GetResponseStream())
                        {
                            using (StreamReader streamReader = new StreamReader(stream))
                            {
                                streamReader.ReadToEnd();
                                System.Console.WriteLine("C: " + Interlocked.Increment(ref completedCount));
                            }
                        }
                        response.Close();
                    }
                    catch (Exception ex)
                    {
                        System.Console.WriteLine("Error processing response: " + ex.Message);
                    }
                }
            }
        }

    Read the article

< Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >