Search Results

Search found 35802 results on 1433 pages for 'on line resources'.


  • How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list?

    - by gunshor
    How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list? I'm trying to demonstrate how to turn any string of multi-line text into an ordered list. The script (preferably JS) needs to respect: - carriage returns at the end of a line, to mean "that line ends here" - indentations at the beginning of a line, to mean "this is part of the item above it" - dashes at the beginning of a line, to mean "this is a task, and the line above it is its project"

    Read the article

  • alias/function with command line arguments

    - by Agzam
    I'm tired of typing manage.py startserver 10.211.55.4:4000, so I decided to make an alias for it. The only catch is that the port sometimes changes. So I put this in my bash profile:

      function runserver() {
          python manage.py runserver 10.211.55.4:$1
      }

    But when I call it with runserver 3000, the server starts and then immediately stops, saying: "Error: That IP address can't be assigned-to". However, if I type the same command directly on the command line, it works with no complaints.
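
    A minimal sketch of a more defensive version, assuming manage.py sits in the current directory as in the question. The quoted "assigned-to" error is often caused by an invisible carriage return (CRLF line endings in the profile file) riding along in the argument, so this strips one if present:

      runserver() {
          local port="${1:-4000}"       # fall back to the usual port if none given
          port="${port//$'\r'/}"        # drop a stray carriage return, if any
          python manage.py runserver "10.211.55.4:${port}"
      }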

    Read the article

  • Tech Tip: Sending Email from the Command Line

    Geek Ride: "Not everyone is lucky enough to have a full-fledged email client like Thunderbird or KMail for sending mail. One unlucky group, system administrators, often has to send mail from the command line or from a script running on a remote server."
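
    A minimal sketch of the classic approach, assuming a configured MTA and the mail(1) utility (packaged as mailutils or bsd-mailx, depending on the distribution); the address and subject are placeholders:

      # one-line status mail
      echo "Backup finished at $(date)" | mail -s "Nightly backup report" admin@example.com

      # send a file as the message body
      mail -s "Server log excerpt" admin@example.com < /var/log/syslog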

    Read the article

  • Apache Maven 3 Races to the Finish Line

    Developer.com: "The open source Apache Maven project has been helping software developers for over six years with their project build and reporting management needs. For most of that time, the project has been offering incremental updates to the Apache Maven 2.x product line, but in the next few months, Maven 3 is set to emerge."

    Read the article

  • Puppet: arrays in parameterized classes vs. using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example, I'm trying to write a Puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from Puppet), but there are many other similar use cases. I searched around and found that it could be done with the new parameterized class syntax; implemented that way, it would be used like this:

      node "myserver.example.com" {
          class { "network::iftab":
              interfaces => {
                  "eth0" => { "mac" => "ab:cd:ef:98:76:54" },
                  "eth1" => { "mac" => "98:76:de:ad:be:ef" },
              }
          }
      }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file situation). In a similar question on SF, someone suggested Pienaar's puppet-concat module, but I doubt it gets any better than parameterized classes. What would be really cool and clean in the configuration definition is something like the built-in host type: its usage is simple, pretty, and clean, and it naturally maps multiple resources onto a single configured file. Transposed to my example, it would look like:

      node "myserver.example.com" {
          interface { "eth0":
              mac => "ab:cd:ef:98:76:54",
              foo => "bar",
              asd => "lol",
          }
          interface { "eth1":
              mac => "98:76:de:ad:be:ef",
              foo => "rab",
              asd => "olo",
          }
      }

    That looks much better to my eyes, even with three options per resource. Should I really be passing hashes to parameterized classes, or is there a better way to do this kind of thing? Is there an accepted consensus in the Puppet users/developers community? By the way, I'm referring to the latest stable release of the 2.7 branch, and I am not interested in compatibility with older versions.

    Read the article

  • How can I uniquely record every new command I use, and possibly timestamp it?

    - by Nirmik
    I've been on Linux for more than 6 months now but never went very deep into the CLI (command-line interface, terminal, shell). Now, as I ask questions here and get answers and help from other sites, I learn new commands. How can I store every new command in a text file? Only new/unique commands, not repetitions of the same command. Here's an example. In the terminal, I enter commands like this:

      ubuntu@ubuntu:~$ command1
      ubuntu@ubuntu:~$ command2
      ubuntu@ubuntu:~$ command3
      ubuntu@ubuntu:~$ command4
      ubuntu@ubuntu:~$ command1

    These commands should get saved in a text file, say commandrec, like this:

      command1
      command2
      command3
      command4

    Note that the last command entered in the terminal, which was command1 again, is not recorded/saved a second time. The next time I open the terminal and enter a new command, command5, it should get appended to the list in commandrec (but if a command was already used on some earlier date, it should still be ignored; for example, if command1 is entered again along with command5 on a new day, command1 is not recorded because it was already used). The commandrec file would look something like this:

      31/05/12 12:00:00
      command1
      command2
      command3
      command4
      01/06/12 13:00:00
      command5

    (The time and date would be great to have, but it's okay if they aren't there.) This way, I can have a record of all commands I have used to date. How can this be done?
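
    A minimal sketch of one way to do this in bash, using the PROMPT_COMMAND hook so a function runs after every command. It timestamps each new entry rather than grouping them by day; the file name commandrec follows the question:

      record_unique_command() {
          local last
          last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')
          [ -z "$last" ] && return
          grep -qxF -- "$last" ~/commandrec 2>/dev/null && return   # already recorded, skip
          date '+%d/%m/%y %H:%M:%S' >> ~/commandrec
          printf '%s\n' "$last" >> ~/commandrec
      }
      PROMPT_COMMAND=record_unique_command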

    Read the article

  • Bash script not adding variables to session

    - by travega
    I have a bash script that I have added as a startup application. It does a bunch of exports and alias assignments:

      #! /bin/bash
      alias devhm='cd ${DEV_HOME}; ll';
      alias wlhm='cd ${WL_HOME}; ll';
      alias dirch='watch --interval=1 "ls -la"';
      alias vols='watch --interval=1 "df -h"';
      alias svn-update='svn update --depth infinity ./*';
      alias mci="~/mci.sh";
      alias vncserver="vncserver -geometry 1680x1050";
      alias ..="cd ..";
      alias hist="history | grep ";
      export PROXY_HOST=proxy.my.setup;
      export PROXY_PORT=3128;
      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH/usr/lib/oracle/12.1/client64/lib;
      export ORACLE_HOME=/usr/lib/oracle/12.1/client64;
      export TNS_ADMIN=${ORACLE_HOME}/network/admin;
      echo "DONE!";

    But none of these values are available in my terminal sessions anymore. Even when I run the script straight in the terminal, like so:

      ./setup.sh

    I see the "DONE!" prompt printed, but no aliases or env variables are set. If I copy and paste the contents of the file into the terminal, the aliases and env variables are set. I have also tried adding a line to execute the script from .bashrc, but still no aliases or env variables are set. Any ideas what might be going on here? Also, could anyone suggest a better way to have these env variables/aliases added to every terminal session?
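
    A minimal sketch of the likely fix: an executed script runs in a child shell, so its aliases and exports die with it; the script has to be sourced into the current shell instead. The path to setup.sh is taken from the question:

      source ~/setup.sh        # or the POSIX spelling: . ~/setup.sh

      # to apply it to every new terminal, source it from ~/.bashrc
      # instead of executing it
      echo '[ -f ~/setup.sh ] && . ~/setup.sh' >> ~/.bashrc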

    Read the article

  • Windows XP seemingly out of resources but plenty of free RAM and swap available

    - by Artem Russakovskii
    This one has been bothering me for years, and so far I couldn't find an adequate solution. The problem occurs on pretty much every XP install I've done. After opening a variety of programs, or after the system runs existing programs for a while, Windows seemingly runs out of resources without telling me. There's ALWAYS free RAM; for example, it just happened to me and I had over a gig of free RAM. There are no viruses, spyware, or other nonsense - it is a Windows resource problem. The question is which resource it is running out of, how one pinpoints it, and how one prevents it.

    Sometimes this happens after running specific programs - for example, today it happened when I started Photoshop CS4 and Flash CS4 at the same time. I also noticed that restarting The Bat (email client by Ritlabs) seems to get rid of this problem for a while, but again, this happens on machines that don't even have The Bat installed. So what exactly happens? The symptoms are:

    - Pressing Alt-Tab doesn't bring up the list anymore - it just jumps to the next window instantly, very similar to the way Alt-Esc works; in this case it's due to not having enough resources to bring up the Alt-Tab menu.
    - Random programs crash randomly, citing random errors: out-of-memory errors, system resources, inability to make system calls, etc.
    - Random programs start missing random parts - for example, Firefox top menus might disappear, pull up partial selections, or stop pulling up altogether. IE might lose a few of its toolbars. Some programs might fail to redraw, or would just plain go gray where the UI used to be.

    Windows itself never complains about running out of RAM, virtual memory, or anything at all, yet it's running out of something. The only clue I was able to find - and the fix I applied today - was the Desktop Heap Limitation. I haven't confirmed the fix working yet, as not enough time has passed. In the meantime, what are everyone's thoughts?

    Read the article

  • How to kill an alternate X session via the CLI

    - by L. D. James
    Can someone tell me how to remove dormant X sessions? This question is similar to Logging out other users from the command line, but more specific to controlling X displays, which I find hard to kill. I used "who -u" to get the sessions of the other screens:

      $ who -u
      user1    :0      2014-08-18 12:08   ?      2891 (:0)
      user1    pts/26  2014-08-18 16:11  17:18   3984 (:0)
      user2    :1      2014-08-18 18:21   ?     25745 (:1)
      user1    pts/27  2014-08-18 23:10  00:27   3984 (:0)
      user1    pts/32  2014-08-18 23:10  10:42   3984 (:0)
      user1    pts/46  2014-08-18 23:14  00:04   3984 (:0)
      user1    pts/48  2014-08-19 04:10   .      3984 (:0)

    But kill -9 25745 doesn't appear to do anything. I have a workshop where a number of users use the computer under their own logins. After the workshop is over, a number of logins are left open. I would prefer to kill the open sessions rather than try to log into each user's screen. Again, this question isn't just about logging users out; I'm hoping to get clarity on killing/removing stuck processes that are hard to kill.

    New info: while still pondering how to kill the process, I wrote the following script, which did it:

      #!/bin/bash
      results=1
      while [[ $results > 0 ]]
      do
          sudo kill -9 25745
          results=$?
          echo -ne "Response:$results..."
          sleep 20
      done

    After a graceful waiting period, if there isn't a better answer, I'll mark this question as answered with this resolution. This may also resolve problems I have had with other stuck processes in the past.
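
    A minimal sketch of an alternative, assuming a systemd-based Ubuntu where loginctl is available; the session ID and user name below are placeholders based on the question's output:

      loginctl list-sessions                # find the session ID belonging to user2
      sudo loginctl terminate-session 42    # 42 is a placeholder session ID

      # without loginctl, killing every process owned by the user
      # also tears down that user's X session
      sudo pkill -9 -u user2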

    Read the article

  • No GUI on boot; startx error; I suspect no filesystem corruption

    - by Dharmaj Soni
    Until yesterday, my Ubuntu 9.10 was working fine. I had watched a movie using VLC and had also charged my iPod from the laptop. Today, when I started it, I booted straight into the command line. There seems to be no filesystem corruption, as I can view/open (text) files. Before the CLI appeared, the screen blinked with a cursor, then the white Ubuntu logo flashed, and then I got the CLI login prompt. After logging in, if I try startx to start GNOME, I get the following error after a few seconds:

      giving up
      xinit: No such file or directory (errno 2): unable to connect to X server
      xinit: No such process (errno 3): Server error

    The same error comes up if I use sudo, if I change my directory to / before running startx, and also when I boot into the CLI via the recovery mode option in GRUB and then try startx. On trying the xinit command directly, I get "Server error". Also, on trying GDM, I get two errors. I cannot connect to the internet in this state. Thanks for any help. I am using a Dell Inspiron 1440 with no special graphics card.
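
    A minimal diagnostic sketch for this kind of failure, assuming a standard Xorg setup; the log path is the usual default:

      grep EE /var/log/Xorg.0.log    # lines tagged (EE) are X server errors
      df -h /tmp                     # X will not start if /tmp is full
      ls -ld /tmp                    # permissions should be drwxrwxrwt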

    Read the article

  • Help with "advanced" shell scripting | how to create an image preview of a pdf

    - by lucapozzobon
    First of all, sorry for my English; I'm not British/American. Here is my problem. I've got a folder named pdf with lots of PDF files inside it, and another folder named thumbnail, which is empty. I want to create a JPG preview image of each PDF to use in my HTML pages as a preview of that PDF. To do this I'm using a program called ImageMagick. I tried to put the code inside my PHP files, but it doesn't work. (As you may have gathered, I have created a small search engine with Apache and MySQL to search for PDFs locally, offline; now I want to add a preview of the first page of each PDF.) It does work from the bash command line, though:

      convert pdf/name_of_the_file_pdf.pdf[0] name_of_the_imagefile.jpg

    (The zero means the image is taken from the FIRST page of the PDF.) How can I make a script that takes the name of each PDF file and puts it into that command? To list all the files, I did "ls > pdf", but with the little knowledge I have I can't get further. Some PDFs' names contain spaces - is that a problem? There are so many PDF files that I can't do the task by typing every name; that wouldn't be nice or clever work! Thanks a lot in advance!
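
    A minimal sketch of such a loop, using the folder names from the question. Quoting "$f" keeps file names with spaces intact, so they are not a problem:

      mkdir -p thumbnail
      for f in pdf/*.pdf; do
          name=$(basename "$f" .pdf)
          convert "${f}[0]" "thumbnail/${name}.jpg"   # [0] = first page only
      done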

    Read the article

  • Beginning with shell scripting

    - by Kevin Wyman
    I am fresh to Ubuntu, and one of my goals is shell scripting for personal (and maybe public) use. I'm a novice, though I do understand some of the basics (e.g. what a variable, string, loop, etc. is), but to get the most out of scripting I need to learn in depth. I figure the best way to do that is to jump right into scripting and ask questions pertinent to the stage I am at in my attempted script. Scenario: I have edited my sudoers file to allow my non-root user to run sudo commands without being prompted for a password. Question: in Vim, what would be the best code for a function that checks whether this condition is true and, if not, prompts the user to ask whether the script should edit and save the sudoers file to make the condition true? Layout: if the condition is true, carry on with the rest of the script. If the condition is not true, the script silently adds the line

      %sudo ALL=(ALL:ALL) NOPASSWD: ALL

    to the sudoers file, saves it, and then continues on with the next part of the script. Any help with this would be greatly appreciated and would assist me in my journey into writing shell scripts.
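
    A minimal sketch of such a check, assuming the %sudo group rule from the question. sudo -n (non-interactive) fails instead of prompting, so it can probe whether passwordless sudo already works, and routing the edit through visudo validates the syntax before saving:

      if sudo -n true 2>/dev/null; then
          echo "passwordless sudo is already configured"
      else
          read -rp "Enable passwordless sudo? [y/N] " ans
          if [ "$ans" = "y" ]; then
              echo '%sudo ALL=(ALL:ALL) NOPASSWD: ALL' | sudo EDITOR='tee -a' visudo
          fi
      fi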

    Read the article

  • Select (loop) command not working in shell script

    - by user208098
    I've been tinkering with Linux and Unix for years, but I still consider myself a novice, and recently I've been trying to be more professional with it, as I work in IT. With that in mind, I'm studying shell scripting, and I've hit a snag in Ubuntu 13.10 Saucy. When I use the select command in an sh script it doesn't work; depending on how I format the command, it returns either Unexpected "do" or Unexpected "done". See the following two examples. This section of code produces an unexpected "do" error:

      #/bin/bash
      PS3='Please enter your choice'
      select opt in option1 option2 option3 quit
      do
          case $opt in
              "option1") echo "you chose choice 1" ;;
              "option2") echo "you chose choice 2" ;;
              "option3") echo "you chose choice 3" ;;
              "quit") break ;;
              *) echo invalid option ;;
          esac
      done

    This section of code produces an unexpected "done" error:

      #/bin/bash
      PS3='Please enter your choice'
      select opt in option1 option2 option3 quit ; do
          case $opt in
              "Option1") echo "you chose choice 1" ;;
              "Option2") echo "you chose choice 2" ;;
              "Option3") echo "you chose choice 3" ;;
              "quit") break ;;
              *) echo invalid option ;;
          esac
      done

    When I enter these commands interactively at the command line, I get the desired result: a list of choices to choose from. However, when executed from a script, I get the aforementioned errors. As a side note, I tried this as a script on Fedora and it worked perfectly, so my question is: why isn't it working on Ubuntu? Is this a difference between RHL and Debian, or a bug in the latest version of Ubuntu? Thanks in advance for any help! KG
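
    A minimal sketch of the likely explanation: select is a bash builtin, and on Ubuntu /bin/sh is dash, which doesn't have it (on Fedora, /bin/sh is bash, which is why the same script worked there). Note also that both scripts begin with "#/bin/bash", missing the "!" of a proper shebang, so invoking them as "sh script.sh" or via the broken shebang can hand them to dash. With the shebang fixed, run it with bash:

      #!/bin/bash
      PS3='Please enter your choice: '
      select opt in option1 option2 option3 quit; do
          case $opt in
              quit) break ;;
              *)    echo "you chose $opt" ;;
          esac
      done

    Invoked as "bash script.sh" (or "./script.sh" after chmod +x), the menu appears as expected; "sh script.sh" hands it to dash and reproduces the errors.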

    Read the article

  • DRBD stacked resources: recovering from failure

    - by Marcus Downing
    We're running a stacked four-node DRBD setup like this:

      A --> B
      |     |
      v     v
      C     D

    This means three DRBD resources running across four servers. Servers A and B are Xen hosts running VMs, while servers C and D are for backups. A is in the same datacentre as C. The resources are:

    - from server A to server C, in the first datacentre, using protocol B
    - from server B to server D, in the second datacentre, using protocol B
    - from server A to server B, across datacentres, a stacked resource using protocol A

    First question: booting a stacked resource. We haven't got any vital data running on this setup yet - we're still making sure it works first. This means simulating power cuts, network outages, etc., and seeing what steps we need to recover. When we pull the power out of server A, both resources go down; it attempts to bring them back up at the next boot. However, it only succeeds at bringing up the lower-level resource, A-C. The stacked resource A-B doesn't even try to connect, presumably because it can't find the device until it's a connected primary on the lower level. So if anything goes wrong, we need to manually log in and bring that resource up, then start the virtual machine on top of it.

    Second question: setting the primary of a stacked resource. Our lower-level resources are configured so that the right one is considered primary:

      resource test-AC {
          on A { ... }
          on C { ... }
          startup {
              become-primary-on A;
          }
      }

    But I don't see any way to do the same with a stacked resource, as the following isn't a valid config:

      resource test-AB {
          stacked-on-top-of test-AC { ... }
          stacked-on-top-of test-BD { ... }
          startup {
              become-primary-on test-AC;
          }
      }

    This too means that recovering from a failure requires manual intervention. Is there no way to set the automatic primary for a stacked resource?
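
    For reference, a minimal sketch of the manual recovery sequence after such a failure, assuming the resource names from the question (test-AC as the lower-level resource, test-AB stacked on top of it):

      drbdadm up test-AC                    # bring up the lower-level resource
      drbdadm primary test-AC               # promote it on node A
      drbdadm --stacked up test-AB          # now the stacked device can attach
      drbdadm --stacked primary test-AB     # promote the stacked resource
      # at this point the VM sitting on test-AB can be started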

    Read the article

  • RPC Server Unavailable on Hyper-V cluster when moving resources after the host adapter has failed

    - by Doug Luxem
    On a Windows 2008 R2 SP1 cluster running Hyper-V, a node lost network connectivity on its primary host interface. The interface was rapidly flapping up and down, which was later determined to be caused by a faulty switch port. As this was a clustered server, the host interface itself was not fault tolerant (seeing as how the whole server was fault tolerant), so connectivity to the host was going up and down. The Hyper-V guests were completely unaffected by the network outage, as they used a dedicated trunk on the server, separate from the host interface. Additionally, the dedicated interfaces for the cluster and live-migration networks were fine. In order to diagnose the server, I tried to move all resources (Hyper-V guests) to other nodes through Failover Cluster Manager. These moves failed with the error "RPC Server Unavailable". The only way to move resources was to shut down the guests, stop the cluster service on node A, allow other nodes to take ownership of the resources, and restart the guests. A few other notes:

    - All nodes have Client for MS Networks and File & Printer Sharing enabled on the cluster and LM networks.
    - Node A was accessible over the cluster and LM networks from other nodes (these are private, cluster-only networks): pingable, CIFS, etc.
    - Accessing \\NODEA is done over the host adapters, as you would expect in this case, which is the reason for the "RPC Server Unavailable" error with that adapter being down.

    My questions are: Is there a way to still use live migration in a failure scenario such as this, to avoid shutting down the Hyper-V guests? How can the network be reconfigured in the future so that the cluster service attempts to use the cluster and/or live-migration networks to issue the RPC requests?

    Read the article

  • What packages are safe to uninstall to reduce installation size?

    - by mathematician1975
    This question is similar to a previous question I asked: How can I turn my desktop Ubuntu 8.04 into a command line only install? I was wondering if anyone can recommend other bulky packages from the standard 8.04 installation whose removal would reduce the on-disk size of my installation. All I really require is socket functionality, g++ and gcc, some kind of text editor, and an SSH client and server. Things I don't require are media players, audio packages, and the more "superficial" kind of desktop niceties. Is there anything particularly large in a standard install that is safe to remove without compromising the requirements above? I am a bit apprehensive about uninstalling things, and not totally confident that removing particular items won't have a negative effect on the functionality of other things I might need (for example, would it be safe to remove everything to do with Perl, or do the system/kernel/other processes require it?). Basically, I would like to be left with the kind of items that would have been installed by the CLI version of 8.04 (had the alternate ISO image not been faulty). Any help/suggestions would be gratefully received.
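
    A minimal sketch for exploring candidates safely, assuming stock apt/dpkg tooling; the -s flag shows what a removal would do without touching anything, and ubuntu-desktop is just an example metapackage:

      # list installed packages, biggest first
      dpkg-query -W --showformat='${Installed-Size}\t${Package}\n' | sort -rn | head -20

      # dry-run a removal to inspect the dependency fallout
      sudo apt-get -s remove --purge ubuntu-desktop

      # clean up orphaned dependencies afterwards
      sudo apt-get autoremove --purge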

    Read the article

  • How to structure reading of commands given at a(n interactive) CLI prompt?

    - by Anto
    Let's say I have a program called theprogram (the marketing team was on strike when the product was to be named). I start that program by typing, perhaps not surprisingly, the program name as a command into a command prompt. After that, I get into a loop (from the user's standpoint, an interactive command-line prompt) where one command is read from the user and, depending on which command was given, the program executes some instructions. I have been doing something like the following (in C-like pseudocode):

      main_loop {
          in = read_input();
          if (in == "command 1")
              do_something();
          else if (in == "command 2")
              do_something_else();
          ...
      }

    (In a real program, I would probably encapsulate more things into different procedures; this is just an example.) This works well for a small number of commands, but let's say you have 100, 1000, or even 10,000 of them (the manual would be huge!). It is clearly a bad idea to have 10,000 ifs and else-ifs after each other: the program would be hard to read, hard to maintain, and contain a lot of boilerplate code. So what approach would you recommend? I will probably never use 10,000 commands in a program, but the solution should preferably scale to that kind of massive (?) problem. The solution doesn't have to allow for arguments to the commands.
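
    A minimal sketch of the standard answer - a dispatch table mapping command names to handler functions - shown here in bash with an associative array, though the same idea is a hash map or function-pointer table in most languages; the command names are placeholders:

      declare -A handlers=(
          [help]=cmd_help
          [quit]=cmd_quit
      )
      cmd_help() { echo "available commands: ${!handlers[*]}"; }
      cmd_quit() { exit 0; }

      while read -rp '> ' cmd; do
          if [[ -n "${handlers[$cmd]:-}" ]]; then
              "${handlers[$cmd]}"      # O(1) lookup, no if/else chain
          else
              echo "unknown command: $cmd"
          fi
      done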

    Read the article

  • How do I force the system to look for new sound devices?

    - by John Zeringue
    I have a small flat screen TV (Toshiba) that I often connect via HDMI to my laptop (an HP Pavilion dv4) and use as a computer monitor. When I do this, I prefer to use the TV's speakers, particularly when I'm watching video, because the sound quality is much better. However, when I connect the HDMI cable after turning on my computer (rather than having it plugged in before booting up), Sound Settings does not list the TV as an output device, and I am forced to reboot. I was curious if there is an easier solution to this, perhaps a CLI command to check for new sound outputs. Does anyone know of something like this or have another solution? To clarify, I believe this is a software problem. The HDMI always works, and I can switch my desktop over to the TV at any time. The issue is that, if the HDMI wasn't connected during start up, the TV will not appear as an output option in the Sound Settings GUI. So, unless I'm mistaken, my question is not HDMI specific, but rather a general usage question about manipulating sound settings from the command line.
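
    A minimal sketch of things worth trying before a reboot, assuming the PulseAudio setup typical of Ubuntu on that hardware:

      pacmd list-cards                        # is the HDMI card/profile visible at all?
      pulseaudio -k && pulseaudio --start     # restart the sound server so it re-detects outputs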

    Read the article

  • How can I get rid of the motd message "*** /dev/sdb1 will be checked for errors at next reboot ***"? [duplicate]

    - by kmm
    This question already has an answer here: Persistent "disk will be checked..." in the message of the day (motd) even after reboot (3 answers).

    My motd persistently says:

      *** /dev/sdb1 will be checked for errors at next reboot ***

    The problem is that I don't have /dev/sdb1 on my system. I only have /dev/sdb2 (mounted as /) and /dev/sda1, which mounts to /media/backup. I delete that line from /etc/motd, but it reappears after reboot. Here's my df output:

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sdb2        73G  3.7G   66G   6% /
      udev            490M  4.0K  490M   1% /dev
      tmpfs           200M  760K  199M   1% /run
      none            5.0M     0  5.0M   0% /run/lock
      none            498M     0  498M   0% /run/shm
      /dev/sda1       1.9T  429G  1.4T  25% /media/backup

    Update: here is the output of sudo fdisk -l:

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0003dfc2

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1              63  3907024064  1953512001   83  Linux

      Disk /dev/sdb: 80.0 GB, 80026361856 bytes
      255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00049068

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1       152301568   156301311     1999872   82  Linux swap / Solaris
      /dev/sdb2   *        2048   152301567    76149760   83  Linux

      Partition table entries are not in disk order

    I guess /dev/sdb1 is my swap space.
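
    A minimal sketch of the usual fix on Ubuntu, where the motd is rebuilt at each login by the scripts in /etc/update-motd.d/ and the fsck warning comes from a cached check result; removing the stale cache makes the message go away:

      sudo rm /var/lib/update-notifier/fsck-at-reboot   # stale cached check result
      # the motd is reassembled at the next login by the scripts in:
      ls /etc/update-motd.d/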

    Read the article

  • Creating a tooltip for a line drawn on a CartesianDataCanvas in Flex

    - by Guru
    I am trying to draw a line on a CartesianDataCanvas. While I can draw lines easily using the canvas.moveTo and canvas.lineTo methods, I cannot attach a tooltip to a line drawn that way. I have tried creating a label whenever I draw a line and adding a tooltip to it, but since I show the lines in a 10x10 grid, both vertical and horizontal, there is overlap and it is confusing. So now I am trying to create a line object that extends Shape or UIComponent. I cannot add this object to the canvas using the addChild method - it does not work. The addDataChild method works, but it messes with the positioning of the line. Can someone help with a solution? Simply put, I want to draw lines on a data canvas and add tooltips to them. Here is my code for the line object:

      package model {
          import flash.display.CapsStyle;
          import flash.display.JointStyle;
          import flash.display.LineScaleMode;
          import mx.core.UIComponent;

          public class Line extends UIComponent {
              public var x1:Number;
              public var x2:Number;
              public var y1:Number;
              public var y2:Number;
              public var color:Number;

              public function Line(x1:Number, y1:Number, x2:Number, y2:Number, color:Number) {
                  super();
                  this.graphics.lineStyle(4, color, 1, true, LineScaleMode.NORMAL,
                                          CapsStyle.ROUND, JointStyle.MITER, 1);
                  this.graphics.moveTo(x1, y1);
                  this.graphics.lineTo(x2, y2);
              }
          }
      }

    And here is a sample MXML file with a canvas that tries to use the line object above (note that the Image class needs the mx.controls.Image import, which was missing):

      <?xml version="1.0" encoding="utf-8"?>
      <mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" initialize="init()">
          <mx:Script>
              <![CDATA[
                  import model.Line;
                  import mx.charts.chartClasses.CartesianCanvasValue;
                  import mx.controls.Image;

                  private var accidImage:Image = new Image();

                  public function init():void {
                      var line:Line = new Line(10, 10, 40, 40, 0xFF0000);
                      // canvas.addChild(line);  // does not work
                      canvas.addDataChild(line, 10, 10, null, null, null, null);
                  }
              ]]>
          </mx:Script>
          <mx:Panel x="60" y="53" width="517" height="472" layout="absolute">
              <mx:PlotChart x="48" y="10" id="plotchart1">
                  <mx:series>
                      <mx:PlotSeries displayName="Series 1" yField=""/>
                  </mx:series>
                  <mx:annotationElements>
                      <mx:CartesianDataCanvas id="canvas" includeInRanges="true"/>
                  </mx:annotationElements>
                  <mx:verticalAxis>
                      <mx:LinearAxis id="axis11" minimum="0" maximum="100" interval="10" padding="10"/>
                  </mx:verticalAxis>
                  <mx:horizontalAxis>
                      <mx:LinearAxis id="axis21" minimum="0" maximum="100" interval="10" padding="10"/>
                  </mx:horizontalAxis>
              </mx:PlotChart>
              <mx:Legend dataProvider="{plotchart1}"/>
          </mx:Panel>
      </mx:WindowedApplication>

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ are __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the macro is encountered by the compiler. After 4.5 versions of .NET, we are on par with C/C++ again. It is of course not a simple compiler-expanded macro, it is an attribute, but it serves exactly the same purpose. We now get:

    - CallerLineNumberAttribute == __LINE__
    - CallerFilePathAttribute   == __FILE__
    - CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without needing to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute, and the property name is inserted as a string by the C# compiler at compile time:

      public string UserName
      {
          get { return _userName; }
          set
          {
              _userName = value;
              RaisePropertyChanged();   // no more RaisePropertyChanged("UserName")!
          }
      }

      protected void RaisePropertyChanged([CallerMemberName] string member = "")
      {
          var copy = PropertyChanged;
          if (copy != null)
          {
              copy(this, new PropertyChangedEventArgs(member));
          }
      }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose it for tracing, to get your hands on the method name of your caller (among other things) very cheaply. All the information is filled in at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes:

      public static void TraceMessage(string message,
                                      [CallerMemberName] string memberName = "",
                                      [CallerFilePath] string sourceFilePath = "",
                                      [CallerLineNumber] int sourceLineNumber = 0)
      {
          Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
      }

    When I think of tracing, I usually want an API that allows me to:

    - trace method enter and leave
    - trace messages with a severity like Info, Warning, Error

    When I print a trace message, it is very useful to print the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance. A usable trace API might therefore look like this:

      enum TraceTypes
      {
          None       = 0,
          EnterLeave = 1 << 0,
          Info       = 1 << 1,
          Warn       = 1 << 2,
          Error      = 1 << 3
      }

      class Tracer : IDisposable
      {
          string Type;
          string Method;

          public Tracer(string type, string method)
          {
              Type = type;
              Method = method;
              if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
              {
              }
          }

          private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
          {
              // Do checking here if tracing is enabled
              return false;
          }

          public void Info(string fmt, params object[] args) { }
          public void Warn(string fmt, params object[] args) { }
          public void Error(string fmt, params object[] args) { }

          public static void Info(string type, string method, string fmt, params object[] args) { }
          public static void Warn(string type, string method, string fmt, params object[] args) { }
          public static void Error(string type, string method, string fmt, params object[] args) { }

          public void Dispose()
          {
              // trace method leave
          }
      }

    This minimal trace API is very fast, but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are the trace message and its arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which are all routed to the very same method in the end, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not a core feature at all, but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled in by the compiler, but I do not know if that is an easy change.

    The thing I miss much more is the missing CallerType attribute, but not in the way you would think. In the API above I added some filtering based on method and type, to stay as fast as possible for types where tracing is not enabled at all. It should cost no more than an additional method call and a bool variable check when tracing for a type is not enabled. That data is tightly bound to the calling type and method, and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type object in a super fast way. The key to success is the use of generics:

      class Tracer<T> : IDisposable
      {
          string Method;

          public Tracer(string method)
          {
              if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
              {
              }
          }

          public void Dispose()
          {
              if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
              {
              }
          }

          public static void Info(string fmt, params object[] args) { }

          /// <summary>
          /// Every type gets its own instance with a fresh set of variables that
          /// describe the current filter status.
          /// </summary>
          /// <typeparam name="UsingType"></typeparam>
          internal class TraceData<UsingType>
          {
              internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

              public bool IsInitialized = false;           // flag to reinit the trace data when trace settings are reconfigured at run time
              public TraceTypes Enabled = TraceTypes.None; // enabled trace levels for this type
          }
      }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a static TraceData instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one rarely used type; the trace filter for types where tracing is not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can safely reconfigure tracing at run time by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check, and an indexed array access. The generic version is wicked fast and lets you add more features to your tracing API with minimal performance overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, tracing might no longer be configured correctly. I would therefore like to decorate my type with an attribute

      [CallerType]
      class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

      class Program
      {
          static void Main(string[] args)
          {
              using (var t = new Tracer()) // equivalent to new Tracer<Program>()
              {

    That would be really useful and super fast, since you do not need to pass any type object around, yet you have full type information at hand. This change would be breaking if another non-generic type existed in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since today you can already get conflicts if two generic types of the same name are defined in different namespaces; this would only be a variation of that issue. When you think about this further, you can add more features, like tracing the exception with which your Dispose method is left, using that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, for example:

    - reconfigure tracing at run time
    - take a memory dump when a specific method is left with a specific exception
    - throw an exception when a specific trace statement is hit (useful for testing error conditions)
    - execute a passed delegate which, when enabled, dumps additional state
    - write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside)
    - write data to an output device

    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (with .NET 4 this no longer happens, unless you enable a compatibility flag) where you would like a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time, in the middle of a transaction. On my older machine, I get 50 million traces/s with this super fast approach when tracing is disabled. When I know that tracing is enabled for a type, I can walk the stack via StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for the method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle rather than the stringified version of it. And I really would like to see a CallerType attribute implemented, to fill in the generic type argument of the call site and augment the static CLR type data with run-time data.

    Read the article

  • What does the EC2 command line say when a machine won't start?

    - by OneSolitaryNoob
    When starting an instance on Amazon EC2, how would I detect a failure - for instance, if there's no machine available to fulfill my request? I'm using one of the less common machine types and am concerned it won't start up, but I'm having trouble finding out what message to look for to detect this. I'm using the EC2 command-line tools. I know I can look for 'running' in the output of ec2-describe-instances to see if the machine is up, but I don't know what to look for to see if the startup failed. Thanks!
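
    A minimal sketch using the classic EC2 API tools the question mentions: poll until the instance leaves the "pending" state. A failed start typically shows up as "terminated" (or "stopped"), with a reason visible in the state-change output. The instance ID is a placeholder, and the awk field assumes the tools' tab-delimited output:

      while true; do
          state=$(ec2-describe-instances i-12345678 | awk -F '\t' '/^INSTANCE/ {print $6}')
          echo "state: $state"
          [ "$state" != "pending" ] && break   # ended up as running, terminated, or stopped
          sleep 10
      done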

    Read the article

  • Firefox 3.6.3 on Snow Leopard 10.6.3 - symbolic link to command line binary doesn't work?

    - by David Watson
    I have Firefox 3.6.3 installed on Mac OS X Snow Leopard 10.6.3 from the DMG. I can run Firefox from the terminal using /Applications/Firefox.app/Contents/MacOS/firefox-bin. However, if I create a symbolic link:

      sudo ln -s /Applications/Firefox.app/Contents/MacOS/firefox-bin /bin/firefox

    then it refuses to run, or at least to display. When I issue "firefox" from the terminal, I can see the process in top, but the GUI never appears. :/

      $ ls -l /bin/firefox
      lrwxr-xr-x 1 root wheel 52 May 5 15:19 /bin/firefox -> /Applications/Firefox.app/Contents/MacOS/firefox-bin

    Any ideas? Thanks, David
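
    A minimal sketch of a workaround: GUI apps on OS X often misbehave when launched through a bare symlink, so a tiny wrapper script that exec's the real binary (or hands off to LaunchServices) tends to behave better. /usr/local/bin is a conventional, safer home for it than /bin:

      sudo tee /usr/local/bin/firefox >/dev/null <<'EOF'
      #!/bin/sh
      exec /Applications/Firefox.app/Contents/MacOS/firefox-bin "$@"
      EOF
      sudo chmod +x /usr/local/bin/firefox

      # alternatively, let LaunchServices start the app bundle
      open -a Firefox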

    Read the article

  • PowerShell 2.0 error handling - command-line call vs. ISE

    - by Gromix
    Hi, in the context of deployment scripts, I would like to capture any error that happens and stop immediately. I have noticed some significant differences between the following calls:

      powershell.exe -File Script.ps1
      powershell.exe -Command "& '.\Script.ps1'"
      powershell.exe .\Script.ps1

    For example, the -File call handles errors in exactly the same way as the ISE. The other two seem to ignore the $ErrorActionPreference variable and do not seem to catch Write-Error in try/catch blocks. Could someone help me understand the implications of each one, and why they behave differently? Thanks, Romain

    Read the article
