Search Results

Search found 61582 results on 2464 pages for 'time trace'.


  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the macro is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro but an attribute, yet it serves exactly the same purpose. Now we get:

        CallerLineNumberAttribute == __LINE__
        CallerFilePathAttribute   == __FILE__
        CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful to implement the INotifyPropertyChanged interface without the need to hard code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and get the property name as a string, inserted by the C# compiler at compile time:

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement the feature in the C# 5.0 compiler. You can repurpose it for tracing to get your hands on the method name of your caller, along with other info, very fast. Everything is filled in at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes:

        public static void TraceMessage(string message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string sourceFilePath = "",
            [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing I usually want an API which allows me to

    - trace method enter and leave, and
    - trace messages with a severity like Info, Warning, Error.

    When I print a trace message it is very useful to print the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the info from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like this:

        enum TraceTypes
        {
            None       = 0,
            EnterLeave = 1 << 0,
            Info       = 1 << 1,
            Warn       = 1 << 2,
            Error      = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string type, string method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are the trace message and its format arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method (a sketch follows below), but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was never a core scenario at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled in by the compiler, but I do not know whether that would be an easy change.
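    Such an overload chain might look like the following minimal sketch (OverloadTracer and InfoCore are illustrative names, not an existing API):

        using System;
        using System.Runtime.CompilerServices;

        class OverloadTracer
        {
            // A params array must be the last parameter, so an optional
            // [CallerMemberName] parameter cannot follow it - and placed before it,
            // a caller's first format argument would bind to it positionally.
            // Workaround: fixed-arity overloads, all routed to one core method.
            public static void Info(string fmt, [CallerMemberName] string member = "")
            {
                InfoCore(member, fmt);
            }

            // Beware: Info("{0}", "text") binds to the overload above, because
            // string -> string is a better match than string -> object, and "text"
            // silently becomes the member name. One more reason this approach is ugly.
            public static void Info(string fmt, object arg0, [CallerMemberName] string member = "")
            {
                InfoCore(member, fmt, arg0);
            }

            public static void Info(string fmt, object arg0, object arg1, [CallerMemberName] string member = "")
            {
                InfoCore(member, fmt, arg0, arg1);
            }

            static void InfoCore(string member, string fmt, params object[] args)
            {
                Console.WriteLine("{0}: {1}", member, string.Format(fmt, args));
            }
        }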
    The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I added some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool check to see whether tracing for this type is enabled at all. That data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach run-time data to any existing type in a super fast way. The key to success is the usage of generics.

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                Method = method;
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // flag if we need to reinit the trace data in case of reconfigured
                // trace settings at runtime
                public bool IsInitialized = false;

                // enabled trace levels for this type
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a static TraceData instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can safely reconfigure tracing at runtime by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to always write the generic type argument explicitly, and worse, if you refactor code and move parts of it to other classes, your tracing may end up configured for the wrong type. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {
                }
            }
        }

    That would be really useful and super fast, since you do not need to pass any type object around but still have full type info at hand. This change would be breaking if another non-generic type of the same name existed in the same namespace, where now the generic counterpart would be preferred. But that is an acceptable risk in my opinion, since today you can already get conflicts if two generic types of the same name are defined in different namespaces; this would only be a variation of that issue. When you think about this further you can add more features, like tracing the exception with which a method is left from your Dispose method, using that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, for example:

    - Reconfigure tracing at run time.
    - Take a memory dump when a specific method is left with a specific exception.
    - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    - Execute a passed delegate which e.g. dumps additional state when enabled.
    - Write data to an in-memory ring buffer and dump it when specific events occur, e.g. a method is left with an exception, or triggered from outside (see the sketch after this list).
    - Write data to an output device.
    - ...
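    The ring buffer idea combines nicely with the exception trick mentioned above. A minimal sketch, assuming the full .NET Framework, where Marshal.GetExceptionPointers() reports an exception in flight during Dispose (RingBufferTracer and all member names are illustrative, not an existing API):

        using System;
        using System.Runtime.InteropServices;
        using System.Threading;

        // Trace lines go into a fixed-size in-memory ring buffer and are only dumped
        // when something interesting happens, e.g. the traced method is left with an
        // exception. A sketch only: Dump can race with concurrent writers.
        class RingBufferTracer : IDisposable
        {
            static readonly string[] _lines = new string[1024]; // the ring buffer
            static long _writeIndex;                            // ever-increasing write position
            readonly string _method;

            public RingBufferTracer(string method)
            {
                _method = method;
                Write("Enter " + method);
            }

            public void Write(string line)
            {
                // Cheap append: old entries are silently overwritten once the buffer is full.
                long slot = Interlocked.Increment(ref _writeIndex) - 1;
                _lines[slot % _lines.Length] = DateTime.Now.ToString("HH:mm:ss.fff ") + line;
            }

            public void Dispose()
            {
                // The "little trick": inside Dispose of a using block, a non-null exception
                // pointer means the method is being left because an exception is in flight.
                if (Marshal.GetExceptionPointers() != IntPtr.Zero)
                {
                    Dump(); // interesting event -> flush the buffered history
                }
                Write("Leave " + _method);
            }

            static void Dump()
            {
                long start = Math.Max(0, _writeIndex - _lines.Length);
                for (long i = start; i < _writeIndex; i++)
                {
                    Console.WriteLine(_lines[i % _lines.Length]);
                }
            }
        }

    In the common case Dispose only pays for an IntPtr comparison, so the cheap path stays cheap.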
    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (with .NET 4 no longer an issue, unless you enable a compatibility flag), where you would like to have a minidump; or after two weeks of operation you have reached a state where you need a full memory dump at a specific point in time, in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for a type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so at that point performance is not so important anymore, since now I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle rather than a stringified version of it. And I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.

    Read the article

  • How to make TimeMachine back up contents of any path or mounted volume

    - by Olfan
    I keep different types of data in different encrypted sparsebundle images (say, one for each client) which automatically mount upon login but can't be opened by anybody other than myself. So, after login I have a number of virtual volumes in /Volumes/ which keeps my client data both secure and organized. How do I include the data inside these virtual volumes in Time Machine's backups, or data residing in any path on any partition/volume? I found a promising solution description at blog.eurocomp.info involving editing com.apple.TimeMachine.plist, but all I can get Time Machine to do is back up the sparsebundle files themselves. I want it to back up the files inside the mounted image, though - something like adding /Volumes/Client_abc/ to Time Machine's search path. Please do not redirect me to this previous question as it doesn't solve the problem at all. Please also refrain from telling me why you think I should not want this answer, as that will not solve anything either. Please lastly don't say "it can't be done" unless you can technically prove that claim.

    Read the article

  • Burn a CD or DVD for one-time use

    - by kumar
    I want to burn a CD or DVD for one-time use - that is, with copy protection, so it cannot be copied CD to CD or CD to hard disk. The CD contains a setup program; after the setup process finishes, the setup file should destroy itself automatically or disable the CD contents. How can I create something like this? Please give me some ideas.

    Read the article

  • Spreadsheet functions to query route planner for travel time/distance

    - by Rich
    I would like to achieve something whereby I have a spreadsheet such that the columns are:

        Column A - place name
        Column B - place name
        Column C - distance by road between the places in columns A and B
        Column D - travel time by road between the places in columns A and B

    I thought it might be possible using Google Docs' spreadsheet and its 'Google' functions, but I've not found any that might do the trick. In the end I could knock up an app to do it using the Google Maps API, but would rather avoid that if I can.

    Read the article

  • Good C++ books regarding Performance?

    - by Leon
    Besides the books everyone knows about, like Meyers' three Effective C++/STL books, are there any other really good C++ books specifically aimed at performance code? Maybe for gaming, telecommunications, finance/high frequency trading etc.? By performance I mean things a normal C++ book wouldn't bother advising on, because the gain in performance isn't worthwhile for 95% of C++ developers. Maybe suggestions like avoiding virtual pointers, going into great depth about inlining etc.? A book going into great depth on C++ memory allocation or multithreading performance would obviously be very useful.

    Read the article

  • Where does Opera store cookies that expire after restart (expire time 0)?

    - by marc
    Welcome, I'm looking for information on where Opera stores temporary cookies with expire time 0 - meaning cookies that should be deleted when the browser restarts. Does Opera create a temporary file for these cookies and delete it after restart (if yes, which file), or does it store them in memory (RAM)? I examined the "cookies4.dat" file with a hex editor and it doesn't have these "temp" 0 cookies stored inside. regards

    Read the article

  • Windows 7, file properties, date modified, how do you show seconds?

    - by Jordan Weinstein
    Anyone know a way to immediately show the seconds of a file's date-modified property in the GUI? If you create a file, any file in any directory, right-click and choose Properties, the date modified (if it's recent) will say something like "dd/mm/yyyy hh:mm, one minute ago" - reminder, this is in Windows 7. Windows XP showed it normally; then they changed something. If you wait a while, eventually you'll see the seconds - I'm not sure how long "a while" is - but this is incredibly annoying if you want to troubleshoot something that relies on the seconds of timestamps. Is there a setting, or perhaps a registry key I can change? I'm literally using Chrome, pasting in the path of the directory, to be able to see the seconds quickly (as a workaround), but it would be nice to be able to use Win7 itself.

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: "Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool - but because it can be very tedious and partly massively counter-intuitive."

    <rant>

    The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route it to a configurable listener. This gives you a lot of flexibility - you can create a single trace file, or multiple ones. There is even nice tooling around it: SvcTraceViewer from the SDK lets you open the XML trace files, and you can filter and sort by trace source and event type, aggregate multiple files... blablabla. Just what you would expect from a decent tracing infrastructure.

    Now comes Windows Azure. I was already very grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener - which is the right way to go. The only problem is that the way this works is that all traces (from all sources) get written to an ETW trace. The DiagMon then listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get "lost" - they appear at the end of your message text. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage Table - the svclog format is gone. So much for the existing tooling. To work around that problem, one approach was to write your own trace listener (!) that creates svclog files inside of local storage and to use the DiagMon to copy those. Christian has a blog post about that.

    OK, done that. Now it turns out that this mechanism does not work anymore in 1.3 with full IIS (see here). Quoting: "Some IIS 7.0 logs not collected due to permissions issues... The root cause of both of these issues is the permissions on the log files." And the workaround: "To read the files yourself, log on to the instance with a remote desktop connection." Now have fun with your multi-instance deployments....

    </rant>
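    That workaround boils down to something like the following sketch (my reconstruction, not Christian's actual code; the local resource name "SvcLogs" and the container name are assumptions): point an XmlWriterTraceListener at a local storage resource and tell the DiagMon to transfer that directory to blob storage:

        using System;
        using System.Diagnostics;
        using Microsoft.WindowsAzure.Diagnostics;
        using Microsoft.WindowsAzure.ServiceRuntime;

        public class WebRole : RoleEntryPoint
        {
            public override bool OnStart()
            {
                // Write svclog files into a local resource defined in the service
                // definition ("SvcLogs" is an assumed name for this sketch).
                LocalResource logDir = RoleEnvironment.GetLocalResource("SvcLogs");
                string file = System.IO.Path.Combine(logDir.RootPath, "role.svclog");
                Trace.Listeners.Add(new XmlWriterTraceListener(file));
                Trace.AutoFlush = true;

                // Let the diagnostic monitor copy that directory to blob storage periodically.
                DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
                config.Directories.DataSources.Add(new DirectoryConfiguration
                {
                    Path = logDir.RootPath,
                    Container = "wad-svclog", // assumed target blob container
                    DirectoryQuotaInMB = logDir.MaximumSizeInMegabytes
                });
                config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

                // Connection string setting name follows the 1.3 plugin convention.
                DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
                return base.OnStart();
            }
        }

    Note that with 1.3 and full IIS the role entry point and the web application run in different processes, so the listener must be wired up in the process that actually traces - which is exactly where the permission issues described above start to bite.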

    Read the article

  • sudo taking long time

    - by Sam
    On an Ubuntu 9 64-bit Linux machine, sudo takes a long time to start: "sudo echo hi" takes 2-3 minutes. strace on sudo shows that poll("/etc/pam.d/system-auth", POLLIN) times out after 5 seconds and that there are multiple calls (maybe a loop) to the same system call, which causes the 2-3 minute delay. Any idea why sudo has to wait for /etc/pam.d/system-auth? Any tunable to make sudo time out faster? Thanks, Samuel

    Read the article

  • Fastest booting desktop linux distro? [closed]

    - by Kim
    I'm currently running Ubuntu 9.04 on my laptop and I'm very happy with it, but boot times aren't great... So I'd like to have a second distribution on my hard disk that I can boot to quickly check my email and stuff like that. It really only needs to run Firefox and a terminal. Ext4 support would be a plus, since my Ubuntu partition is ext4. In the next couple of hours I will try xPUD and DSL. Any other suggestions? EDIT: Tried xPUD, hangs on boot.

    Read the article

  • Disable user accounts after a certain time period

    - by Joe Taylor
    Is there a way to create a local user account on a Windows 7 Professional machine that stays active only for a certain length of time, e.g. 12 months? The reason for doing this is that we loan out laptops to students for 12 months; however, some are less than prompt at bringing them back. If they were locked out of the laptop, this would force them to bring the machine back in if they hoped to continue using it. A warning beforehand would also be helpful, although I could do that with a scheduled task.

    Read the article

  • OC'ing a Sapphire 7950, problems with memory clock

    - by Cliff
    I bought a new rig with a Sapphire Radeon 7950 FleX with Boost. Good scores initially in 3DMark 11, around 7300 stock. The problem is, as soon as I start overclocking, issues arise. The core clock is 860 MHz, but rises to 925 MHz with boost applied under load, so all the OC tools show 925 as the base clock for some reason. Overclocking the core clock has been no problem, though: I got up to 1200 MHz pretty stable, but saw only an incremental increase in 3DMark 11, to 7600 from 7300, which is worrying. The real trouble starts when I touch the memory clock. As soon as I raise it even by one point, to 1251 MHz, performance goes way down: suddenly I score under 5000 graphics in 3DMark 11, no matter whether it's at 1251 MHz or 1500 MHz. I've tried adjusting every other parameter and different tools (Sapphire TriXX, Catalyst, Afterburner) - still the same. Tried upping the power, still the same. Where is the issue here?

    Read the article

  • changing timezone with dpkg-reconfigure tzdata and debconf-set-selections

    - by Carlos Campderrós
    I want to set up a script that automatically changes the timezone on a machine (running Ubuntu 11.10) and also sets the right values in the debconf database. I've tried the following, but it does not work (at the end, the current timezone remains unchanged, and if I run the dpkg-reconfigure tzdata command manually, the selected values are indeed the old ones):

        #!/bin/sh -e
        echo "tzdata tzdata/Areas select Europe" | debconf-set-selections
        echo "tzdata tzdata/Zones/Europe select Madrid" | debconf-set-selections
        echo "tzdata tzdata/Zones/America select " | debconf-set-selections
        dpkg-reconfigure -f noninteractive tzdata

    So, for now, I'm doing it by messing with the files /etc/localtime and /etc/timezone directly, but I'd rather prefer the dpkg-reconfigure and debconf way.

    Read the article

  • Which tool do you use for your to-do list?

    - by howitzer
    I've been constantly switching between tools that keep track of my to-do list:

    - Online account in SimpleGTD
    - To-do applet in my RSS aggregator
    - Outlook's built-in Task List
    - My cellphone's built-in to-do list
    - Post-it notes on my monitors at home and at work
    - Paper list in my wallet (thought about enhancing it and adding a pen like Jeff Atwood did)

    Which has proved most useful for you?

    Read the article

  • Can I Be Alerted On-Screen Each Time Someone Remote Desktops Onto My Windows 2003 Server?

    - by Sohnee
    I work all day on a Windows Server 2003 machine and have noticed people "borrowing" my machine by using Remote Desktop to log in. This is pretty much normal behaviour at the company I work at, but I'd like to know when it's happening. Is there any way I can be alerted each time someone remote desktops onto my server? A simple "Bob has logged in" would be great - and I imagine there is a facility somewhere to enable this.

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have this series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. Well, you may think I am just throwing words around, but in reality these kinds of problems make a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on exactly that subject.

    In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, “What changed?” A similar question is, “Did anything change other than what we expected to change?”

    The First Place to Check - the Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement that the Default Trace be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE) it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to be overwritten. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace like any other database trace, and the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: copy the files to a working directory first; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database.

    Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation: we can't retrieve the information using the Schema Changes History report, but it's still available.

    Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise I'm just looking at the trace files using SQL Profiler. The information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (the loginame is captured). The key is to get to the default trace files in a timely manner in order to extract the information.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How can I compare Excel serial dates WITHOUT converting to mm/dd/yy type dates?

    - by dwwilson66
    I have a table that contains a number of values representing Excel serial dates. After a number of unsuccessful attempts to compare fields, my current approach is to do comparisons between serial dates instead of calendar dates. I am trying to summarize the data - by DAY - with formulae.

    CONSIDER:

        41021          some data
        41021.625      some data
        41021.63542    some data
        41022          some data
        41022.26042    some data
        41022.91667    some data
        41023          some data
        41023.375      some data

    DESIRED RESULT:

        41021    sum of the 41021, 41021.625 and 41021.63542 data
        41022    sum of the 41022, 41022.26042 and 41022.91667 data
        41023    sum of the 41023 and 41023.375 data

    In essence, for all instances of SerialDate.SerialTime, SUM the data values associated with SerialDate.* regardless of the *.SerialTime for that date. While I can see how to do this by creating an additional dates column formatted as =TEXT(<DateField>,"mm/dd/yyyy"), I'm looking for a solution that will allow me to handle this 'conversion' in the formula itself, e.g.

        =SUMIF(TEXT(<dateRange>,"yy/mm/dd"), TEXT(<dateField>,"yy/mm/dd"), <dataRange>)

    Make sense? Anyone have any ideas? Thanks

    Read the article

  • Time Application Startup

    - by Clinton Blackmore
    Is there any way to time how long it takes an application to start up on the Mac? We were getting reports of Word 2008 taking half an hour to launch, and while we think we've resolved the problem, it would be nice, in the future, to be able to: verify the veracity of such statements, and see if any of our changes make a difference.

    Read the article
