Search Results

Search found 54098 results on 2164 pages for 'something broken'.


  • How do I setup a syslog server for my network?

    - by Solignis
    I would like to set up a syslog server that all of my VMs and servers forward their logs to. I really don't know much about what is out there, so I turn to the community. Something on Linux is fine; what I want most is alerting, like emails telling me something is not right. If there were something to sort the logs by source, that would be cool. Where would I want to run the syslog server: my admin workstation or a server/VM? Any input would be wonderful. Thanks in advance.
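
    For the alerting piece, here is a minimal sketch (not a recommendation of any specific product) of what a homegrown collector could look like: a Python script that listens for UDP syslog traffic, tags each line with its source IP, and mails anything that looks like an error. The SMTP relay and addresses are placeholders, not anything from the question.

        # Minimal sketch, not a production syslog server.
        import smtplib, socket
        from email.message import EmailMessage

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 514))          # port 514 needs root/CAP_NET_BIND_SERVICE

        while True:
            data, addr = sock.recvfrom(4096)
            line = data.decode("utf-8", errors="replace")
            print(addr[0], line)             # the sender IP tags each line by source
            if "error" in line.lower():
                msg = EmailMessage()
                msg["Subject"] = f"syslog alert from {addr[0]}"
                msg["From"] = "syslog@example.com"    # placeholder addresses
                msg["To"] = "admin@example.com"
                msg.set_content(line)
                with smtplib.SMTP("localhost") as s:  # assumes a local SMTP relay
                    s.send_message(msg)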

    Read the article

  • Tool to test a user account and password (test login)

    - by TheCleaner
    Yeah, I know I can fire up a VM or remote into something and try the password... but is there a tool or script that will simulate a login just enough to confirm or deny that the password is correct? Scenario: a server service account's password is "forgotten", but we think we know what it is. I'd like to pass the credentials to something and have it kick back with "correct password" or "incorrect password". I even thought about a drive-mapping script that passes the user account and password to see if the drive maps successfully, but got lost in the logic of making it work correctly. Something like:

    - script asks for username via msgbox
    - script asks for password via msgbox
    - script tries to map a drive to a common share that everyone has access to
    - script unmaps the drive if successful
    - script returns a popup msgbox stating "Correct Password" or else "Incorrect Password"

    Any help is appreciated. You'd think this would be a rare occurrence not requiring a tool to support it, but... well...
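
    On Windows there is a more direct route than mapping a drive: the Win32 LogonUser call either succeeds or fails for a given set of credentials. A minimal sketch, assuming Python with the third-party pywin32 package is available (the argument handling is made up for illustration):

        # Hedged sketch using pywin32 (pip install pywin32).
        import sys
        import win32con, win32security

        user, domain, password = sys.argv[1], sys.argv[2], sys.argv[3]
        try:
            token = win32security.LogonUser(
                user, domain, password,
                win32con.LOGON32_LOGON_NETWORK,      # no interactive session needed
                win32con.LOGON32_PROVIDER_DEFAULT)
            token.Close()
            print("Correct password")
        except win32security.error:
            print("Incorrect password")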

    Read the article

  • Unix: sleep until the specified time

    - by aix
    Is there a (semi-)standard Unix command that will sleep until the time specified on its command line? In other words, I am looking for something similar to sleep that takes a wakeup time rather than a duration. For example: sleeptill 05:00:00. I can code something up, but would rather not reinvent the wheel if there's already something out there. Bonus question: it would be great if it could take a timezone (as in sleeptill 05:00:00 America/New_York). Edit: Due to the nature of what I am doing, I am looking for a "sleep until T" rather than a "run command at T" solution. Edit: For the avoidance of doubt, if I run the command at 18:00 and tell it to wake up at 17:00, I expect it to sleep for 23 hours (or, in some corner cases having to do with daylight saving time, for 22 or 24 hours).
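
    For what it's worth, a small Python sketch of such a sleeptill, including the timezone bonus (assumes Python 3.9+ for zoneinfo; the wrap-to-tomorrow rule matches the 18:00/17:00 example above):

        #!/usr/bin/env python3
        # sleeptill.py HH:MM:SS [TZ]  -- sketch, not a hardened tool
        import sys, time
        from datetime import datetime, timedelta
        from zoneinfo import ZoneInfo

        hh, mm, ss = (int(x) for x in sys.argv[1].split(":"))
        tz = ZoneInfo(sys.argv[2]) if len(sys.argv) > 2 else None
        now = datetime.now(tz)
        target = now.replace(hour=hh, minute=mm, second=ss, microsecond=0)
        if target <= now:                 # already past today: wake tomorrow
            target += timedelta(days=1)
        # Subtracting two aware datetimes yields the true elapsed duration,
        # so DST transitions give the 22/24-hour corner cases described above.
        time.sleep((target - now).total_seconds())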

    Read the article

  • I want to host multiple websites on my own server. How do I set up my own child nameserver?

    - by basilmir
    I host multiple websites on my own server: four different domains, ending in .com, .net and .ro. I moved them to my own server. My domain registrar lets me define my own child nameserver with my own IP, and I've added my nameserver to my domains' nameserver lists: ns.something.ro as the first and only entry in the nameservers list, and ns.something.ro with my own IP address as the child nameserver. I've set up everything and it works (kind of). When I query my nameserver's IP address directly, I can of course resolve everything. Using "normal" external DNS servers does not work; as a result, others on the web can't resolve my domains. What's wrong? Am I missing something?
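
    One way to see whether the registry actually published the delegation and glue is to query from outside. A hedged sketch using the third-party dnspython package, with something.ro and ns.something.ro standing in for the real names; if the second lookup fails from a public resolver, the glue/A record never made it out:

        # Sketch: check delegation and glue with dnspython (pip install dnspython).
        import dns.resolver

        for rr in dns.resolver.resolve("something.ro", "NS"):
            print("delegated NS:", rr.target)

        # This is the lookup that breaks when the registry lacks the glue record:
        for rr in dns.resolver.resolve("ns.something.ro", "A"):
            print("nameserver A (glue):", rr.address)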

    Read the article

  • Is rsync corrupting my RAR?

    - by Mark Henderson
    We have two QNAP devices: one in our datacentre and one off-site. We have hundreds of password-protected RAR files stored on the QNAP that contain virtual machine image snapshots, with approx 20 of them being created each day. We synchronise the two devices using rsync, and it looks like all the files are being rsynced OK: they come over with the same file size, and all the files are present and accounted for. However, when I try to open the RAR files on the remote site, I get: Cannot open \\qnap01\FromDatacentre\Snapshots\DB001SQL1-20110626.rar. I can open the RAR files on the local site just fine, so I assume that something is getting mangled during the rsync procedure. However, the older files (pre-2011-06-20) work just fine; it's something that's only started happening in the last week. There haven't been (as far as I know) any changes to any of the devices, setup or configuration in that time. Obviously something has changed, though. Where should I start investigating?
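
    A reasonable first step is to rule the transfer in or out by comparing checksums of one broken file on both sides; a matching size alone won't catch corrupted contents. A small Python sketch (the two paths are placeholders):

        # Compare SHA-256 of a local and a remote copy of the same file.
        import hashlib, sys

        def sha256(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while chunk := f.read(bufsize):   # Python 3.8+
                    h.update(chunk)
            return h.hexdigest()

        local, remote = sys.argv[1], sys.argv[2]
        print("match" if sha256(local) == sha256(remote) else "MISMATCH")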

    Read the article

  • Redirect audio from laptop to desktop over LAN

    - by Ram Rachum
    I want to be able to play a song on my laptop and have it sound through my desktop's (infinitely better) speakers. If you're familiar with Input Director: I want something that is to audio what Input Director is to mouse/keyboard. I want something that automatically redirects all audio from the laptop to the desktop in real time, and I want that solution to require, like Input Director, minimum maintenance. Beyond the initial setup, I don't want to have to babysit the program that does this. I want something that launches automatically with Windows and just works, and also allows me to cancel it whenever I want. And also doesn't go crazy when the laptop is turned on in a different network where the desktop computer isn't available. Any suggestions for such a program? (I use Windows XP on both computers.)

    Read the article

  • Windows 7, file properties, date modified, how do you show seconds?

    - by Jordan W.
    Does anyone know a way to immediately show the seconds of a file's date-modified property in the GUI? If you create a file, any file in any directory, right-click and choose Properties, the date modified (if it's recent) will say something like "dd/mm/yyyy hh:mm, one minute ago". Reminder: this is in Windows 7. Windows XP showed it normally; then they changed something. If you wait a while, eventually you'll see the seconds. I'm not sure how long a while is, but this is incredibly annoying if you want to troubleshoot something that relies on the seconds of timestamps. Is there a setting, or perhaps a registry key I can change? I'm literally using Chrome, pasting in the path of the directory, to be able to see the seconds quickly (as a workaround), but it would be nice to be able to use Win7.
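
    As a slightly less odd workaround than Chrome, any scripting runtime can print the full timestamp. A Python sketch that lists a directory with seconds included:

        # List a directory with second-resolution modification times.
        import datetime, os, sys

        folder = sys.argv[1]
        for name in os.listdir(folder):
            ts = os.stat(os.path.join(folder, name)).st_mtime
            stamp = datetime.datetime.fromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S")
            print(stamp, name)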

    Read the article

  • nginx rewrite base url

    - by ptn777
    I would like the root URL http://www.example.com to redirect to http://www.example.com/something/else. This is because some weird WP plugin always sets a cookie on the base URL, which doesn't let me cache it. I tried this directive:

        location / {
            rewrite ^ /something/else break;
        }

    But 1) there is no redirect and 2) pages start shooting more than 1,000 requests to my server. With this one:

        location / {
            rewrite ^ http://www.example.com/something/else break;
        }

    Chrome reports a redirect loop. What's the correct regexp to use?
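
    For what it's worth, a likely fix (a sketch, untested against this exact setup): with location / both attempts also match the rewritten URL, so the request either never leaves the server or redirects forever. An exact-match location that only fires on the bare root, plus a real redirect, avoids both:

        # Only the exact root URL matches here; /something/else does not re-enter it.
        location = / {
            return 301 /something/else;
        }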

    Read the article

  • Web Content Filtering for Windows Clients

    - by djoyce
    I'm working with a small business to solve a bunch of problems. One is that their Windows 7 POS registers need web access restricted to only three remote support sites, while the back-office machine needs an unfiltered connection. I'd like something I can install and configure on the few registers to block all but those few sites. In a perfect world this would restrict the normal register user, but the admin user would not be filtered. Free is best, if it works, but a small fee would be alright too. Microsoft's Family Safety filter is close, but requires a Windows Live account, which isn't ideal, though it may be alright. Does anyone use this in a small business environment? I'd prefer something easily managed at the local machines. K9 Web Protection is interesting and I'm going to look into it more. Are there other options? It seems like someone would have made something simple like this as an open source project, but maybe not.

    Read the article

  • Why can't email clients create rules using relative dates like "yesterday"?

    - by Morgan
    I've never seen an email client in which I could easily create a rule to do something like "move messages from yesterday to a folder". Is there some esoteric reason why this would be difficult? I know I can easily create rules around specific dates, but that isn't the same thing by a long shot; am I missing something? In Outlook 2010 I can create search folders that do roughly this type of thing, but you can't create rules around a search folder... it seems like either I am missing something major, or this is terribly short-sighted.

    Read the article

  • Redirecting output to email

    - by Alpha
    I am aware that in Windows you can redirect the output of a command-line tool to the clipboard with the following snippet: mytool | clip. It also works with files: clip < myFile.txt. I also know Windows has several other devices you can use as output/input for these redirections (the printer is probably the most famous of them). However, is there something to send the text to the default email program? I would love to find that there is something like mytool | email. Is there anything like that?
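
    There is no built-in email device, but a tiny Python filter can stand in for one by handing piped text to the default mail client via a mailto: URL. A sketch, with the caveats that mailto bodies have length limits and the script name mail_pipe.py is made up:

        # mail_pipe.py -- read piped text, open it in the default mail client.
        import sys, urllib.parse, webbrowser

        body = sys.stdin.read()
        url = ("mailto:?subject=" + urllib.parse.quote("Tool output")
               + "&body=" + urllib.parse.quote(body))
        webbrowser.open(url)   # the OS routes mailto: to the default mail program

    Usage would then be: mytool | python mail_pipe.py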

    Read the article

  • Monitor torrent users on the network [closed]

    - by Usman
    I am using Active Directory on Windows Server 2008 R2; the server's IP address is 10.10.10.10 and my DSL modem's IP is 10.10.10.101. All clients use 10.10.10.101 as their default gateway to access the Internet. I don't know who is using torrents on the network or downloading something via IDM or something else. I just want to monitor which of my clients download via torrent or whatever. Is there anything in Windows Server 2008 R2 that would allow me to do this monitoring, or do I need something else?

    Read the article

  • Matching a smaller piece of text in a local variable to a larger piece of text that I have pulled in a query

    - by Hoser
    I think this is a simple question, but 30 minutes of Google searching was mostly wasted time, as all I can find is matching a variable to a 'randompieceoftext'. Anyway, suppose I have a local variable called @ServerName. This server name will be something like CCPWIQAUL. I need to match this server name to various path names of the form serverName.something.somethingelse.com. These path names are pulled from a database and will be in the column vManagedEntity.Path. How do I do something like this? Something like: is @ServerName in vManagedEntity.Path?
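
    A hedged T-SQL sketch of the usual pattern: anchor the match at the start of the path with LIKE, appending a dot so CCPWIQAUL does not also match CCPWIQAUL2 (column and variable names are as described above; @Path is a hypothetical variable for the reverse case):

        -- Match rows whose path starts with the server name followed by a dot.
        SELECT *
        FROM   vManagedEntity
        WHERE  Path LIKE @ServerName + '.%';

        -- Or, testing a single path already held in a variable:
        -- IF CHARINDEX(@ServerName + '.', @Path) = 1 ...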

    Read the article

  • How do I use a custom 503 error page with Nginx?

    - by Michael Gorsuch
    Hi. I have implemented rate limiting with Nginx (which works excellently, by the way) and would like to display a custom 503 error page. I have followed examples on the web without luck. I am running a simple configuration that looks something like this:

        listen x.x.x.x:80;
        server_name something.com;
        root /usr/local/www/something.com;

        error_page 503 /503.html;

        location / {
            limit_req zone=default burst=5 nodelay;
            proxy_pass http://mybackend;
        }

    The idea is that our rate-limited users would be shown a special page explaining what is going on. The rate limiting is working, but the built-in 503 page is rendering. Any ideas?
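
    A common cause, offered here as an assumption rather than a confirmed diagnosis: the internal redirect to /503.html is itself caught by location / and proxied to the backend, so the custom page never gets served. Giving the error page its own exact-match location usually fixes that:

        error_page 503 /503.html;

        # Serve the error page locally; "internal" keeps it from being requested directly.
        location = /503.html {
            root /usr/local/www/something.com;
            internal;
        }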

    Read the article

  • Is there a keyboard shortcut to indent a nested bullet point in a table cell the proper way?

    - by ray023
    Open Word and insert a table (1 x 1 will work just fine). Right-click in the table and, in the context menu, select "Bullets" and a bullet image from the bullet library. Type something and press Enter. Type something else, but, instead of pressing Enter, right-click and select "Increase Indent". Notice that the second item moves into the proper indentation of a nested bullet. Outside of a Word table, you would simply press Tab to get this behavior, but I want a keyboard shortcut (if available) to do this inside the table. This is what I've tried:

    - Ctrl + Tab: just indents the text, not the bullet
    - Ctrl + T: same as Ctrl + Tab
    - Ctrl + M: indents the text and the bullet, but does not change the bullet style

    Can this be done outside the right-click context menu?

    Read the article

  • Alternative to Xcode [closed]

    - by Moshe
    For those familiar with Windows, I used to use "Programmer's Notepad" and "TextPad". I want a text editor, not an IDE, with the following few features (in no particular order):

    - Syntax highlighting
    - Tabbed editing
    - Projects (groups of files together in one "project" file)
    - Spellcheck
    - Lightweight: I prefer something other than Xcode

    Updates: I am looking for something primarily for web development. Must be free. +1 for relevant link(s). GUI-based only. Conclusion: I realized that gEdit is available for OS X. I met it back on Ubuntu 6-something-or-other. We have a winner. Thanks all for input.

    Read the article

  • How to delete a file after vlc finishes playing it?

    - by Codezilla
    I know you can have VLC shut down your PC after it finishes playing something, but I'm trying to figure out a way to delete the file, preferably with a confirmation dialog box, after it finishes playing. Ideally it would only do this if at least 90% of the file was actually played, so as not to have unnecessary deletions, but that may not be possible. Running a batch file to play the video would work, but it'd have to be something easy to do; having to edit a batch file with the name of the video each time would get old fast. Any ideas?
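
    A wrapper script is one way to avoid editing a batch file per video: pass the file to the script (or associate a file type with it) and it plays, then asks. A Python sketch, assuming vlc is on the PATH; the 90%-watched check is left out, matching the doubt above:

        # Play a file in VLC, then offer to delete it.
        import os, subprocess, sys

        path = sys.argv[1]
        # --play-and-exit makes VLC quit when playback finishes, so run() returns then.
        subprocess.run(["vlc", "--play-and-exit", path])
        if input(f"Delete {path}? [y/N] ").strip().lower() == "y":
            os.remove(path)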

    Read the article

  • Program to Queue Files for Copy/Move/Delete in Linux?

    - by laliga
    I've searched the net for Linux's answer to something like Teracopy (Windows) but could not find anything suitable. The closest things I found are: Krusader (mentioned in its feature list but marked as 'not implemented yet') and MiniCopier, a Java-based app (http://a.courreges.free.fr/projets/minicopier/minicopier-en.php). rsync is not an option. Can someone recommend a simple file copy tool that can queue files for copy/move/delete? Preferably one where I can drag and drop from Nautilus. If something like this does not exist, can someone please tell me why? ...Am I the only person that needs something like this?
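
    Not a tool recommendation, just a sketch of the mechanism being asked for: a queue with a single worker, so copy jobs run one at a time instead of all at once (the paths are made up):

        # Sequential copy queue: one worker drains jobs in order.
        import queue, shutil, threading

        jobs = queue.Queue()

        def worker():
            while True:
                src, dst = jobs.get()
                shutil.copy2(src, dst)   # next job waits until this one finishes
                jobs.task_done()

        threading.Thread(target=worker, daemon=True).start()
        jobs.put(("/tmp/a.iso", "/mnt/usb/a.iso"))
        jobs.put(("/tmp/b.iso", "/mnt/usb/b.iso"))
        jobs.join()                      # block until the queue is empty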

    Read the article

  • I'm Not Bi-Polar, I'm Bi-Winning

    - by David Dorf
    On March 1st, Charlie Sheen joined Twitter and was able to amass 1M followers in 25 hours and 17 minutes, setting an official world record. So why does it take your brand so long to collect followers? Easy: your brand isn't a train wreck. Wouldn't it be great if your customers were chatting about your products as much as they're talking about Charlie #winning? There are a couple of things retailers can do. First, you can offer check-ins to your customers, which can occasionally get a "ooh, what are you buying there?" in the social network. Another method is to allow customers to "like" particular products on your Web site. Companies like Wet Seal excel at that. We've been experimenting with automatic posting from the POS, assuming a customer has opted in. When you buy something in a store, the POS can automatically post "Dave just bought something at Wet Seal" to Facebook, Twitter, and Foursquare simultaneously. We stopped short of mentioning the specific product so we don't pull a Beacon. The idea is the same: get the conversation started. Give customers a virtual water cooler where they can discuss products and influence buying decisions. The guys over at ShopSocially have done something very similar. On the Facebook page for Cafe Press, customers can claim purchases, effectively bragging on their walls. Each posting goes through the Facebook news feed and gets friends interested. They are seeing over 1,000 purchases being shared daily, and that's generating over 300,000 brand impressions. Sounds like a winning idea.

    Read the article

  • Is Perl still a useful, viable language?

    - by Bob
    I know it may have been asked before, but here goes nothing... Is Perl still something that would be considered useful? If someone were a new programmer (either completely new to programming or with just a few months/years of experience), would Perl be considered worthwhile to learn? Is Perl still used with frequency? Is it still popular? Or is Perl dying out compared to languages like Python, Ruby, PHP, ASP, .NET, etc.? Basically it boils down to this: Is it still used, and used frequently? If yes, is it dying? If no, will it make a comeback? Is it something that would be worth learning? How does it compare in demand to languages like Python in both popularity and usability/viability? Could languages like Python or Ruby be considered replacements for Perl? Also, will newer versions of Perl really bring a large improvement to the Perl community, and perhaps bring Perl back to center stage compared to other languages? EDIT: Okay, I suppose here's a better, reworded question: Is Perl still growing, or is it "dying"? Is it still a language worth learning and using? What projects does it really "shine" in compared to other languages? What makes Perl a language to choose? Essentially: is Perl growing obsolete compared to other languages, and if so, do you expect that to change, or to continue? And thank you to everyone who has answered so far; the discussion has been really interesting!

    Read the article

  • Can notes/to-dos in code comments sent to code-reviews result in an effective refactoring process?

    - by dukeofgaming
    I want to start/improve a culture of collective code ownership at my company, but at a geographically distributed level. I'd say there is some collective code-ownership mentality currently, but only within single geographical sites. This is a follow-up to this question: What is the politically correct way of refactoring other's code? I'm just wondering if submitting *just code comments* for code reviews (we have ReviewBoard, possibly upgrading to Crucible) could actually be an effective mechanism to get the conversation started on improving code, without having others feel territorial about their code. For example, if I add: //ToDo: Refactor this code and that code because of reasons X and Y and then submit it for code review, and it gets accepted... it could be considered an agreement (which I think is sometimes harder to get with new code up front). At the same time, the author (and others) might have an easier time digesting and accepting the proposal; rejecting a proposal because it might break things will no longer be a valid reason, and therefore the fear of making a change is gone. And at the same time, nobody invests 10 hours optimizing something that no one thinks is worth it and that they oppose just out of fear. This is all conjecture, but I feel something like this (submitting refactoring notes in code comments during the code-review process) would work. Has anyone done something like this in practice? If so, what have been the results?

    Read the article

  • Scanner that worked with Ubuntu 10.4 cannot be found by 13.4

    - by stevecoh1
    My computer previously ran Ubuntu 10.4. After upgrading to 13.4, my Epson scanner can no longer be found by the system. Following documentation, I find the following:

        $ sane-find-scanner
        # sane-find-scanner will now attempt to detect your scanner. If the
        # result is different from what you expected, first make sure your
        # scanner is powered up and properly connected to your computer.

        # No SCSI scanners found. If you expected something different, make sure that
        # you have loaded a kernel SCSI driver for your SCSI adapter.

        could not open USB device 0x046d/0x082b at 001:007: Access denied (insufficient permissions)
        ...
        # No USB scanners found. If you expected something different, make sure that
        # you have loaded a kernel driver for your USB host controller and have setup
        # the USB system correctly. See man sane-usb for details.
        ...

    If I instead run sudo sane-find-scanner, I get:

        $ sudo sane-find-scanner
        # sane-find-scanner will now attempt to detect your scanner. If the
        # result is different from what you expected, first make sure your
        # scanner is powered up and properly connected to your computer.

        # No SCSI scanners found. If you expected something different, make sure that
        # you have loaded a kernel SCSI driver for your SCSI adapter.

        found USB scanner (vendor=0x04b8 [EPSON], product=0x0131 [EPSON Scanner]) at libusb:001:009
        could not fetch string descriptor: Pipe error
        could not fetch string descriptor: Pipe error
        # Your USB scanner was (probably) detected. It may or may not be supported by
        # SANE. Try scanimage -L and read the backend's manpage.

    So what do I do? scanimage -L does nothing for me, and I don't know what the "backend's manpage" might be. It seems likely that this is a permissions issue, since the scanner can be found as root, but I don't know how to solve it. Can someone help?

    Read the article

  • ASP 3.0 Folder/File Permissions Settings (ASP Classic)

    - by ASP Pee-Wee
    Dear Stack Exchange, I have built a form input page in HTML that posts to an ASP handler/processor .asp file. The form handler/processor .asp file contains only <% Insert VBScript Here %> and no HTML output whatsoever. The .asp file was never intended to be a "web-viewable" .asp file the way an .asp home page or HTML file would be. It's supposed to be for my eyes only, not the public's; however, it does need to take info posted by the public and do something with it on its end. I have used VBScript/ASP 3.0 to build the form handler/processor file and would like to know how to keep someone from viewing the actual VBScript in it. I am aware of obfuscation, but I would like to know how to keep prying eyes from even being able to look at the obfuscated code in the handler/processor file. I realize that the server executes the .asp file first, before outputting anything to the browser, so I guess my main concern is that someone could "download" the form handler/processor .asp file and then view its contents on their machine. Assuming the form handler .asp file is where it is, behind the root, and is on a Windows server (no htaccess approach), how could one protect it so that it could never be viewed or simply pulled down via anonymous FTP or something like that? Is there something like "script only" permissions that the system administrator could set up for a particular folder? Remember, with shared hosting I can't go above the root. If so, would the form still be able to post? How would any of you go about protecting the .asp file in addition to obfuscation? Any help would be greatly appreciated. Thanks, ASP Pee-Wee

    Read the article

  • Code Bubbles: Disruption comes to the IDE

    - by andrewbrust
    If you’re like me, you might see the open source Eclipse IDE as a copy or, more generously, a port of Microsoft's Visual Studio for the non-.NET world. It’s not that Microsoft invented the IDE (I would credit Borland with that), but they really took the idea and ran with it for the first version of Visual Studio .NET in 2002. The question is whether someone outside of Microsoft could take the modern IDE yet another major step forward in both principle and productivity. I think that has actually happened already, and I think the innovator in question is a second-year Computer Science PhD student at Brown, named Andrew Bragdon. His project, which he calls Code Bubbles, is an IDE that allows for editing, debugging and exploration of code in “bubbles”, which remind me a little bit of the discrete note tiles in OneNote… but they’re much more than that. Bubbles actually allow for call stack traversal, saved debug sessions, sophisticated breakpoint and value-watch behaviors, and more. And because bubbles, unlike windows, are borderless, and focus on code fragments rather than whole files, the de-cluttering effect is unbelievably liberating. The best way to understand what Code Bubbles does is to watch the screencast video in the original post. Code Bubbles is an IDE for Java development. Why didn’t Microsoft come up with something like this for .NET devs? Between the existing features in Visual Studio 2010, its WPF code editor, and the fact that OneNote’s UI bears some affinity to Code Bubbles’, it’s interesting that Microsoft still has not thought outside of its own “box” to get us something like this. Heck, that’s easy for me to say. But it’s easy for you to say that you’d like something like this in Visual Studio sometime soon. That’s because the ASP.NET site within UserVoice is taking votes on this very issue. Just click this link and vote! Thanks to my fellow Microsoft Regional Director Sondre Bjellås for making me aware of Code Bubbles, and to RD Steve Smith for creating the UserVoice voting option.

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard, because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application, you usually use one of these:

        Time Measurement Method | Potential Pitfalls

        Stopwatch    | Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor-specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power-saving mode. But things do change, luckily; from Intel's Software Developer's Manual, vol. 3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

        DateTime.Now | Good, but it has only a resolution of 16 ms, which can be too coarse if you want more accuracy.

    And to report the numbers:

        Reporting Method  | Potential Pitfalls

        Console.WriteLine | OK if not called too often.
        Debug.Print       | Are you really measuring performance with debug builds? Shame on you.
        Trace.WriteLine   | Better, but you need to plug in a good output listener like a trace file. Be aware that the first time you call this method, it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library that measures the timing for you, so you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system, there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure the impulse and the location of a particle in a quantum system at the same time with infinite accuracy. For programmers, the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, which will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit to:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if this method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();

            // Reuse one stream: reset position and length before each serialization.
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);        // DataContractSerializer variant
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer   | Time in ms | N objects
        protobuf-net |        807 | 1,000,000
        DataContract |      4,402 | 1,000,000

    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized, it uses some very smart code-gen, which does not come for free. Let's try to measure that by setting the Runs value of our performance test app not to one million but to 1:

        Serializer   | Time in ms | N objects
        protobuf-net |         85 |         1
        DataContract |         24 |         1

    The code-gen overhead is significant and can take up to 200 ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data, I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky, it will make your faster code slower, because you now see GCs at times where none occurred before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall-clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.

    Read the article
