Search Results

Search found 19606 results on 785 pages for 'the thing'.


  • Bingbot seems to be adding "ForceRecrawl: 0" to URLs when crawling my sites

    - by Louis Somers
    I'm seeing this in the iis-logs of two websites that I maintain: GET /an/existing/page/on/my/site+ForceRecrawl:+0 - 80 - 207.46.195.105 HTTP/1.1 Mozilla/5.0+(compatible;+bingbot/2.0;++http://www.bing.com/bingbot.htm) I get about one or two of these per day from these IP addresses: 207.46.195.105, 65.52.110.190... and more, all belonging to msnbot-ip.search.msn.com. Perhaps Microsoft has a bug in their crawler? Anyway, doing a search on "ForceRecrawl: 0" in major search engines comes up with a bunch of random sites. Doing the search on StackOverflow or here gave no results (to my amazement). Am I the only one seeing this? I first noticed these on the 9th of this month, and I'm seeing them pass almost daily since... Another thing that I think is crazy is that the URL http://www.bing.com/bingbot.htm redirects to mail.live.com (hotmail). Currently I'm returning 404s, but I'm considering catching these, stripping the trailing " ForceRecrawl: 0" and processing them as if they were legitimate URLs. Could anyone shed some light on this? Could it have to do with some configuration or other setting in Bing's Webmaster Tools?
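    In case it helps anyone reproduce this, here is roughly how I've been tallying the hits (a sketch that assumes a Unix toolset such as Cygwin on the web server, a placeholder log path, and the default W3C log layout where c-ip is the 9th field; adjust the field number to your configured columns):

        # Count ForceRecrawl requests per client IP across the IIS logs
        grep -h "ForceRecrawl" /path/to/iis/logs/u_ex*.log \
          | awk '{ print $9 }' | sort | uniq -c | sort -rn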

    Read the article

  • Get rid of Vista security warning

    - by Ken
    I found this question. The question exactly matches my problem, but the solution doesn't work. In the Properties window, I see "Security: This file came from another computer and might be blocked to help protect this computer. ((Unblock))". When I click Unblock and Apply, the Security section disappears. But when I go to run it again, I still get the security warning. If I right-click and choose Properties on the exact same thing, the Security section is back, offering me the chance to Unblock it again. So unblock seems exactly as useless as the "Always ask" checkbox. Anyone seen this before? How do you really Unblock an app that Vista doesn't want to let you Unblock?

    Read the article

  • WP7 – Oh, You Wanted to Develop On Your New Phone? That'll Cost Ya!

    - by D'Arcy Lussier
    Had an interesting Twitter convo today about WP7 development. The question was raised of how to use a WP7 device as the deployment target from within VS.NET. Thinking that this would be an *obvious* question, I replied that you need to set the right value in one of the drop-down lists in the IDE… I did this, hooked up my device, then tried to run my app, just as a final test that it was as easy as I thought it would be. It wasn't. So first, your phone can't be locked, so make sure you unlock it. Also, don't kill the Zune software when you notice it automagically started – it's needed for VS.NET to deploy to your device. Finally, you need to register your device for development. Aiden Caine has a great article on what you need to do for this, but in a nutshell you need to launch the Windows Phone Developer Registration program found in the Windows Phone Developer Tools folder. Now, here's the catch to all of this: You must have a Windows Phone AppHub account. As in a paid account. That's right – to do development on your actual device, you need to have a $99 ($120 in Canada) AppHub developer membership. Now, I get this – if Microsoft didn't put this restriction in place, then they'd be back in Mobile 6.x land where anyone could install whatever app for whoever, whenever, without any standards being upheld. This is the same thing that Apple does with their marketplace; it's not something unprecedented. But it is something that will be new to the majority of Microsoft developers who have lived without application restrictions for years. Now, if you're in the US, you have the opportunity to get a rebate on that $99 fee from Microsoft if you publish two apps successfully. You can get more details on this offer here.

    Read the article

  • Accessing the output of a Bash pipe with 'read'

    - by Karthik
    I'm trying to pipe some data from a Bash pipe into a Bash variable using the read command, like this: $ echo "Alexander the Grape" | read quot $ echo $quot $ But quot is empty. Some Googling revealed that this is not a bug; it's an intended feature of Bash. (Section E5 in the FAQ.) But when I tried the same thing in zsh, it worked. (Ditto for ksh.) Is there any way to make this work in Bash? I really don't want to have to type: $ quot=$(echo "Alexander the Grape") Especially for long commands.
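    For completeness, the workarounds I've collected so far (sketches; the pipe fails because each stage of a Bash pipeline runs in a subshell, so the variable set by read dies with that subshell, and the lastpipe option assumes Bash 4.2 or newer):

        # Here-string: read runs in the current shell, no pipe needed
        read quot <<< "Alexander the Grape"

        # Process substitution also works for longer pipelines
        read quot < <(echo "Alexander the Grape" | tr 'a-z' 'A-Z')

        # Bash 4.2+: run the last stage of the pipeline in the current shell
        # (needs job control off, so mainly useful in scripts)
        shopt -s lastpipe
        echo "Alexander the Grape" | read quot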

    Read the article

  • Win7-Server2008 RDP connection hangs on "Securing Remote Connection" for 20-30 seconds

    - by JohannesH
    I have a problem for which googling has turned up nothing, except this question on Experts Exchange, which I borrowed most of the text from. :) When I connect via Remote Desktop to a new Windows 2008 R2 server, it takes 20-30s to get past the "Securing Remote Connection" message during the login. If the password is wrong, it does this every time you attempt a login (i.e. it's not a one-time thing). However, after a successful login attempt, subsequent logins to the same server go faster. Most servers run on VMware here, but I don't know if that has anything to do with it.

    Read the article

  • Old (Leopard) Exposé on Snow Leopard (for Mac)

    - by Cawas
    I'm amazed this question hasn't been asked here. There are many discussions about it out there, and even a way to remove the blue glow. Of course I've already filed my complaint with Apple... But is there a way to have the old Exposé on Snow Leopard? Or maybe a mix of both? The only thing I like about the new one is viewing minimized windows, but not always. So I'd prefer being able to tweak it a little, but just having the old Exposé back would be fine.

    Read the article

  • Microsoft Test Manager error in displaying test steps caused by malware

    - by terje
    Sometimes the tool is blamed for errors which are not the fault of the tool – this is one such story. It was, however, not so easy to get to the bottom of it, so I hope sharing this story can help some others. One of our test developers started to get this message inside the test steps part of a test case in MTM, saying "Could not load file or assembly '0 bytes from System, Version=4.0.0.0,……..". The same error came up inside Visual Studio when we opened a test case there. Then we noted a similar error on another piece of software: a System.BadImageFormatException, with the same message as above, but just for framework 2.0. We found this description which pointed to a malware problem (see the bottom of that post), namely a fake anti-spyware program called "Additional Guard". We checked the computer in question using the Malwarebytes Anti-Malware tool. It found and cleaned out 753 registry keys!! After this cleanup operation the error was gone. This is a great tool! The "Additional Guard" program had been inadvertently installed and then uninstalled afterwards, but the corrupted keys were of course not removed. We also noted that this computer had full corporate virus scanning and malware protection, but this nasty little thing still slipped through. Technorati Tags: Malware,BadImageFormatException,Microsoft Test Manager,Malwarebytes

    Read the article

  • Variable parsing with Bash and wget

    - by Bill Westrup
    I'm attempting to use wget in a simple bash script to grab a jpeg image from an Axis camera. This script outputs a file named JPEGOUT instead of the desired output, which should be a timestamped jpeg (ex: 201209292040.jpg). Changing the variable in the wget statement from JPEGOUT to $JPEGOUT makes wget fail with a "wget: missing URL" error. The weird thing is wget parses the $IP variable correctly. No luck on the output file name. I've tried single quotes, double quotes, parentheses: all with no luck. Here's the script: #!/bin/bash IP=$1 JPEGOUT= date +%Y%m%d%H%M.jpg wget -O JPEGOUT http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25 Any ideas on how to get the output file name to parse correctly?
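    For comparison, this is roughly what I expect the working version to look like (an untested sketch; it assumes the only problems are the missing command substitution, the missing $ when the variable is used, and the unquoted URL, where the & would otherwise send wget to the background):

        #!/bin/bash
        IP=$1
        JPEGOUT=$(date +%Y%m%d%H%M).jpg
        # Quote the URL so & is not treated as a background operator
        wget -O "$JPEGOUT" "http://$IP/axis-cgi/jpg/image.cgi?resolution=640x480&compression=25"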

    Read the article

  • How do I get KLIPS or NETKEY on 11.10 server?

    - by Incognito
    I'm attempting to run OpenSWAN on my Ubuntu 11.10 server. All I've done so far is install openswan from the package manager and attempt to set up conf files. However, IPSec support seems to be broken, thus OpenSWAN can't do its thing. Attempt to start IPSec $ sudo ipsec setup --start ipsec_setup: Starting Openswan IPsec 2.6.28... ipsec_setup: No KLIPS support found while requested, desperately falling back to netkey ipsec_setup: Even NETKEY support is not there, aborting Verify IPSec $ sudo ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.28/K(no kernel code presently loaded) Checking for IPsec support in kernel [FAILED] Checking that pluto is running [FAILED] whack: Pluto is not running (no "/var/run/pluto/pluto.ctl") Checking for 'ip' command [OK] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] IPSec Version $ sudo ipsec version Linux Openswan U2.6.28/K(no kernel code presently loaded) See `ipsec --copyright' for copyright information. Linux build: $ uname -a Linux metabox 2.6.18-028stab092.1 #1 SMP Wed Jul 20 19:47:12 MSD 2011 x86_64 x86_64 x86_64 GNU/Linux How can I go about correcting this problem with IPSec? This is a hosted VPS, and I'd like to avoid a kernel rebuild if I can find some other alternative.
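    For reference, this is the check I've been using to see whether the NETKEY (xfrm) pieces are even available on this kernel (a sketch; the module names are my assumption of the usual ones on stock kernels, and on an OpenVZ kernel like 2.6.18-028stab the guest typically can't load modules at all):

        # Try to load the modules that normally provide NETKEY support
        for m in af_key xfrm_user esp4 ah4 xfrm4_tunnel; do
            sudo modprobe -v "$m" || echo "missing: $m"
        done
        # If the kernel exposes its config, check for the relevant options
        zgrep -E 'CONFIG_NET_KEY|CONFIG_XFRM_USER|CONFIG_INET_ESP' /proc/config.gz 2>/dev/null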

    Read the article

  • Sendmail background process sometimes processes queue, but sendmail -q always works

    - by markmcb
    I'm using sendmail version 8.14.4 on Fedora 15 to send email. My Rails app uses delayed_job to queue up emails. Messages will queue up in /var/spool/mqueue as expected, but don't always get processed. I can see the messages, and sendmail is definitely running in the background. Restarting the process does nothing. However, when I issue the sendmail -q command, sendmail gets to work and starts sending. The really odd thing is that this behavior only occurs sometimes. Other times messages queue up and are delivered as expected. I've tried tweaking various sendmail configs to reduce the time between queue processing runs (for example, adding define(`confMIN_QUEUE_AGE', `0')dnl to /etc/mail/sendmail.mc), but nothing seems to do the trick. Any ideas what might be the root cause?
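    As a stop-gap I'm considering just letting cron drain the queue until I understand why the daemon's own queue runner stalls (a sketch; the binary path is the Fedora default and the cron file name is hypothetical):

        # /etc/cron.d/sendmail-queue -- force a queue run every 5 minutes
        */5 * * * *  root  /usr/sbin/sendmail -q >/dev/null 2>&1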

    Read the article

  • Restart Fibre channel controller after blade bootup IBM HS bladecentre

    - by Spence
    I have a remote system that needs to resume on startup. If the system is simply powered on, the blades boot before the SAN is online, and then the only thing you can do is restart the systems. Is it possible to restart the fibre channel controller? That way I could have a system restart the controller after boot, connect to the SAN and then restart all servers requiring SAN information. Please note that I'm not a sysadmin, just shooting for ideas to get a clean startup to work; apologies if my terminology is wrong.

    Read the article

  • Windows 7 won't recognize backup set; can I script extracting the files in some other way?

    - by datatoo
    The Windows 7 Backup/Restore tool created multiple backup sets, and I was able to restore the oldest version, but not the most recent, which is not seen by the application. I do see all of the zip files, and there are hundreds in the later versions. Is there a way to extract each of these correctly outside of the regular restoration method? Perhaps scripting an extract of each day, one after another? Further clarification: the backup files were all made to an external drive. The original computer died completely: power supply, drives, everything. I am trying to reconstruct as much as possible, and the only backup set recognized is from 6 months earlier. This was recovered over a new install, but unzipping thousands of zip files is not really a simple unzip-and-copy project, as the original paths are not a simple thing to reconstruct.

    Read the article

  • Tomcat 6 IP restrictions

    - by KB22
    I need to protect a certain folder within a web application of mine from access from outside a defined IP range. With O'Reilly's Tomcat Tips I figured that: <Context path="/path/to/secret_files" ...> <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow="127.0.0.1" deny=""/> </Context> is the way to go? I'm not that familiar with Tomcat configuration, so I'm a little unsure where to put these restrictions. Do I put this within my web.xml, or is this something I need to add to some general Tomcat conf file?

    Read the article

  • SketchUp: Weird regions in sketch

    - by Josh M.
    I'm using SketchUp 8.0.15158. My sketch has weird regions/lines in it that I can't explain or get rid of. I've redrawn almost the whole thing and the lines persist! See here, highlighted in yellow: These dotted lines only show up when I triple-click the back face to select the whole object. Here's what the sketch looks like without selections: What are these lines and how can I get rid of them? They are causing weird issues that I can't seem to get around.

    Read the article

  • Making Google Talk messages appear on all logged in clients

    - by Ryan Hayes
    We're using Google Talk as our unofficial-official chat client around the office at work. One thing that poses a big problem almost every day is the fact that Google Talk only sends messages to the client that you last used. Even though you may be logged into GTalk on 3 different machines, if you start talking on one machine, that becomes your "active" machine, and if you go to another machine, you will still only get messages on the last active machine. Is there a way to force Google Talk to send messages to ALL logged-in clients, regardless of which client you are actively using? That way, you don't miss any messages during the time between getting up from the active machine and making the new client "active".

    Read the article

  • What Would Google Do?

    - by David Dorf
    Last year I read Jeff Jarvis' book What Would Google Do? and found it very interesting. Jeff is a long-time journalist who's been studying technology, and more specifically the internet. He used his skills to reverse-engineer Google into a list of "Google rules," then goes on to describe a futuristic world driven by these rules. It's an interesting look at crowd-sourcing, openness, and collaboration across many industries, including retail (Google Shops). This year Jeff Jarvis will be a keynote speaker at CrossTalk, Oracle's user conference dedicated to the retail industry. This year's theme is... Retail Redefined: Redesign. Reinvigorate. Reimagine. I think that's pretty appropriate given the massive changes the industry has undergone during the last three years. The thing I really like about this conference is that we try to let the retailers do most of the talking. I'm very interested in hearing about the innovative projects they've got brewing, and where they think our industry is heading. I'll be speaking, but I'm not sure about what, so let me know of any interesting topics.

    Read the article

  • Excel hyperlink not redirecting properly (bug?)

    - by Andrej
    I don't know if this is the right place to post this question, but I have an Excel hyperlink problem. Here's the thing. I click on, let's say, "A1", copy the link in it (http://www.godaddy.com/domains/searchresults.aspx?ci=54814), right-click on the hyperlink and copy that SAME URL as the link (if it is not automatically detected and changed). When I go to click on it, I am redirected to http://www.godaddy.com/domains/search.aspx?ci=53972. If I copy and paste the link directly into the browser, it works fine. Does anybody know what's going on? Thank you for your time. Andrej

    Read the article

  • The procedure entry php_mb_check_encoding_list could not be located

    - by BlackFire27
    I get this error: The procedure entry php_mb_check_encoding_list could not be located in the dynamic link library php_mbstring.dll. The error is related to the php.ini settings, I think; I suspect it's having trouble locating the extension. I put an environment path to php.exe and its extensions folder, but when I open the command line and call php.exe, that error is thrown. I use Windows. I have got this set in my php.ini: ; On windows: extension_dir = "C:\xampp\php\ext" I installed XAMPP; the PHP version is the latest. Another weird thing: when I try to print out phpinfo, nothing is printed at all: <?php echo phpinfo(); ?> Nothing comes out?! The question is why, and how can I prevent the error from being thrown?

    Read the article

  • PHP not loading php.ini in directory or following error_reporting() on Windows 7

    - by Marcus
    Normally I develop at the E_ALL error level, but for sanity on this project I want notices and strict warnings off. So initially I tried: error_reporting(E_ALL & ~(E_STRICT|E_NOTICE)); and several other combinations of the same thing; nothing worked. Next I tried to create a local php.ini in the directory with error_reporting = E_ALL & ~E_NOTICE but nope, that didn't work either. phpinfo() is reporting: Scan this dir for additional .ini files: (none) Can someone help me fix either of these problems? Preferably both! Thanks! I'm running PHP Version 5.2.13 on Apache/2.2.14 under Windows 7 x64.

    Read the article

  • Measuring 'total bytes written' under Linux

    - by badnews
    We're quite interested in exploring the possibility of using SSD drives in a server environment. However, one thing that we need to establish is expected drive longevity. According to this article, manufacturers are reporting drive endurance in terms of 'total bytes written' (TBW). E.g. from that article a Crucial C400 SSD is rated at 72TB TBW. Do any scripts/tools exist under the Linux ecosystem to help us measure TBW? (and then make a more educated decision on the feasibility of using SSD drives)
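    The closest thing we've found so far is reading the drive's SMART counters and the kernel's block-device statistics (a sketch; attribute 241 / Total_LBAs_Written is vendor specific, and the 512-byte sector assumption should be verified against the drive's datasheet):

        # Lifetime writes reported by the drive itself (needs smartmontools)
        sudo smartctl -A /dev/sda | grep -i -E 'Total_LBAs_Written|Host_Writes'

        # Writes since boot from the kernel (field 7 of /sys/block/<dev>/stat is sectors written, in 512-byte units)
        awk '{ printf "%.1f GiB written since boot\n", $7 * 512 / 1024^3 }' /sys/block/sda/stat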

    Read the article

  • No USB 04b4:0307 mike input after 10.04

    - by Papou
    I am using a USB phone that is in fact a so-called "sound card" based on the 04b4:0307 chip. In fact, I have two different phones using 04b4:0307, and I have a sound USB key too. This, I believe, is the start of why they call 04b4:0307 "ubiquitous" (instead of oh four bee...). But not "eternal". The mike worked in Ubuntu 9.10 and 10.04 but not later ([email protected]). "Not working" means that 04b4:0307 shows in Sound Preferences but that its vu-meter stays mute. I have posted the full lsusb output and the results of tests on various systems here: http://www.papou.byethost9.com/tmp/1043601.html Note: tests were done on VirtualBox (thankfully). But Ubuntu's Unity no longer works on VirtualBox, so I used the LinuxMint equivalent. That's fate. I could not find any problem report close enough. What should be my next step? I believe the problem occurs in the snd-usb-audio module. One thing I might try, if I knew how, is building a DEB from the module's 10.04 source. I can hack DEBs. Any hint is welcome on how to make a DEB that overrides a module packaged with the kernel; I mean that the newly installed module should take precedence and be loaded instead of the module installed with the kernel. TIA!
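    For the record, my rough plan for the override (a sketch that assumes I can build snd-usb-audio.ko from the lucid 10.04 source against the running kernel's headers; modules placed in the updates/ directory are picked up in preference to the stock ones when depmod rebuilds the dependency map):

        # Assumes snd-usb-audio.ko has already been built from the 10.04 source tree
        sudo install -D -m 644 snd-usb-audio.ko \
            /lib/modules/$(uname -r)/updates/snd-usb-audio.ko
        sudo depmod -a
        sudo modprobe -r snd-usb-audio && sudo modprobe snd-usb-audio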

    Read the article

  • Are all Bluetooth adapters the same?

    - by ldigas
    I have a wireless Bluetooth mouse which I'm not using. It used to have a Bluetooth adapter with it, which I lost a long time ago... (don't ask). Since my regular mouse just died (bad contact in the cable from messing with it too much), I was thinking of just buying a new generic Bluetooth adapter. Are all those adapters the same thing? Or could the one that came with the mouse be somehow different? Edit by ldigas: How would one find out what Bluetooth standard/class/adapter one needs? (I don't see anything useful written on the sticker on the mouse.) Or to put it bluntly: will it work with this one, in your opinion?

    Read the article

  • Repeated Reporting Services Login issue when deploying through BIDS to a remote server

    - by Richard Edwards
    We are having a problem deploying a Reporting Services report to a SQL Reporting Services server that is configured in SharePoint Integrated mode. I can successfully deploy to the SharePoint document libraries set up for reports and data connections if I do it locally from the box that SharePoint and Reporting Services are deployed on. If I try to do the same thing with the exact same deployment properties from a remote box, I constantly get a Reporting Services Login dialog popping up, and no combination of domain\username and password will work. I've even tried the machine's local admin account and still nothing. Any ideas where to start looking?

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use one of these:

    Time measurement method and potential pitfalls:
      - Stopwatch: Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Designer's vol3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
      - DateTime.Now: Good, but it has only a resolution of 16ms, which may not be enough if you want more accuracy.

    Reporting method and potential pitfalls:
      - Console.WriteLine: OK if not called too often.
      - Debug.Print: Are you really measuring performance with debug builds? Shame on you.
      - Trace.WriteLine: Better, but you need to plug in some good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure the impulse and location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:
      1. Measure with a profiler to get an idea where potential bottlenecks are.
      2. Measure again with tracing only the specific methods to check if this method is really worth optimizing.
      3. Optimize it.
      4. Measure again. Be surprised that your optimization has made things worse.
      5. Think harder.
      6. Implement something that really works.
      7. Measure again.
      8. Finished! - Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net       807        1000000
        DataContract     4,402        1000000

    Nearly a factor 5 faster and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1:

        Serializer      Time in ms    N objects
        protobuf-net        85        1
        DataContract        24        1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000-40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?

    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When you encounter, after much analysis, something unreasonable you cannot explain, then chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I make the code changes and measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky it will make your faster working code slower, because you now see GCs at times where none were before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock compared to optimizing for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.

    Read the article

  • Elastic versus Distributed in caching.

    - by Mike Reys
    Until now, I hadn't heard about Elastic Caching. Today I read Mike Gualtieri's blog entry. I immediately thought about Oracle Coherence and got a little scared as I read through it. Elastic Caching is the next step after Distributed Caching. As we've always positioned Coherence as a Distributed Cache, I thought for a brief instant that Oracle had missed a new trend/technology. But then I started reading the characteristics of an Elastic Cache. The Forrester definition: software infrastructure that provides application developers with data caching services that are distributed across two or more server nodes, that consistently perform as volumes grow, that can be scaled without downtime, and that provide a range of fault-tolerance levels. Hey, wait a minute, doesn't Coherence fulfill all these requirements? Oh yes, I think it does! The next definition in the article is about Elastic Application Platforms. This is mainly more of the same with the addition of code execution. Now, there is analytics functionality in Oracle Coherence. The analytics capability provides data-centric functions like distributed aggregation, searching and sorting. Coherence also provides continuous querying and event handling. I think that when it comes to providing an Elastic Application Platform (as in the Forrester definition), Oracle is close, nearly there. And what's more, as the Elastic Platform is the next big thing towards the big C word, Oracle Coherence makes you cloud-ready ;-) There you go! Find more info on Oracle Coherence here.

    Read the article
