Search Results

Search found 15255 results on 611 pages for 'michael little'.

Page 12/611

  • SQL Azure Federations and Semantic Search by the SQL product team in London tonight (Monday)

    - by simonsabin
    Don’t forget that tonight we have Michael Rys from the SQL Server Product Team presenting on the Federation support coming to SQL Azure and the semantic search coming in SQL Server Denali. This is a must-attend evening for anyone who is serious about scaling SQL or doing search in SQL Server. Michael also wears a few other hats, including Microsoft’s representative on the W3C XML Query Working Group. To register, go to http://sqlsocial20110613.eventbrite.com/  PS: Beer and pizza will be laid on...(read more)

    Read the article

  • TortoiseSVN and Subversion Cookbook Part 3: In, Out, and Around

    Subversion doesn't have to be difficult, especially if you have Michael Sorens's guide at hand. After dealing in previous articles with checkouts and commits in Subversion, and covering the various file-manipulation operations that Subversion requires, Michael now turns in this article to file macro-management: the operations, such as putting things in and taking things out, that deal with repositories and projects.

    Read the article

  • T-SQL Tuesday #006: LOB, row-overflow and locking behavior

    - by Michael Zilberstein
    This post is my contribution to T-SQL Tuesday #006, hosted this time by Michael Coles. Actually, this post was born last Thursday when I attended Kalen Delaney's "Deep dive into SQL Server Internals" seminar in Tel-Aviv. I asked a question, Kalen didn't have an answer at hand, so during a break I created a demo in order to check a certain behavior. The demo comes later in this post, but first a small teaser. I have a MyTable table with 10 rows. I take 2 rows that reside on different pages. In the first session...(read more)

    Read the article

  • Monitoring Database disk space

    - by Michael Freidgeim
    The article Data files: To Autogrow Or Not To Autogrow? recommends NOT relying on auto-grow, because it causes delays at unplanned times. We should monitor database files (both data and log) and, if they are close to maximum capacity, manually increase the size. However, it doesn't give references on how to monitor the free space inside databases. I've tried to find out how to do it. It can be done manually by executing sp_spaceused for the database in question, or with sp_SOS (which can be downloaded from http://searchsqlserver.techtarget.com/tip/Find-size-of-SQL-Server-tables-and-other-objects-with-stored-procedure). Alternatively, you can run SQL commands as suggested in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=82359 by Michael Valentine Jones: select [FREE_SPACE_MB] = convert(decimal(12,2), round((a.size - fileproperty(a.name, 'SpaceUsed')) / 128.000, 2)) from dbo.sysfiles a. A more useful article, Monitor database file sizes with SQL Server Jobs, describes how to set up monitoring. Finally, I found the excellent article Managing Database Data Usage With Custom Space Alerts, which can be followed even by support personnel without much DBA experience.
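
    For reference, a minimal C# sketch of how the free-space query above might be run from a monitoring job (my own sketch, not from the article; the connection string and database name are assumptions):

        using System;
        using System.Data.SqlClient;

        class FreeSpaceCheck
        {
            static void Main()
            {
                // Hypothetical connection string; point it at the database you want to monitor.
                const string connectionString = "Server=.;Database=MyDatabase;Integrated Security=true";

                const string sql =
                    @"select a.name,
                             convert(decimal(12,2),
                                 round((a.size - fileproperty(a.name, 'SpaceUsed')) / 128.000, 2)) as FREE_SPACE_MB
                      from dbo.sysfiles a";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        // Print free space per file; threshold checks or alerts would go here.
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: {1} MB free", reader.GetString(0), reader.GetDecimal(1));
                        }
                    }
                }
            }
        }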

    Read the article

  • URL masking with .htaccess

    - by Michael Nguyen
    I need a hardcore programmer to help me with URL masking. So this is my situation. My website www.michaelfotograf.dk/blog/ is the main site that needs to be configured. I have another website/webhotel called www.umagepar.dk, and www.umagepar.dk is redirected to www.michaelfotograf.dk/blog/. The blog is an ongoing project where I post a lot of stuff to get my ranking higher on Google. www.umagepar.dk, which redirects to www.michaelfotograf.dk/blog/, is also a project on its own, so I do not want people to know that it is a blog connected with www.michaelfotograf.dk/blog/. I therefore need to mask www.michaelfotograf.dk/blog/ so it will show as www.umagepar.dk in the URL in the address bar at all times! What I need is a programmer who can do this for me. Name your price, and I'll see if I can afford it. Michael

    Read the article

  • Organic Business Networks -Don't Miss This Webcast!

    - by Michael Snow
    TUNE IN TODAY!! Oracle Social Business Thought Leaders Webcast Series, Thursday, June 21st, 10am PST: Organic Business Networks: Doing Business in a Hyper-Connected World. Organic business networks connect people, data, content, and IT systems in a flexible, self-optimizing, self-healing, self-configuring and self-protecting system. Join us for this webcast and hear examples of how businesses today can effectively utilize the interconnectedness of emerging business information environments, adapt to changing conditions, and leverage assets effectively to thrive in a hyper-connected, globally competitive, information-driven world. Listen as featured speaker Michael Fauscette, GVP, Software Business Solutions, IDC, discusses:

    * Emerging trends in social business that are driving transformative changes today
    * The dynamic characteristics that make up social, collaborative, and connected enterprises
    * Effective ways that technology, combined with culture and process, provides unique competitive advantage through new organic networked business models

    Register now for the fifth webcast in the Social Business Thought Leaders Series, “Organic Business Networks: Doing Business in a Hyper-Connected World.”

    Read the article

  • Bluetooth device paired and connected (no sound)

    - by Michael
    I've got a Bluetooth headset which works great in both Windows 8 and Android 4.2; however, on Ubuntu (13.10) it just doesn't seem to work. I installed Blueman; it paired and connected successfully when I tried the Audio Sink profile, but it still doesn't show up in my Sound Settings nor in PulseAudio, which I installed and tried. All there is in my Sound Settings is "Analog Output". I tried several fixes, like changing and adding things in /etc/bluetooth/audio.conf, without any success. I've restarted the bluetooth service several times in the process as well. Let me know if you need more information from me and my system. Kind regards, Michael.

    Read the article

  • Setting up Matcher for String phrase match in file

    - by randomCoder
    Having trouble figuring out how to match a phrase string against a phrase in a file stream. The file I'm dealing with contains random words such as "3 little pigs built houses" and "1 little pig went to the market", etc., for many lines. Using "little pig" as my pattern and matcher.find() I can locate 2 matches: "little pig" and "little pigs". However, I only want it to match "little pig". What can I do? I thought about using matcher.lookingAt(), but I wouldn't know how to set a proper region when I can't rely on the phrases I'm matching being on separate lines in the file.
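
    The usual fix here is to anchor the pattern with a word boundary so that "little pigs" no longer qualifies; \b works the same way in Java's Pattern as elsewhere. A small sketch of the idea (written in C# as an illustration, not the asker's Java code):

        using System;
        using System.Text.RegularExpressions;

        class WordBoundaryMatch
        {
            static void Main()
            {
                string text = "3 little pigs built houses and 1 little pig went to the market";

                // \b after "pig" requires the phrase to end at a word boundary,
                // so "little pigs" is rejected while "little pig" still matches.
                var pattern = new Regex(@"\blittle pig\b");

                foreach (Match m in pattern.Matches(text))
                {
                    Console.WriteLine("Found \"{0}\" at index {1}", m.Value, m.Index);
                }
            }
        }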

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking. The results were exactly what I would have hoped: the ConcurrentQueue was faster with multi-threading in almost all situations. In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified.

    So I set out to see what the improvements would be for the ConcurrentDictionary; would it have the same performance benefits as the ConcurrentQueue did? Well, after running some tests and multiple tweaks and tunes, I have good and bad news.

    But first, let's look at the tests. Obviously there are many things we can do with a dictionary. One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache. So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor. This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit"). However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss"). So here's the Accessor that will run the tests:

        internal class Accessor
        {
            public int Hits { get; set; }
            public int Misses { get; set; }
            public Func<int, string> GetDelegate { get; set; }
            public Action<int, string> AddDelegate { get; set; }
            public int Iterations { get; set; }
            public int MaxRange { get; set; }
            public int Seed { get; set; }

            public void Access()
            {
                var randomGenerator = new Random(Seed);

                for (int i = 0; i < Iterations; i++)
                {
                    // give a wide spread so will have some duplicates and some unique
                    var target = randomGenerator.Next(1, MaxRange);

                    // attempt to grab the item from the cache
                    var result = GetDelegate(target);

                    // if the item doesn't exist, add it
                    if (result == null)
                    {
                        AddDelegate(target, target.ToString());
                        Misses++;
                    }
                    else
                    {
                        Hits++;
                    }
                }
            }
        }

    Note that, so I could test different implementations, I defined a GetDelegate and an AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test:

    * Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object.
    * Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access.
    * ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely.

    The approach to each of these is also fairly straightforward. Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock:

        Action<int, string> addDelegate = (key, val) =>
        {
            lock (_mutex)
            {
                _dictionary[key] = val;
            }
        };
        Func<int, string> getDelegate = key =>
        {
            lock (_mutex)
            {
                string val;
                return _dictionary.TryGetValue(key, out val) ? val : null;
            }
        };

    Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary.

    Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex:

        Action<int, string> addDelegate = (key, val) =>
        {
            _readerWriterLock.EnterWriteLock();
            _dictionary[key] = val;
            _readerWriterLock.ExitWriteLock();
        };
        Func<int, string> getDelegate = key =>
        {
            string val;
            _readerWriterLock.EnterReadLock();
            if (!_dictionary.TryGetValue(key, out val))
            {
                val = null;
            }
            _readerWriterLock.ExitReadLock();
            return val;
        };

    And finally the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple:

        Action<int, string> addDelegate = (key, val) =>
        {
            _concurrentDictionary[key] = val;
        };
        Func<int, string> getDelegate = key =>
        {
            string s;
            return _concurrentDictionary.TryGetValue(key, out s) ? s : null;
        };

    Then I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to access the cache (as specified in Accessor.Access() above), then let them fly and see how long it took them all to complete. Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances. All times are in milliseconds.

        Dictionary with Mutex Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            7916             3285
        10           8293             3481
        100          8799             3532
        1000         8815             3584

        Dictionary with ReaderWriterLockSlim Locking
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            8445             3624
        10           11002            4119
        100          11076            3992
        1000         14794            4861

        Concurrent Dictionary
        ---------------------------------------------------
        Accessors    Mostly Misses    Mostly Hits
        1            17443            3726
        10           14181            1897
        100          15141            1994
        1000         17209            2128

    The first test I did across the board is the Mostly Misses category. The mostly-misses case (more adds, because the data requested was not in the dictionary) shows an interesting trend: in both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution. But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds". So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results.

    So I tweaked the data so that the number of keys was much smaller than the number of iterations, to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache as to need to add it). And yes, indeed here we see that the ConcurrentDictionary is faster than the standard Dictionary. I have a strong feeling that as the ratio of hits to misses gets higher and higher these numbers get even better as well. This makes sense, since the ConcurrentDictionary is read-optimized.

    Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement; I think this is largely because on the 10,000,000-access test it quickly ramped up to the correct capacity and concurrency, and thus the impact was limited to the first few milliseconds of the run.

    So what does this tell us? Well, as in all things, ConcurrentDictionary is not a panacea. It won't solve all your woes and it shouldn't be the only Dictionary you ever use. So when should we use each?

    Use System.Collections.Generic.Dictionary when:

    * You need a single-threaded Dictionary (no locking needed).
    * You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed).
    * You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed).

    And use System.Collections.Concurrent.ConcurrentDictionary when:

    * You need a multi-threaded Dictionary where reads are far more prevalent than writes.
    * You need to be able to iterate over the collection without locking it, even if it's being modified.

    Both Dictionaries have their strong suits; I have a feeling this is just one where you need to know from the design what you hope to use it for and make your decision based on that criteria.
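
    One footnote worth adding here (my note, not from the post): the look-up-then-add pattern in the Accessor can race benignly when two threads miss on the same key at once, and ConcurrentDictionary exposes GetOrAdd for exactly this cache pattern. A minimal sketch:

        // Sketch: the same cache access expressed with GetOrAdd, which combines the
        // lookup and the add into a single call on the concurrent dictionary.
        using System;
        using System.Collections.Concurrent;

        class GetOrAddExample
        {
            static void Main()
            {
                var cache = new ConcurrentDictionary<int, string>();

                // If key 42 is absent, the value factory runs and its result is stored;
                // if it is present, the cached value is returned and the factory is skipped.
                string value = cache.GetOrAdd(42, key => key.ToString());

                Console.WriteLine(value); // prints "42"
            }
        }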

    Read the article

  • DDD Melbourne - lessons learnt

    - by Michael Freidgeim
    I've attended DDD Melbourne and want to list the interesting points that I've learnt and want to follow. To read more:

    * Moles - mocking/isolation framework for .NET. Documentation is here. (See also the Mocking frameworks comparison created October 4, 2009.)
    * WebFormsMVP
    * PluralSight: http://www.pluralsight-training.net/offers/default.aspx?cc=trial
    * ELMAH: Error Logging Modules and Handlers
    * Rhino.Mocks
    * VS UI Test Recorder - see the post Visual Studio 2010 Coded UI Test User Guide. Note that the Microsoft Test Manager (MTM) tool is a separate application that can be started from the Program Files/VS 2010 menu. It is not a menu inside Visual Studio.
    * CodeContract - seems great in Debug. It would be good if runtime configuration were possible in production, with the ability to log instead of throwing an exception. The current recommendation for customizing Debug.Assert is not trivial: the programmer is free to use the customization provided by Debug.Assert, using assert listeners, to obtain whatever runtime behavior they desire (e.g., ignoring the error, logging it, or throwing an exception).

        // Clears the existing list of assert listeners (the default pop-up box)
        System.Diagnostics.Debug.Listeners.Clear();
        // Install your own listener
        System.Diagnostics.Debug.Listeners.Add(MyTraceListener);

      Note that you can't catch the specific ContractException, but you can catch the generic Exception (see "How come you cannot catch Code Contract exceptions?").

    Books recommended: "Working Effectively with Legacy Code" by Michael Feathers (corresponding article); Martin Fowler, Refactoring: Improving the Design of Existing Code, slides: http://jaoo.dk/jaoo1999/schedule/MartinFowlerRefractoring.pdf
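
    For context, a minimal sketch of the kind of listener that could be plugged in via Debug.Listeners.Add (my illustration, not from the post; MyLoggingListener and its console log target are assumptions):

        using System;
        using System.Diagnostics;

        // A custom TraceListener that logs assertion failures instead of showing the default pop-up.
        class MyLoggingListener : TraceListener
        {
            public override void Write(string message)
            {
                Console.Error.Write(message);
            }

            public override void WriteLine(string message)
            {
                Console.Error.WriteLine(message);
            }
        }

        class Program
        {
            static void Main()
            {
                // In a Debug build, failed asserts now go to the listener instead of a dialog.
                Debug.Listeners.Clear();
                Debug.Listeners.Add(new MyLoggingListener());
                Debug.Assert(1 + 1 == 3, "This failure is logged rather than shown in a pop-up.");
            }
        }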

    Read the article

  • SharePoint 2010 Design & Deployment Best Practices

    - by Michael Van Cleave
    Well, now that SharePoint 2010 has successfully launched and everyone is scratching for every piece of best-practices information they can get their hands on, I would like to invite anyone and everyone to come and take part in ShareSquared's next webinar. The webinar will cover some key information such as:

    * Pros and cons of the different approaches to installing and configuring SharePoint 2010
    * Configuration best practices for SharePoint 2010 farms
    * Services architecture: dependencies, licensing, and topologies
    * Information architecture guidance for sizing, multilingual support, multi-tenancy, and more
    * Using tools such as SharePoint Composer and SharePoint Maestro to configure and deploy SharePoint 2010
    * And most of all, avoiding common pitfalls for installation and deployment

    What is better than all of that? Well, the even more exciting thing is that the presenters will be our very own SharePoint MVPs Gary Lapointe and Paul Stork. If you don't know who these guys are then you should definitely check out their blogs and their contributions to the SharePoint community. To get more information and register, click here: REGISTER. Other great links to information in this post: ShareSquared, Inc; Gary Lapointe's Blog; Paul Stork's Blog; SharePoint Composer. Check it out and get up to speed from some of the best in the industry. Michael

    Read the article

  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator?" and think I'm locked in the world of C++. Nope, I'm talking C# iterators. No, not enumerators, iterators.

    So, for those of you who do not know what iterators are in C#, I will explain them in summary, and for those of you who know what iterators are but are curious about the performance impacts, I will explore that as well.

    Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do. I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"

    Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post. Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?

    So, to begin, let's look at a couple of methods. This is a new (albeit contrived) method called Every(...). The goal of this method is to take an enumeration and return every nth item in the enumeration (including the first). So Every(2) would return items 0, 2, 4, 6, etc.

    Now, if you wanted to write this in the traditional way, you might come up with something like this:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            List<T> newList = new List<T>();
            int count = 0;

            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    newList.Add(i);
                }
            }

            return newList;
        }

    So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item. Pretty straightforward.

    The problem? Well, Every<T>(...) will construct a list containing every nth item whether or not you care. What happens if you were searching this result for a certain item and found that item after five tries? You would have generated the rest of the list for nothing.

    Enter iterators. This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for. This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.

    We see this all the time in Linq, where many expressions are chained together to do complex processing on a list. This would be very expensive if each of these expressions evaluated its entire possible result set on call.

    Let's look at the same example function, this time using an iterator:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            int count = 0;
            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    yield return i;
                }
            }
        }

    Notice it does not create a new return value explicitly; the only evidence of a return is the "yield return" statement. What this means is that when an item is requested from the enumeration, it will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends).

    Behind the scenes, this is all done with a compiler-generated class that keeps track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time.

    It doesn't seem like a big deal, does it? But keep in mind the key point here: it only returns items as they are requested. Thus, if there's a good chance you will only process a portion of the return list and/or the evaluation of each item is expensive, an iterator may be of benefit.

    This is especially true if you intend your methods to be chainable, similar to the way Linq methods can be chained. For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10. We could write that as:

        List<int> someList = new List<int>();

        // fill list here

        someList.Every(10).TakeWhile(i => i <= 10);

    Now is the difference more apparent? If we use the first form of Every, which makes a copy of the list, it's going to copy the entire list whether we will need those items or not, and that can be costly! With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.

    So, sounds neat, eh? But what's the cost, you're probably wondering. So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.

    Now, iteration isn't free. If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

        Copy vs Iterator on 100% of Collection (10,000 iterations)
        Collection Size    Num Iterated    Type        Total ms
        5                  5               Copy        5
        5                  5               Iterator    5
        50                 50              Copy        28
        50                 50              Iterator    27
        500                500             Copy        227
        500                500             Iterator    247
        5000               5000            Copy        2266
        5000               5000            Iterator    2444
        50,000             50,000          Copy        24,443
        50,000             50,000          Iterator    24,719
        500,000            500,000         Copy        250,024
        500,000            500,000         Iterator    251,521

    Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists. In reality, given the number of items and iterations, the difference is near negligible, but it shows that iterators come at a price. However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.

    However, if we only partially evaluate less and less of the list, the savings start to show and make it well worth the overhead. Let's look at what happens if you stop looking after 80% of the list:

        Copy vs Iterator on 80% of Collection (10,000 iterations)
        Collection Size    Num Iterated    Type        Total ms
        5                  4               Copy        5
        5                  4               Iterator    5
        50                 40              Copy        27
        50                 40              Iterator    23
        500                400             Copy        215
        500                400             Iterator    200
        5000               4000            Copy        2099
        5000               4000            Iterator    1962
        50,000             40,000          Copy        22,385
        50,000             40,000          Iterator    19,599
        500,000            400,000         Copy        236,427
        500,000            400,000         Iterator    196,010

    Notice that the iterator form is now operating quite a bit faster. But the savings really add up if you stop on average at 50% (which most searches would typically do):

        Copy vs Iterator on 50% of Collection (10,000 iterations)
        Collection Size    Num Iterated    Type        Total ms
        5                  2               Copy        5
        5                  2               Iterator    4
        50                 25              Copy        25
        50                 25              Iterator    16
        500                250             Copy        188
        500                250             Iterator    126
        5000               2500            Copy        1854
        5000               2500            Iterator    1226
        50,000             25,000          Copy        19,839
        50,000             25,000          Iterator    12,233
        500,000            250,000         Copy        208,667
        500,000            250,000         Iterator    122,336

    Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time. And this is only for one level deep. If we are using this in a chain of query expressions it only adds to the savings.

    So my recommendation? If you have a reasonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator. The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.

    Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.
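
    A quick way to see that deferred evaluation in action (my sketch, not code from the post) is to add a trace line to the iterator form of Every and watch how far it actually walks the source list:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Demo
        {
            // Iterator form of Every, as above, with a trace added for illustration.
            public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
            {
                int count = 0;
                foreach (var i in list)
                {
                    Console.WriteLine("examining {0}", i);
                    if ((count++ % interval) == 0)
                    {
                        yield return i;
                    }
                }
            }

            static void Main()
            {
                var numbers = Enumerable.Range(1, 100).ToList();

                // TakeWhile stops pulling after it sees 11, so "examining" is printed
                // only for 1 through 11; items 12 through 100 are never touched.
                var result = numbers.Every(10).TakeWhile(i => i <= 10).ToList();
                Console.WriteLine(string.Join(", ", result)); // prints "1"
            }
        }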

    Read the article

  • Silverlight Cream for May 29, 2010 -- #872

    - by Dave Campbell
    In this Issue: Michael Washington, Chris Koenig, Kunal Chowdhury, SilverLaw, Shayne Burgess, Ian T. Lackey, Alan Beasley, Marlon Grech.

    Shoutouts: Ozymandias has a post up that's not Silverlight necessarily, but it's pretty cool: Typeface Selection Flowchart. Damian Schenkelman posted about the latest: Prism 2.2 Release available. Get it at Codeplex.

    From SilverlightCream.com:

    * Silverlight 4 OData Paging with RX Extensions - Michael Washington continues with this OData and Rx post using the View Model Style. Michael has some good external links, good info, and all the code.
    * WP7 Part 4: Morphing and Mapping - Chris Koenig has the 4th in his WP7 series, and this one is on MVVMLight and BingMaps ... code included.
    * Silverlight 4: Interoperability with Excel using the COM Object - Kunal Chowdhury has a post up about Excel interoperability using the COM object, including opening an Excel workbook and writing data out, then modifying the data in the spreadsheet and seeing it updated in the app.
    * Creating A Flexible Surface Effect – Silverlight 4 (Part 1) - SilverLaw put up an awesome 'water ripple' SL4 demo a couple days ago, and now he's got part 1 of a great tutorial explaining it all.
    * Service Operations and the WCF Data Services Client - Shayne Burgess has a post up about Service Operations and how they can be used by the WCF Data Services client.
    * Role Based Silverlight Behaviors - Also from the Open Light Group, Ian T. Lackey has a post up about Behaviors that take a list of roles and update the UI appropriately.
    * How to Toggle (Show/Hide) using Behaviours (Behaviors) between Visual States or Storyboards in Expression Blend for Windows Phone - Alan Beasley has a quick post up talking about the solution he found to a problem he was having with state switching in a WP7 app.
    * MEFedMVVM: Testability - Marlon Grech has another MEFedMVVM post up, and he's discussing Testability all rolled in there with everything else :)

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group. Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Problem to install Apache 2.4.2 in Ubuntu 12.04

    - by Michael
    I followed these steps to install Apache 2.4.2 on Ubuntu 12.04, but it seems Apache is not installed. Here's what I did (I followed the steps on this site: http://www.discusswire.com/apache-2-4-installation-ubuntu/):

        sudo apt-get install build-essential
        sudo apt-get build-dep apache2
        wget http://apache.mirrors.pair.com/httpd/httpd-2.4.2.tar.gz
        tar -xzvf httpd-2.4.2.tar.gz && cd httpd-2.4.2
        sudo ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
        sudo make
        sudo make install

    When I tried to start it by issuing sudo /usr/local/apache2/bin/apachectl start at the terminal, I got the following warning: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message". When I typed top at the terminal, apache was not there. I also tried to go to http://localhost/ or 127.0.0.1 or even 127.0.1.1, and it showed a "Can't establish connection to server ..." message. PS: I checked the error log and it showed "[Fri Jul 27 15:49:00.703901 2012] [proxy_balancer:emerg] [pid 20781] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded?? [Fri Jul 27 15:49:00.704083 2012] [:emerg] [pid 20781] AH00020: Configuration Failed, exiting". What am I missing? Thanks, Michael

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of the ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well – very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background

    While a performance profiler will track how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program.

    As an example, I'd been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and would return back the quote data. To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. Not quite long enough that customers will typically notice a quote go "stale", but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time.

    A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether or not the extra memory involved in maintaining the cache was worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions

    The main thing I've always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you in a way that makes it easy for you to find what you need with little digging required. I've worked with other, older profilers before (that shall remain nameless other than to hint it was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward. You can choose from here whether to debug an executable, a web application (either in IIS or from VS's web development server), Windows services, etc.

    So I chose a .NET executable and navigated to the build location of my test harness. Then began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop at any time.

    Snapshots

    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations where collection becomes less frequent). Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference because it had been promoted to Gen 2 – doh!

    Analysis

    It looks like (from the second pie chart) that the majority of the allocations were in the string class. This is also expected for me because the majority of the memory allocated is in the web service responses, so it seems the entities I'm adapting to (to prevent being too tightly coupled to the web service proxy classes, which can change easily out from under me) aren't taking a significant portion of memory.

    I also appreciate that they have clear summary text in key places, such as "No issues with large object heap fragmentation were detected". For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom, "What to look for on the summary", which loads a web page of help on key points to look for.

    Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control. Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends.

    Back on the analysis tab, we can go to the [Class List] button to get an idea of what classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph and discovered that these were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy.

    I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might try to filter for survivors in growing classes. This means that, for instances of a class that are growing in memory (more are being created than cleaned up), it shows which ones are survivors (not collected) from garbage collection. This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them!

    Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph", which creates a graph showing what references are being held in memory and who is holding onto them.

    Visual Studio Integration

    Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still, the plethora of information it provides and the clear and concise manner in which it presents it make it well worth it.

    Summary

    If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler. It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers before that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It's built for your everyday developer to get in and find their problems quickly, and I like that!

    Technorati Tags: Influencers, ANTS, Memory, Profiler
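
    For a sense of scale, the kind of 5-second quote cache described above can be written in a few lines around a ConcurrentDictionary. This is my own sketch rather than code from the article, and the Quote type and FetchFromService call are assumptions for illustration:

        using System;
        using System.Collections.Concurrent;

        // Hypothetical quote type, just to make the sketch self-contained.
        class Quote
        {
            public string Symbol;
            public decimal Price;
        }

        class QuoteCache
        {
            private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(5);

            private readonly ConcurrentDictionary<string, Tuple<DateTime, Quote>> _cache =
                new ConcurrentDictionary<string, Tuple<DateTime, Quote>>();

            public Quote Get(string symbol)
            {
                Tuple<DateTime, Quote> entry;
                if (_cache.TryGetValue(symbol, out entry) && DateTime.UtcNow - entry.Item1 < Ttl)
                {
                    return entry.Item2;                     // hit: cached quote is under 5 seconds old
                }

                Quote fresh = FetchFromService(symbol);     // miss: call the market data web service
                _cache[symbol] = Tuple.Create(DateTime.UtcNow, fresh);
                return fresh;
            }

            private Quote FetchFromService(string symbol)
            {
                // Stand-in for the real web service call.
                return new Quote { Symbol = symbol, Price = 0m };
            }
        }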

    Read the article

  • Are you ready for SharePoint 2010?

    - by Michael Van Cleave
    With SharePoint's next release on the horizon (May 12th), many of my clients and colleagues are starting to ramp up for the upcoming tidal wave of functionality. Microsoft has been doing a terrific job of getting as much information out into the public limelight as possible over the last few months, and I think that will definitely pay off with regards to acceptance of the new version of SharePoint. However, there are still some aspects of the new platform that are a little murky. Aspects such as: "Should we upgrade?" "Will my current installation upgrade without issues?" "What benefits will I see by upgrading?" "What are the best practices for upgrading, or best practice in general relating to 2010?" "How should we plan to deploy SharePoint 2010 in our organization?" There is a ton of information out there, but how do you go about getting some of these questions answered? Well, I am glad you asked. :) ShareSquared will be delivering a FREE SharePoint 2010 Readiness Webinar that will cover Preparation, Strategies, and Best Practices for the upcoming version of SharePoint. The webinar will be presented by 2 of ShareSquared's outstanding SharePoint MVPs, Gary Lapointe and Paul Stork. If you don't know who these guys are then you should definitely check out their blogs and their contributions to the SharePoint community. As all those T.V. commercials say… "Space is limited, so sign up now!" Just kidding, well kind of, but not really. I am sure that the signup will be huge and space is really limited, so the sooner you sign up the better. I would hate for any of you to miss out. If you have any questions, please don't hesitate to shoot me an e-mail through my blog or contact ShareSquared directly. See you at the webinar! Michael

    Read the article

  • PublishingWeb.ExcludeFromNavigation Missing

    - by Michael Van Cleave
    So recently I have had to make the transition from the SharePoint 2007 codebase to the SharePoint 2010 codebase. Needless to say, there hasn't been much difficulty in the changeover. However, in a set of code that I was playing around with and transitioning, I did find one change that might cause some pain to others out there who have been programming against the PublishingWeb object in the Microsoft.SharePoint.Publishing namespace. The snippet of code that worked just fine in 2007 looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.ExcludeFromNavigation(true, itemID);
            publishingWeb.Update();
        }

    The 2010 update to the code looks like:

        using (SPSite site = new SPSite(url))
        using (SPWeb web = site.OpenWeb())
        {
            PublishingWeb publishingWeb = PublishingWeb.GetPublishingWeb(web);
            publishingWeb.Navigation.ExcludeFromNavigation(true, itemID); // Had to reference the Navigation object.
            publishingWeb.Update();
        }

    The purpose of the code is to keep a page from showing up in the global or current navigation when it is added to the Pages library. However, you can see that the update to the 2010 codebase actually makes more "object" sense: it specifies that you are affecting the Navigation of the publishing web, instead of the publishing web itself. I know that this isn't a difficult problem to fix, but I thought it would be something quick to help out the general public. Michael

    Read the article

  • sys.dm_exec_query_profiles – FAQ

    - by Michael Zilberstein
    As you probably know, this DMV is new in SQL Server 2014. It was first announced in CTP1, but only in BOL. Now in CTP2 everyone can “play” with it. Since BOL is a little bit unclear (understatement detected), I’ve prepared this small FAQ as a result of a discussion with Adam Machanic (blog | twitter) and Matan Yungman (blog | twitter). Q: What did you expect from sys.dm_exec_query_profiles? A: Expectations were very high – it promised, for the first time, the ability to see _actual_ execution...(read more)

    Read the article

  • Saint Louis Days of .NET 2012

    - by James Michael Hare
    Hey all, just a quick note to let you know I'll be one of the speakers at St. Louis Days of .NET this year. I'll be giving a revamped version of my Little Wonders talk (going to add some new ones to keep it fresh) -- and hopefully other presentations as well; the session selection process is ongoing. St. Louis Days of .NET is a wonderful conference in the midwest and a bargain to boot (only $175 if you register before July 1st!). Hope to see you there. For more information, visit: http://www.stlouisdayofdotnet.com/2012

    Read the article

  • How does a "Variables introduce state"?

    - by kunj2aan
    I was reading "C++ Coding Standards" and this line was there: "Variables introduce state, and you should have to deal with as little state as possible, with lifetimes as short as possible." Doesn't anything that mutates eventually manipulate state? What does "you should have to deal with as little state as possible" mean? In an impure language such as C++, isn't state management really what you are doing? And what are other ways to "deal with as little state as possible" other than limiting variable lifetime?
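
    One way to read that advice (my illustration, not from the book, and written in C# rather than C++): prefer variables that are declared at the point of use and initialized to their final value, so there is less mutable state for the reader to track. A small sketch:

        using System;
        using System.Linq;

        class StateExample
        {
            static void Main()
            {
                int[] values = { 3, 1, 4, 1, 5 };

                // More state to track: 'total' is mutable and lives for the whole method,
                // so every later line could potentially change or depend on it.
                int total = 0;
                foreach (int v in values)
                {
                    total += v;
                }
                Console.WriteLine(total);

                // Less state to track: 'sum' is initialized once to its final value
                // and never mutated afterwards.
                int sum = values.Sum();
                Console.WriteLine(sum);
            }
        }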

    Read the article

  • Microsoft Visual C# MVP 2012

    - by James Michael Hare
    I was informed on July 1st, 2012 that I was awarded a Microsoft Visual C# MVP recognition for 2012.  This is my second year now, and I'm doubly thankful to have been nominated and selected, and thankful that you guys all find my posts informative and useful! Even though life has thrown me some curve balls in this past last year, I look forward to continuing my posts (especially the Little Wonders) as much as possible!Thanks again!

    Read the article

  • Answer to: How to make Facebook FBML valid (XHTML)!

    - by Michael
    Hi guys, I found this nice article: http://www.ka-mediendesign.com/blog/facebook-markup-language-in-xhtml/ It's German, but you can use the Google translator. This is the answer to the question "how to make Facebook FBML code - Valid [closed]". Very simple. The social plugins are on this site too, and the page is valid XHTML 1.1! Cheers, Michael

    Read the article
