Search Results

Search found 28957 results on 1159 pages for 'single instance'.


  • Running perfmon continuously with periodic files

    - by Sal
    I have a question very similar to this one, but I want to run perfmon continuously, across reboots and throughout the day. Further, I'd like it to generate a perfmon report every 10 minutes or so. The original question tells me how to run perfmon when the server is restarted, but I don't know how to make perfmon run continuously while emitting periodic report files. I've tried setting it up as a scheduled task that runs every 10 minutes, but this is too sloppy: when the scheduled task kicks off another instance, the current perfmon report writer crashes and I get a garbage report. I've also tried writing a rough batch script that fires off the task at scheduled intervals, but it has the same problem as the scheduled task. I'm sure I'm just missing something silly, but I don't see it. Ideas? (If it helps, I'm running Windows 7 locally, and I'm trying to set up the processes for boxes running Windows 2008.)
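    One built-in route that might avoid the scheduled-task collisions is a Data Collector Set created with logman, which samples continuously and rolls to a new file on an interval; the sketch below is untested and the counter paths, output folder, and 10-minute rollover are only placeholders to adapt:

        rem create a counter collector: 15-second samples, new binary log file every 10 minutes
        logman create counter PerfBaseline -o "C:\PerfLogs\baseline" -f bin -v mmddhhmm -si 00:00:15 -cnf 00:10:00 -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes"

        rem start it now, and start it again at every boot
        logman start PerfBaseline
        schtasks /create /tn "Start PerfBaseline" /sc onstart /ru SYSTEM /tr "logman start PerfBaseline"

    Because the collector itself handles the file rollover, nothing else has to fire every 10 minutes, so there is no second instance to crash the report writer.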

    Read the article

  • VSTO is Free But Aspose is Speed

    - by Ken Cox [MVP]
    I’ve taken over the completion, deployment, and maintenance of an ASP.NET Web site that generates Office documents using VSTO. VSTO’s a decent concept and works fine for small-scale scenarios like a desktop app or small intranet. However, with multiple simultaneous requests via ASP.NET, we found the Web server performance suffered badly. To spread out the server’s workload, I implemented MSMQ task queuing via a WCF Windows service.  That helped a lot. IIS didn’t drag with only one VSTO/Office instance running. But I  still found it taking too long to produce a single report. A nicely formatted VSTO Excel document was taking 45 minutes.  (The client  didn’t know any better and therefore considered 45 minutes tolerable.) On my own time, I pulled out an old copy of Aspose.Total for .NET. Within an hour, I had converted the VSTO Excel C# code to Aspose Cells code. The improvement was astonishing: Instead of the 45-minutes, the report took under a minute! I’ve pasted the client’s exact chat response after he tried the speedy Aspose version: “WWWWWOOOOOOOOWWWWWWWW!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!” Microsoft’s VSTO is a free product while the Aspose components cost $$$.  Certainly, it can be a tough call when budgets are tight. If you’re trying to convince the client to shell out for something more suitable for the application, get an eval version of Aspose.Total and offer a direct comparison demo. Ken Full Disclosure: Aspose (like several other component vendors) gives free copies of their suite to MVPs and other .NET influencers.
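    For anyone weighing the same trade-off, the Aspose.Cells object model is close enough to Excel's that the port really can be mechanical; this is just an illustrative sketch (file name and values invented), assuming the Aspose.Cells assembly is referenced:

        using Aspose.Cells;

        class ReportSketch
        {
            static void Main()
            {
                var workbook = new Workbook();            // no Excel automation, no Office on the server
                Worksheet sheet = workbook.Worksheets[0];
                sheet.Cells["A1"].PutValue("Region");
                sheet.Cells["B1"].PutValue("Total");
                sheet.Cells["A2"].PutValue("East");
                sheet.Cells["B2"].PutValue(12345.67);
                sheet.AutoFitColumns();
                workbook.Save("report.xlsx");             // pure managed code, safe under ASP.NET load
            }
        }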

    Read the article

  • ASP.NET MVC Framework

    - by Aamir Hasan
    MVC is a design pattern: a reusable "recipe" for structuring your application. Generally, you don't want your user interface code and data access code mixed together, because that makes changing either one more difficult. By placing data access code in a "Model" object and user interface code in a "View" object, you can use a "Controller" object as a go-between, sending messages to (or calling methods on) the view when the data changes, and vice versa. Model-view-controller (MVC) is an architectural pattern used in software engineering. In complex applications that present a large amount of data to the user, a developer often wants to separate data (model) and user interface (view) concerns, so that changes to the user interface do not affect data handling, and the data can be reorganized without changing the user interface. MVC solves this by decoupling data access and business logic from data presentation and user interaction, introducing an intermediate component: the controller.
    Model: the domain-specific representation of the information the application operates on. Domain logic adds meaning to raw data (e.g., calculating whether today is the user's birthday, or the totals, taxes, and shipping charges for shopping cart items). Many applications use a persistent storage mechanism (such as a database) to store data. MVC does not specifically mention the data access layer because it is understood to sit underneath, or be encapsulated by, the Model.
    View: renders the model into a form suitable for interaction, typically a user interface element. Multiple views can exist for a single model, for different purposes.
    Controller: processes and responds to events, typically user actions, and may invoke changes on the model.
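    To make the roles concrete, here is a minimal, hypothetical ASP.NET MVC model and controller (the repository interface is invented for the example); the view would be a separate template that only renders what the controller hands it:

        using System.Web.Mvc;

        // Model: domain data plus domain logic, no UI code and no HTML
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal Price { get; set; }
            public decimal PriceWithTax(decimal rate) { return Price * (1 + rate); }
        }

        public interface IProductRepository
        {
            Product GetById(int id);
        }

        // Controller: handles the request, asks the model layer for data, picks a view
        public class ProductsController : Controller
        {
            private readonly IProductRepository _repository;

            public ProductsController(IProductRepository repository)
            {
                _repository = repository;
            }

            public ActionResult Details(int id)
            {
                Product product = _repository.GetById(id);
                return View(product);   // View: Views/Products/Details renders the model
            }
        }

    Because the controller only returns View(product), you can swap the rendering (full page, partial view, a different template per device) without touching the model or the data access code.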

    Read the article

  • How to Switch Chrome’s Default Search to International Google

    - by Erez Zukerman
    Google Chrome’s default search engine is Google. This makes perfect sense; the only problem is that it uses localized Google – for example, Google France or Google Israel. This impacts the interface language, and sometimes even the text orientation. Here’s how you can fix this and get “international” Google results with an English interface. First, we need to figure out what search query we’re going to use. Go to Google.com and execute a simple query for a single word – say “cats”. If you get real-time results, hit Enter so that the address bar updates with the query URL. It should look something like this: http://www.google.com/#sclient=psy&hl=en&site=&source=hp&q=cats&aq=f&aqi=g1g-s1g3&aql=&oq=&pbx=1&bav=on.2,or.&fp=369c8973645261b8 If you wish to customize your search further, click Advanced Search. For example, I would like Google to annotate results with the reading level they require, so I can see what’s going to be difficult to read:

    Read the article

  • DPM 2010 RC Mailbox Recovery Fails

    - by ITGuy24
    I am testing DPM 2010 with Exchange 2010. I am attempting to restore a single mailbox from a previous backup to a Recovery Database. I created and mounted the Recovery Database on the Exchange 2010 server and set the overwrite property. When I run the restore for the mailbox and point it to the recovery database, I get the following error: The recovery jobs for Exchange Mailbox Database MailboxDatabase01 that started at with the destination of EXCHANGE2010.domain.com, have completed. Most or all jobs failed to recover the requested data. (ID 3111) DPM encountered an error while performing an operation for E:\DatabaseFiles\MailboxDatabase01.edb on EXCHANGE2010.domain.com (ID 2033 Details: The process cannot access the file because it is being used by another process (0x80070020)) MailboxDatabase01 is one of our MDBs, not the RDB I set up for the recovery. I am confused why it is even trying to access this, as I have triple-checked that the recovery is pointed at the RDB. Any idea what I am doing wrong?

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that. One of the obvious choices I had to make was which one to use. My criteria for choosing one today were simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET. My decision was between two: db4o and MongoDB. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest. I am making a blogging application and front end as a project with which I can test and learn these object databases. Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment where I cannot install, run and maintain a server instance of said object database. I can do exactly this with db4o; I have not tried to do this with MongoDB at the time of writing. There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used with some of the heavyweights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010 In the example which I am building I am using StructureMap as my IoC. To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need this to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively. Again, I want to point out that I have chosen to stick with the non-server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

        public static class Bootstrapper
        {
            public static void ConfigureStructureMap()
            {
                ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
            }
        }

        public class MyApplicationRegistry : Registry
        {
            public const string DB4O_FILENAME = "blog123";

            public string DbPath
            {
                get
                {
                    return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
                }
            }

            public MyApplicationRegistry()
            {
                For<IObjectContainer>().Singleton().Use(
                    () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

                Scan(assemblyScanner =>
                {
                    assemblyScanner.TheCallingAssembly();
                    assemblyScanner.WithDefaultConventions();
                });
            }
        }

    So my code above is the StructureMap plumbing which I use for the application. I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies. It will be easy if I want to do this with any segment, but for the purposes of example I have literally just whacked everything in the one assembly. You can see an example file structure I have on the right.
    I am planning on testing out a few implementations of the object databases out there, so I can program to an interface, IBlogRepository. One of the things I was unsure about was how it performed in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10, and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I had not read it anywhere in the documentation; again, this is probably me not reading things correctly. In short, I threw together a simple test where I iterate to a limit, each time kicking off a common task with a thread from a thread pool. This task simply created a random Post and added it to the storage. The execution of the threads I put inside the Setup of the test, and then I simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-threaded jobs:

        [TestInitialize]
        public void Setup()
        {
            var sw = new System.Diagnostics.Stopwatch();
            sw.Start();
            var resetEvent = new ManualResetEvent(false);
            ThreadPool.SetMaxThreads(20, 20);
            for (var i = 0; i < MAX_ITERATIONS; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate(object state)
                {
                    var eventToReset = (ManualResetEvent)state;
                    var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                    Repository.Put(post);
                    var counter = Interlocked.Decrement(ref _threadCounter);
                    if (counter == 0) eventToReset.Set();
                }, resetEvent);
            }
            WaitHandle.WaitAll(new[] { resetEvent });
            sw.Stop();
            Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
        }

    I was not doing this to test the speed of db4o, but while I was doing it I could not help but put in a Stopwatch and see, out of sheer interest, how fast it would insert a number of Posts. I tested it in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second. Again, this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows: With regards to the actual repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running. One thing I have noticed in the exposure I have had so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; this could be there already, and if not, they have provided everything one needs to make their own repository. An example of a couple of methods from my db4o implementation of the BlogRepository is below:

        public class BlogRepository : IBlogRepository
        {
            private readonly IObjectContainer _db;

            public BlogRepository(IObjectContainer db)
            {
                _db = db;
            }

            public void Put(DomainObject obj)
            {
                _db.Store(obj);
            }

            public void Delete(DomainObject obj)
            {
                _db.Delete(obj);
            }

            public Post GetByKey(object key)
            {
                return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
            }

            …

    Anyways, I hope to get a few more implementations of the object databases going and just get familiarized with them and the concept of NoSQL databases. Cheers for now, Andrew

    Read the article

  • Most efficient way to connect an ISAPI Dll to a windows service

    - by Mike Trader
    I am writing a custom server for a client. They want scalability, so I must use a thread pool and probably an I/O completion port to regulate it. The main requirement is that a Windows service manage the HTTP requests, for a number of reasons. One example would be that a client session spans many requests and continuity must be maintained. Another would be that the ISAPI DLL will be in the IIS address space, so its code will be lean and very carefully implemented. The extensive processing in the Windows service may get unruly for the duration of the lengthy development, and if the service crashes it will not take out IIS. Anyway, the remaining decision is how to have these two processes communicate. We have talked about pipes, TCP, global memory, and even a single pipe with multiplexed data a la FastCGI. Would love to hear anyone's experience with a decision like this.
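    For what it's worth, a named pipe keeps everything on the local box and avoids the TCP stack entirely; the fragment below is only a rough sketch of what the service end could look like in .NET (the pipe name and one-line protocol are invented), with the ISAPI DLL opening \\.\pipe\MyAppRequests via CreateFile/WriteFile/ReadFile on its side:

        using System.IO;
        using System.IO.Pipes;

        class RequestPipeHost
        {
            static void Main()
            {
                while (true)
                {
                    // one connection at a time for clarity; a real service would keep a pool of pipe instances
                    using (var server = new NamedPipeServerStream("MyAppRequests",
                        PipeDirection.InOut, NamedPipeServerStream.MaxAllowedServerInstances))
                    {
                        server.WaitForConnection();
                        using (var reader = new StreamReader(server))
                        using (var writer = new StreamWriter(server) { AutoFlush = true })
                        {
                            string request = reader.ReadLine();   // line sent by the ISAPI DLL
                            writer.WriteLine("OK: " + request);    // reply marshalled back to IIS
                        }
                    }
                }
            }
        }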

    Read the article

  • Tokyo Tyrant ulog / update log management.

    - by Nathan Milford
    I'm testing Tokyo Tyrant in a master-master setup and have found the ulog grows out of control and locks up the disk. At first I found the -ulim option useful and limited the logfile size; however, it simply rolls over to a new log, leaving the old ones to clutter up the partition. I suppose I'll write a shell script that deletes ulogs older than X, once I find out how far back in the update log Tokyo Tyrant needs to go in order to fail over. Does anyone have any experience with this in Tokyo Tyrant? Do you have a feel (acknowledging that every install is different based on what is being stored) for the optimal ulog size vs. how far back a Tokyo Tyrant instance needs to look in the ulog to assume master status? Thanks, nathan
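    Until that retention window is known, the cleanup itself can stay a one-liner; the path, pattern, and 7-day cutoff below are placeholders to tune once you know how far back a peer has to read to catch up:

        # delete update-log segments older than 7 days
        find /var/ttserver/ulog -name '*.ulog' -type f -mtime +7 -delete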

    Read the article

  • 'rman' cheat-sheet and rlwrap completion

    - by katsumii
    I started using 'rlwrap' some months ago, like one of my colleagues does: bash-like features in sqlplus, rman and other Oracle command line tools (Oracle Luxembourg Core Tech' Blog by Gilles Haro). One can find a specific Oracle extension for databases 9i, 10g and 11g (keyword textfile) over here. This saves you the need to create the .oracle_keywords file. There is an 'rman' keyword file in the link above. I experimented a little and found some missing keywords, which are: MAXCORRUPTION PRIMARY NOCFAU VIRTUAL COMPRESSION FOREIGN. With these words added, 'rman' works like this:

        $ rlwrap -f ~/rman $ORACLE_HOME/bin/rman

        Recovery Manager: Release 11.2.0.3.0 - Production on Mon Dec 3 02:56:04 2012
        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

        RMAN> <-- Hit TAB
        Display all 211 possibilities? (y or n)

    As you can guess, this completion is not context aware. I found these missing words by creating a kind of 'cheat sheet' for rman with a script like the one below. This sheet contains a list of verbs and first operands. I uploaded it here so one can create a coffee cup with a lot of esoteric words printed on it :)

        validWords() {
            sed -n 's/^RMAN-01009: syntax error: found "identifier": expecting one of: //p' \
            | sed -r 's/double-quoted-string, single-quoted-string/Some String/;s/, /" "/g;s/""//'
        }

        echo "Bogus" | rman | validWords > /tmp/rman.$$

        for i in $(cat /tmp/rman.$$)
        do
            i=$(echo $i | tr -d '"')
            echo "#### $i ####"
            echo "$i Bogus" | rman | validWords
        done

    One can find more keywords in the document here.

    Read the article

  • Collision detection with multiple polygons simultaneously

    - by Craig Innes
    I've written a collision system which detects/resolves collisions between a rectangular player and a convex polygon world using the Separating Axis Theorem. This scheme works fine when the player is colliding with a single polygon, but when I try to create a level made up of combinations of these shapes, the player gets "stuck" between shapes when trying to move from one polygon to the other. The reason for this seems to be that collisions are detected after the player has been pushed through the shape by its movement or gravity. When the system resolves the collisions, it resolves them in an order that doesn't make sense (for example, when the player is moving from one flat rectangle to another, gravity pushes them below the ground, but the collision with the left-hand side of the second block is resolved before the collision with the top of the block, meaning the player is pushed back left before being pushed back up). Other similar posts have resolved this problem by having a strict rule on which axes to resolve first. For example, always resolve the collision on the y axis, then if the object is still colliding with things, resolve on the x axis. This solution only works in the case of a completely axis-oriented box world, and doesn't solve the problem if the player is stuck moving along a series of angled shapes or sliding down a wall. Does anyone have any ideas on how I could alter my collision system to prevent these situations from happening?

    Read the article

  • SQL Contest – Win 10 Amazon Gift Cards worth (USD 200) and 10 NuoDB T-Shirts

    - by Pinal Dave
    This month we have not yet run any contest, so we will be running a very interesting contest with the help of the good guys at NuoDB. NuoDB has just released version 2.0, and you can download NuoDB from here. NuoDB's NewSQL distributed database is designed to be a single database that works across multiple servers, which can scale easily, and scale on demand. That's one system that gives high connectivity but no latency, complexity or maintenance issues. MySQL works in some circumstances, but a period of growth isn't one of them. So as a company moves forward, the MySQL database can't keep pace. Data storage and data replication errors creep in. Soon the diaspora of the offices becomes a problem. Your telephone system isn't just distributed, it is literally all over the place. You can read my detailed article, Why VoIP Service Providers Should Think About NuoDB's Geo Distribution. Here is the contest:
    Contest Part 1: NuoDB R2.0 delivered a long list of improvements and new features. List three of the major features of NuoDB 2.0. Here are hint1, hint2, hint3.
    Contest Part 2: Download NuoDB using this link. Once you download NuoDB, leave a comment over here with the name of the platform and the installer size. (For example: Windows platform, size abc.dd MB)
    Here is what you can win! Giveaways: 10 Amazon Gift Cards (USD 20 each – USD 200 total) and 10 amazing-looking NuoDB T-Shirts (for the first 10 downloads).
    Rules: Participate before Oct 28, 2013. All the valid answers will be published after Oct 28, 2013, and winners will receive an email on Nov 1st, 2013.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: NuoDB

    Read the article

  • Dynamic LINQ in an Assembly Near By

    - by Ricardo Peres
    You may recall my post on Dynamic LINQ. I said then that you had to download Microsoft's samples and compile the DynamicQuery project (or just grab my copy), but there's another way. It turns out Microsoft included the Dynamic LINQ classes in the System.Web.Extensions assembly, not the one from ASP.NET 2.0, but the one that was included with ASP.NET 3.5! The only problem is that all the types are private. Here's how to use it:

        Assembly asm = typeof(UpdatePanel).Assembly;
        Type dynamicExpressionType = asm.GetType("System.Web.Query.Dynamic.DynamicExpression");
        MethodInfo parseLambdaMethod = dynamicExpressionType
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Where(m => (m.Name == "ParseLambda") && (m.GetParameters().Length == 2))
            .Single()
            .MakeGenericMethod(typeof(DateTime), typeof(Boolean));
        Func<DateTime, Boolean> filterExpression = (parseLambdaMethod.Invoke(null, new Object[] { "Year == 2010", new Object[0] }) as Expression<Func<DateTime, Boolean>>).Compile();
        List<DateTime> list = new List<DateTime> { new DateTime(2010, 1, 1), new DateTime(1999, 1, 12), new DateTime(1900, 10, 10), new DateTime(1900, 2, 20), new DateTime(2012, 5, 5), new DateTime(2012, 1, 20) };
        IEnumerable<DateTime> filteredDates = list.Where(filterExpression);

    Read the article

  • Unix list absolute file name

    - by Matthew Adams
    Given an arbitrary single argument representing a file (or directory, device, etc.), how do I get the absolute path of the argument? I've seen many answers to this question involving find/ls/stat/readlink and $PWD, but none that suits my need. It looks like the closest answer is ksh's "whence" command, but I need it to work in sh/bash. Assume a file, foo.txt, is located in my home directory, /Users/matthew/foo.txt. I need the following behavior, regardless of what my current working directory is (I'm calling the command "abs"):

        (PWD is ~)
        $ abs foo.txt
        /Users/matthew/foo.txt
        $ abs ~/foo.txt
        /Users/matthew/foo.txt
        $ abs ./foo.txt
        /Users/matthew/foo.txt
        $ abs /Users/matthew/foo.txt
        /Users/matthew/foo.txt

    What would "abs" really be? TIA, Matthew
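    One sh/bash-compatible sketch (assuming the argument's directory exists and is readable, and without resolving symlinks in the final component) would be a small function along these lines:

        abs() {
          if [ -d "$1" ]; then
            (cd "$1" && pwd)
          else
            (cd "$(dirname -- "$1")" && printf '%s/%s\n' "$(pwd)" "$(basename -- "$1")")
          fi
        }

    Each call runs in a subshell, so the caller's working directory is untouched, and it covers the foo.txt, ~/foo.txt, ./foo.txt, and absolute-path cases above.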

    Read the article

  • What design patterns are the worst or most narrowly defined?

    - by Akku
    For every programming project, managers with past programming experience try to shine when they recommend some design patterns for your project. I like design patterns when they make sense, or if you need a scalable solution. I've used Proxies, Observers and Command patterns in a positive way, for example, and do so every day. But I'm really hesitant to use, say, a Factory pattern if there's only one way to create an object, as a factory might make it all easier in the future, but it complicates the code and is pure overhead. So, my question is with respect to my future career and my answer to manager types throwing random pattern names around: Which design patterns did you use that set you back overall? Which are the worst design patterns, the ones you shouldn't even consider unless you're in the one specific situation where they make sense (read: which design patterns are very narrowly defined)? (It's like looking for the negative reviews of an otherwise good product on Amazon to see what bugged people most about using design patterns.) And I'm not talking about anti-patterns here, but about patterns that are usually thought of as "good" patterns.

    Read the article

  • How to create a shared lock blocking an intent exclusive lock

    - by FremenFreedom
    As I understand it, a SELECT statement will place a shared lock on the rows that it will return. While that SELECT is running, if an UPDATE statement comes along and needs to grab an intent exclusive lock then that UPDATE statement will need to wait until the SELECT statement releases its shared locks. I am trying to test this SELECT shared lock thing by doing a BEGIN TRAN and then running a SELECT, not COMMITing, and then running an UPDATE in another session on the exact same row. The UPDATE worked fine -- no lock, no wait. So this must not be a valid way to simulate a shared lock blocking an intent exclusive lock? Can you give me a scenario where I can create a lock with a SELECT that would force an UPDATE to wait? I'm working with SQL Server 2000 and 2005 across a linked server: the table is on the 2005 instance, the select is happening on 2000, and the update is executed from 2005. All in SSMS 2005.
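    For what it's worth, under the default READ COMMITTED isolation level the shared locks taken by a SELECT are released as soon as each row has been read, even inside an open transaction, which would explain the behavior above. A hedged way to make the SELECT actually hold its shared locks until COMMIT is a REPEATABLEREAD (or HOLDLOCK) hint; table and column names here are placeholders:

        -- Session 1: keep the shared locks until the transaction ends
        BEGIN TRAN;
        SELECT *
        FROM dbo.MyTable WITH (REPEATABLEREAD)  -- or WITH (HOLDLOCK) for serializable-style range locks
        WHERE Id = 42;
        -- leave the transaction open

        -- Session 2: this UPDATE should now block until session 1 commits or rolls back
        UPDATE dbo.MyTable SET SomeColumn = 'x' WHERE Id = 42;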

    Read the article

  • RAIDZ vs RAID1+0

    - by Hiro2k
    Hi guys, I just got 4 SSDs for my FreeNAS box. This server is only used to serve a single iSCSI extent to my Citrix XenServer pool, and I was wondering if I should set them up in a RAIDZ or a RAID 1+0 configuration. This isn't used for anything in production, just for my test lab, so I'm not sure which one is going to be better in this scenario. Will I see a major difference in speed or reliability? Currently the server has three 500GB Western Digital Blue drives, and it's dog slow when I deploy a new version of our software on it, hence the upgrade.
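    For reference, the two layouts for four disks would be created along these lines (pool and device names are placeholders; on FreeNAS you'd normally do this through the GUI):

        # striped mirrors (RAID 1+0): two mirror vdevs, generally better random IOPS for iSCSI
        zpool create tank mirror da0 da1 mirror da2 da3

        # single raidz vdev: more usable space, but roughly single-disk random write IOPS per vdev
        zpool create tank raidz da0 da1 da2 da3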

    Read the article

  • What is acceptable datastore latency on a VMware ESXi host?

    - by BeowulfNode42
    Looking at our performance figures on our existing VMware ESXi 4.1 host, the Datastore/Real-time performance data shows:
    Write Latency: Avg 14 ms, Max 41 ms
    Read Latency: Avg 4.5 ms, Max 12 ms
    People don't seem to be complaining too much about it being slow with those numbers. But how much higher could they get before people found it to be a problem? We are reviewing our head office systems due to running low on storage space, and are tossing up between buying a 2nd VM host with DAS, or buying some sort of NAS for SMB file shares in the near term and maybe running VMs from it in the longer term. Currently we have just under 40 staff at head office, with 9 smaller branches spread across the country. Head office is running an MS RDS session-based environment with Linux ERP and mail systems. In total there are 22 VMs on a single host, with DAS made from a RAID 10 array of 6x 15k SAS disks.

    Read the article

  • SQL Server 2008 R2 Installation and the Phantom of SQL Server 2005 Express

    - by Davide Mauri
    Today I happily started to install SQL Server 2008 R2 on my development machine, which has this software installed:
    Windows Server 2008 R2 Standard
    SQL Server 2008 SP1 CU5
    Visual Studio 2008 SP1
    BOL October 2009
    AdventureWorks2008 Databases SR4
    Visual Studio 2010 RTM
    So, all the basic standard stuff. SQL Server 2008 R2 installation went smoothly until somewhere in the middle, where the rule engine checks that the software prerequisites are satisfied before starting to copy files. Here I had this @][@@[?!?! error: "The SQL Server 2005 Express Tools are installed. To continue, remove the SQL Server 2005 Express Tools." Funnily enough, I don't have, and have never had, SQL Server 2005 Express on my machine. Armed with patience, I analyzed the install log here: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\yyyymmdd_hhmmss\Detail.txt and found that the rule "Sql2005SsmsExpressFacet" is the one in charge of this check, and it looks for the existence of the registry key
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x86)
    HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x64)
    In my registry I found that key existing, due to the installation of the uber-cool Red Gate SQL Search. I removed the registry key and here it is! SQL Server 2008 R2 is installing while I'm writing this post. A note to Microsoft: can you please add more detailed information to the setup when such an error happens? Just saying "you have SQL Server 2005 Express installed" is not enough. Please show us what the rule looks for and why it failed, directly in the Detailed Report, so that we don't have to spend time looking for the needle in the logs. Thanks! :) PS: I did a side-by-side installation with the existing SQL Server 2008 instance.
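    If anyone else hits the same rule failure, the check can be reproduced and cleared from an elevated command prompt; the commands below are a hedged sketch (x64 path shown), and exporting the key first means whatever created it, Red Gate SQL Search in my case, can be put back afterwards:

        rem see whether the key the Sql2005SsmsExpressFacet rule looks for actually exists
        reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM"

        rem back it up, then remove it so the setup rule passes
        reg export "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM" shellsem-backup.reg
        reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM" /f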

    Read the article

  • Node remains in commissioning status

    - by Vinitha
    I have been trying to set up Ubuntu Cloud 12.04. I'm kind of new to MAAS and Ubuntu. Here is what I followed. I installed the MAAS server using the steps provided in https://wiki.ubuntu.com/ServerTeam/MAAS For the node, I installed the Ubuntu 12.04 Server image on a USB stick, then restarted the node and opted to enlist the node via boot media, with PXE. Once the process was done, the node was powered off as expected. I manually powered on the node, as my node is not PXE enabled. Result: no node was visible in the MAAS UI. Since step 2 didn't work, I added the node via a maas-cli command. After the execution of this command I got the node reflected in my MAAS UI, but the status continues to be "Commissioning" for a long time. Then I executed "maas-cli maas nodes check-commissioning" and I got "Unrecognised signature: POST check_commissioning". I'm not sure where the error is. Could someone please help me solve this issue? I checked the following log files but found no error related to commissioning (pserv.log / maas.log / celery.log / celery-region.log). I found this entry in my auth.log: "Nov 16 18:20:34 ubuntuCloud sshd[4222]: Did not receive identification string from xxx.xx.xx.x"; not sure if it indicates anything, as the IP mentioned is neither the node's nor the MAAS server's. I also verified the time on the server and node using the date cmd (at one instance the times are: Server: Fri Nov 16 18:15:51 IST 2012 and Node: Fri Nov 16 18:15:43 IST 2012). Not sure if 'date' is the right cmd to set the time. I have also checked maas_local_settings.py for the MAAS URL. I'm not sure which logs need to be verified. Is there any log that can be checked on the node? Thanks, Vinitha

    Read the article

  • Magento Community - Hosting :: Need advice on which shared hosting will run Magento fast

    - by user43353
    Hi, I need advice on which shared hosting will run Magento Community fast, or some other inexpensive solution. This website will not have a lot of users; I only need it to run fast for 100-20 users at the same time. The problem with Magento is the database design, which makes the system very slow; other stuff isn't the best either. I had hostmonster.com and justhost.com for a previous website, but it wasn't fast enough for a single user not located in the USA (my customer areas: Asia, Africa). Each action that involves the database takes a lot of time. Thanks

    Read the article

  • SSH: Tunnel multiple ports to remote server

    - by user1594322
    See attached diagram.
    Host A - Windows server
    Host B - Linux server
    Host C - VMware ESXi server
    From host A I can SSH to host B over the VPN tunnel. I can ping host C from host B, but not from host A. I am assuming this is because host C has lost its default gateway. Host C is a VMware ESXi server, so I would need to tunnel several ports (80, 443, 902) in order to reach it from host A. What is the correct ssh syntax to create the tunnel in order to reach host C from host A, and can I do it using a single command, or do I need to run three commands (one for each port: 80, 443, 902)?
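    Assuming host B can reach host C directly, one ssh invocation from host A can carry all three forwards; the host names and local port numbers below are placeholders, and the vSphere client would then be pointed at localhost on those ports:

        # single command, three local port forwards through host B to host C
        ssh -L 8080:hostC:80 -L 8443:hostC:443 -L 8902:hostC:902 user@hostB

    On a Windows host A, plink (PuTTY's command-line client) accepts the same -L switches.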

    Read the article

  • Get the MakeUseOf eBook Guide to Speeding Up Windows for Free

    - by ETC
    Our friends over at MakeUseOf.com have released yet another eBook in their series of Guides to, well, just about everything. This one gives you some tips for speeding up your Windows PC. The guide has a ton of different tips, and while I wouldn’t necessarily say you follow every single tip to the letter (since everybody’s setup is different), it does give you lots of great ideas for speeding up your PC, as well as links to resources, and instructions for how to perform various cleanup tasks. The best tips? Make sure to keep your PC crapware-free, upgrade your RAM if you’re low, scan for viruses, and run some type of disk cleanup on a regular basis. Download the MakeUseOf Windows on Speed Guide (PDF) [Direct Download Link] Windows on Speed [MakeUseOf]

    Read the article

  • index.html redirecting to cgi-sys/defaultwebpage.cgi

    - by Andrew De Forest
    This problem has been persisting over the last couple of days. I thought I had fixed it Friday, only to get in on Monday morning and see that cPanel is still giving me issues. On Friday, all incoming traffic to my index.html page was being redirected to cgi-sys/defaultwebpage.cgi. Upon further investigation, I found that my entire index.html code had been overwritten and contained only a single meta tag, which was causing the aforementioned redirect. I re-uploaded the original index page, overwrote the one that was causing the redirects, and it seemed to have fixed the problem. Fast forward to Monday. The problem began happening again and I took the same steps as before to fix it, but now I'm wondering how to permanently stop this from happening. From what I've found, it's related to cPanel. I personally did not make any changes to the server, but there are a few people with access who might have. I'm on a VPS with my own dedicated IP address. I've been hosting fine for a few months without this problem. The server runs a traditional LAMP stack.
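    As a stopgap while tracking down the cause, the file can be made immutable so nothing, including cPanel, can silently overwrite it; this is only a hedged suggestion and the path is a placeholder:

        # root only; remember to remove the flag (chattr -i) before editing the file yourself
        chattr +i /home/youraccount/public_html/index.html

    That won't explain who or what is rewriting the page (the cPanel and Apache logs plus the file's mtime are the place to look), but it stops the overwrite from recurring in the meantime.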

    Read the article

  • JMeter Stress testing

    - by mcondiff
    I have a MAMP server hosting a Joomla instance. I'd like to hear the community's thoughts on the best way to stress test the server and find its breaking point in terms of concurrent users etc. Currently I have set up a test plan which goes to the home page, grabbing index.php, the CSS, the JS and all images, and I have run tests with 1 to 100 users and a varying number of loops. What I'd like to know is how to determine what number of concurrent or looping requests is a good gauge of whether my server can handle the proposed increase in traffic. What is a good KB/sec, Throughput, Average, Max, Min in the Aggregate Report, and at what number of threads/loops etc.? I have googled and have not found immediate answers to these questions and thought to come here. More or less I have just used this http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf to guide me, and then I have been winging it in terms of thread and loop numbers.
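    One practical note while experimenting: running the plan non-GUI and parameterising the numbers makes it easier to step the load up until response times or error rates degrade; the flags below are standard JMeter options, while the property names are my own invention and would need matching ${__P(threads,10)} / ${__P(loops,1)} expressions in the Thread Group:

        # non-GUI run, results to a .jtl you can open in the Aggregate Report afterwards
        jmeter -n -t joomla-home.jmx -l results-50-threads.jtl -Jthreads=50 -Jloops=10

    Repeating that with increasing thread counts, and watching where throughput flattens while response times and errors climb, is the usual way to find the breaking point.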

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :)
Cost-based optimization
When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.
New Cardinality Estimates
The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate.
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article
