Search Results

Search found 3205 results on 129 pages for 'unexpected shutdown'.

Page 97/129

  • Zero downtime deployment (Tomcat), Nginx or HAProxy, behind hardware LB - how to "starve" old server?

    - by alexeypro
    Currently we have the following setup. Hardware Load Balancer (LB) Box A running Tomcat on 8080 (TA) Box B running Tomcat on 8080 (TB) TA and TB are running behind LB. For now it's pretty complicated and manual job to take Box A or Box B out of LB to do the zero downtime deployment. I am thinking to do something like this: Hardware Load Balancer (LB) Box A running Nginx on 8080 (NA) Box A running Tomcat on 8081 (TA1) Box A running Tomcat on 8082 (TA2) Box B running Nginx on 8080 (NB) Box B running Tomcat on 8081 (TB1) Box B running Tomcat on 8082 (TB2) Basically LB will be directing traffic between NA and NB now. On each of Nginx's we'll have TA1, TA2 and TB1, TB2 configured as upstream servers. Once one of the upstreams's healthcheck page is unresponsive (shutdown) the traffic goes to another one (HttpHealthcheckModule module on Nginx). So the deploy process is simple. Say, TA1 is active with version 0.1 of the app. Healthcheck on TA1 is OK. We start TA2 with Healthcheck on it as ERROR. So Nginx is not talking to it. We deploy app version 0.2 to TA2. Make sure it works. Now, we switch the Healthcheck on TA2 to OK, switch Healthcheck to TA1 to ERROR. Nginx will start serving TA2, and will remove TA1 out of rotation. Done! And now same with the other box. While it sounds all cool and nice, how do we "starve" the Nginx? Say we have pending connections, some users on TA1. If we just turn it off, sessions will break (we have cookie-based sessions). Not good. Any way to starve traffic to one of the upstream servers with Nginx? Thanks!
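
    A minimal shell sketch of the switch-over sequence described above; the hostnames, ports, deployment path and the healthcheck enable/disable URLs are all hypothetical and depend on how the healthcheck page is implemented:

      # deploy version 0.2 to the idle Tomcat (TA2) while TA1 keeps serving
      scp app-0.2.war deployer@boxa:/opt/tomcat-8082/webapps/ROOT.war

      # hypothetical toggle endpoints: mark TA2 healthy, TA1 unhealthy
      curl -X POST http://boxa:8082/healthcheck/enable
      curl -X POST http://boxa:8081/healthcheck/disable

      # nginx stops sending new requests to TA1 once its healthcheck fails;
      # wait for in-flight requests to finish before stopping the old Tomcat
      sleep 60
      ssh deployer@boxa /opt/tomcat-8081/bin/shutdown.sh

    Note that this only drains new traffic; users whose in-memory session lives on TA1 will still be sent to TA2 afterwards, which is exactly the session-breakage concern raised in the question, so sessions need to be externalized (or cookie-only) for the switch to be seamless.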

    Read the article

  • How i can setup a nginx cache strategy that first try amazon s3, then memcache and do a fallback on miss?

    - by Tim
    I have a large site with a lot of pages that almost never change. Right now I am using two memcache servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that barely ever change, I want to upload them to Amazon S3 and shut down one memcache server. Here is my conf:

      location ~ /longterm/(.*){
          proxy_pass http://amazonS3bucket;
          proxy_intercept_errors on;
          proxy_next_upstream http_404;
          error_page 404 503 = @fallback_memcached
      }
      location @fallback_memcache {
          set $memcached_key $uri;
          memcached_pass name:11211;
          error_page 404 @fallback;
      }
      location @fallback {
          try_files $uri $uri/index.html
      }

    I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works, and if I get an Amazon S3 miss and a memcache hit it works, but if I get an Amazon S3 miss and then a memcache miss, it fails when it tries to resolve the last fallback. I am also thinking of using the Amazon S3 FUSE filesystem (http://code.google.com/p/s3fs/) instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?
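
    A quick way to test each tier of the fallback chain in isolation from the shell; this is a sketch, and the bucket host, memcached host and local paths are placeholders:

      # 1. does the S3-backed upstream answer for an existing object?
      curl -sI http://my-bucket.s3.amazonaws.com/longterm/some-page.html | head -1

      # 2. does memcached hold the key nginx would ask for (the $uri)?
      printf 'get /longterm/some-page.html\r\nquit\r\n' | nc memcached-host 11211

      # 3. do the files exist where the final try_files fallback will look?
      ls -l /var/www/site/longterm/some-page.html /var/www/site/longterm/some-page.html/index.html

    If the last command is the one that fails, the problem is in the try_files root rather than in the proxy/memcached chain.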

    Read the article

  • Question marks showing in ls of directory. IO errors too.

    - by jaymoo
    Has anyone seen this before? I've got a RAID 5 mounted on my server and for whatever reason it started showing this:

      jason@box2:/mnt/raid1/cra$ ls -alh
      ls: cannot access e6eacc985fea729b2d5bc74078632738: Input/output error
      ls: cannot access 257ad35ee0b12a714530c30dccf9210f: Input/output error
      total 0
      drwxr-xr-x 5 root root 123 2009-08-19 16:33 .
      drwxr-xr-x 3 root root  16 2009-08-14 17:15 ..
      ?????????? ? ?    ?      ?                ?  257ad35ee0b12a714530c30dccf9210f
      drwxr-xr-x 3 root root  57 2009-08-19 16:58 9c89a78e93ae6738e01136db9153361b
      ?????????? ? ?    ?      ?                ?  e6eacc985fea729b2d5bc74078632738

    The md5 strings are actual directory names and not part of the error. The question marks are odd, and any directory with a question mark throws an I/O error when you attempt to use/delete/etc. it. I was unable to umount the drive due to "busy". Rebooting the server "fixed" it, but it was throwing some RAID errors on shutdown. I have configured two RAID 5 arrays and both started doing this on random files. Both are using the following config:

      mkfs.xfs -l size=128m -d agcount=32
      mount -t xfs -o noatime,logbufs=8

    Nothing too fancy, but part of an optimized config for this box. We're not partitioning the drives, and that was suggested as a possible issue. Could this be the culprit?
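
    Some first diagnostic steps for this kind of I/O error, as a rough sketch; the device names (/dev/md0, /dev/sda) and mount point are assumptions, and xfs_repair must only be run against an unmounted filesystem:

      # look for underlying block-device / RAID / XFS errors
      dmesg | grep -iE 'i/o error|xfs|raid'

      # see what is keeping the filesystem busy before unmounting it
      lsof +D /mnt/raid1

      # read-only consistency check of the XFS filesystem (makes no changes)
      umount /mnt/raid1
      xfs_repair -n /dev/md0

      # physical drive health, if smartmontools is installed
      smartctl -a /dev/sda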

    Read the article

  • upload process times out for different time zones

    - by shilezi
    I have an in-house (NY) app that clients can upload files to. Usually uploads go pretty quickly for most clients, but this particular client in the UK always has problems with uploads. I am not sure if they see any errors, but since we log all exceptions, we see this:

      Error: System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.Net.WebException: The operation has timed out
         at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
         at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
         at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
         at ...

    This shouldn't even be an issue, because we noticed they had this problem before and bumped these settings up to maxRequestLength="2097151" and executionTimeout="14400". Investigating the error further, I read that it could be the application pool idle timeout, since the default is 20 minutes:

      A worker process with process id of serving application pool was shutdown due to inactivity. Application Pool timeout configuration was set to 20 minutes. A new worker process will be started when needed.

    The problem is I am not entirely sure that really is the case, as all the other clients, mostly North American, have no issues; but then their uploads don't seem to go beyond 80 MB, and the UK client has done a 700 MB upload before that I know of. We have tested 750 MB before and the whole process took about 15 minutes for upload and processing. Any help on what the real issue here might be? Thanks.

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters: -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m. We use Nagios to monitor stuff like updates to apply, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approx. half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a possibility that there is a memory leak within our application? I read online and folks say that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit the memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly standard on those nodes, but they would not be receiving as much traffic. Just to add, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows:

                          total   used   free  shared  buffers  cached
      Mem:                 1657   1380    277       0      158     773
      -/+ buffers/cache:            447   1209
      Swap:                 895      0    895

    In your opinion, does this look ok? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS dependent?
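
    The free -m output above mostly reflects the kernel's page cache rather than application usage; a few standard commands help separate the two (no assumptions beyond stock Linux tools, except that jstat requires a full JDK and a single Tomcat JVM process):

      # "-/+ buffers/cache" is the line that matters: only ~447 MB is truly in use here
      free -m

      # biggest resident-memory consumers
      ps aux --sort=-rss | head -15

      # heap and GC utilisation of the Tomcat JVM, sampled every 5 seconds
      jstat -gcutil $(pgrep -f org.apache.catalina) 5000

    Linux deliberately keeps "free" memory low by using it for buffers and cache, and it hands that memory back to applications on demand rather than waiting for free memory to reach zero, so a steadily shrinking "free" column on its own is not evidence of a leak.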

    Read the article

  • Can't start my phpMyAdmin

    - by vrynxzent
    I am creating my own portable server, but I can't get phpMyAdmin to run; MySQL, PHP and Apache are all running, but phpMyAdmin is not. When I check Apache's error log, it states:

      [Fri Nov 09 08:54:37 2012] [warn] pid file F:/Drive WebServer/Drive WebServer/bin/Debug/Apache2bak/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
      PHP Warning: PHP Startup: Unable to load dynamic library './php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
      PHP Warning: PHP Startup: Unable to load dynamic library './php_mysqli.dll' - The specified module could not be found.\r\n in Unknown on line 0
      [Fri Nov 09 08:54:37 2012] [notice] Apache/2.0.64 (Win32) PHP/5.2.17 configured -- resuming normal operations
      [Fri Nov 09 08:54:37 2012] [notice] Server built: Oct 18 2010 01:36:23
      [Fri Nov 09 08:54:37 2012] [notice] Parent: Created child process 6784

    I manually assigned the exact path for this, F:/php/ext/php_mysql.dll, and got:

      PHP Warning: PHP Startup: Unable to load dynamic library 'F:/php/ext/php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0

    Still the same error. I set this option in php.ini: extension_dir = "./" Another error also pops up: it says libmysql.dll is missing. PHP Version: 5.2.17. Any help would be appreciated. ;)

    Read the article

  • NGINX + PHP-FPM - Strange issue when trying to display images via php-gd / readfile - Connection wont terminate

    - by anonymous-one
    Ok, to get the details out of the way: the PHP script can be anything as simple as:

      <? header('Content-Type: image/jpeg'); readfile('/local/image.jpg'); ?>

    When I try to execute this via nginx + php-fpm, the image does show up in the browser, but here is what happens: IE - the page stays blank for a long period of time, and eventually the image is shown. Chrome - the image shows, but the loading spinner spins and spins for a long period of time; eventually the debugger will show the image in red as an error, even though the image shows up fine. Everything else on the server works great. It's pushing out about 100 Mbit steady serving static content, so this is definitely a php-fpm related issue. I THINK this may have something to do with the chunked encoding being sent back wrong? Also, I threw in a pause before the image was read, got the pid of the fpm process, and it looks as though it's terminating correctly (from strace):

      shutdown(3, 1 /* send */) = 0
      recvfrom(3, "\1\5\0\1\0\0\0\0", 8, 0, NULL, NULL) = 8
      recvfrom(3, "", 8, 0, NULL, NULL) = 0
      close(3) = 0

    The above was dumped long before IE/Chrome decided to give up loading the image (even though the image was shown). Displaying HTML / text content is fine. Big bodies etc. all load nice and fast and terminate right away (as they should). Doing something like: THIS IS THE IMAGE ---BINARY DUMP OF IMAGE--- works fine too. Any ideas?
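
    One way to see what the browser is actually receiving is to compare the response headers and byte count against the source file; a sketch, with the URL and local path as placeholders:

      # inspect the status line, Content-Type, Content-Length / Transfer-Encoding headers
      curl -sv -o /dev/null http://example.com/image.php 2>&1 | grep -iE '^< |content-|transfer-'

      # does the number of bytes delivered match the image on disk?
      curl -s http://example.com/image.php | wc -c
      stat -c %s /local/image.jpg

    If the byte counts match but the response is chunked and the final zero-length chunk never arrives, that points at the FastCGI/nginx buffering layer rather than the PHP script itself.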

    Read the article

  • Lenovo Thinkpad T430 not booting from HDD if there is a USB modem connected

    - by user93353
    I have a T430 Lenovo ThinkPad running Win7. I use a ZTE USB modem (something like this) for my internet connection. I usually keep the modem plugged into the USB port even when the laptop is shut down or hibernating. This worked fine on my earlier laptops. But with the Lenovo, my laptop doesn't boot if the modem is in the USB port. It shows the initial character-based screen where it gives the ThinkPad message and BIOS details, and then waits. If I pull out the modem, it goes ahead. I have disabled USB as a boot option in my BIOS settings, but even then this happens sometimes (though not all the time). Likewise while resuming from hibernation. The USB modem also has drivers and an ISP connection client which get installed the first time you use it on any machine. I have used multiple laptops (HP, Dell, Acer, Gateway) but never faced this problem before. I have friends who use other ThinkPad models but haven't faced this issue. Any resolutions or workarounds for this?

    Read the article

  • Windows 7 clean install becomes corrupt after reboot (repeated many fresh installs)

    - by pjotr_dolphin
    My laptop keeps crashing on boot after a clean Windows 7 install. Ok, here is the story, and some facts. Computer: Samsung NP900X3C-A04HK (256GB SSD, 8GB RAM). OS to install: Windows 7 Ultimate SP1 (not from Samsung, my own fresh Windows). I purchased this laptop about a year ago and never booted it into the Windows Home that was installed on it; I installed Ubuntu directly on the machine. Full disk encryption was the selected install, so of course it wiped the complete disk (including the Samsung Recovery Partition). After some time, I felt like going back to Windows, as Windows 7 is actually quite nice. So I went and bought a fresh Windows 7 Ultimate with SP1. Now to the tricky part. Windows installs perfectly, and after installing all Windows updates, drivers from Samsung, and the software I need, it is time to shut it down and go to bed. When I start it up again, it does not boot. These are the types of errors I have gotten so far (I have fresh-installed it more than a dozen times now, and tried different suggestions from threads on the net):

      Windows failed to start...
      Status: 0xc000000f
      Info: The boot selection failed because a required device is inaccessible.

      File: /boot/bcd
      Status: 0xc000000f
      Info: an error occurred while attempting to read the boot configuration data.

    And some other errors, not all the same (I don't remember them all). I have run different disk checks, and all say my SSD is in perfect shape. Note: soft reboots from the Windows menu work, and the install never gets corrupted then. But if I shut down and then start it up again, that is when it happens. Can someone help me avoid going back to Ubuntu? What can be the cause, and how can it be fixed so I do not get these problems again?

    Read the article

  • Monitor programs accessing my keyboard?

    - by Anti Earth
    As of a few days ago, my computer is behaving 'erratically'. When I am typing, my pointer will randomly move to another place in the text and start typing a semi-random string of characters. ("gvyfn" is common; it has typed this about 8 times whilst I composed all the text above.) It often highlights part of or all the text and overwrites it. It sometimes goes into loops of pressing Control-Alt-Delete, bringing up the Windows 7 menu. It sometimes even messes with mouse clicks; they have unexpected results, like requesting admin privileges from applications instead of switching to their window. I believe this is because it is holding an Alt/function key down. This behaviour happens periodically, in waves. It might subside for an hour, then continue to haunt me. I believe it to be a virus or malicious program. My anti-virus (Symantec) and multiple MS rootkit removers could not find anything suspicious. I've noticed that sometimes it re-maps keys and types gibberish when I press certain keys (though no pattern is evident). I believe a malicious program has installed a keyhook on my computer. I'm wondering... - Is there a way to let me view which programs are emulating keystrokes? - Is there a way to view what keyboard hooks are installed? (I'm also at liberty to try any other techniques to remove this blasted thing. It is easily the most frustrating computer problem I've encountered.) Thanks!

    Read the article

  • SSH connect error

    - by DMDGeeker
    I use a notebook with Ubuntu 12.10 and am trying to connect to a server running Ubuntu 12.04. The server has openssh-server installed and allows both public key and password login. I can sometimes connect to the server fine, but after a few minutes it starts failing. First, it shows me these messages:

      WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
      ......
      Add correct host key in /home/myname/.ssh/known_hosts to get rid of this message.

    But I never reinstalled the system or openssh-server; neither has ever changed, and the server has never shut down or rebooted. Second, after I remove the relevant key from my known_hosts and use ssh to connect to the server again, it lets me type my password. Then my nightmare begins: Permission denied (publickey,password). But I typed the correct password! PS: Both password and public key logins have succeeded before, but the problem appears again after I log out and then log back in.
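
    The usual way to clear the stale key and to see why the password is then being rejected; the hostname and username are placeholders:

      # remove the old host key that triggers the warning
      ssh-keygen -R server.example.com

      # verbose client output shows which authentication methods are tried and why they fail
      ssh -v myname@server.example.com

      # on the server, watch the auth log during a login attempt
      sudo tail -f /var/log/auth.log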

    Read the article

  • Ubuntu 9.10 won't reboot after replacing a failed drive

    - by user149041
    Hello Serverfault community. I hope someone can shed light on a peculiar problem I am having with an Ubuntu 9.10 server install. I am not a Linux expert but have the responsibility of fixing the box if something goes wrong. DOH! I have Ubuntu 9.10 server installed on on a desktop platform: Compaq Presario SR5027CL. There are two 1TB SATA drives configured in a RAID 1 array; I use the box as an email backup server for a small group of users. Last week one of the drives failed and was replaced with a new drive of the same type. The problem I have been having is getting the box to reboot after a restart or a shutdown halt. The OS and the RAID 1 array are on the same drives that make up the RAID 1 array. The replacement drive (sda) was added to the box and the partitions were created to match the existing good drive (sdb). The array is made up of sda1 and sdb1. I found an interesting point while checking the BIOS settings: there is a "HDD Boot Group Priority" section, and the new drive was selected as the "1. 3rd master"; the server wouldn't boot configured like that, but when I set the old drive to be "1. 4th master", the box will reboot. I'm checking some more things, but I would certainly appreciate any useful information. Thanks in advance.
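
    Assuming this is Linux software RAID managed by mdadm and the mirror is /dev/md0 (both assumptions), the usual sequence after swapping the disk is roughly the following; the GRUB step matters because a brand-new drive has an empty MBR, which fits the BIOS boot-priority symptom described:

      # the new partition (sda1) was already created to match sdb1, so re-add it to the mirror
      sudo mdadm --manage /dev/md0 --add /dev/sda1

      # let the rebuild finish before rebooting
      watch cat /proc/mdstat

      # put a boot loader on the new disk so the BIOS can boot from either drive
      sudo grub-install /dev/sda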

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebShere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great, I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log: Event Type: Error Event Source: BizTalk Server 2009 Event Category: BizTalk Server 2009 Event ID: 5754 Date:  6/9/2010 Time:  10:16:41 AM User:  N/A Computer: WINDOWS2003 Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.  Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124  MessageId:  {76825C7C-611A-4A56-8A6F-35E1124BDB5C}  InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6} The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this: 00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn 00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK) 00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent 0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK) 0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED) 0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED) 0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED) 0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock 00006260 15:40:20.328769   3500.4           :       --}! 
ImqQueue::lock (rc=Unknown(1)) 00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock 00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1)) It seemed like the MQRC_NOT_CONNECTED error was being caused by a security related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.
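
    For reference, WebSphere MQ's own command-line tools can confirm the authorization side of this on a distributed (Unix/Windows) queue manager host; the queue manager, queue and principal names below are placeholders, and IBM i uses its own CL equivalents instead:

      # show what authority the BizTalk service account currently has
      dspmqaut -m QMGR1 -t qmgr -p bts_svc
      dspmqaut -m QMGR1 -n APP.OUTBOUND.QUEUE -t queue -p bts_svc

      # grant connect/inquire on the queue manager and put on the target queue
      setmqaut -m QMGR1 -t qmgr -p bts_svc +connect +inq
      setmqaut -m QMGR1 -n APP.OUTBOUND.QUEUE -t queue -p bts_svc +put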

    Read the article

  • SQL Server Configuration timeouts - and a workaround [SSIS]

    - by jamiet
    Ever since I started writing SSIS packages back in 2004 I have opted to store configurations in .dtsConfig (.i.e. XML) files rather than in a SQL Server table (aka SQL Server Configurations) however recently I inherited some packages that used SQL Server Configurations and thus had to immerse myself in their murky little world. To all the people that have ever gone onto the SSIS forum and asked questions about ambiguous behaviour of SQL Server Configurations I now say this... I feel your pain! The biggest problem I have had was in dealing with the change to the order in which configurations get applied that came about in SSIS 2008. Those changes are detailed on MSDN at SSIS Package Configurations however the pertinent bits are: As the utility loads and runs the package, events occur in the following order: The dtexec utility loads the package. The utility applies the configurations that were specified in the package at design time and in the order that is specified in the package. (The one exception to this is the Parent Package Variables configurations. The utility applies these configurations only once and later in the process.) The utility then applies any options that you specified on the command line. The utility then reloads the configurations that were specified in the package at design time and in the order specified in the package. (Again, the exception to this rule is the Parent Package Variables configurations). The utility uses any command-line options that were specified to reload the configurations. Therefore, different values might be reloaded from a different location. The utility applies the Parent Package Variable configurations. The utility runs the package. To understand how these steps differ from SSIS 2005 I recommend reading Doug Laudenschlager’s blog post Understand how SSIS package configurations are applied. The very nature of SQL Server Configurations means that the Connection String for the database holding the configuration values needs to be supplied from the command-line. Typically then the call to execute your package resembles this: dtexec /FILE Package.dtsx /SET "\Package.Connections[SSISConfigurations].Properties[ConnectionString]";"\"Data Source=SomeServer;Initial Catalog=SomeDB;Integrated Security=SSPI;\"", The problem then is that, as per the steps above, the package will (1) attempt to apply all configurations using the Connection String stored in the package for the "SSISConfigurations" Connection Manager before then (2) applying the Connection String from the command-line and then (3) apply the same configurations all over again. In the packages that I inherited that first attempt to apply the configurations would timeout (not unexpected); I had 8 SQL Server Configurations in the package and thus the package was waiting for 2 minutes until all the Configurations timed out (i.e. 15seconds per Configuration) - in a package that only executes for ~8seconds when it gets to do its actual work a delay of 2minutes was simply unacceptable. We had three options in how to deal with this: Get rid of the use of SQL Server configurations and use .dtsConfig files instead Edit the packages when they get deployed Change the timeout on the "SSISConfigurations" Connection Manager #1 was my preferred choice but, for reasons I explain below*, wasn't an option in this particular instance. #2 was discounted out of hand because it negates the point of using Configurations in the first place. This left us with #3 - change the timeout on the Connection Manager. 
This is done by going into the properties of the Connection Manager, opening the "All" tab and changing the Connect Timeout property to some suitable value (in the screenshot below I chose 2 seconds). This change meant that the attempts to apply the SQL Server configurations timed out in 16 seconds rather than two minutes; clearly this isn't an optimum solution but its certainly better than it was. So there you have it - if you are having problems with SQL Server configuration timeouts within SSIS try changing the timeout of the Connection Manager. Better still - don't bother using SQL Server Configuration in the first place. Even better - install RC0 of SQL Server 2012 to start leveraging SSIS parameters and leave the nasty old world of configurations behind you. @Jamiet * Basically, we are leveraging a SSIS execution/logging framework in which the client had invested a lot of resources and SQL Server Configurations are an integral part of that.

    Read the article

  • SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password

    - by pinaldave
    In the last four days (April 21-24), I received calls from friends who told me that they had gotten strange emails from me. To my surprise, I did not send them any emails. I was not worried until my wife complained that she was not able to find one of the very important folders containing our daughter's photos, which is located on our shared drive. This was alarming on my part, so I started a search around my computer's folders. Again, please note that I am by no means a security expert. I scanned my entire computer for viruses and spyware, and strangely, I found nothing. I tried to think about what could cause this. I suddenly realized that there had been a power outage in my area for about two hours during the days I have mentioned. Back then, my wireless router needed to be reset, and so I did. I had set up WPA-PSK [TKIP] + WPA2-PSK [AES] security, but my key was very simple ('SQLAuthority1'), and I never thought of changing it. (It is now replaced with a very complex one.) While checking the Attached Devices, I found that there was a very strange computer name and IP attached to my network. As soon as I found out that there was a strange device attached to my network, I shut down my local network. Afterwards, I reconfigured my wireless router with a more complex security key. Since I created the complex password, I have noticed that the user is no longer connecting to my machine. Subsequently, I figured out that I can also set up an Access Control List, and I added my networked computer to that list as well. When I tried to connect from an external laptop which was not in the list but had a valid security key, I was not able to access the network or connect to it. I was also unable to connect using remote desktop, so I think that is good. If you have received any nasty emails from me (from my gmail account) during the afore-mentioned days, I want to apologize. I am already paying for my negligence of not using a complex password, by way of losing important photos of my daughter. I have already checked with my client, whose password I saved in SSMS, so there was no issue at all. In fact, I have decided to never leave any saved production server password in my SSMS. Here is the tip SQL SERVER – Clear Drop Down List of Recent Connection From SQL Server Management Studio to clean them. I think after doing all this, I am feeling safe right now. However, I believe that safety is often an illusion. I need your help and advice if there is anything more I can do to stop unauthorized access. I am seeking advice and help through your comments. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • How To Clear An Alert - Part 2

    - by werner.de.gruyter
    There were some interesting comments and remarks on the original posting, so I decided to do a follow-up and address some of the issues that got raised... Handling Metric Errors First of all, there is a significant difference between an 'error' and an 'alert'. An 'alert' is the violation of a condition (a threshold) specified for a given metric. That means that the Agent is collecting and gathering the data for the metric, but there is a situation that requires the attention of an administrator. An 'error' on the other hand however, is a failure to collect metric data: The Agent is throwing the error because it cannot determine the value for the metric Whereas the 'alert' guarantees continuity of the metric data, an 'error' signals a big unknown. And the unknown aspect of all this is what makes an error a lot more serious than a regular alert: If you don't know what the current state of affairs is, there could be some serious issues brewing that nobody is aware of... The life-cycle of a Metric Error Clearing a metric error is pretty much the same workflow as a metric 'alert': The Agent signals the error after it failed to execute the metric The error is uploaded to the OMS/repository, where it becomes visible in the Console The error will remain active until the Agent is able to execute the metric successfully. Even though the metric is still getting scheduled and executed on a regular basis, the error will remain outstanding as long as the Agent is not capable of executing the metric correctly Knowing this, the way to fix the metric error should be obvious: Take the 'problem' away, and as soon as the metric is executed again (based on the frequency of the metric), the error will go away. The same tricks used to clear alerts can be used here too: Wait for the next scheduled execution. For those metrics that are executed regularly (like every 15 minutes or so), it's just a matter of waiting those minutes to see the updates. The 'Reevaluate Alert' button can be used to force a re-execution of the metric. In case a metric is executed once a day, this will be a better way to make sure that the underlying problem has been solved. And if it has been, the metric error will be removed, and the regular data points will be uploaded to the repository. And just in case you have to 'force' the issue a little: If you disable and re-enable a metric, it will get re-scheduled. And that means a new metric execution, and an update of the (hopefully) fixed problem. Database server-generated alerts and problem checkers There are various ways the Agent can collect metric data: Via a script or a SQL statement, reading a log file, getting a value from an SNMP OID or listening for SNMP traps or via the DBMS_SERVER_ALERTS mechanism of an Oracle database. For those alert which are generated by the database (like tablespace metrics for 10g and above databases), the Agent just 'waits' for the database to report any new findings. If the Agent has lost the current state of the server-side metrics (due to an incomplete recovery after a disaster, or after an improper use of the 'emctl clearstate' command), the Agent might be still aware of an alert that the database no longer has (or vice versa). The same goes for 'problem checker' alerts: Those metrics that only report data if there is a problem (like the 'invalid objects' metric) will also have a problem if the Agent state has been tampered with (again, the incomplete recovery, and after improper use of 'emctl clearstate' are the two main causes for this). 
    The best way to deal with these kinds of mismatches is to simply disable and re-enable the metric again: disabling clears the state of the metric, and re-enabling forces a re-execution of the metric, so the new and updated results can be uploaded to the repository. Starting with 10gR5, the Agent performs additional checks and verifications after each restart of the Agent and/or each state change of the database (shutdown/startup, or failover in the case of Data Guard) to catch these kinds of mismatches.
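
    For reference, the Agent-side commands touched on here; a sketch assuming a 10g/11g Enterprise Manager Agent with $AGENT_HOME/bin on the PATH:

      # check that the Agent is up and when it last uploaded
      emctl status agent

      # force an upload of any pending metric data to the OMS/repository
      emctl upload agent

      # clears the Agent's collection state; use sparingly, as improper use can cause
      # exactly the server-generated-alert mismatches described above
      emctl clearstate agent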

    Read the article

  • Error Handling Examples(C#)

    "The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs." (OWASP, 2011)

    No Error Handling

    The absence of error handling is still a form of error handling. Based on the code in Figure 1, if an error occurred and was not handled within either the ReadXml or BuildRequest methods, the error would bubble up to the Search method. Since this method does not handle any exceptions, the error will then bubble up the stack trace. If this continues and the error is not handled within the application, then the environment in which the application is running will notify the user that an error occurred, in a manner that depends on the type of application.

      Figure 1: No Error Handling
      public DataSet Search(string searchTerm, int resultCount)
      {
          DataSet dt = new DataSet();
          dt.ReadXml(BuildRequest(searchTerm, resultCount));
          return dt;
      }

    Generic Error Handling

    One simple way to add error handling is to catch all errors by default. If you examine the code in Figure 2, you will see a try-catch block. On April 6th 2010 Louis Lazaris clearly describes a try-catch statement by defining both the try and catch aspects of the statement: "The try portion is where you would put any code that might throw an error. In other words, all significant code should go in the try section. The catch section will also hold code, but that section is not vital to the running of the application. So, if you removed the try-catch statement altogether, the section of code inside the try part would still be the same, but all the code inside the catch would be removed." (Lazaris, 2010) He also states that any error that occurs in the try section stops the execution of the try section and redirects all execution to the catch section. The catch section receives an object containing information about the error that occurred so that the system can gracefully handle the error properly. When errors occur they are commonly logged in some form; this could be an email, a database entry, a web service call, a log file, or just an error message displayed to the user. Depending on the error, sometimes applications can recover, while other errors force the application to close.

      Figure 2: Generic Error Handling
      public DataSet Search(string searchTerm, int resultCount)
      {
          DataSet dt = new DataSet();
          try
          {
              dt.ReadXml(BuildRequest(searchTerm, resultCount));
          }
          catch (Exception ex)
          {
              // Handle all Exceptions
          }
          return dt;
      }

    Error Specific Error Handling

    Like generic error handling, error-specific error handling allows for the catching of specific known errors that may occur. For example, wrapping a try-catch statement around a SOAP web service call would allow the application to handle any error that was generated by the SOAP web service. Now, suppose the system wanted to send a message to the web service provider every time a SOAP error occurred, but did not want to notify them if any other type of error occurred, like a network timeout issue. This would be very tedious to accomplish using the generic error handling methodology, and that brings us to the use case for the error-specific error handling methodology: it allows the try-catch statement to catch various types of errors and handle each one depending on the type of error that occurred.
    In Figure 3, the code attempts to handle TimeoutException differently compared to how it potentially handles all other errors. This allows for specific error handling for each type of known error, and still allows for error handling of any unknown error that may occur during the execution of the try-catch statement.

      Figure 3: Error Specific Error Handling
      public DataSet Search(string searchTerm, int resultCount)
      {
          DataSet dt = new DataSet();
          try
          {
              dt.ReadXml(BuildRequest(searchTerm, resultCount));
          }
          catch (TimeoutException ex)
          {
              // Handle TimeoutException only
          }
          catch (Exception)
          {
              // Handle all Exceptions
          }
          return dt;
      }

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows. For some people local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files: Local Storage Pros You are in control of your data Your files are portable and can go with you when needed if using external or flash drives Files are accessible without an internet connection You can easily add more storage capacity as needed (additional drives, etc.) Cons You need to arrange room for your storage media (if you have multiple externals drives, etc.) Possible hardware failure No access to your files if you forget to bring your storage media with you or it is too bulky to bring along Theft and/or loss of home with all contents due to circumstances like fire If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen…unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure…expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files: Cloud Storage Pros No need to carry around flash or bulky external drives All of your files are accessible wherever there is an internet connection No need to deal with local storage media (or its’ upkeep) Your files are still safe if your home is broken into or other unfortunate circumstances occur Cons Your files and data are not 100% under your control Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc) No access to your files if you do not have an internet connection The cloud storage provider may eventually shutdown due to financial hardship or other unforeseen circumstances The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers! 

    Read the article

  • VMWare player - compiling server modules - Ubuntu 13.10

    - by user211976
    While running Ubuntu 13.04 whenever the Linux kernel had been updated, this used to make vmware player happy: sudo apt-get install linux-headers-$(uname -r) sudo vmware-modconfig --console --install-all Yesterday I upgraded to Ubuntu 13.10 and lo and behold, the above workaround does not work anymore: Unable to install all modules. See log for details. I assume by "See log" it means the files in /tmp/vmware-root/*log root@hugin:/tmp/vmware-root# ls -ltr /tmp/vmware-root/ totalt 16 -rw-r--r-- 1 root root 3815 nov 6 13:54 vmware-apploader-17267.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-17693.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-17742.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-18701.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-18750.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-19100.log -rw-r--r-- 1 root root 0 nov 6 13:54 vmware-vmis-19149.log -rw-r--r-- 1 root root 9250 nov 6 13:54 vmware-modconfig-17267.log root@hugin:/tmp/vmware-root# tail /tmp/vmware-root/vmware-modconfig-17267.log 2013-11-06T13:54:28.950+01:00| modconfig| I120: Copied Module.symvers from "/tmp/modconfig-wpDrtf/vmci-only/Module.symvers" to "/tmp/modconfig-wpDrtf/vsock-only/Module.symvers". 2013-11-06T13:54:28.950+01:00| modconfig| I120: Building module with command "/usr/bin/make -j8 -C /tmp/modconfig-wpDrtf/vsock-only auto-build HEADER_DIR=/lib/modules/3.11.0-12-generic/build/include CC=/usr/bin/gcc IS_GCC_3=no" 2013-11-06T13:54:31.048+01:00| modconfig| I120: Successfully built vsock. Module is currently at "/tmp/modconfig-wpDrtf/vsock.o". 2013-11-06T13:54:31.048+01:00| modconfig| I120: Found the vsock symvers file at "/tmp/modconfig-wpDrtf/vsock-only/Module.symvers". 2013-11-06T13:54:31.048+01:00| modconfig| I120: Installing vsock from /tmp/modconfig-wpDrtf/vsock.o to /lib/modules/3.11.0-12-generic/misc/vsock.ko. 2013-11-06T13:54:31.048+01:00| modconfig| I120: Registering file "/lib/modules/3.11.0-12-generic/misc/vsock.ko". 2013-11-06T13:54:31.400+01:00| modconfig| I120: "/usr/lib/vmware-installer/2.1.0/vmware-installer" exited with status 0. 2013-11-06T13:54:31.400+01:00| modconfig| I120: Registering file "/usr/lib/vmware/symvers/vsock-3.11.0-12-generic". 2013-11-06T13:54:31.764+01:00| modconfig| I120: "/usr/lib/vmware-installer/2.1.0vmware-installer" exited with status 0. 2013-11-06T13:54:31.786+01:00| modconfig| I120: We are now shutdown. Ready to die! root@hugin:/tmp/vmware-root# tail /tmp/vmware-root/vmware-apploader-17267.log 2013-11-06T13:54:20.911+01:00| appLoader| I120: libglib-2.0.so.0 <SYSTEM> 2013-11-06T13:54:20.911+01:00| appLoader| I120: libz.so.1 <SYSTEM> 2013-11-06T13:54:20.911+01:00| appLoader| I120: libvmware-modconfig-console.so <SHIPPED> 2013-11-06T13:54:20.912+01:00| appLoader| I120: Shipped glib version is 2.24 2013-11-06T13:54:20.912+01:00| appLoader| I120: System glib version is 2.38 2013-11-06T13:54:20.912+01:00| appLoader| I120: Using system version of glib. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libgcc_s.so.1. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libglib-2.0.so.0. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading system version of libz.so.1. 2013-11-06T13:54:20.912+01:00| appLoader| I120: Loading shipped version of libxml2.so.2.
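
    For reference, the usual rebuild-and-inspect loop after a kernel or distribution upgrade, using only the commands and log locations already shown above (plus build-essential in case the compiler toolchain is missing):

      # make sure headers and a compiler are present for the running kernel
      sudo apt-get install build-essential linux-headers-$(uname -r)

      # rebuild all VMware kernel modules, then look for the module that actually failed
      sudo vmware-modconfig --console --install-all
      grep -iE 'error|fail' /tmp/vmware-root/vmware-modconfig-*.log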

    Read the article

  • Fusion Concepts: Fusion Database Schemas

    - by Vik Kumar
    You often read about FUSION and FUSION_RUNTIME users while dealing with Fusion Applications. There is one more called FUSION_DYNAMIC. Here are some details on the difference between these three and the purpose of each type of schema. FUSION: It can be considered as an Administrator of the Fusion Applications with all the corresponding rights and powers such as owning tables and objects, providing grants to FUSION_RUNTIME.  It is used for patching and has grants to many internal DBMS functions. FUSION_RUNTIME: Used to run the Applications.  Contains no DB objects. FUSION_DYNAMIC: This schema owns the objects that are created dynamically through ADM_DDL. ADM_DDL is a package that acts as a wrapper around the DDL statement. ADM_DDL support operations like truncate table, create index etc. As the above statements indicate that FUSION owns the tables and objects including FND tables so using FUSION to run applications is insecure. It would be possible to modify security policies and other key information in the base tables (like FND) to break the Fusion Applications security via SQL injection etc. Other possibilities would be to write a logon DB trigger and steal credentials etc. Thus, to make Fusion Applications secure FUSION_RUNTIME is granted privileges to execute DMLs only on APPS tables. Another benefit of having separate users is achieving Separation of Duties (SODs) at schema level which is required by auditors. Below are the roles and privileges assigned to FUSION, FUSION_RUNTIME and FUSION_DYNAMIC schema: FUSION It has the following privileges: Create SESSION Do all types of DDL owned by FUSION. Additionally, some specific priveleges on other schemas is also granted to FUSION. EXECUTE ON various EDN_PUBLISH_EVENT It has the following roles: CTXAPP for managing Oracle Text Objects AQ_SER_ROLE and AQ_ADMINISTRATOR_ROLE for managing Advanced Queues (AQ) FUSION_RUNTIME It has the following privileges: CREATE SESSION CHANGE NOTIFICATION EXECUTE ON various EDN_PUBLISH_EVENT It has the following roles: FUSION_APPS_READ_WRITE for performing DML (Select, Insert, Delete) on Fusion Apps tables FUSION_APPS_EXECUTE for performing execute on objects such as procedures, functions, packages etc. AQ_SER_ROLE and AQ_ADMINISTRATOR_ROLE for managing Advanced Queues (AQ) FUSION_DYNAMIC It has following privileges: CREATE SESSION, PROCEDURE, TABLE, SEQUENCE, SYNONYM, VIEW UNLIMITED TABLESPACE ANALYZE ANY CREATE MINING MODEL EXECUTE on specific procedure, function or package and SELECT on specific tables. This depends on the objects identified by product teams that ADM_DDL needs to have access  in order to perform dynamic DDL statements. There is one more role FUSION_APPS_READ_ONLY which is not attached to any user and has only SELECT privilege on all the Fusion objects. FUSION_RUNTIME does not have any synonyms defined to access objects owned by FUSION schema. A logon trigger is defined in FUSION_RUNTIME which sets the current schema to FUSION and eliminates the need of any synonyms.   What it means for developers? Fusion Application developers should be using FUSION_RUNTIME for testing and running Fusion Applications UI, BC and to connect to any SQL front end like SQL *PLUS, SQL Loader etc. 
For testing ADFbc using AM tester while using FUSION_RUNTIME you may hit the following error: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.sql.SQLException, msg=invalid name pattern: FUSION.FND_TABLE_OF_VARCHAR2_255 The fix is to add the below JVM parameter in the Run/Debug client property in the Model project properties -Doracle.jdbc.createDescriptorUseCurrentSchemaForSchemaName=true More details are discussed in this forum thread for it.

    Read the article

  • Your Job Search Should be More Than Just a New Year's Resolution

    - by david.talamelli
    I love the beginning of a new year, it is a great chance to refocus and either re-evaluate goals you are working to or even set new ones. I don't have any statistics to measure this but I am sure that one of the more popular new year's resolutions in the general workforce is to either get a new job or work to further develop one's career. I think this is a good idea, in today's competitive work force people should have a plan of what they want to do, what role they are after and how to get there. One common mistake I think many people make though is that a career plan shouldn't be a once a year thought. When people finish with the holiday season with their new year's resolution to find a new job fresh in their mind, you can see the enthusiasm and motivation a person has to make something happen. Emails are sent, calls are made, applications are made, networking is happening, etc..... Finding the right role that you are after however can be difficult, while it would be great if that dream role was available just at the time you happened to be looking for it - in reality this is not always the case. Job Seekers need to keep reminding themselves that while sometimes that dream job they are after is available at the same time they are looking, that also a Job search can be a difficult and long process. Many people who set out with the best of intentions in January to find a new job can soon lose interest in a job search if they do not immediately find a role. Just like the Christmas decorations are put away and the photos from New Year's are stored away - a Job Seeker's motivation may slowly decrease until that person finds themselves 12 months later in the same situation in same role and looking for that new opportunity again. Rather than just "going for it" and looking for a role in the month of January, a person's job search or career plan should be an ongoing activity and thought process that is constantly updated and evaluated over the course of the year. It can be hard to stay motivated over an extended period of time, especially when you are newly motivated and ready for that new role and the results are not immediate. Rather than letting your job search fall down the priority list and into the "too hard basket" a few ideas that may keep your enthusiasm fresh Update your resume every 6 months, even if you are not looking for a job - it is easy to forget what you have accomplished if you don't keep your details updated. Also it is good to be prepared and have a resume ready to go in case you do get an unexpected phone call for that 'dream job' you have been hoping for. Work out what you want out of your next role before you begin your job search - rather than aimlessly searching job ads or talking to people - think of the organisations or type of role you would like before you search. If you know what you are looking for it will be much easier to work out how to get there than if you do not know what you want. Don't expect immediate results once you decide to look for another job, things don't always fall into place. Timing and delivery can be important pieces of being selected for a role, companies don't hire every role in January. Have an open mind - people you meet or talk to may not result in immediate results for your job search but every connection may help you get a bit closer to what you are after . These actions will not guarantee a positive result, but in today's competitive work force every little of extra preparation and planning helps. 
All the best for 2011 and I hope your career plan whatever it may be is a success.

    Read the article

  • SharePoint 2010 Sandboxed solution SPGridView

    - by Steve Clements
    If you didn't know, you probably will soon: the SPGridView is not available in Sandboxed solutions. To be honest there doesn't seem to be a great deal of information out there about the whys and what nots; basically it's not part of the Sandbox SharePoint API. Of course the error message from SharePoint is about as useful as a punch in the face…

      An unexpected error has been encountered in this Web Part.
      Error: A Web Part or Web Form Control on this Page cannot be displayed or imported.
      You don't have Add and Customize Pages permissions required to perform this action

    …that's if you have debug=true set; if not, you get the classic "This webpart cannot be added"!! Love that one! But with a little digging you should find something like this…

      [TypeLoadException: Could not load type Microsoft.SharePoint.WebControls.SPGridView from assembly
      'Microsoft.SharePoint, Version=14.900.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c'.]

    Depending on what you want to do with the SPGridView, this may not help at all. But I'm looking to inherit the theme of the site and style it accordingly. After spending a bit of time with Chrome's FireBug I was able to get the required CSS classes. I created my own class inheriting from GridView (note the lack of a preceding SP!) and simply set the styles in there. Inherit from the standard GridView:

      public class PSGridView : GridView

    Set the styles in the constructor:

      public PSGridView()
      {
          this.CellPadding = 2;
          this.CellSpacing = 0;
          this.GridLines = GridLines.None;
          this.CssClass = "ms-listviewtable";
          this.Attributes.Add("style", "border-bottom-style: none; border-right-style: none; width: 100%; border-collapse: collapse; border-top-style: none; border-left-style: none;");
          this.HeaderStyle.CssClass = "ms-viewheadertr";
          this.RowStyle.CssClass = "ms-itmhover";
          this.SelectedRowStyle.CssClass = "s4-itm-selected";
          this.RowStyle.Height = new Unit(25);
      }

    Then, as you can't override the Columns property setter, add each column through a custom method of your own and set its style there in the same way. And that should be enough to get a nicely styled grid without the need for the SPGridView, but seriously… get the SPGridView in the Sandbox!!!
    Technorati Tags: Sharepoint 2010, SPGridView, Sandbox Solutions, Sandbox

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build-in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats, from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL Query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll spot quickly trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.

    Read the article

  • Preview Before You Paste with Live Preview in Office 2010

    - by DigitalGeekery
    Do you often find yourself frustrated that content you just copied and pasted didn’t turn out the way you expected? With the new Live Preview in Office 2010, you can preview how copied content will look when it’s pasted, even between Office applications. Not every paste preview option will be available in every circumstance; the available options depend on the applications being used and what content is copied.

    Copy your content as normal by right-clicking and selecting Copy, pressing Ctrl + C, or selecting Copy from the Home tab. Next, select the location where you want to paste the content. Now you can access the Paste Preview buttons either by selecting the Paste dropdown list from the Home tab, or by right-clicking. As you hover your cursor over each of the Paste Options buttons, you will see a preview of what it will look like if you paste using that option. Click the corresponding button when you find the paste option you like.

    The “Paste” option pastes all the content and formatting, as you can see below. “Values” pastes values only, with no formatting. “Formatting” pastes only the formatting, with no values. Hover over “Paste Special” to reveal any additional paste options. The process is similar in other Office applications. As you can see in the Word document below, “Keep Text Only” pastes the text, but not the orange color format from the original text.

    Even after you’ve pasted, there is still time to change your mind. After you paste content you’ll see a Paste Options button near your content. If you don’t, you can pull it up by pressing the Ctrl key. Note: this is also available after using Ctrl + V to paste. Click to enable the dropdown and select one of the available options.

    Using Live Paste Preview between multiple applications is just as easy. If we preview pasting the content from our Word document into PowerPoint using the “Keep Source Formatting” option, we’ll see that the outcome looks awful. Selecting “Use Destination Theme” merges the text into the theme of the PowerPoint document and looks a lot better on our slide.

    Live Paste Preview is a nice addition to Office 2010 and is sure to save time spent undoing the unexpected consequences of pasting content. Looking for more Office 2010 tips? Check out some of our other Office 2010 posts, like how to create a customized tab on the Office 2010 ribbon, and how to use the streamlined printing features in Office 2010.

    Read the article

  • Creating and using VM Groups in VirtualBox

    - by Fat Bloke
    With VirtualBox 4.2 we introduced the Groups feature, which allows you to organize and manage your guest virtual machines collectively, rather than individually. Groups are quite a powerful concept and there are a few nice features you may not have discovered yet, so here’s a bit more information about groups, and how they can be used.

    Creating a group
    Groups are just ad hoc collections of virtual machines, and there are several ways of creating a group. In the VirtualBox Manager GUI: drag one VM onto another to create a group of those two VMs, then drag and drop more VMs into that group; or select multiple VMs (using Ctrl or Shift and click), then select the menu Machine...Group, press Cmd+U (Mac) or Ctrl+U (Windows), or right-click the multiple selection and choose Group. From the command line: group membership is an attribute of the VM, so you can modify the VM to belong to a group. For example, to put the VM "Ubuntu" into the group "TestGroup", run this command:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup"

    Deleting a group
    Groups can be deleted by removing the group attribute from all the VMs that constitute that group. Via the command line, the syntax is:

        VBoxManage modifyvm "Ubuntu" --groups ""

    In the VirtualBox Manager, this is more easily done by right-clicking on a group header and selecting "Ungroup".

    Multiple groups
    Now that we understand that groups are just attributes of VMs, it can be seen that VMs can exist in multiple groups, for example:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup","/ProjectX","/ProjectY"

    Via the VirtualBox Manager, you can achieve the same by dragging VMs while pressing the Alt key (Mac) or Ctrl (other platforms).

    Nested groups
    Just as you can drag VMs around in the VirtualBox Manager, you can also drag whole groups around, and dropping a group within a group creates a nested group. Via the command line, nested groups are specified using a path-like syntax:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup/Linux"

    ...which creates a sub-group and puts the VM in it.

    Navigating groups
    In the VirtualBox Manager, groups can be collapsed and expanded by clicking on the caret at the left of the group header. You can also enter and leave groups, either by using the right-arrow/left-arrow keys, or by clicking on the caret on the right-hand side of the group header, leading to a view of just the group contents. You can leave, or return to the parent, in the same way. Don’t worry if you are imprecise with your clicking: a double-click on the right half of the group header enters a group, and a double-click on the left half leaves it. Double-clicking on the left half when you’re at the top rolls up (collapses) the group.

    Group operations
    The real power of groups is not simply in arranging them prettily in the Manager; rather, it is about performing collective operations on them once you have grouped them appropriately. For example, let’s say that you are working on a project (Project X) where you have a solution stack of a database VM, a middleware/app VM, and a couple of client VMs which you use to test your app. With VM Groups you can start the whole stack with one operation: select the group header and choose Start.

    The full list of operations that may be performed on groups is:
    Start (starts from any state, boot or resume; hold Shift while starting to start the VMs in headless mode)
    Pause
    Reset
    Close (Save state, Send Shutdown signal, Poweroff)
    Discard saved state
    Show in filesystem
    Sort

    Conclusion
    Hopefully we’ve shown that the introduction of VM Groups not only makes Oracle VM VirtualBox pretty, but pretty powerful too.  - FB

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >