Search Results

Search found 33499 results on 1340 pages for 'down tools week'.

Page 63/1340

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website started timing out like crazy today. All of our clients are finding it unusable. The only error we can trace down as a potential problem is this: "SQLAllocHandle on SQL_HANDLE_DBC failed" (error category: Microsoft OLE DB Provider for ODBC Drivers, reported by ASP). I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently you can log in to our site, but once you go to do anything else, you find yourself logged out or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably comes from the Ajax pages not loading properly due to the logouts, so nothing displays to the user. Like I said, I'm at a loss. Does anyone have any advice on this error? EDIT: I realize that this isn't strictly a programming question, but we are a small startup that only yesterday started talking about how we need to get a backup server running. Apparently we talked about it too late. We don't have a DBA, just 2 mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now. EDIT 2: It turns out we had something running on our server called "ServerMask" that makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. We still have no idea why it was causing the problem, but it apparently was. Thanks to everyone who tried to help.

    Read the article

  • Strict Pomodoro and other time management Chrome extensions

    - by kerry
    I have recently begun using the Pomodoro Technique to increase my productivity. However, I still find myself getting sucked into the vortex of useless information that is the internet. With that in mind, I began searching for a useful Chrome extension to replace the Android Pomodoro app I have been using to manage my ‘doros. I even considered writing one myself. Luckily, I stumbled on one with a feature set similar to what I was looking for. Strict Pomodoro is an excellent Chrome extension for practicing Pomodoro. Though it lacks a few key features, such as the ability to set the duration of your pomodoros and breaks, it has one key feature that helps me stay on task: it blocks time-sucking websites. You can set filter lists and it will keep you from accessing them during a Pomodoro, effectively reminding you to stay on task. The author readily admits that it was put together quickly and that new features may be added down the road. For now, it is still an excellent option. For those of you who do not practice Pomodoro but are trying to stay on task, the StayFocusd extension will effectively manage the amount of time you spend on useless (non-productive) sites. It also has a rich feature set that may be better suited to your work habits. OK, break's over. Time to get back to work. 25 minutes at a time.

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and also because numeric data takes significantly more space as text than as native datatypes. A small level could easily end up as 1MB+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects and give them a default value if an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs, but store it in a more compact format - one that removes the massive duplication of tag/attribute names, and maybe gives attributes native types, so that, for example, floating-point data is stored as 4 bytes per float rather than as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any ideal for games use - with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, just compressing my raw .xml data (it should pack well with zip-like compression), and taking the memory/performance hit on load?
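
    A quick way to gauge that last option is to run the raw XML through zlib, which approximates what zip-style compression does to the repeated tag and attribute names. This is only a rough sketch; the file name is a placeholder.

    import zlib

    # Read a sample level file and compare raw size vs. compressed size.
    with open("level.xml", "rb") as fh:
        raw = fh.read()

    packed = zlib.compress(raw, 9)   # level 9 = best compression
    print(f"raw: {len(raw)} bytes, compressed: {len(packed)} bytes "
          f"({100 * len(packed) / len(raw):.1f}% of original)")

    Decompression cost at load time is the trade-off the question already names; this only answers how much size there is to win back.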

    Read the article

  • Problem installing SQL Server client tools

    - by Shiraz Bhaiji
    We are trying to install SQL Server 2005 Standard on Windows 2008 Standard, both 64-bit. We have done this before with no problems. This time we get an error during the installation of the client tools: "There was an unexpected failure during the setup wizard" (link ID 20476, message ID 50000). There are no errors in the event log. Does anybody have any idea what could be wrong?

    Read the article

  • PCI scan findings and problems with weak ciphers on ports 993, 443, 995, 465

    - by user64991
    From the PCI scan results:
    Synopsis: The remote service encrypts traffic using a protocol with known weaknesses.
    Description: The remote service accepts connections encrypted using SSL 2.0, which reportedly suffers from several cryptographic flaws and has been deprecated for several years. An attacker may be able to exploit these issues to conduct man-in-the-middle attacks or decrypt communications between the affected service and clients.
    See also: http://www.schneier.com/paper-ssl.pdf
    Solution: Consult the application's documentation to disable SSL 2.0 and use SSL 3.0 or TLS 1.0 instead.
    Risk Factor: Medium / CVSS Base Score: 2 (AV:R/AC:L/Au:NR/C:P/A:N/I:N/B:N)
    I have tried to change
    SSLProtocol all -SSLv2
    to
    SSLProtocol -ALL +SSLv3 +TLSv1
    and
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    to
    SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:!MEDIUM:!LOW:!SSLv2:!EXPORT
    but SSLdigger still shows the same result. Is this the right way to do something like this?
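
    As a sanity check after changing those directives (and restarting the services involved), a small probe can confirm which protocol versions each scanned port still negotiates. This is only a sketch assuming Python 3.7+ with a reasonably recent OpenSSL; such builds no longer ship SSLv2/SSLv3 client support, so proving that SSLv2 is actually rejected still needs the PCI scanner itself or an old openssl build. The host name is a placeholder.

    import socket
    import ssl

    HOST = "mail.example.com"          # placeholder; substitute the scanned host
    PORTS = [443, 465, 993, 995]       # the ports flagged by the scan

    # A "rejected" result can also mean the local OpenSSL build refused to
    # offer that version, not only that the server turned it down.
    VERSIONS = [
        ("TLSv1.0", ssl.TLSVersion.TLSv1),
        ("TLSv1.2", ssl.TLSVersion.TLSv1_2),
        ("TLSv1.3", ssl.TLSVersion.TLSv1_3),
    ]

    for port in PORTS:
        for name, version in VERSIONS:
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            ctx.minimum_version = version
            ctx.maximum_version = version
            try:
                with socket.create_connection((HOST, port), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                        print(f"{port}: {name} accepted ({tls.version()})")
            except (ssl.SSLError, OSError) as exc:
                print(f"{port}: {name} rejected ({type(exc).__name__})")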

    Read the article

  • How to manage two separate testing teams using different test tracking tools

    - by newuser
    I have two independent testing teams currently testing the same application. One team is using ClearQuest, and the other is using Mantis. Managing all of the duplicate bug reports has been a huge effort. What options would improve this situation? My constraint is that the ClearQuest team will not change test reporting tools, and a migration to ClearQuest would also come with a large training effort.

    Read the article

  • Rack layout tools

    - by Luke
    I'm wondering if there are any tools (preferably offline) that would allow me to lay out all of the new equipment that will be going into several standard racks. Currently I'm using Excel to map out all of the slot columns for the data, but I suspect there is some better method of doing this. Suggestions? Edit: Dell has an online tool, but it doesn't seem very good at actually saving the data that you're working on (and obviously it's geared towards Dell hardware).

    Read the article

  • Cannot install PC Tools firewall

    - by Philip
    I am unable to install PC Tools Firewall. It is fine up until after rebooting; then the PC Tools icon in the system tray hangs and indicates it is "initializing." I have waited 5 minutes with no change, and I have tried multiple times. The Vista firewall was turned off prior to the attempted installation. Help! Thank you.

    Read the article

  • Samsung laptop randomly shuts down

    - by Dmatig
    I've rewritten this question because it turned into an indecipherable mess. I have a Samsung R560 laptop that is overheating and consistently shutting itself down under load. Thank you quickcel for recommending Speedfan to monitor my temps. Here they are (load/idle): (ignore "Temp1" and "Temp2" - whatever sensors they are, they're always random, and I'm pretty sure they're broken). The load temperature is after just 5 minutes of playing Fallout 3 - after another 5 minutes it (the GPU - 9600M GS) consistently breaches the mid 90s and then shuts down, so it's hard to get a good picture of it. I'm looking for some solution or way to decrease these temperatures, because they seem far too high even at idle. I've tried: opening up the case and clearing out all the dust with compressed air; updating the drivers for my graphics card; purchasing and using a notebook cooler. I don't want to: undervolt/underclock (that defeats the point of having a more expensive card); use lower power/performance settings (again, I might as well have bought something cheaper). Is there anything else I can try (software or inexpensive hardware) that can help me fix this? Has anybody had a Samsung laptop, and does anyone know if this can be sorted under my warranty, and the turnaround time of sending it off (UK)? (It has always run hotter than it should, but now at 6 months old it is getting hot enough to power off.)

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections. Let's first consider the HTTP section. I assume that all the broken links in this section were somehow found by the crawler, and that these are not the links from the sitemap. If all of these links were found by scanning the pages from the sitemap for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? Please correct me if I am wrong. Sitemap section: it looks like all of these links came from my sitemap. But there is a Linked From column; I already know that all of these broken links are from the sitemap, so in order to fix the errors I should revise my sitemap. Am I wrong? Not followed section: I don't know what it means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all of those redirects to be wrong. Do you know if there is any set of rules for determining a wrong redirect? I actually found where my mistake was: I tried to normalize URLs and redirect them to the right URL, but I did the normalization in the wrong way. Not found section: this section is like the HTTP section but with 404 errors, and it has a Linked From column. But very often Linked From shows "unavailable". What does that mean - that Google cannot tell me how it found the non-existent page? How is this section related to the Sitemap section? Does it contain all the 404 links from the sitemap too? There are too many 404 links, many more than in the sitemap. I took a look at what is in Linked From and saw that a link came from my sitemap two months ago. But why does Google keep it indexed? The link is already dead, and the new sitemap doesn't have it. Is there an expiry date for old links? Unreachable section: it looks like this section is for 500 errors. It doesn't contain a Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I am not able to figure out how to deal with it. Sorry for such a big topic, but I just want to make it clear what every section stands for, because that is crucial in order to deal with all of these problems. Hopefully it will be useful not just for me. Thanks!
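
    On the "wrong redirect" point, the usual pattern is to compute one canonical form of the requested URL and issue a single 301 to it only when it differs. The rules in this sketch (lower-casing the host, dropping default ports, defaulting an empty path to "/") are illustrative assumptions, not the poster's original logic.

    from urllib.parse import urlsplit, urlunsplit

    DEFAULT_PORTS = {"http": 80, "https": 443}

    def normalize(url: str) -> str:
        parts = urlsplit(url)
        host = (parts.hostname or "").lower()
        # keep the port only when it is not the default for the scheme
        if parts.port and parts.port != DEFAULT_PORTS.get(parts.scheme):
            host = f"{host}:{parts.port}"
        path = parts.path or "/"          # an empty path becomes "/"
        return urlunsplit((parts.scheme, host, path, parts.query, ""))

    # both of these map to the same canonical form: http://example.com/
    print(normalize("HTTP://Example.COM:80"))
    print(normalize("http://example.com/"))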

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools accounts and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a csv file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one csv file that has every url with a 404, but also has every url that links to each one of them. Am I overlooking this functionality somewhere or does anyone have a good solution?
    Edit 1 (2/11/2013): Example of what the csv output looks like now:
    URL,Response Code,News Error,Detected,Category
    http://www.abcdef.com/123.php,404,,11/12/13,Not found
    http://www.abcdef.com/456.php,404,,11/12/13,Not found
    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the urls that link to the page, and add that data to my spreadsheet. The output I would prefer:
    URL,Response Code,Linked From,News Error,Detected,Category
    http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
    http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
    http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
    http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
    http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
    http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report.
    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s, and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" urls I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from url is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
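
    Until the linked-from data can be exported directly, one stopgap is to merge the 404 export with a separately gathered URL-to-source mapping into the preferred shape. A minimal sketch, with hypothetical file names and the column layout from the examples above:

    import csv
    from collections import defaultdict

    ERRORS_CSV = "crawl_errors.csv"    # URL,Response Code,News Error,Detected,Category
    LINKS_CSV = "linked_from.csv"      # URL,Linked From (gathered by hand or another tool)

    # Build a mapping of broken URL -> list of linking pages.
    linked_from = defaultdict(list)
    with open(LINKS_CSV, newline="") as fh:
        for row in csv.DictReader(fh):
            linked_from[row["URL"]].append(row["Linked From"])

    with open(ERRORS_CSV, newline="") as fh, \
         open("errors_with_sources.csv", "w", newline="") as out:
        reader = csv.DictReader(fh)
        writer = csv.writer(out)
        writer.writerow(["URL", "Response Code", "Linked From",
                         "News Error", "Detected", "Category"])
        for row in reader:
            sources = linked_from.get(row["URL"]) or [""]
            for source in sources:     # one output row per linking page
                writer.writerow([row["URL"], row["Response Code"], source,
                                 row["News Error"], row["Detected"], row["Category"]])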

    Read the article

  • Debian Linux server hangs after a week or so

    - by Alex Flo
    I have 2 Debian Linux 6.0.4 servers that show a strange behaviour: after 5-7-10 days they hang. By this I mean the servers need to be restarted, and before that they won't even answer ping. I've been struggling with this problem for a couple of months now; here are some thoughts on what I've tried, without being able to solve the problem. I changed the RAM on one server. Since these are 2 different servers, I doubt it is something related to hardware, and a 3rd identical server doesn't have this problem. I logged the server load, and when a server crashes the load is fine (quite low). I cannot find anything in the server logs; the logs are fine until the server freezes. Unfortunately, I don't have access to the console. While I have years of admin experience, I have never encountered such an issue, and right now I have no idea where else to investigate. If you have an idea of what I could try in order to fix the problem, please share it with me :-)

    Read the article

  • Block users from Social networking websites while firewall is down

    - by SuperFurryToad
    We currently have a SonicWall firewall, which does a pretty good job of blocking social networking websites like Facebook and Bebo. The problem we are having is that sometimes we need to temporarily disable our firewall blocklist so we can update our company's page on Facebook, for example. Whenever we do this, we see an avalanche of users logging on to their Facebook pages during work time. So what we need is a way to block access while the blocklist is down. For the sake of argument, we have two groups of users - "management" and "standard users". "Standard users" would have no access to Facebook, but "management" users would. Perhaps something like a hosts-file redirect for non-management users. This could probably be enforced via group policy calling a bat file to copy down the hosts file, depending on whether the user is management or not; a rough sketch of the idea is below. I'm keen to hear any suggestions for what the best practice would be for this in a Windows/AD environment. Yes, I know what we're doing here is trying to solve an HR problem using IT. But this is the way management wants it, and we have a lot of semi-autonomous branch offices that we don't have a lot of day-to-day contact with, so an automated way of enforcing this would be the most preferable method.
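
    A minimal sketch of the hosts-file redirect, written in Python purely for illustration; in practice this would more likely be a batch or PowerShell startup script deployed via Group Policy and filtered so it only applies outside the "management" group. The host names are examples and the script needs to run with administrative rights to edit the hosts file.

    import shutil

    HOSTS = r"C:\Windows\System32\drivers\etc\hosts"
    BLOCKED = ["facebook.com", "www.facebook.com", "bebo.com", "www.bebo.com"]

    shutil.copy(HOSTS, HOSTS + ".bak")          # keep a backup before editing
    with open(HOSTS, "a") as fh:
        fh.write("\n# --- temporary social-network block ---\n")
        for host in BLOCKED:
            fh.write(f"127.0.0.1 {host}\n")     # redirect the site to localhost

    Removing the block is then a matter of restoring the backup (or stripping the marked section) for the management group, or for everyone once the firewall blocklist is re-enabled.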

    Read the article

  • Remote paging with Nagios when network is down and email won't work -- cellular modems and alternatives

    - by Quinten
    What is the best option for remote paging when network services are down? I'm looking for a solution that can let me know when network services are down during off-hours only, and especially when email/SMTP services are out. It therefore needs to be independent of our network and power supply. I imagine a cellular modem is one option. What's the price range for these? Is anybody using them, and do you feel they are worth the cost? I imagine we would end up sending an emergency page at most about once a month, so I'd like the pricing to reflect that - I don't mind a high per-page cost as long as it has a low recurring cost. Another option would be to expose at least one server to remote ping and run a check script on a remote server (a rough sketch of that idea is below). Are there paid options for this? Currently we run Nagios on a Linux VM on a Windows 2008 Hyper-V host. It would be great if the solution worked in that environment, but I know it's tricky with external devices, and we could move Nagios to a standalone workstation if needed.
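
    A rough sketch of the "check from a remote host" option, run from a Linux machine outside the office network: ping a publicly reachable address and fire a notification command after several consecutive failures. The target address and the notification command are placeholders; the actual paging mechanism (SMS gateway, paging service, cellular modem) is left open.

    import subprocess
    import time

    TARGET = "203.0.113.10"                  # hypothetical public IP exposed to ping
    FAILURES_BEFORE_ALERT = 3
    NOTIFY_CMD = ["/usr/local/bin/send_page.sh", "office network unreachable"]

    failures = 0
    while True:
        # Linux ping: 3 probes, 5-second per-reply timeout
        ok = subprocess.call(["ping", "-c", "3", "-W", "5", TARGET],
                             stdout=subprocess.DEVNULL) == 0
        failures = 0 if ok else failures + 1
        if failures == FAILURES_BEFORE_ALERT:
            subprocess.call(NOTIFY_CMD)      # page once per outage, not every minute
        time.sleep(60)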

    Read the article

  • Help, my CentOS servers keep going down with "No route to host" after a random uptime

    - by user249071
    Hello, I have a couple of CentOS Linux servers that have a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They receive some RPC commands from a main server to start download processes with wget - nothing fancy - but their behavior is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections; they don't load up much - 250 network connections max, 15% processor usage, and the memory doesn't even fill up, 2.5GB out of 8GB max. I have no idea why a Linux server would go down like that; they aren't even public servers, with no domain names installed and no public serving of sites. The only thing I've discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, but without reporting high resource usage. Maybe CentOS doesn't free the timed-out connections, or something like that. It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed Snort or other tools to check whether we are under DoS attack; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance.

    Read the article

  • Network and Server Management Tools

    - by jessieE
    We are building a farm of test servers. Currently we have 8 servers. We are planning to use the servers to test the following:
    - MySQL Cluster
    - Xen or KVM virtualization
    - Heartbeat/Pacemaker/DRBD
    What tools do experienced sysadmins use for:
    - Initial installation of the operating system (installing CentOS 5 or Ubuntu Server manually 8 times seems like a tedious task that just begs for automation)
    - Centralized configuration management and software updates for host and possibly guest (virtualized) servers
    - Hardware, services and network monitoring

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using Nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a host is down no mail is sent; even if all the services go to the critical state, no mail is sent. I have the following configuration:
    define command {
        command_name notify-host-by-email
        command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
    }
    define command {
        command_name notify-service-by-email
        command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
    }
    The Python script sends a mail. It works if I execute it from the command line, but Nagios doesn't send an email. What am I doing wrong?
    UPDATE: The contact data is:
    define contact {
        contact_name root
        alias Root
        service_notification_period 24x7
        host_notification_period 24x7
        service_notification_options w,u,c,r
        host_notification_options d,r
        service_notification_commands notify-service-by-email
        host_notification_commands notify-host-by-email
        email [email protected]
    }
    define contactgroup {
        contactgroup_name admins
        alias Nagios Administrators
        members root
    }

    Read the article

  • Is the Facebook Like JavaScript related to a "Time spent downloading a page" increase in GWT?

    - by donaldthe
    Hi, I installed the Facebook Like button (JavaScript version) on my website on December 15th. Take a look at the "Crawl stats - Googlebot activity in the last 90 days" report from Google Webmaster Central. The crawl stats are from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code (the XFBML version) be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to go down by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the like button. It is right after the opening body tag:
    <div id="fb-root"></div>
    <script>
    window.fbAsyncInit = function() {
      FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
    };
    (function() {
      var e = document.createElement('script');
      e.async = true;
      e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
      document.getElementById('fb-root').appendChild(e);
    }());
    </script>
    And this creates the like box:
    <fb:like show_faces="false"></fb:like>
    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article

  • HP tx2510us shuts down without warning, now won't boot, no BIOS codes, screen doesn't light

    - by Tim S
    Hey all, my HP tx2510us worked great for a year and a half; rarely, it would shut down instead of sleeping. It started running hot sometimes. I installed Win7, which worked great, but it was still running hot. Before Christmas it showed evidence of video errors - jagged lines, rows missing. That same night it started rebooting, then it shut off and wouldn't boot. It gave me a BIOS code, and I shut it off to look up the code online. When I turned it on again, it wouldn't boot at all. The LEDs all light up and the fan, HD, and optical drive all spin up, but the screen never lights up, it doesn't try to boot, and it doesn't blink any BIOS codes. It just acts like it's sleeping and won't wake up. I suspected heat problems, so I disassembled it, cleaned the crap out of the fan, and reassembled it, breaking the stereo mic connector in the process. Oh well. When I reassembled it, it booted into Win7 again but kept shutting down for no discernible reason. After a dozen or so random reboots like that, it is now back to where it was: it turns on but doesn't boot or give BIOS codes. The screen never lights, and everything spins up and then idles. Any ideas? I really can't afford to buy a new one, and I use(d) it to take ALL my notes - that's why I got a tablet in the first place.

    Read the article

  • Linux tools to choose suitable Cisco ASA 5500

    - by linuxcore
    I have a Linux web hosting server that is being hit by heavy DDoS attacks. I want to use a Cisco ASA 5500 Series Adaptive Security Appliance to protect the Linux server from this DDoS traffic. I know there are many factors to understand before choosing the suitable hardware firewall, such as the volume of the DDoS traffic, packets per second, etc. Please suggest Linux tools to measure those factors and help me collect the required information (pps, DDoS traffic volume, concurrent connections, and other factors). Regards,
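
    As a rough starting point (not a vendor sizing tool), the packet counters in /proc/net/dev can be sampled to estimate packets per second, and established TCP connections can be counted from /proc/net/tcp. This is only a sketch; the interface name is an assumption, and dedicated tools (iftop, iptraf, sar, netstat/ss) will give a fuller picture.

    import time

    IFACE = "eth0"                      # assumed interface name

    def packet_counts(iface):
        # /proc/net/dev fields after the colon: rx bytes, rx packets, ..., tx bytes, tx packets, ...
        with open("/proc/net/dev") as fh:
            for line in fh:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":")[1].split()
                    return int(fields[1]), int(fields[9])   # rx packets, tx packets
        raise ValueError(f"interface {iface} not found")

    def established_connections():
        count = 0
        for path in ("/proc/net/tcp", "/proc/net/tcp6"):
            try:
                with open(path) as fh:
                    next(fh)            # skip the header line
                    count += sum(1 for line in fh if line.split()[3] == "01")  # 01 = ESTABLISHED
            except FileNotFoundError:
                pass
        return count

    rx1, tx1 = packet_counts(IFACE)
    time.sleep(10)                      # sample interval in seconds
    rx2, tx2 = packet_counts(IFACE)
    print(f"rx pps: {(rx2 - rx1) / 10:.0f}, tx pps: {(tx2 - tx1) / 10:.0f}")
    print(f"established TCP connections: {established_connections()}")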

    Read the article
