Search Results

Search found 6068 results on 243 pages for 'goal tracking'.

Page 134 of 243

  • SMTP message rate control on Ubuntu 8.04, preferably with postfix

    - by TimDaMan
    Maybe I am chasing a bug, but I am trying to set up an SMTP proxy of sorts. I have a Postfix server which receives all the email for a collection of servers/clients. It then uses a smarthost (relayhost=...) to forward its mail to our corporate MTA. I would like to limit the number of messages an individual server can relay, to prevent swamping the corporate MTA. Postfix has a program called "anvil" that is capable of tracking mail statistics for exactly this purpose, but it doesn't seem to be executed. I ran "inotifywait -m /usr/lib/postfix/anvil" while I started Postfix and sent a number of messages through it from a remote server; inotifywait indicated anvil was never run. Has anyone gotten postfix/anvil rate controls to work?

    main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        myhostname = site-server-q9
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = localhost
        relayhost = Out outgoing mail relay
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = 10.X.X.X
        smtpd_client_message_rate_limit = 1
        anvil_rate_time_unit = 1h

    master.cf extract:

        anvil     unix  -       -       -       -       1       anvil
        smtp      inet  n       -       -       -       -       smtpd
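    One detail worth checking here, offered as an assumption rather than something stated above: anvil only meters events for inbound smtpd connections, and Postfix exempts any client matched by smtpd_client_event_limit_exceptions, which defaults to $mynetworks. Because the sending servers fall inside the 10.0.0.0/8 range in the mynetworks line above, they would be exempt from the rate limit and anvil would have nothing to do. A minimal sketch of narrowing that exemption (values are illustrative only):

        # main.cf -- hypothetical override; by default rate limits skip $mynetworks
        smtpd_client_event_limit_exceptions = 127.0.0.0/8
        smtpd_client_message_rate_limit = 100
        anvil_rate_time_unit = 60s

    After a "postfix reload", anvil should start writing periodic "statistics: max ... rate" lines to the mail log if it is actually being exercised.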

    Read the article

  • What is the recommended glusterFS configuration for a growing website?

    - by montana
    Hello, I have a website that is tracking towards 50 million hits per day on average, and within the next 3 months it should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 1-17-2010). We've just upgraded to a load-balanced environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve webpage traffic. Each machine has 6 RAID-6 local storage drives (7200 RPM SATA); the old machine we came from had 1 mirrored 10k SAS drive. We also set up GlusterFS with 3 bricks, one on each host, serving the 6 VMs as clients. In testing, everything seemed fine. However, when we went to production, there just wasn't enough I/O available to serve traffic even upwards of 15 million hits, whereas weeks prior our old server, maxed out, was able to handle 20 million. Are there any recommended configurations for such an application, or things to be aware of that aren't apparent from the documentation at gluster.org, for a site our size?

    Read the article

  • Oracle: Getting ORA-01195 and ORA-01110 when attempting resetlogs

    - by MacAnthony
    I am trying to get our database to start up. When I log in to sqlplus and do a startup, I get the message:

        Total System Global Area  534462464 bytes
        Fixed Size                  2215064 bytes
        Variable Size             331350888 bytes
        Database Buffers          192937984 bytes
        Redo Buffers                7958528 bytes
        Database mounted.
        ORA-01589: must use RESETLOGS or NORESETLOGS option for database open

    So I do a shutdown and a startup mount (which works fine) and then run:

        SQL> alter database recover using backup controlfile until cancel;
        alter database recover using backup controlfile until cancel
        *
        ERROR at line 1:
        ORA-00283: recovery session canceled due to errors
        ORA-19909: datafile 1 belongs to an orphan incarnation
        ORA-01110: data file 1: '/<path>/system01.dbf'

        SQL> alter database open resetlogs;
        alter database open resetlogs
        *
        ERROR at line 1:
        ORA-01195: online backup of file 1 needs more recovery to be consistent
        ORA-01110: data file 1: '/<path>/system01.dbf'

    I know I've used instructions to get past this error before, but I'm having trouble tracking them down. A bit of history: we wanted to refresh the data in this database from another db, so we attempted an expdp/impdp into this instance. The import did not complete correctly; it hit an end-of-file error and hung (I still have the message in a log if it's important). Since the instance would start at this point, we decided to use the hot-backup process we have to restore the db. The hot backups are from another server/instance. We went through the same process 2 weeks ago. Recreating the control file is the point at which we hit the issue above.

    Read the article

  • Multiple *NIX Accounts with Identical UID

    - by Tim
    I am curious whether there is a standard expected behavior, and whether it is considered bad practice, when creating more than one account on Linux/Unix that have the same UID. I've done some testing on RHEL5 with this and it behaved as I expected, but I don't know if I'm tempting fate using this trick. As an example, let's say I have two accounts with the same IDs:

        a1:$1$4zIl1:5000:5000::/home/a1:/bin/bash
        a2:$1$bmh92:5000:5000::/home/a2:/bin/bash

    What this means is:

    - I can log in to each account using its own password.
    - Files I create will have the same UID.
    - Tools such as "ls -l" will list the UID as the first entry in the file (a1 in this case).
    - I avoid any permissions or ownership problems between the two accounts because they are really the same user.
    - I get login auditing for each account, so I have better granularity into tracking what is happening on the system.

    So my questions are:

    - Is this ability designed, or is it just the way it happens to work?
    - Is this going to be consistent across *nix variants?
    - Is this accepted practice?
    - Are there unintended consequences to this practice?

    Note: the idea here is to use this for system accounts and not normal user accounts.

    Read the article

  • 'Bug in Mailman version 2.1.12'

    - by davorg
    I'm working on setting up a server running Plesk 10.4.4 Update #13 on CentOS 6.2. I've configured Mailman and now I want to set up some mailing lists. I've created a list in the Plesk control panel, but when I try to administer the new list (by visiting http://lists.[domain].com/mailman/admin/[listname]) I see the following error:

        Bug in Mailman version 2.1.12
        We're sorry, we hit a bug!
        Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs.

    I see exactly the same error if I try to go to the list info page at http://lists.[domain].com/mailman/listinfo/[listname]. I would follow the instructions and look in the error logs, but I can't find them. I would expect to find a file at /var/log/mailman/error, but there's nothing there. My test list seems to work correctly and sends all the expected email; it's just the web pages for the list that seem to be broken. Has anyone else seen this? Any suggestions for tracking down and fixing the problem? P.S. I think I've chosen the correct Stack Exchange site, but if this question would be better asked elsewhere, please let me know.

    Update: I got to the bottom of this, so I'm documenting the answer in case anyone else has the same problem. The fact that I couldn't find the error log was the clue. The problem was that the Mailman process didn't have permission to create an error log, and it seems that if Mailman can't create an error log then it will respond to any web request with this error page. Creating an error log file (in /var/log/mailman/error) and giving it the correct permissions fixed the problem.
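    For anyone who needs the fix described in the update spelled out, a minimal sketch follows. The mailman user and group names are assumptions (they vary between distributions and Plesk builds), so check what Mailman actually runs as first:

        # check which account the Mailman master/qrunner processes run under
        ps -o user,group,cmd -C mailmanctl

        # create the error log and hand it to that account (names assumed here)
        mkdir -p /var/log/mailman
        touch /var/log/mailman/error
        chown mailman:mailman /var/log/mailman/error
        chmod 664 /var/log/mailman/error
        /etc/init.d/mailman restart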

    Read the article

  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon Cloud: creating IaaS protocols/solutions/standardized implementations and so on, while also being the SysAdmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage and CPU usage, in a centralized fashion (e.g. Nagios + Urchin).

    The biggest impediment to my endeavors is the following: the company application is deployed in the form of a Java *.WAR file, uploaded to an Elastic Beanstalk application environment that load-balances and auto-scales between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long because so many are being terminated and then auto-provisioned/auto-scaled on the fly, so I'd constantly be having to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists.

    Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment and have them automatically add new EC2 instances, and remove terminated/failed ones, from the monitoring list? Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into one consolidated set of logs/events? If not Graylog, is there anything like it that can automatically detect which EC2 members are being added to or removed from the environment and collect the logs from them automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!
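    One hedged approach, assuming the AWS command-line tools are available on the monitoring host: rather than tracking instances by hand, poll the Beanstalk environment on a schedule and rebuild the monitoring target list from whatever it reports. A rough sketch (the environment name and output path are made up):

        #!/bin/bash
        # List the EC2 instances currently attached to the Beanstalk environment,
        # then resolve their private IPs for the monitoring system to consume.
        ids=$(aws elasticbeanstalk describe-environment-resources \
                --environment-name my-env \
                --query 'EnvironmentResources.Instances[].Id' --output text)
        aws ec2 describe-instances --instance-ids $ids \
                --query 'Reservations[].Instances[].PrivateIpAddress' --output text \
                > /etc/nagios/beanstalk_hosts.txt

    The same list can be used to tell a Graylog/syslog shipper which hosts to expect; alternatively, each instance can be configured at boot (via an .ebextensions hook or user data) to register itself with the log collector.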

    Read the article

  • how to block spam email using Microsoft Outlook 2011 (Mac)?

    - by tim8691
    I'm using Microsoft Outlook 2011 for Mac and I'm getting so much spam I'm not sure how to control it. In the past, I always applied "Block Sender" and "Mark as Junk" to any spam email messages I received. This doesn't seem to be enough nowadays. I've started using Tools > Rules to create rules based on subject, but the same spammer keeps changing subject lines, so this isn't working. I've been tracking the IP addresses, and they also seem to change with each email. Is there any key information in the email that I can use in a rule to successfully place these spam emails in the junk folder?

    I'm using a "Low" level of junk email protection. The next higher level, "High", says it may eliminate valid emails, so I prefer not to use this option. There are maybe one or two spammers sending me emails, but the volume is very high now. I'm getting a variation of the following Facebook email spam:

        Hi, Here's some activity you have missed. No matter how far away you are from friends and family, we can help you stay connected. Other people have asked to be your friend. Accept this invitation to see your previous friend requests

    Some variations on the subject line they've used include:

    - Account Info Change
    - Account Sender Mail
    - Pending ticket notification
    - Pending ticket status
    - Support Center
    - Support med center
    - Pending Notification
    - Reminder: Pending Notification

    How do people address this? Can it be done within Outlook, or is it better to get third-party commercial software to plug in or otherwise manage it? If so, why would the third party be better than Outlook's internal tools (e.g. what does it look for in the incoming email that Outlook doesn't look at)?

    Read the article

  • Ubuntu server crashes; need help figuring how to figure out why

    - by neezer
    I have a 768 Slice at slicehost.com running Ubuntu Server 8.04.2 LTS (hardy) with a LAMP stack on it that periodically crashes, though why, I am not sure. From what I can tell, a process goes rogue and consumes all the memory on the slice, suffocating all the other running programs until the whole thing comes to a grinding halt, and I have to do a hard reboot of the slice to get it back up and running again. I can't detect any pattern for this (it seems to happen about once a month, more or less). Here's a screenshot of my console during the last crash: [screenshot not reproduced here]

    I would assume that a possible cause might be a PHP script or an Apache configuration rule that causes the crash when triggered. How would I be able to find out which one is the offending one? I've checked and rechecked all my PHP scripts, and running them doesn't seem to trigger the crash. I've also been able to log on to my system during a crash and see what's running (with top), but I can't tell how the offending process was started, so I can't trace the root of the problem! I know my description is overly generic, but unfortunately my expertise in tracking down the source of these glitches is very limited. If you need any additional information about my system in order to help me figure this out, please let me know in the comments, and I will append it to the question.

    My only other lead as to the culprit here is WordPress, which we have installed on this server. Here are the details: WordPress 3.0.3 with the following plugins installed and activated: Addmarx - Bookmark/Share/Email Dropdown, Akismet, All in One SEO Pack, Animated Banners, Automatically publish highlights of any website, directly to your Blog, Broken Link Checker, CMS Dashboard, Collapsing Categories, Status Updater, SubHeading, Ultimate Google Analytics, VastSubCat, WP-CMS Post Control, and WP Super Cache.
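    A stopgap that may help pin down the culprit, offered as a hedged suggestion rather than anything stated in the question: snapshot the heaviest memory consumers on a schedule so that, after the next crash, the log shows which process was growing and what command line started it. A minimal cron-based sketch (the log path is arbitrary):

        # /etc/cron.d/mem-snapshot -- record the top memory users (by resident size) once a minute
        * * * * * root (date; ps aux --sort=-rss | head -n 15; echo) >> /var/log/mem-snapshot.log 2>&1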

    Read the article

  • Advice: USB Monitoring Programming

    - by Kashif
    I need advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a CentOS machine. As soon as a USB stick or external hard disk is connected, the program should send an email to a specific person with details about the device (size, mount point, time). When the device is disconnected, it should send another email with the same kind of information. Meanwhile the program should also write entries to syslog/messages under the program's name for easy tracking. What is the best way to develop this program? I'm new to this field, so I don't know where to start. Should I use Perl, bash scripting, or some other language? I'm not sure which approach is right, because the program will have to run all the time to keep a check on the USB ports. I know a few relevant commands, such as lsusb and fdisk (to check attached USB devices) and df -h (to get details of a mounted device), but I don't know how to achieve what I have in mind using them. One more thing: in future I will also need to adapt this program for Ubuntu and Citrix XenServer, and it should work the same everywhere.
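    A hedged sketch of one common way to build this: let udev fire a small shell script on add/remove events for USB block devices, and have the script do the logging and mail. Everything below (rule file name, script path, recipient, use of lsblk) is an assumption to illustrate the shape of the solution, not a tested program:

        # /etc/udev/rules.d/90-usb-notify.rules
        ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh add %k"
        ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh remove %k"

        # /usr/local/bin/usb-notify.sh
        #!/bin/bash
        action=$1 dev=$2
        msg="USB block device /dev/$dev: $action at $(date)"
        if [ "$action" = "add" ]; then
            msg="$msg, size $(lsblk -ndo SIZE /dev/$dev 2>/dev/null)"
        fi
        logger -t usb-monitor "$msg"                       # lands in syslog/messages
        echo "$msg" | mail -s "USB $action on $(hostname)" admin@example.com

    Because udev exists on CentOS, Ubuntu and XenServer's dom0 alike, the same rule/script pair should port with little change; mount-point details can be added later by reading /proc/mounts once the device is mounted.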

    Read the article

  • Emails not sending from outlook / OWA - Not even hitting the mail queue in exchange

    - by webnoob
    We are having an issue this morning where we can receive external emails but cannot send internal or external ones from Outlook or OWA. If I use:

        Send-MailMessage -From <[email protected]> -To <[email protected]> -Subject "Test #01" -Body "Just a test message." -SmtpServer <Server-Name> -Credential <domain\user>

    the email is sent correctly, which makes me think there is a connection issue with OWA and Outlook. However, Outlook reports itself as connected to Exchange. I have checked message tracking in the Exchange tools, and emails sent via Outlook and OWA do not appear. Nothing changed on the server over the weekend, so I don't really know where to start debugging this issue. We are using Windows SBS 2011. We only have one send connector; it isn't using smart hosts and is set to use DNS MX records. "Use external DNS" is not checked, and I can ping google.com etc., so it doesn't appear to be a DNS issue (plus the email sends from the console anyway).

    EDIT: It appears that users using IMAP can send emails correctly; it's only the ones that rely on the normal Exchange connection type that don't work.

    EDIT: Emails from IMAP are hitting the email queues, whereas emails from the normal Exchange accounts aren't.

    EDIT: It seems that some of the emails we tried to send yesterday went out at about 1am, but now it won't work again.

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to both Server Fault (system backups) and Stack Overflow (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them.

    - It should be able to handle live files like registry parts, so it may need to be offline backup software.
    - It shouldn't silently pass over files which the current admin user doesn't have permissions on (files with no permission entries, or owned by the 'system' user).
    - Registry change tracking would be a bonus but is not a requirement.

    I use virtual machines for malware analysis, and there is no solution I know of even for listing file changes in disk snapshot files (delta VMDK). I currently use Ashampoo for monitoring changes. Though it's the best among similar tools, it's not a good piece of software and hasn't really evolved across the many 'platinum' and 'deluxe' versions released in the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk/registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved, so I think the solution should compare files using hashes, or at least file sizes. There is plenty of backup software out there, and I'm sure one of them can handle this, offline or online.

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes to continue

    - by sw0x2A
    Our webservers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (with INSERT DELAYED) wait for the database, and in some cases the MaxServer limit is reached in Apache, so they stop serving requests.

    We use INSERT DELAYED because, quoting the MySQL documentation:

        The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete.

    I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, but we are thinking about moving this data to some NoSQL database. For now, though, I would like to know why INSERT DELAYED is not working as expected.)

    Read the article

  • Thousands of visits a day from untraceable traffic to website - Serious issue

    - by kel
    At the end of January we noticed a spike in traffic to what JetPack stats says was the home/archive page and what Google was classifying as going to /gaming/, which is an archive list in WordPress. This started off as ~3,000 unique visitors and jumped up to 65,000 unique visitors in one day, again all to the "home" page. This happened over the course of a couple of weeks and we thought we were being attacked. The traffic then dropped off for a few days but came back at only about ~15,000 uniques a day, and it has been like that every day since. We came to the conclusion that something wasn't tracking right somewhere, decided this was legitimate traffic, and brushed it off.

    Now here comes the problem: Google AdSense has just disabled our account for "invalid clicks". We are trying to figure out where this traffic is coming from and stop it if it's not legitimate, or figure out a way to track it correctly. Specs for the site: dedicated server running CentOS 6 with nginx, php-fpm and MySQL. The site is built in WordPress and we use CloudFlare and W3 Total Cache. Analytics being used are Google Analytics, Quantcast, Alexa and Compete. Any kind of help would be awesome.

    UPDATE: I'm finding more people with the same type of problem and there doesn't seem to be a solution.

        http://netmeg.com/bot-attack/
        http://stkywll.com/2012/03/02/annoying-cyborgs-attach-distort-analytics/

    After looking at the access logs I noticed they were all CloudFlare IPs. I looked into that and found out CloudFlare acts as a proxy, and there is a way to fix the logs in nginx. The requests are coming from many different ISPs in the US. They are going to /games/ or /gaming/ (/games/ redirects to /gaming/) and all seem to have the same user agent:

        Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
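    For reference, the nginx log fix mentioned in the update is usually done with the real_ip module, which rewrites the client address using the header CloudFlare adds. This is a sketch only; the two ranges shown are examples, and the full, current list must be taken from CloudFlare's published IP ranges:

        # inside the http {} block of nginx.conf
        set_real_ip_from 103.21.244.0/22;
        set_real_ip_from 103.22.200.0/22;
        # ...one set_real_ip_from line per CloudFlare range...
        real_ip_header CF-Connecting-IP;

    With that in place, the access log (and anything keyed off $remote_addr, such as rate limiting) sees the visitor's IP instead of CloudFlare's proxy addresses, which makes it much easier to separate bot traffic from real users.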

    Read the article

  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-less syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach. I'd love to understand their logic before I follow/ignore their model.

    Read the article

  • Switch between network configurations via command line in fedora 17

    - by Mike Fairhurst
    I have two different setups I use on my work laptop: one enables Synergy over an ethernet SSH tunnel with my work computer on the local network, and the other opens an HTTP tunnel to my work computer from outside the network. When I have wifi enabled at work, my laptop seems to use it by preference, which makes Synergy run incredibly slowly. At home I must use wifi. I have scripts that begin my SSH tunnels, add my SSH keys, and start up other programs like Synergy, and they close themselves when I shut my laptop. However, every day I have to start my routine by opening gnome-control-center and turning on my ethernet connection. I have tried route add and ifup, and none of it works, so I dove into gnome-control-center's source code and found that it enables the connection with libnm's method nm_client_activate_connection, using some libnm-specific structs that I am having trouble tracking down. I'm not much of a C programmer, and I'm not familiar with either GTK or libnm. Does anybody know what Fedora 17 does with ethernet connections to fully enable them? Or does anybody know what libnm does to fully enable an ethernet connection? Do I have to write a C program that calls libnm for me to fully emulate whatever gnome-control-center is trying to do?
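    One hedged alternative to calling libnm from C: gnome-control-center is ultimately just driving NetworkManager, and the same daemon can be driven from a script with nmcli, which ships with Fedora 17. The connection and interface names below are placeholders; list yours first:

        nmcli con list                      # show configured connections (NetworkManager 0.9 syntax)
        nmcli dev status                    # confirm the wired device is managed
        nmcli con up id "System em1"        # activate the wired connection by name
        nmcli dev disconnect iface wlan0    # optionally drop wifi so the wired link is preferred

    If this works interactively, the "nmcli con up" / "nmcli dev disconnect" pair can simply be added to the existing startup scripts.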

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection tracking entry like this:

        [root]# conntrack -L -p udp
        udp      17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work?

    Edit - Alternative question: is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
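    For what it's worth, the raw-table NOTRACK idea mentioned above would look roughly like the sketch below, but it comes with a significant caveat: NAT itself depends on connection tracking, so packets marked NOTRACK also skip the DNAT/SNAT rules. It only helps if the address rewriting is handled some other way (for example, by having the service bind to the shared IP directly):

        # sketch only -- untracked traffic will no longer be DNATed/SNATed
        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK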

    Read the article

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it - just pure routing. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, then no more connections can be created for several minutes, then another 100 connections can be created, then there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximum numbers go up about 10 times, so the problem is probably with many connections from different IPs. As traffic is simply routed, it shouldn't have to do with the number of file descriptors, iptables connection tracking, or the other things often proposed to check in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
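    Two kernel-side limits that are easy to rule out, offered as hedged guesses rather than a diagnosis: the ARP/neighbour cache thresholds (a router forwarding for many distinct IPs on directly attached subnets can overflow it, which stalls new flows in exactly this start-stop pattern), and the conntrack table, in case the nf_conntrack module is loaded even though no rules use it. A quick checking sketch:

        # look for direct evidence first
        dmesg | grep -i -e "neighbour table overflow" -e conntrack

        # ARP/neighbour cache limits (defaults are small: 128/512/1024)
        sysctl net.ipv4.neigh.default.gc_thresh1 net.ipv4.neigh.default.gc_thresh2 net.ipv4.neigh.default.gc_thresh3
        sysctl -w net.ipv4.neigh.default.gc_thresh3=4096    # example value only

        # conntrack usage, if the module happens to be loaded
        cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null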

    Read the article

  • Unusual HEAD requests to nonsense URLs from Chrome

    - by JeremyDWill
    I have noticed unusual traffic coming from my workstation the last couple of days. I am seeing HEAD requests sent to random-character URLs, usually three or four within a second, and they appear to be coming from my Chrome browser. The requests repeat only three or four times a day, but I have not identified a particular pattern. The URL characters are different for each request. Here is an example of the request as recorded by Fiddler 2:

        HEAD http://xqwvykjfei/ HTTP/1.1
        Host: xqwvykjfei
        Proxy-Connection: keep-alive
        Content-Length: 0
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The response to this request is as follows:

        HTTP/1.1 502 Fiddler - DNS Lookup Failed
        Content-Type: text/html
        Connection: close
        Timestamp: 08:15:45.283

        Fiddler: DNS Lookup for xqwvykjfei failed. No such host is known

    I have been unable to find any information through Google searches related to this issue. I do not remember seeing this kind of traffic before late last week, but it may be that I just missed it before. The one modification I made to my system last week that was unusual was adding the Delicious add-in/extension to both IE and Chrome. I have since removed both of these, but am still seeing the traffic. I have run virus scan (Trend Micro) and HiJackThis looking for malicious code, but I have not found any. I would appreciate any help tracking down the source of the requests, so I can determine if they are benign, or indicative of a bigger problem. Thanks.

    Read the article

  • Emails sent from Coldfusion using the same SMTP/Exchange server works from one machine but fails for another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I am not normally working with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem.

    Here goes: a client I work for has a hosted IT environment, based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another, and though ColdFusion is configured in exactly the same way, using the same SMTP server (i.e. the client's Exchange server, hosted in the same environment and in the same AD as both web servers), sending emails to external recipients is no longer working. It still works fine when testing from the old machine. This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):

    - Emails sent to other recipients on the same domain are delivered without any problem.
    - Emails sent to external recipients on other domains are never delivered.
    - When sending emails to both internal and external recipients, no emails are delivered.
    - When receiving one of these emails at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This hints to me that the Exchange machine "recognizes" the old web server, while the new one is a stranger to it.
    - In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.

    Any ideas about what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.

    Read the article

  • IIS7 / ASP.Net 4.0 - 404 errors only for external clients

    - by dmcgiv
    We recently moved an ASP.Net 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that:

    1) the pages exist;
    2) if you browse from the server itself, the page is served as normal - only external clients get the 404;
    3) it's the default 404 error page, not the one configured in the web.config;
    4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally.

    We are using a URL rewriter module, which I first thought might be at fault, but then I realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module and the problem still persists. As everything works as expected when logged onto the server, I was thinking it might be some sort of permission issue, but why would it only affect a few pages? I turned on failed request tracing and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.

    Read the article

  • Unable to connect to SQL Database (can the password be reset)

    - by user45450
    I have recently joined a company which has an SQL Server 2005 instance running a few databases. The server looks like no one has touched it in a couple of years, and this week it ran out of disk space. After a quick hard drive scan it looks like some of the databases have become a little bloated; in particular the Sharepoint_config~*~_log and WSS_Content_log.ldf have grown to about 15GB. I have been able to log into a couple of the other databases and use the shrinkfile command to free up disk space, but for some reason I am unable to log into the SharePoint and Microsoft#SSEE databases (I get "cannot connect to Sharepoint, a network related or instance specific error occurred..." when I try to connect). I can see that the database is running via the SQL surface configuration, and I have made sure that the remote connection settings allow me to connect locally, but I am still unable to log in either with Windows authentication or locally. Is there any way to reset or recover the database login details so I can get in? (I have tried logging in with all the administrative passwords I can find, and after tracking down the company who installed it in the first place, I found out that they have no idea what the password could have been.)

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary.

    Some examples from a "problem" cluster (screenshots not reproduced here):

    - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    - The real-time memory utilization graph of the top VM in the listing above: 4 vCPU and 64GB RAM allocated, averaging under 9GB in use.
    - Summary of the same VM.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage - tracking the peaks of "Active" versus time?

    Read the article

  • How can I import tasks into Project that have already been started?

    - by unknown
    I am writing a feature to import tasks from an online bug tracking / project management tool into Microsoft Project, primarily for resource leveling. Currently I am importing all tasks in as Fixed work, and giving them an assignment to a single resource for 100% of the work. The duration is then dynamically calculated, which at import time is equal to the amount of work. However, I am not a project manager by any means, and am having difficulty on how to get the start dates to calculate correctly. I've never used Project either. I am using Schedule From Start and setting the Project Start Date to the date the contract was signed / work was approved. However, this can be in the past, and I do not want current tasks scheduled from that date. Should it be today then? Another problem I have is with tasks that were already started. I have remaining work set, and I was placing a constraint on them to be started on the day the work was first applied. However, the remaining work for the task would be scheduled from that date, which was sometimes in the past. Using task constraints, a project start date, and whatever other settings available to me that I don't know about, what is the correct way to have the tasks scheduled?

    Read the article

  • Which free RDBMS is best for small in-house development?

    - by Nic Waller
    I am the sole sysadmin for a small firm of about 50 people, and I have been asked to develop an in-house application for tracking job completion and providing reports based on that data. I'm planning on building it as a web application. I have roughly equal experience developing for MySQL, PostgreSQL, and MSSQL. We are primarily a Windows-based shop, but I'm fairly comfortable with both Windows and Linux system administration. These are my two biggest concerns:

    - Ease of manageability. I don't expect to be maintaining this database forever. For the sake of the person that eventually has to take over for me, which database has the lowest barrier to entry?
    - Data integrity. This means transaction-safe, robust storage, and easy backup/recovery. Even better if the database can be easily replicated.

    There is not a lot of budget for this project, so I am restricted to working with one of the free database systems mentioned above. What would you choose?

    Read the article

  • Exchange 2003 - The Case of the Disappearing Message

    - by John Gardeniers
    We run a single Exchange 2003 Standard server. On two separate occasions we have had the following happen, both times with the same sender and recipients. A user with an email address in domain A sends a message to two others with email addresses in domain B. I must stress that this is all internal and on the same Exchange server. Both recipients were listed in the "to" field. The first listed recipient received the message; the other didn't. Checking message tracking in ESM, as well as manually checking the server's Application event log, indicates that both messages were delivered without a hitch. However, the message just cannot be found in the second recipient's mailbox. Using both Outlook and OWA I have searched all the recipient's Outlook folders, in case it was accidentally moved to another folder or deleted. I also checked the deleted message recovery folder. No sign of it anywhere. I'm completely baffled. How could a message that the system insists was delivered not actually be in the recipient's mailbox? What else can I do to try and track it down?

    Read the article
