Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Why can't this user connect to domain share?

    - by Saariko
    Part of reorganizing credentials in the domain, I have created several users that will be used solely for services (backup, LDAP, etc.). The idea is that a system that needs specific access will use a dedicated service user that gives it exactly what it needs. However, I am having trouble getting the required settings right. For this example, I have a NAS (ReadyNAS 1100 by Netgear) that runs its own backup jobs. The job reads from a domain share, \\domain\qa, and copies all data to another location. When using domain\administrator everything works. When I enter the domain\srv.backup user I get an error connecting to the folder. srv.backup is part of the 'Domain Admins' group, which is a member of 'Administrators'. I thought there might be propagation issues, but even when srv.backup was a direct member of 'Administrators' the error still occurred. I have 2 DCs (W2K8 R2 replicas); I thought that could also cause a problem, but as far as I can tell it isn't the issue. Sharing permissions are open to everyone. The security on the folder is as follows [screenshot], and this is the test window from the NAS dashboard [screenshot]. I double-checked that srv.backup is part of the 'Domain Admins' group, and also tried a simple 1-9 password. What else do I need to check? Thanks.
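    If the share can be reached over SMB from any Linux box (the ReadyNAS itself is Linux-based), a quick way to separate credential problems from share/NTFS permission problems is to test both accounts directly with smbclient. This is only a sketch; the server and share names are placeholders taken from the question:

        # test the service account directly against the share (prompts for the password)
        smbclient //fileserver.domain.local/qa -U 'DOMAIN\srv.backup' -c 'ls'
        # compare with the account that is known to work
        smbclient //fileserver.domain.local/qa -U 'DOMAIN\administrator' -c 'ls'

    If the first command lists the folder, the credentials are fine and the problem is in how the NAS backup job passes them; if it fails with NT_STATUS_ACCESS_DENIED, it is a permissions or group-membership issue on the share side.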

    Read the article

  • ColdFusion 9 server not restarting - “Permission denied” errors

    - by Xevi Pujol
    I had to restart my ColdFusion 9 server on CentOS because of a memory performance issue, but now the server won't restart. When looking at cfserver.log I can see "Permission denied" errors all the way through. The ColdFusion application folder (/opt/coldfusion9/) is owned by nobody:root, as that fixed a similar problem we had a few weeks ago. Also, the last time this CF server was running correctly, the JRE user being used was nobody. Maybe CF is trying to restart using another user (presumably apache) and that creates permission issues? However, I'm not sure how to check this hypothesis. Where's the config file that tells CF which JRE user to use? If I can change that, I could try to specify nobody there. Any other ideas also welcome. UPDATE: The runtime user that ColdFusion will use is defined in /etc/init.d/coldfusion_9. I fixed the problem by being consistent with the users: I needed to revert the ownership of the folder /opt/coldfusion9/ back to apache:root, which matches the init file.
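    For anyone hitting the same thing, a minimal way to confirm which account the server actually runs as and to make the filesystem match it, following the poster's own fix (paths are from the question; the exact variable name inside the init script may differ):

        # which user does the running JVM belong to?
        ps aux | grep -i coldfusion | grep -v grep
        # the init script defines the runtime user
        grep -i user /etc/init.d/coldfusion_9
        # make ownership consistent with that user, as in the UPDATE above
        chown -R apache:root /opt/coldfusion9
        /etc/init.d/coldfusion_9 restart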

    Read the article

  • Setting SVN permissions with Dav SVN Authz

    - by Ken
    There seems to be a path inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user and wish to restrict some path such as /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

        [groups]
        grp_W = a, b, c, g
        grp_X = a, d, f, e
        grp_Y = a, e

        [/]
        * =
        @grp_Y = rw

        [somerepo1:/projectPot]
        @grp_W = rw

        [somerepo2:/projectKettle]
        @grp_X = rw

    What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. If I flip the access ordering, where I give everyone access at the root and restrict it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it behaves the same with user-specific entries, even when fully defined such as:

        [/]
        a = rw
        b =
        c =
        d =
        e =
        f =
        g = rw

        [somerepo1:/projectPot]
        a = rw
        b = rw
        c = rw
        d =
        e = rw
        f =
        g = rw

        [somerepo2:/projectKettle]
        a = rw
        b =
        c =
        d = rw
        e = rw
        f = rw
        g =

    which yields the exact same result. According to the documentation I'm following all the rules, so this is insane. Running on Apache 2 with dav_svn.
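    One way to narrow down whether this is rule ordering or inheritance is to check the effective access of each user from the command line after every authz change. A sketch, assuming the repositories are served over http(s) with basic auth (the URL and passwords are placeholders):

        # users a, d and e come from the groups defined above
        for u in a d e; do
          echo "== $u =="
          svn ls https://svnserver/svn/somerepo1/projectPot \
              --username "$u" --password "secret" --non-interactive --no-auth-cache
        done

    Running the same loop against somerepo2 and against the repository roots makes it obvious which path section is actually being applied for each user.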

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My understanding is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same disk contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard edition doesn't support partitioning. So in order to improve parallelism, what should I do? My reading of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file in parallel. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.
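    As a concrete illustration of the "more data files in one filegroup" approach (a sketch only; the server, database, paths and sizes are placeholders, and this does not require Enterprise partitioning):

        sqlcmd -S MYSERVER -d master -Q "ALTER DATABASE MyDb ADD FILE (NAME = MyDb_Data2, FILENAME = 'F:\Data\MyDb_Data2.ndf', SIZE = 10GB) TO FILEGROUP [PRIMARY];"
        sqlcmd -S MYSERVER -d master -Q "ALTER DATABASE MyDb ADD FILE (NAME = MyDb_Data3, FILENAME = 'G:\Data\MyDb_Data3.ndf', SIZE = 10GB) TO FILEGROUP [PRIMARY];"

    SQL Server stripes allocations across the files in a filegroup proportionally to their free space, so keeping the files the same size tends to give the most even spread.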

    Read the article

  • Nginx + PHP5-FPM repeated cut-outs (502)

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all those questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, roughly 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am seeing this across three 1 GB machines, each with the same configuration and the same problem.

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    /etc/php5/fpm/main.conf:

        ; FPM Configuration ;
        ;include=/etc/php5/fpm/*.conf

        ; Global Options ;
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ;daemonize = yes

        ; Pool Definitions ;
        include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        ;pm.max_children = 50
        pm.max_children = 15
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        ;pm.max_spare_servers = 35
        pm.max_spare_servers = 10
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
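    Not an answer as such, but a few checks that usually narrow down which side is dropping the request. Log paths follow the configs above; treat anything else as an assumption:

        # is the pool hitting pm.max_children when the 502s happen?
        grep -i 'max_children' /var/log/php5-fpm.log
        # enable pm.status_path = /status in www.conf (and expose it in nginx) to watch the pool live
        curl -s http://127.0.0.1/status
        # what nginx reports at the moment of a 502 ("connect() failed" vs "timed out" point at different causes)
        tail -f /var/log/nginx/error.log

    Enabling the commented-out slowlog and request_slowlog_timeout in www.conf also helps show whether a specific script is stalling the workers.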

    Read the article

  • Computer locking up, looking for bootable hardware diagnostic tool.

    - by Carl Menke
    Well, today I helped my friend build a computer. All went pretty well until we got to installing Win7. Thing is, it was crashing constantly. I adjusted pretty much every setting in the BIOS and removed as much hardware as possible to try and prevent a crash. No dice. So far I've tried running an Ubuntu live CD without the hard drive installed. Nope, crashed on boot. Then I tried Microsoft's RAM utility disk and it eventually locked up on that too (the RAM passed, though). So it seems to me like it's either the CPU (AMD Phenom II X3) or the motherboard that could be bad, but I don't know how to test them individually for problems. I thought it could be an overheating issue, but the BIOS reports that the CPU temp is fine, idling around 34C. Any advice or diagnostic disk that could help me out? TL;DR: Computer locks up frequently during use (cannot even boot/install an operating system), memory is fine, probably CPU or mobo, BIOS says CPU temps are fine. What should I try?

    Read the article

  • Why am I getting blank error messages in my Apache error log?

    - by Jason Lamoreux
    I am running Apache 2.2 on 64-bit Windows Server 2008 Std edition with ActivePerl 5.8.9. My error log is filling up with blank error messages like these:

        [Wed Mar 31 14:08:31 2010] [error] [client 10.6.1.164]
        [Wed Mar 31 14:10:32 2010] [error] [client 10.6.1.89]
        [Wed Mar 31 14:13:20 2010] [error] [client 10.6.1.131]

    By looking in the access log I can tell that it occurs when our client machines issue a GET to a very simple Perl script:

        #!perl.exe
        use strict;
        no warnings;
        $| = 1;
        use CGI::Carp('fatalsToBrowser');
        use CGI qw(:standard);

        print header;

        my $CRLF = "\r\n<br>";
        my $Port = '10116';
        print "Success!${CRLF}PollInterval=5${CRLF}LMProMode${CRLF}Version=7${CRLF}ConnectionPort=$Port";
        exit;

    The weird thing is that this error message does not appear to be inserted every time a GET to this Perl script occurs. What could cause this error message to appear in the Apache error log?

    Read the article

  • sysctl.conf not running on boot

    - by Brian
    At what point is sysctl.conf supposed to be read during boot, and why might it not be running? I have the following settings which are not being applied when I reboot:

        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-filter-pppoe-tagged = 0
        net.bridge.bridge-nf-filter-vlan-tagged = 0
        fs.nfs.nlm_udpport = 32768
        fs.nfs.nlm_tcpport = 32768

    The first section is needed for KVM bridging, and the second is to run the NFS lock manager on a known port. However, after booting, these values have not taken effect. If I run sysctl -p, then they do. This wouldn't be a huge issue, except that I can't figure out how to restart the lock manager without rebooting. I would really like to know why sysctl.conf isn't working at boot, but I'd settle for just being able to restart the lock manager. This is on Ubuntu Server 10.04.2, kernel 2.6.32-31-server. I know some daemons check the permissions on their config files and refuse to work if they're too permissive, but sysctl.conf is 644 root:root, which I'm pretty sure is the default.
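    A likely explanation (an assumption, not verified on this box) is ordering: when procps applies /etc/sysctl.conf at boot, the bridge and NFS modules are not loaded yet, so the net.bridge.* and fs.nfs.* keys simply don't exist at that point. A sketch of a workaround is to load the modules earlier and to pin the lock manager ports as module options rather than sysctls:

        # make sure the relevant modules are loaded before sysctl.conf is applied
        echo bridge | sudo tee -a /etc/modules
        echo nfs    | sudo tee -a /etc/modules
        # the NLM ports can be set as lockd module options instead of fs.nfs.* sysctls
        echo 'options lockd nlm_udpport=32768 nlm_tcpport=32768' | sudo tee /etc/modprobe.d/lockd.conf
        # re-apply the remaining settings and bounce the NFS services without rebooting
        sudo sysctl -p
        sudo service nfs-kernel-server restart   # assumption: this host is also the NFS server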

    Read the article

  • Ubuntu 11.10 ATI Drivers vesa park

    - by Matthias
    This is probably not an issue; from all I can tell, my hardware and drivers are properly installed. However, when I go to System Settings - System Info - Graphics, I get Driver: VESA:PARK, Experience: Standard. My graphics card is an ATI Mobility Radeon HD 5470 512MB. I am pretty sure it's not a same-die GPU, since there is a fan exhaust at the side of my laptop which I presume is the exhaust for the GPU. I have no clue whatsoever what this means. I installed the ATI drivers first using the 'additional drivers' method. However, I also decided to look up a manual installation via the terminal, since I've had problems before with Ubuntu and ATI cards. I used wget and something along the lines of sh and dpkg -i; I can't recall exactly, I took the commands from another Stack Overflow answer. Anyway, it seems everything is installed properly, since the card shows up with these commands:

        sudo lshw -C video
        fglrxinfo

    However, the first command seems to detect the hardware, not the driver per se, although the driver is probably needed to detect the hardware anyway, which would indicate it's properly installed. I am still not sure about that VESA:PARK thing though. I'd like to know what it means. Also, if someone happens to know a good way of testing whether the GPU is connected/being used (some sort of benchmark maybe), I'd like to hear it. P.S. I can find my way around in Ubuntu, but I would probably still be considered a rookie by more experienced users.
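    For what it's worth, a few commands show which kernel driver is actually bound to the card and give a rough way to exercise the GPU (glxinfo and glxgears come from the mesa-utils package):

        # which kernel driver is in use for the VGA device (fglrx vs radeon/vesa)
        lspci -k | grep -A 3 -i vga
        # what the running X session is rendering with
        glxinfo | grep -i 'opengl renderer'
        # crude sanity check / benchmark
        glxgears

    If the renderer line mentions fglrx or the Radeon HD 5470, the proprietary driver is in use regardless of what the System Info dialog reports.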

    Read the article

  • Need to Remove Exchange 2003 Server That Crashed During Transition to 2010

    - by ThaKidd
    As the title states, we were running an Exchange 2003 server that we knew was going down soon, so we purchased a second server and installed Exchange 2010 into the AD. We managed to move all of the mailboxes off 2003 and also managed to get the Offline Address Book set up on 2010. At this point the 2003 server bit the dust and will no longer boot. Therefore we were unable to properly uninstall Exchange and remove the last 2003 server, so it still exists in AD. As far as the clients are concerned, everything is working properly. However, when I run the Microsoft Exchange Profile Analyzer, I still see the old server and its administrative group. I am going to guess that since the old server is still showing up in AD, I will not be able to raise the Exchange or AD functional levels (the 2003 server was also the only AD DC). I have forced the 2003 DC out of AD, so that is no longer an issue. Old setup: Windows 2003 Server Enterprise & Exchange 2003 Standard. New setup: Windows 2010 Server Enterprise & Exchange 2010 Standard. Two questions: 1) How do you go about manually forcing the 2003 server and its administrative group out of AD? 2) When that is finished, where do you raise the Exchange mode (can't find this for the life of me)?

    Read the article

  • Apache: rewrite port 80 and 443 - multiple SSL vhosts setup

    - by Benjamin Jung
    SETUP:

        - multiple SSL domains are configured on a single IP, using vhosts with different port numbers (on which Apache listens)
        - Apache 2.2.8 on Windows 2003 (no comments on this please)
        - too many Windows XP users, so SNI isn't an option yet

    There may be reasons why it's wrong to use this approach, but it works for now. vhosts setup:

        # secure domain 1
        <VirtualHost IP:443>
            SSL stuff specifying certificate etc.
            ServerName domain1.org
        </VirtualHost>

        # secure domain 2
        <VirtualHost IP:81>
            SSL stuff for domain2.org
            ServerName domain2.org
        </VirtualHost>

    GOAL: Some folders inside the domain2.org docroot need to be secure. I used a .htaccess file to rewrite the URL to https on port 81:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^81$
        RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI} [R]

    Suppose I put the .htaccess in the folder 'secfolder'. When accessing http://domain2.org/secfolder this gets successfully rewritten to https://domain2.org:81/secfolder.

    ISSUE: When accessing https://domain2.org/secfolder (without port 81), the certificate from the first vhost (domain1.org) is used and the browser complains that the site is insecure because the certificate is not valid for domain2.org. I thought that RewriteCond %{SERVER_PORT} !^81$ would also rewrite https://domain2.org to https://domain2.org:81, but it doesn't. It seems that the .htaccess file is not being used at all in this case. At this point I am not sure how to apply a RewriteRule to https://domain2.org. I tried creating an additional vhost for domain2 on port 443 before the one for domain1.org, but Apache seems to choke on that. I hope someone of you has an idea how to approach this. TIA.

    Read the article

  • 2000 Server, User can't logon

    - by Mike I
    I hope you can help me. I recently upgraded a workstation at my office (to a whole new machine) and ran into a pretty serious problem. Until 5:00 PM Friday I could access my mail on the 2000 Exchange server. When I shut the old workstation down and put in the new workstation, I tried to set up an account. When I put the server name in the appropriate field, type my username, and hit Check Names, my username does not come up. So to troubleshoot (it is also an SMB server), I try to log on to my file share. (My local credentials are the same as the server credentials of the user account.) When I try to log on to the share, I just get the username/password screen (I had never gotten that before, since the credentials are the same). Again, in troubleshooting mode, I try to log on to my user from another workstation. Still can't authenticate via my user. Every other user can authenticate and load up their shares/mailboxes. I have restored Exchange from the backup as of 3 days ago (Thursday), but the exact same issue is still there. I really do not understand what is wrong and what else I can do to troubleshoot. If anyone has some pointers for me, I will surely accept them. Thanks, Mike

    Read the article

  • Failed to su after making a chroot jail

    - by arepo21
    On a 64-bit CentOS host I am using the script make_chroot_jail.sh to put a user in a jail, not permitting it to see anything except its home at /home/jail/home/user1. I did it by typing:

        sudo ./make_chroot_jail.sh user1

    Afterwards, when trying to connect as user1, I was first getting an error like:

        /bin/su: user guest does not exist

    I fixed this by copying some missing libraries:

        sudo cp /lib64/libnss_compat.so.2 /lib64/libnss_files.so.2 /lib64/libnss_dns.so.2 /lib64/libxcrypt.so.2 /home/jail/lib64/
        sudo cp -r /lib64/security/ /home/jail/lib64/

    But now, when trying to connect as user1 by typing su user1 and then typing its password, I get this error:

        could not open session

    So the question is: how do I connect as user1 in this situation? P.S. Here are the permissions of some files; this might be helpful in order to provide a solution:

        -rwsr-xr-x 1 root root /home/jail/bin/su
        drwxr-xr-x 4 root root /home/jail/etc
        -rw-r--r-- 1 root root /home/jail/etc/pam.d/su
        -rw-r--r-- 1 root root /home/jail/etc/passwd
        -rw------- 1 root root /home/jail/etc/shadow

    UPDATE 1: After some modifications I managed to connect as user1, but the session closes immediately! I guess this is a PAM issue, however I can't find a way to fix it. Here is the log entry for the close action from /var/log/secure:

        Oct 6 15:19:42 localhost su: pam_unix(su:session): session closed for user user1

    What makes the session exit immediately after launching?
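    When su authenticates but the session closes straight away, it is usually one of the PAM session modules listed in the jail's /etc/pam.d/su that cannot be loaded from inside the jail. A rough way to check, assuming the layout from the question (the module names below are only examples):

        # which modules does su's PAM stack reference?
        grep -v '^#' /home/jail/etc/pam.d/su
        # are they all present inside the jail?
        ls /home/jail/lib64/security/
        # copy any that are referenced but missing, e.g.:
        sudo cp /lib64/security/pam_limits.so /lib64/security/pam_env.so /home/jail/lib64/security/
        # then retry and watch /var/log/secure for the module that still fails
        su user1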

    Read the article

  • Can't make Dovecot communicate with Postfix using SASL (warning: SASL: Connect to private/auth failed: No such file or directory)

    - by Fred Rocha
    Solved. I will leave this as a reference for other people, as I have seen this error reported often enough online. I had to change the path smtpd_sasl_path = private/auth in my /etc/postfix/main.cf to relative, instead of absolute. This is because on Debian Postfix runs chrooted (and how does this affect the path structure? Anyone?). -- I am trying to get Dovecot to communicate with Postfix for SMTP support via SASL. The master plan is to be able to host multiple e-mail accounts on my (Debian Lenny 64-bit) server, using virtual users. Whenever I test my current configuration, by running

        telnet server-IP smtp

    I get the following error in mail.log:

        warning: SASL: Connect to /var/spool/postfix/private/auth failed: No such file or directory

    Now, Dovecot is supposed to create the auth socket file, yet it doesn't. I have given the right privileges to the directory private, and even tried creating an auth file manually. The output of postconf -a is:

        cyrus
        dovecot

    Am I correct in assuming from this that the package was compiled with SASL support? My dovecot.conf also holds:

        client {
            path = /var/spool/postfix/private/auth
            mode = 0660
            user = postfix
            group = postfix
        }

    I have tried every solution out there, and am pretty much desperate after a full day of struggling with the issue. Can anybody help me, pretty please?
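    For anyone hitting the same error, the quickest way to confirm the chroot/path mismatch on another box is (a sketch):

        # the socket Dovecot creates has to live under Postfix's chroot (the queue directory)
        ls -l /var/spool/postfix/private/auth
        # and main.cf has to reference it relative to that directory, not with an absolute path
        postconf queue_directory smtpd_sasl_path
        # after correcting either side
        /etc/init.d/dovecot restart && /etc/init.d/postfix reload

    Because Postfix's smtpd runs chrooted to the queue directory on Debian, "private/auth" in main.cf resolves to /var/spool/postfix/private/auth on disk, which is exactly where the dovecot.conf client block above points.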

    Read the article

  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, DRBD8 and OCFS2, and on top of that some KVM machines run with qcow2 disks. I followed this: Link. Corosync is just used for DRBD and OCFS; the KVM machines are run "manually". When it works it's fine: good performance, good I/O. But at a given time one of the two clusters started hanging. Then we tried with just one server turned on, and it hangs the same. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during the rsync backup. When it happens, the virtual machines are not reachable any more and the real server responds to ping with a long delay, but no screen and no ssh are available. All we can do is force a shutdown (hold the button) and restart, and when it comes up again the RAID that DRBD relies on is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster hung too, but it has a different motherboard, RAM, and KVM instances. What is similar is the rsync read scenario and the Western Digital RAID Edition disks on both sides. Can anybody give me some input to solve this issue?
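    Without logs it is hard to say more, but two low-risk steps that often help pin down this kind of hang are getting the kernel to report blocked tasks when it happens, and throttling the rsync read to test whether I/O pressure is the trigger. A sketch only; the paths and hosts are placeholders:

        # allow sysrq, then dump blocked (D-state) tasks while the machine is wedged
        # (run the second line via a serial console or netconsole if ssh is gone)
        echo 1 | sudo tee /proc/sys/kernel/sysrq
        echo w | sudo tee /proc/sysrq-trigger
        # throttle the backup that triggers the hang (bwlimit is in KB/s) and make it idle-priority
        ionice -c3 rsync -a --bwlimit=20000 /srv/vm-data/ backuphost:/srv/vm-data/

    If the hangs stop when the read is throttled, the problem is I/O starvation somewhere in the md/DRBD/OCFS2 stack rather than a hardware fault.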

    Read the article

  • CIFS mounted drive setting "sticky bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs that I simply cannot change the permissions on once mounted. Here is a breakdown of what's going on: all files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine. I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They are all using the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    I've run a chmod command a million different ways, all of which report successfully changing the permissions. However, it doesn't. The issue began after I updated from 8.04 to 8.10. Any idea why this may be happening on one machine? Since it started after an upgrade I'm not sure what the best thing to do is. Any help you could give would be great! None of my automated backup scripts are working because of this!
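    One thing worth testing on the 8.10 box (an assumption, not a confirmed fix) is to stop trusting the modes the server reports and pin them at mount time, mirroring the existing fstab options:

        sudo umount /media/backups
        sudo mount -t cifs //server/folder /media/backups \
             -o credentials=/etc/samba/.arcadia_cred,noexec,noserverino,nounix,file_mode=0644,dir_mode=0755

    The nounix option disables the CIFS Unix extensions (whose behaviour changed between kernel versions), and file_mode/dir_mode force consistent permissions on the client side; if that gives sane modes, the same options can be added permanently to the fstab line above.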

    Read the article

  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a WatchGuard Firebox), to a server in our office. All of a sudden the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called from within the office network). I'm pretty certain it's our connection that is dropping the webservice call, so I've written a quick php/curl script which calls the webservice over many iterations and shows the various timings. Below is an example of the output, showing both a failed and a successful call (with a 5 second timeout):

           http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
        1  0          0.000096         0.0342        0.0000            0.0000              0.0342
        2  200        0.000052         0.0332        0.1327            0.1751              0.1752

    As per iteration #1 above, failed requests seem to fail between connect and pretransfer. I'm not sure if this shows that the connection has successfully passed the firewall, or whether the firewall could still cause an issue. Our firewall is showing a series of "nondata event" log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google, and I'm not sure where they would fit between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
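    For reference, the same timing breakdown can be reproduced with curl alone (the URL is a placeholder), which makes it easy to run an identical test from inside and outside the office and compare where the failed calls stop:

        for i in $(seq 1 50); do
          curl -o /dev/null -s -m 5 \
               -w '%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n' \
               http://office-server/rest/endpoint
        done

    If the external run shows failures stalling after time_connect while the internal run never does, the drop is happening on the path through the firewall rather than in the webservice itself.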

    Read the article

  • App-V Problems After SCCM 2007 SP2 Upgrade

    - by GAThrawn
    We're running an SCCM (MS System Center Config Manager, successor to SMS) 2007 environment and delivering a number of applications to clients, virtualized using App-V 4.5.1. The App-V apps are delivered by SCCM in download-and-execute mode, not streaming mode. The SCCM environment was recently service-packed to SCCM 2007 SP2 (amongst other things this gives Win7 support), and we also pushed out the updated SCCM clients to our workstations. This seems to have broken file associations for virtual apps for a large number of our users. The users can still open their App-V apps by finding the specific app on their Start menu and clicking its icon, but double-clicking an associated file in Explorer, or opening an email attachment, gives the "This action is only valid for installed applications" error. There is a TechNet blog entry from the App-V team talking about this issue, "Upgrade to ConfigMgr 2007 SP2 may break App-V File Type Associations", but running the script there comes back saying "The User Interface option has been updated" and hasn't seemed to fix the problem for any of our users. Unfortunately the TechNet blogs don't seem to have comments switched on, so you can't see how this has worked out for other people. Has anyone else had this problem, and have you found any other way to fix it?

    Read the article

  • No LPT port in Windows 7 virtual machines

    - by KeyboardMonkey
    Windows 7 has MS Virtual PC integrated, but the VM settings don't offer a parallel (LPT) port mapping to the physical machine. Where did it go? Has anyone else noticed this and found a solution? Update: After much digging, I found the one and only reference to this issue, on the VPC blog: "Parallel port devices are not supported, as they are relatively rare today." More details: it's an XP VM I've been using since the VPC 2007 days, which did have this functionality. I use it to configure barcode printers via the LPT port. Since the (new) MS VM can't map to my physical LPT port, I'm having a hard time configuring printers. My physical ports are enabled in the BIOS, and this has worked for the past 3 years, before switching to Win 7. Any help is appreciated. The screen shot of the VM settings shows COM ports, but LPT is no more; in contrast, the screen shot of VPC 2007 (before it got integrated into Win 7) shows that it has LPT support.

    Read the article

  • Do registry issues with Win7 persist through a recovery from a system image?

    - by user59089
    So I need a bit of advice, please; here's my situation. I have 1) a system image on a brand new external 1 TB SATA drive, which I managed to capture successfully before 2) my primary system drive went down. I realize this is a fairly simple matter of buying a new primary drive and performing the recovery to the fresh disk. However, the issue is that I believe Win7 was also having some significant issues of its own: basically, Update was unable to install updates, and Backup continually ditched the automatic backup schedule. I'd been trying to address those issues when my system was still working, but it was so fruitless that I'm convinced a Win7 reinstall would be best. Now I'm concerned that if I was in fact having what I believe are likely registry-related issues before, these will persist through a recovery; would that likely be correct? I'm mainly worried about recovering my files, so if I did a full recovery from the image, should I be able to then access my individual files and copy them manually to an external drive, so I can then do a full reinstall of Win7? Sorry if this seems obvious, but I've never done a recovery before and am just trying to make sure there are no red flags with what I have in mind.

    Read the article

  • Accounting setup in freeradius with mikrotik and the "always" module

    - by Matt
    I have a freeradius setup that is being used to provide authentication for users on a wireless network. The access points are all Mikrotik hardware and the users are connected 24/7. We've been using daloRADIUS with MySQL and freeradius 2. The boss wants to use the accounting information, and while this is all set up and appears to be working, I've found that not all the accounting information is present. So he started poking around at this link: http://wiki.mikrotik.com/wiki/RouterOs_MySql_Freeradius#Configuring_RouterOs_for_Radius_.26_PPP.2A_AAA and was looking specifically at the following section:

        # Since our users may be connected for more than 24 hours at a time we keep this
        # in here, it will reset some attributes daily so that the accounting packets
        # work correctly
        always fail {
            rcode = fail
        }
        always reject {
            rcode = reject
        }
        always ok {
            rcode = ok
            simulcount = 0
            mpp = no
        }

    However, that link references freeradius 1 and I can't find this in the radiusd.conf file for freeradius 2. What does it do, and could it be a reason I'm missing data? EDIT: I have found one issue. We have a backup freeradius server that is also receiving the accounting packets. Although the databases are replicating, it's only a master/slave configuration, so if the slave receives accounting packets it won't replicate them back to the master. Although I suspect this might solve it, the boss is not convinced because of the always module. Is there anything special I need to configure in the Mikrotik APs or in freeradius 2 for clients connected 24/7?

    Read the article

  • Updating a backup image (.wim and/or Acronis .tib)

    - by Backdraft
    Anyway, I've got a Windows 7 installation that I want to make a generalized backup image of, so I can use it for future installs not only on the desktop the image is derived from, but also on other systems with dissimilar hardware. I've therefore arrived at 2 options: using either sysprep/imagex from the WAIK (guide here), or the simpler Acronis True Image with their Universal Restore add-on. Of course, they create distinct image file types, .wim and .tib respectively. What I'd like to do is periodically update this image, say with Windows Updates, by booting it on either a physical partition or under virtualization (VirtualBox/VMware), performing the updates, and saving the updated .wim or .tib image file again. What's the simplest way I could do this? Another question: I created this generalized backup image on a 500GB Seagate 7200RPM HDD. Say I get an SSD as an OS drive in the future, can I just deploy this backup image to the SSD normally, or are there any potential problems to be aware of or avoid (i.e. is it best to completely reinstall the OS on the SSD from scratch, or can I use the image created on the normal HDD with no issues)? Thanks and Happy Holidays.

    Read the article

  • Exchange 2007 OWA returns blank page with url xxxxx&reason=0

    - by Dayton Brown
    Hi all: I've just run into an issue with my Exchange OWA. It returns a blank page with the URL string https://www.xxxxxxxx/&reason=0, and nothing in the logs gives me any good reasons. Here's what I've done so far: 1) reinstalled Exchange roll-up 7; 2) recreated the virtual directories; 3) rebooted (this was mostly a shot in the dark, but what the hell). Exchange via RPC/HTTPS is still working great. Anyone run into this before? EDIT: Here is the last snippet from the OWA setup log; it doesn't look like anything blew up.

        [09:45:36] *******************************************
        [09:45:36] * UpdateOwa.ps1: 5/27/2009 9:45:36 AM
        [09:45:40] Updating OWA on server HOMER
        [09:45:40] Finding OWA install path on the filesystem
        [09:45:40] Updating OWA to version 8.1.375.2
        [09:45:40] Copying files from 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\Current' to 'C:\Program Files\Microsoft\Exchange Server\ClientAccess\owa\8.1.375.2'
        [09:45:41] Getting all Exchange 2007 OWA virtual directories
        [09:45:42] Found 1 OWA virtual directories.
        [09:45:42] Updating OWA virtual directories
        [09:45:42] Processing virtual directory with metabase path 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'.
        [09:45:42] Metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2' exists. Removing it.
        [09:45:42] Creating metabase entry IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2.
        [09:45:42] Configuring metabase entry 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'.
        [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa/8.1.375.2'
        [09:45:43] Saving changes to 'IIS://HOMER.DG.LOCAL/W3SVC/1/ROOT/owa'

    Read the article

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The VC is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs were down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host and create a regular vSwitch, and then connect the VC to that, before the VC would connect to the network. Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems to be a very bad thing. I thought networking would keep working even with the VC down, because the hosts have the vDS configuration cached. Is there a better way to configure this to prevent it from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick

    Read the article

  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are XP, Vista, 7, Leopard, and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been added to the Windows domain, and has been used for many years; SBS 2008 is the PDC (the 2003 and 2008 servers are on the domain also). In Leopard there were no problems at all authenticating to the file servers. We've upgraded one of the Leopard iMacs to Snow Leopard, and the same problem occurs on a new MBP which came with the newer OS, as well as on a clean install on another iMac. It does not matter whether the machine is connected through wired or wireless. In the Finder, when clicking on the server (whether on first boot or after it has already connected), it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed"; at that point clicking "Connect As" and typing in the username/password will take some more time and eventually work. Sometimes it will display "Connecting..." indefinitely (I've left it as long as 15 minutes before trying something else). Accessing shares on the 2003 and SBS servers has the problem (so I don't think it's a Samba server issue); the Server 2008 Standard one is connecting instantly at the moment. Accessing the share through an alias/Stacks doesn't have this problem. Leopard and Windows clients still have no problems. I've searched Google but it hasn't yielded any working results. How do I get rid of this delay?

    Read the article
