Search Results

Search found 22633 results on 906 pages for 'service accounts'.

Page 462/906 | < Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >

  • Active Directory server down, recovering without reinstalling

    - by whatever
    My Windows 2003 server suddenly ceased to function as a DC (this server is the only DC of the domain). All AD-related services are down. The only way I can log in is physically at the machine. Every time I access an AD-related service (e.g. "AD Users and Computers") I get the error below:

        Naming information cannot be located because: The specified directory service attribute or value does not exist. Contact your system administrator to verify that your domain is properly configured and is currently online.

    I found the system event below, which matches the time the issue started; it recurs every time I reboot the server.

        NTDS General | Global Catalog | Active Directory was unable to establish a connection with the global catalog.
        Additional Data
        Error value: 1355 The specified domain either does not exist or could not be contacted.
        Internal ID: 3200d33

    I started the troubleshooting with DNS. Netdiag throws the error below, although I think this is simply a consequence of not being able to access the Global Catalog.

        The procedure entry point DnsGetPrimaryDomainName_UTF8 could not be located in the dynamic link library DNSAPI.dll.

    Anyway, DNS seems OK because I can ping the DC's FQDN from the DC itself. I found the procedure below, which is supposed to help by doing some cleanup of the metadata: http://support.microsoft.com/kb/216498 If I follow procedure 1, here is what I get at step 9:

        no current site
        Domain - DC=<mydomain>,DC=<com>
        no current server
        no current naming context

    I can continue the procedure until step 14. I haven't tried step 15, as my understanding is that I would then have to reinstall the whole AD again. Is there any way I can recover my AD from there without having to reinstall the whole thing?
    Update: Yes, the server was powered off/on because a reboot would take forever (not because I thought power cycling the unit would fix anything a reboot would not).
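
    For anyone hitting the same wall, a minimal sketch of the usual health checks on a 2003 DC before (or instead of) attempting metadata cleanup; dcdiag and nltest ship with the Windows Support Tools, and "mydomain" stands in for the real domain name:

        rem overall DC health, verbose
        dcdiag /v
        rem can a DC / Global Catalog for the domain be located at all?
        nltest /dsgetdc:mydomain
        nltest /dsgetdc:mydomain /gc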

    Read the article

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I would create an external DNS entry (e.g. VPN.MyCompany.com pointing at the gateway IP of one of my DCs), and clients would connect via that DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection-tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work?
    Edit - Alternative question: Is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to or detached from the network when a shared IP address is swapped between servers?
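
    A minimal sketch of the NOTRACK idea mentioned above, reusing the port from the rules above. One caveat worth stating: DNAT/SNAT in the nat table depend on connection tracking, so untracked packets also bypass those rules, which only helps if the service itself binds to (or answers from) the shared address:

        # raw table is evaluated before conntrack; skip tracking in both directions
        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK
        # alternative: keep the NAT rules, but flush the stale entry when the alias moves
        conntrack -D -p udp --orig-port-dst 7100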

    Read the article

  • Limited connections to Ubuntu 12.04 server

    - by Luis M. Valenzuela
    I'm having a weird problem with my server. The server is inside my network, connected to a 3Com switch which is connected to the router that handles the internet connection. The main purpose of the server is to host a PHP application. What's happening is that users 1 to 15 on the private network have no problems connecting to the server, but when user 16 tries to connect a timeout occurs and they are unable to connect. It's not just the PHP application: it's any service on the server. When the 15 users are using the application, the server doesn't even answer to ping. I haven't set any special limits in Apache's configuration or MySQL, and the firewall is turned off because the server only serves the internal network. Is there a parameter in any of the network card's configuration files that might be causing this? Or should I suspect the router's or switch's configuration?
    UPDATE: Tomorrow I'm going to run some tests on the server, modifying two kernel parameters in /etc/sysctl.conf: net.core.somaxconn, which caps the backlog of pending connections a listening socket can queue, and kernel.shmmax, which sets the maximum size of a single shared-memory segment.
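
    A small sketch of how those two parameters can be inspected and raised for the test; the value 1024 is only an illustrative figure, not a recommendation:

        # current values
        sysctl net.core.somaxconn kernel.shmmax
        # raise the accept backlog at runtime (takes effect immediately)
        sysctl -w net.core.somaxconn=1024
        # persist it across reboots
        echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf
        sysctl -p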

    Read the article

  • Recommend web file-sharing software, please

    - by Baczek
    I'm looking for a web platform to put company files on. My requirements:
    - should be accessible via a browser
    - should be open source
    - must be installable (Dropbox is a no-go)
    - must have an option to put an access-time limit on a file
    - must perform garbage collection automatically after a file expires
    - must be able to mark files as public or private
    - an option to protect a file via a PIN code for users without accounts in the system would be nice to have
    The problem is I don't even know what to search for - all my googling results in either complete groupware solutions or P2P file-sharing software. If such a thing doesn't exist, please don't hesitate to say so, so I can crawl to a corner and cry myself to sleep. TIA

    Read the article

  • Open Directory authenticated bind succeeds, but creates incomplete record

    - by Jay Thompson
    I have about a dozen Macs running 10.6.7 or 10.6.8, which are all failing to bind properly to my new 10.7.4 Server OD. I can bind them just fine via Directory Utility or dsconfigldap, and it reports success. However, when I look at the record, it is failing to write the MAC address. Even if I manually update the record with the MAC address, MCX doesn't do anything and clients can't log in to OD accounts. All of the affected clients have hundreds of lines in /Library/Logs/DirectoryService.error.log like so:

        2012-09-15 22:23:18 EDT - T[0x00007FFF70292CC0] - GetMACAddress returned 0x *** bad control string *** 8x

    I do know that all of these clients were previously managed with the Guest computer account, and I also know that they were all imaged with a DeployStudio image when they were purchased. I've tried dscacheutil -flushcache, but after that I'm drawing a blank. Google has a few hits, but nothing very helpful. Re-imaging would be ideal but probably isn't going to happen. Anyone come across this before?
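
    For reference, a rough sketch of what a clean re-bind and record check might look like from one client; the server name, directory admin account and computer record name are placeholders, and the dsconfigldap flags should be checked against the local man page:

        # drop cached directory data, then remove and re-add the OD binding
        sudo dscacheutil -flushcache
        sudo dsconfigldap -v -r od.example.com
        sudo dsconfigldap -v -f -a od.example.com -n od.example.com -c "$(hostname -s)" -u diradmin -p 'dirpassword'
        # read the computer record back and look for the ethernet/MAC attribute
        dscl /LDAPv3/od.example.com -read "/Computers/$(hostname -s)"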

    Read the article

  • I can't use a custom theme on a network account

    - by Rev
    I'm an administrator for the computer I use, but I'm using a network account. I can set custom themes (non-Microsoft, I mean) on my local account but not on the network account. It's the same machine, just different accounts/domains. I tried to repatch the files from the network account, but it says they're already patched. Any ideas why this won't work? The themes don't show up in the Personalize menu, and I can't just double click the .theme file from the Themes folder in Windows 7 Pro. This is the theme I'm trying to use, by the way: http://fediafedia.deviantart.com/art/Windows-8-VS-for-Win7-258514188?q=boost%3Apopular%20windows%208%20theme&qo=0 Tried repatching the files, still nothing.

    Read the article

  • Mavericks: Safari does not log in to web services

    - by Roberto
    Since I upgraded from ML to Mavericks, Safari is no longer able to log me into Facebook. When I go to the login page it suggests the correct credentials, I hit the Login button, the page refreshes, but nothing happens, as if the credentials were empty. Firefox works perfectly; I even logged out and back in to make sure the credentials are the same ones Safari suggests, and they are. Needless to say, for a different user on the same Mavericks install, Safari logs in correctly. The same happens with most web pages that need a login, webmail for instance: I have two accounts with different webmail providers and neither of them works. Of course, using the same mail services over POP3 works fine. Even on this very site I cannot post a thing with Safari; I'm going to switch to Firefox to be able to post this question. Again, Firefox, or a different user, is OK. Do you have any ideas or suggestions?

    Read the article

  • Preferred method for allowing unprivileged UNIX/Linux users to view syslog information

    - by Joshua Hoblitt
    I have some non-privileged "role accounts" that need the ability to view [some of] the local syslogs (e.g. /var/log/messages) for debugging purposes. This is explicitly local log data, not remote syslog, Logstash, etc. Obviously, there are several ways to address this. What I'd like to know is whether there is a fairly "standardized" way to solve it. Typically I solve this problem with sudo, but either POSIX groups or ACLs is attractive, as it's fewer characters for the users to type and it keeps the entries out of the sudo log. However, I don't believe I've ever seen that done before. What is your experience? How do sites with a large install base address this?
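
    As a sketch of the group and ACL approaches mentioned above (the group name and user name are made up; on Debian/Ubuntu the existing adm group already covers much of /var/log):

        # group-based: make the log readable by a dedicated group
        groupadd logview
        usermod -aG logview alice
        chgrp logview /var/log/messages
        chmod g+r /var/log/messages
        # ACL-based: no ownership change needed
        setfacl -m g:logview:r /var/log/messages
        # note: logrotate recreates the file, so the same group/ACL must be
        # applied in the rotation config or access disappears at rotation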

    Read the article

  • Graphing/charting of CPU utilisation

    - by Peter
    So Nagios can be good at graphing particular resource utilisation or other metrics, but I'm looking for a tool that can output a chart or other graphical representation of how much CPU time/CPU utilisation all services on a server are currently consuming. I think New Relic could probably achieve this to an extent, but I was wondering if there is a popular open source app used for this. In case I am explaining this badly, my actual problem is that I have a shared server with suexec enabled (i.e. httpd CGI running under multiple user accounts). I'd like to know which users are using the most CPU during periods of the day.
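
    Not a graphing tool, but a quick way to see the same information on demand: a one-liner that sums the current CPU percentage per user, which could also be run from cron and fed into whatever grapher ends up being chosen:

        # snapshot of CPU% summed per user, busiest first (not a time series)
        ps -eo user:20,pcpu --no-headers | awk '{cpu[$1] += $2} END {for (u in cpu) printf "%-20s %6.1f\n", u, cpu[u]}' | sort -k2 -rn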

    Read the article

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by Proposition Joe
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this makes a difference in some way (as some of the discussion in this post about a "SuperSocketNetLib\Lpc" registry key seems to suggest). I have tried restarting the SQL Server service, by the way, and also had someone log onto the machine with an admin account to restart the SQL Server service. Still no luck.
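
    One way to narrow this down is to force each protocol explicitly by prefixing the server name, either in the SSMS "Server name" box or with sqlcmd; the SQLEXPRESS instance name below is an assumption, so substitute the real one:

        rem shared memory only
        sqlcmd -E -S lpc:.\SQLEXPRESS
        rem TCP/IP only
        sqlcmd -E -S tcp:.\SQLEXPRESS
        rem named pipes only
        sqlcmd -E -S np:.\SQLEXPRESS

    Whichever prefix fails points at the protocol that is misbehaving, independently of the client's protocol-priority order.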

    Read the article

  • Postfix appears to ignore domain's MX records

    - by DisgruntledGoat
    On my dedicated server, I have Postfix installed for sending email from the websites. One of my clients hosts their email with a third party, so we have MX records set up on the domain. However, when any email is sent from the server via Postfix, they do not receive it. I think that since the domain points at the server itself, Postfix tries to deliver the mail to itself, but there is nothing on the server to handle email for that domain. (There are mail accounts for other domains which are working fine.) How do I get Postfix to use the domain's MX records to send the email? The server is Ubuntu 8.10 with a standard LAMP stack. I have Webmin installed, and a control panel called "Matrix" provided by the host.
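
    A common cause of exactly this symptom is the domain being listed as a local (or virtual) destination, in which case Postfix never consults MX records for it. A hedged sketch of the check and fix, with client-example.com standing in for the real domain:

        # is the domain listed as locally handled?
        postconf mydestination virtual_alias_domains virtual_mailbox_domains
        # if it appears there, remove it, e.g. reset mydestination without it:
        postconf -e 'mydestination = $myhostname, localhost.$mydomain, localhost'
        postfix reload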

    Read the article

  • How can I get the current OU with a PowerShell login script?

    - by Frans
    I am setting up a Terminal Server 2008 which will be used by different client organisations, each with multiple individual user accounts. I would like each client organisation to have a drive mapped under \\server\clients\. Their OU name is also their client name, so I would like to be able to find the current user's OU and then use it in the mapping command. The OUs are hierarchical, so it is the bottom-most OU name I need. For example, a user in the OU Dedicated Clients\AjaxCorp should get a drive mapped to \\server1\shares\AjaxCorp. Any suggestions on how I can get the OU? I am sure it must be easy, I just haven't figured it out... I did find information about how to do this with VBScript, but as it is a whole new environment I thought it would be nice to use PowerShell instead.
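
    A rough PowerShell sketch of one way to do this: look up the logged-on user's distinguishedName and take the first OU component. The drive letter and the comma split are assumptions (splitting on commas breaks on DNs with escaped commas), and the share path is taken from the example above:

        # find the current user's distinguishedName via the default domain
        $searcher = [ADSISearcher]"(&(objectCategory=person)(sAMAccountName=$env:USERNAME))"
        $dn = $searcher.FindOne().Properties['distinguishedname'][0]
        # e.g. CN=Jane Doe,OU=AjaxCorp,OU=Dedicated Clients,DC=corp,DC=local
        # the first OU= component is the bottom-most OU, i.e. the client name
        $ou = ($dn -split ',' | Where-Object { $_ -like 'OU=*' } | Select-Object -First 1) -replace '^OU=', ''
        # map the drive (net use also works on PowerShell v1/v2 on Server 2008)
        net use H: "\\server1\shares\$ou" /persistent:no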

    Read the article

  • Outlook 2010 : "An error occurred when setting schedule permissions"

    - by francisswest
    Computer: Windows 7 x64 Enterprise. Office 2010 with Outlook 2010. An Exchange account and a couple of IMAP accounts connected to the Outlook profile.
    Error: when attempting to share the calendar with anyone (selected from the address book) by clicking "Calendar Permissions", you can add or remove people without issue, except that no matter what - even if nothing is done, just opening the sharing window and closing it - you get an error message that says "An error occurred when setting schedule permissions".
    I have verified that the issue doesn't actually affect the sharing or the setting of permissions in general. The error, however, is disconcerting to my user, and I would like to alleviate the problem if possible. Thanks!

    Read the article

  • Email server - Disk quota sizes - suggestions?

    - by Ian H
    Working out a new server for an agency of 200 employees, with approximately 240 email accounts. Internally I'm arguing with myself over the amount of drive space to allocate to each user for the disk quota, and I'm just looking for suggestions. Once I have a quota size decided, it will define the storage solution. I've considered everything from 4 GB per account (which I feel is being generous) down to 500 MB (which is rather restrictive in this day and age). Thing is, 4 GB per account works out to just under 1 TB (240 x 4 GB = 960 GB) of allocated storage for email alone. Does anyone follow a rule of thumb or have thoughts on this? Thanks in advance.

    Read the article

  • Thunderbird 3.0.2 stops automatically checking for email

    - by Chris
    Since upgrading to Thunderbird 3.0.2 on Mac OS X 10.6, Thunderbird will intermittently stop checking for messages automatically. Frequently I am presented with the error "This folder is being processed" when manually checking for mail. Restarting Thunderbird fixes the problem, but I have noticed hours can go by where auto-checking isn't working, and restarting shows there were dozens of messages waiting to be delivered in that downtime. Occasionally I will see "indexing 1 of 2 messages - 0% complete" in the status bar, and it will sit there for a long time.
    Actions I've taken to fix it:
    - Deleted all .msf files in all accounts
    - Removed the News & Blogs account
    - Staggered the "automatically check for mail" interval for each account
    Worth noting: after waking up the MacBook, Thunderbird usually needs to be restarted to resume auto-checking for mail. Has anyone experienced such trouble?

    Read the article

  • How to migrate Lotus Notes Mail in database to Exchange public folder?

    - by elsni
    I need to migrate a Lotus Notes mail database to Exchange. In the past, users received mail in their Notes mail accounts and sorted it manually by drag-and-drop into a specific folder structure in a separate Notes mail database. This should be mapped to Exchange. I considered using Outlook and a SharePoint library mapped into Outlook, but Outlook does not support drag-and-drop from the mail account to a standard document library, and the discussion libraries do not support folders. So I think the easiest way is to use an Exchange public folder instead of a SharePoint library, which should work as it did in Notes (correct me if I'm wrong). But how do I migrate the old Notes database to the public folder, including all subfolders? Thank you!

    Read the article

  • Anyone know an IMAP proxy, single client to multiple servers?

    - by dtrosset
    I am looking for an IMAP proxy that would let me use it as a single server from my mail reader, while behind the scenes it acts as a client to the IMAP servers of my 3 email accounts, transparently. In the end, I want my mail client to show a single INBOX with all the mail from the 3 servers' INBOXes.

                                               +-- IMAP --> MailServer0
        MailClient -- IMAP --> PROXY --+-- IMAP --> MailServer1
                                               +-- IMAP --> MailServer2

    Does anyone know of such a piece of software?

    Read the article

  • Invalid user names when creating a LDAP account

    - by h1d
    I'm trying to set up a system where a visitor can enter any user name in a form to create a new user, and in the end the user gets created in an LDAP directory. I'm planning for it to be mapped to a UNIX account as well (on Ubuntu Linux) by making the system look up accounts in LDAP. Doing so is fine, but I feel that many user names should be avoided, the most obvious being 'root' and all the other user names taken by daemons, etc. How do you tackle this problem? Do you make up a list of disallowed user names by checking /etc/passwd? I was thinking that if, internally, the user names were prefixed with 'ldap_' or something, it would avoid any naming conflicts, but that seems awkward when the LDAP entry name is 'joe' while the system account looks like 'ldap_joe'. I'm not even sure how that could be achieved.
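
    A minimal sketch of the /etc/passwd-style check, using getent so that both local accounts and anything already mapped in from LDAP via NSS are consulted; the extra reserved-name list is purely illustrative:

        #!/bin/sh
        name="$1"
        # reject anything that already resolves to an account (local files or LDAP)
        if getent passwd "$name" >/dev/null; then
            echo "user name '$name' is already taken" >&2
            exit 1
        fi
        # reject reserved names even if no such account exists yet
        case "$name" in
            root|daemon|bin|sys|admin|administrator|nobody)
                echo "user name '$name' is reserved" >&2
                exit 1 ;;
        esac
        echo "user name '$name' is available"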

    Read the article

  • What is the risk of introducing non-standard-image machines to a corporate environment?

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • SQL Server uninstallation issue

    - by angel
    I'm unable to remove SQL Server 2008 SP1 completely from my system. I'm using Windows 7 Ultimate. Every time I try uninstalling it I get the following error. How can I remove it? Here is the log:

        Overall summary:
          Final result:                  Failed: see details below
          Exit code (Decimal):           -2068643839
          Exit facility code:            1203
          Exit error code:               1
          Exit message:                  Failed: see details below
          Start time:                    2013-06-24 21:10:38
          End time:                      2013-06-24 21:21:17
          Requested action:              Uninstall
          Log with failure:              C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\sql_rs_Cpu64_1.log
          Exception help link:           http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22

        Machine Properties:
          Machine name:                  ABHI-PC
          Machine processor count:       4
          OS version:                    Windows Vista
          OS service pack:               Service Pack 1
          OS region:                     United States
          OS language:                   English (United States)
          OS architecture:               x64
          Process architecture:          64 Bit
          OS clustered:                  No

        Product features discovered:
          Product          Instance     Instance ID         Feature                   Language  Edition             Version       Clustered
          Sql Server 2008  MSSQLSERVER  MSRS10.MSSQLSERVER  Reporting Services        1033      Enterprise Edition  10.0.1600.22  No
          Sql Server 2008                                   Management Tools - Basic                                10.0.1600.22  No

        Package properties:
          Description:                   SQL Server Database Services 2008
          SQLProductFamilyCode:          {628F8F38-600E-493D-9946-F4178F20A8A9}
          ProductName:                   SQL2008
          Type:                          RTM
          Version:                       10
          SPLevel:                       0
          Installation edition:          ENTERPRISE

        User Input Settings:
          ACTION:                        Uninstall
          CONFIGURATIONFILE:             C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini
          FEATURES:                      RS,SSMS,SNAC_SDK,CE_RUNTIME,CE_TOOLS,SNAC
          HELP:                          False
          INDICATEPROGRESS:              False
          INSTANCEID:
          INSTANCENAME:                  MSSQLSERVER
          MEDIASOURCE:
          QUIET:                         False
          QUIETSIMPLE:                   False
          X86:                           False

        Configuration file:              C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini

        Detailed results:
          Feature:                       SQL Client Connectivity
          Status:                        Skipped
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       SQL Client Connectivity SDK
          Status:                        Skipped
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       Reporting Services
          Status:                        Failed: see logs for details
          MSI status:                    Passed
          Configuration status:          Failed: see details below
          Configuration error code:      0xFFD65603
          Configuration error description: Input string was not in a correct format.
          Configuration log:             C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\Detail.txt

          Feature:                       SQL Compact Edition Tools
          Status:                        Passed
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       SQL Compact Edition Runtime
          Status:                        Skipped
          MSI status:                    Passed
          Configuration status:          Passed

          Feature:                       Management Tools - Basic
          Status:                        Failed: see logs for details
          MSI status:                    Passed
          Configuration status:          Passed

        Rules with failures:
          Global rules:
          There are no scenario-specific rules.

        Rules report file:               C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\SystemConfigurationCheck_Report.htm

    Read the article

  • exim sending mail to wrong mail server

    - by Chris Bull
    I have recently taken over management of a server running CentOS and WHM. It has several websites and domains running under reseller accounts. Email has always been on a server at our actual office. I have recently moved our email from that local email server to Google Apps for Business (our local email server was pretty archaic). However, when any email is sent from the web server (from WordPress, etc.) to our domain, it is still routed to the IP of our old email server. I am not very familiar with WHM or Exim, so I don't really know how to address this. Email to other domains works fine and Google Apps is working perfectly. Any help is much appreciated.
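
    On cPanel/WHM servers, the usual cause of this is that the domain is still listed as locally delivered, so Exim never looks up its MX records. A hedged sketch of the check and fix, with example.com as a placeholder; the same change can normally be made in WHM via the domain's email routing setting (remote mail exchanger):

        # is the domain treated as local?
        grep -x 'example.com' /etc/localdomains
        # if so, move it to the remote list so Exim follows the MX records
        sed -i '/^example\.com$/d' /etc/localdomains
        echo 'example.com' >> /etc/remotedomains
        /scripts/restartsrv_exim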

    Read the article

  • Why does my FTP(E)S server fail about half of the time?

    - by user1092608
    We are having a discussion at work regarding our FTP server, which runs vsftpd. Initially, we opted to serve FTPES instead of SFTP because this seemed the most flexible and straightforward way for our server to offer secure file transfers. Since then, the FTP server has become a source of issues for our end users. Half of the time, users complain about FTP connections not working. I must say, I tested our FTP through different infrastructures (= in the field, at random times, at random places) and indeed, behind some configurations (= no idea how they are configured, because of the 'field' testing), I receive errors. One of them is:

        Error: Failed to retrieve directory listing (FileZilla)

    Furthermore, behind my basic home configuration, everything seems to run fine. I (think I) did all the basic configuration checks (passive mode? firewall for all ports? ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some started suggesting that the FTPS protocol could be the source of the issues ("No, I only knew SFTP so far", "FTPS is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the web and asking questions on Server Fault, everyone seems to deny this. So, as I would like to avoid reconfiguring (this involves messing around in our SSH service, our virtual user setup and the FTP service), I would need some advice on:
    1) What could potentially be the general cause?
    2) Do you have some general tips?
    3) Would you mind having a look at my configuration file?

        ----- General Settings -----
        write_enable=YES
        dirmessage_enable=YES
        nopriv_user=ftpsecure
        ftpd_banner="Welcome to XXXX FTP!"
        hide_ids=YES
        hide_file=.*
        max_per_ip=10
        max_clients=10
        local_enable=YES
        local_umask=022
        chroot_local_user=YES
        secure_chroot_dir=/usr/share/empty
        userlist_enable=NO
        userlist_deny=YES
        userlist_file=/etc/vsftp_deny_users
        guest_enable=YES
        guest_username=ftpvirtual
        virtual_use_local_privs=YES
        user_sub_token=$USER
        local_root=/srv/ftp/ftpvirtual/$USER
        anonymous_enable=NO
        syslog_enable=NO
        xferlog_enable=YES
        xferlog_file=/var/log/vsftpd_xfer.log
        connect_from_port_20=YES
        pam_service_name=vsftpd
        listen=YES
        listen_port=21
        pasv_enable=YES
        pasv_min_port=30000
        pasv_max_port=30030
        pasv_address=foo
        ssl_enable=YES
        rsa_cert_file=/etc/vsftpd.pem
        rsa_private_key_file=/etc/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES
        ssl_ciphers=HIGH
        anon_mkdir_write_enable=NO
        anon_root=/srv/ftp
        anon_upload_enable=NO
        idle_session_timeout=900
        log_ftp_protocol=NO
        dsa_cert_file=/etc/vsftpd.pem

    Thanks
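
    One general tip that fits these symptoms: with FTPES, firewalls and NAT devices can no longer inspect the PASV reply on the (now encrypted) control channel, so the passive data ports must be reachable directly, and "Failed to retrieve directory listing" is the classic result when they are not. A minimal sketch for the range configured above, assuming plain iptables on the server (any firewall between client and server needs the same range allowed):

        # allow the FTP control port and the vsftpd passive data range
        iptables -A INPUT -p tcp --dport 21 -j ACCEPT
        iptables -A INPUT -p tcp --dport 30000:30030 -j ACCEPT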

    Read the article

  • WHM Backup recommended?

    - by user77284
    I have a VPS (CentOS) with WHM, about 25 GB in size, with about 20 accounts on it. I am looking to back it up effectively.
    My thoughts:
    - Back it up with WHM Backup locally.
    - Use rsync to mirror the backup to another server.
    My questions:
    - Is WHM Backup a good solution?
    - How can I keep several backups while using a minimal amount of space?
    - Is there a different solution I should consider?
    I am not an expert, so I want something simple that works with minimal maintenance. Thanks.
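
    A hedged sketch of the rsync half, using --link-dest so that files unchanged since the previous snapshot are stored as hard links, which is one way to keep several generations cheaply; host names and paths are placeholders:

        #!/bin/sh
        # push the local WHM backup directory to the mirror as a dated snapshot
        today=$(date +%F)
        rsync -a --delete \
              --link-dest=/srv/whm-backups/latest \
              /backup/ backupuser@mirror.example.com:/srv/whm-backups/$today/
        # repoint "latest" at the newest snapshot on the mirror
        ssh backupuser@mirror.example.com \
            "ln -sfn /srv/whm-backups/$today /srv/whm-backups/latest"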

    Read the article

  • AD password not synchronising properly

    - by Kaczmar
    I have 600+ users in AD, but only one causes me trouble. The problem: I can reset his password from AD, and he can then log in to his machine. After that he changes his password from Windows 7, and the change proceeds without errors. He then logs out or locks the workstation, but cannot get back in using either the old or the new password. So I have to reset it again, and he can only use the password I provide for him. All our machines are in the same physical location on the same subnet. The functional level is 2003. I'm totally out of ideas. I could create a new user account for him, but I'd like to know what causes this. I can only suspect some sort of synchronisation problem, but other accounts work fine, and I don't know how to dig deeper into this. Thanks, Piotr

    Read the article
