Search Results

Search found 19625 results on 785 pages for 'local groups'.

  • Table 'mysql.host' doesn't exist

    - by eriktm
    100913 10:21:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
    /usr/local/mysql/libexec/mysqld: Table 'mysql.plugin' doesn't exist
    100913 10:21:29 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
    100913 10:21:29 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist
    100913 10:21:29 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    This is the output in the log file when I try to start mysqld with the mysqld_safe command. I tried to run mysql_upgrade to correct the first error, but that command seems to require the server to be running, which is my original problem. Next, the log says that the table mysql.host does not exist, and I was unable to figure out what causes this.
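
    A hedged way out of the chicken-and-egg (the install prefix is an assumption based on the /usr/local/mysql path in the log): mysql_install_db recreates the mysql.* system tables, including mysql.host and mysql.plugin, without needing a running server.

        # Recreate the system tables in the data directory, then start and upgrade.
        # Run as root; adjust the paths to your build.
        /usr/local/mysql/bin/mysql_install_db --user=mysql --datadir=/var/lib/mysql
        /usr/local/mysql/bin/mysqld_safe --user=mysql &
        mysql_upgrade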

  • Wired card not connecting after trying to connect

    - by Mike
    I have an Ubuntu 12.10 PC. When I plug in the network cable it starts connecting, and after a minute it says it can't connect. I have tried different cables, but nothing works. WLAN works. I think it's the network driver, but I don't know how to install or update it. Here's the ifconfig output (if it helps):

    eth0      Link encap:Ethernet  HWaddr 00:01:6c:39:2a:8d
              inet6 addr: fe80::201:6cff:fe39:2a8d/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2011 errors:0 dropped:362 overruns:0 frame:0
              TX packets:586 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:175452 (175.4 KB)  TX bytes:147211 (147.2 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:9779 errors:0 dropped:0 overruns:0 frame:0
              TX packets:9779 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:8460080 (8.4 MB)  TX bytes:8460080 (8.4 MB)

    wlan0     Link encap:Ethernet  HWaddr 08:10:74:35:99:9d
              inet6 addr: fe80::a10:74ff:fe35:999d/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:1790 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3250 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:401664 (401.6 KB)  TX bytes:2898773 (2.8 MB)
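
    A hedged starting point for the driver question (interface name taken from the output above):

        # Which Ethernet chipset is present, and which kernel driver claims it?
        lspci -k | grep -iA3 ethernet
        # Watch a manual DHCP attempt for errors
        sudo dhclient -v eth0
        # Link/negotiation messages from the kernel
        dmesg | grep -i eth0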

  • How to reference a git repository?

    - by Anonymous
    What should the actual path to a git repository 'file' be? It's common practice when cloning from GitHub to do something like:

    git clone https://user@github.com/repo.git

    and I'm used to that. If I init a repo on my local machine using git init, what is the 'git file' for me to reference? I'm trying to set up Capifony with a Symfony2 project and need to set the repository path. Specifying the folder of the repository isn't working. Is there a .git file for each repository I should be referencing?
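
    There is no single 'git file': a repository is referenced by the directory that contains it (or by a bare repository, conventionally named something.git). A short sketch, with made-up paths:

        # All of these are valid repository addresses for a local repo:
        git clone /home/me/projects/myapp            # working copy (contains .git/)
        git clone file:///home/me/projects/myapp     # same path via the file:// scheme
        # Deployment tools usually expect a bare repository:
        git clone --bare /home/me/projects/myapp /srv/git/myapp.git

    For Capifony, pointing the repository setting at an address like file:///srv/git/myapp.git (or an SSH URL reachable from the deploy target) should work where the plain working-directory path did not; the exact setting name is best checked against the Capifony docs.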

  • How to route traffic from a VM (Parallels) over an OpenVPN connection on the host (OS X)

    - by withakay
    Scenario: I have a Mac running Lion that is connected to an OpenVPN server. I have a Windows XP VM (running on Parallels, but I don't think this is important). I want to route traffic from the XP VM via the host Mac's OpenVPN connection so that I can log on to a domain. The remote network is 172.16.0.0/23 (255.255.254.0). OpenVPN is configured to supply addresses in the 10.100.101.0/24 range and sets up the routing to 172.16.0.0 using the gateway 10.100.101.1/32. My local network is 192.168.1.0/24. NOTE: I do not want to install OpenVPN inside the XP virtual machine, as I would have to use a passwordless key for OpenVPN to connect before logon. Anyone got any ideas?
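
    A rough sketch of the NAT approach on the Mac, hedged heavily: Lion still ships ipfw/natd, and tun0 is an assumption (check ifconfig for the interface OpenVPN actually created). Parallels' shared networking may already do part of this.

        # Let the Mac forward packets and NAT the VM's traffic out of the tunnel
        sudo sysctl -w net.inet.ip.forwarding=1
        sudo natd -interface tun0
        sudo ipfw add divert natd ip from any to any via tun0

    Then give the XP VM a route for 172.16.0.0/23 (or its default gateway) pointing at the Mac's address on the virtual network.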

  • Where can I get a splitter to connect a device with a single 3.5 mm plug into the audio input/output jacks on my laptop?

    - by XinJeisan
    I recently bought the "Hype Retro Handset for Mobile Phone" -- it's just a device that looks like a handset, for use when chatting on a computer or mobile phone, and it plugs into the phone/computer with a single 3.5 mm plug. I was hoping to use it on my Windows 7 Toshiba laptop. I can hear audio fine through the handset, but what I'm saying is not being picked up by it. The box says "some phones and computers may need additional adapters," so I'm hoping it's possible to get a splitter or something to make this work properly. I did email the parent company (http://dglusa.com/) but I haven't heard back, and, looking over their website, I doubt I will. I also went to the local RadioShack, and the guy said I needed a splitter but didn't know where to get one. I can find the kind of splitter I think I need online, but I'm unsure whether they are just for output or can also do input/output.

  • Drupal + GMap macro, map markers not displayed

    - by mingos
    Hi. I've run into a strange problem in the GMap module for Drupal. When I display a map inside a node using a GMap macro, everything is displayed correctly (according to what I specify in the macro or leave at default), save for map markers. I'm trying to specify a map marker, but it refuses to be displayed. My macro is the following: [gmap zoom=17 |center=53.77420697757659,20.474138259887695 |markers=big blue::53.77420697757659,20.474138259887695] I was unable to find any help on Drupal forums, both the official one and one local to my country. For completeness' sake, I do not wish to use a GMap view, just add a macro in a regular node. Hope you can help me find a solution. Thanks in advance for your replies...

  • Overridden aspnet.config does not apply for legacyImpersonationPolicy

    - by Grumbler85
    I tried to override the <legacyImpersonationPolicy> element so that a single application will enable this policy (which is necessary, since that application breaks if it is disabled). My Framework64/aspnet.config states:

    <configuration>
      <runtime>
        <legacyUnhandledExceptionPolicy enabled="false" />
        <legacyImpersonationPolicy enabled="false" />
        <alwaysFlowImpersonationPolicy enabled="false" />
        <SymbolReadingPolicy enabled="1" />
        <shadowCopyVerifyByTimestamp enabled="true"/>
      </runtime>
      <startup useLegacyV2RuntimeActivationPolicy="true" />
    </configuration>

    and a local aspnet.config file has this change:

    <legacyImpersonationPolicy enabled="false" />

    Procmon tells me the file is read by w3wp.exe, but the setting does not apply. Can anyone point out how to correctly override it? The server has been restarted in the meantime, but still no change.

  • I want to deploy my PHP-based web application with Apache Ant. How can I do that?

    - by codeperl
    I googled it, but unfortunately did not find a specific answer. I am a fan of the command line and typing, so now I want to deploy my PHP-based web application with Apache Ant. How can I do that? I also want to practice the deployment on my local PC first. Is that possible? Phing exists, and from what I have heard it works on top of Apache Ant for PHP application deployment, but I want to face the hassle and write the build myself.
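
    Since PHP has no compile step, an Ant deployment is mostly "lint, then copy." A minimal build.xml sketch; every name and path in it is made up:

        <project name="myapp" default="deploy" basedir=".">
          <property name="deploy.dir" value="/var/www/myapp"/>

          <!-- Fail the build if any PHP file has a syntax error -->
          <target name="lint">
            <apply executable="php" failonerror="true">
              <arg value="-l"/>
              <fileset dir="src" includes="**/*.php"/>
            </apply>
          </target>

          <!-- Copy the checked sources into the web root -->
          <target name="deploy" depends="lint">
            <copy todir="${deploy.dir}">
              <fileset dir="src"/>
            </copy>
          </target>
        </project>

    Running ant deploy with a deploy.dir on your own machine covers the "practice locally" part; swapping the copy task for scp or rsync via <exec> covers a remote target.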

  • Strange RDP / Remote Desktop problem

    - by John Landheer
    I'll try to be as specific as I can:

    - Server is running SBS 2008 R2 (with all updates)
    - Server is connected to the internet
    - Server has two NICs, one of which is disabled
    - Server is running the RDP service (accessible directly from the internet; I know, not as secure as it should be)
    - Computers A and B are on the same local net, and both run Windows 7
    - Users X and Y are both admins on the server
    - Computer A can connect to the server with mstsc as user X
    - Computer A can connect to the server with mstsc as user Y
    - Computer B can connect to the server with mstsc as user X
    - Computer B CANNOT connect to the server with mstsc as user Y!

    The last point is the problem: I get an authentication error. This used to work flawlessly for the last year. The server and desktops have been rebooted. I find it very strange...
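
    One hedged thing to rule out on computer B: a stale saved credential for the server in user Y's profile (standard Windows commands; the server name is a placeholder).

        rem Run on computer B while logged in as the user that fails
        cmdkey /list
        rem If an entry for the server appears, remove it and retry mstsc
        cmdkey /delete:TERMSRV/servername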

  • Block Domain User login

    - by Param
    I have created a domain user (for example, Auser). I have integrated LDAP login with our firewall, and this user is used to log in to the firewall only. So I want to block all logins for this user except on the firewall. Is there any way to accomplish this? As far as I know, we can:

    - Right-click the domain user > Properties > Account tab > Log On To... (but here we have to specify computer names, and the firewall has no computer name, so I can't use this option)
    - Use Group Policy: Windows Settings > Security Settings > Local Policies > User Rights Assignment > Allow log on locally (but this has to be applied to a computer OU, so I can't use this option either)

    Any other options?

  • Windows 7: Use VirtualStore as user for own benefit?

    - by mfn
    I've researched a bit and came to the understanding that VirtualStore is part of the UAC feature introduced in Vista/W7: it is the file-system half of transparent data redirection, and it redirects write access to folders like Program Files into C:\Users\<username>\AppData\Local\VirtualStore\ for applications that don't respect the LUA principles. Now I'm interested in whether that kind of transparent redirection can also be used as a power for the user. Here's an example that comes to mind: I install some software to e.g. D:\Whatever\ThisAndThisApp\ and set things up so that, after the initial installation, any write access to this folder is transparently redirected to e.g. D:\MyOwnVirtualStore\Whatever\ThisAndThisApp\file_only_writable_here.txt. Is this thinking too far, or can I actually use the power of VirtualStore this way as a user on Windows 7? I'm using the Professional version, if that matters.

  • How to fix Windows newline characters on SFTP synchronization in Eclipse (PDT)

    - by superspace
    Hello, I have a problem with Windows newline characters being introduced into text files on Eclipse SFTP synchronization (via JCraft's SFTP plugin). I've set "New text file line delimiter" to Unix and have even sanitized the files with "fromdos", but every time I upload using the SFTP plugin, Windows newline characters can be seen in the remote file as "^M" characters (when viewed in vi). A point to note is that if I upload using an external SFTP client, everything is fine. Eclipse version: PDT (Helios). SFTP: JCraft SFTP plugin. Local environment: Ubuntu 10.04. Remote environments: FreeBSD 6.4, Debian 4.0. What am I missing? My co-workers would thank you for the solution :) Thanks in advance.
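
    Until the plugin behaves, a hedged way to verify and scrub the CRs on the remote side (paths are made up):

        # Count lines containing a CR; non-zero means CRLF line endings survived
        grep -c $'\r' /var/www/app/index.php
        # Strip them in place across the tree (dos2unix ~ fromdos)
        find /var/www/app -name '*.php' -exec dos2unix {} +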

  • Accessing C$ over LAN on Win2008R2 - cannot by hostname but can by IP and FQDN

    - by Idgoo
    Having an issue with one of our Win2k8 R2 file servers: trying to access C$ or the admin share gives us an error (see error details at the bottom), yet we are able to connect using the server's IP and FQDN.

    - can access \\172.16.x.x\c$ with domain credentials
    - can access \\server.domain.local\c$ with domain credentials
    - cannot access \\servername\c$ with the same domain credentials

    The server pings fine by hostname, IP, and FQDN, and the primary DNS suffix is correct. DNS, PTR, and WINS records are all correct for the server. I have checked that I am not trying to connect with cached credentials in the Windows vault; the server is also appending primary and connection-specific DNS suffixes to the hostname. Any ideas what might be causing this issue?

    Error details: c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions.
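
    If the short name resolves correctly but SMB still rejects it, one hedged candidate is strict name checking on the server, which refuses connections addressed to names it doesn't consider its own (back up the registry before trying this):

        rem Allow SMB connections addressed to aliases of the server
        reg add HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters /v DisableStrictNameChecking /t REG_DWORD /d 1
        rem Restart the Server service to apply
        net stop server /y && net start server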

  • Hosts file keeps changing with Apache on Red Hat Linux [on hold]

    - by jack f
    I have installed an Apache server with two clients, client_1 and client_2. Operations that we perform on client_1 are reflected on client_2. We have an etc/hosts file in our software install location which keeps changing for client_2 to client_1's IP address. If I correct the entries in the hosts file for client_2, within the next few minutes it changes automatically back to client_1 (if we start the client_1 service). Please explain the use of the hosts file, and where and when it can be changed by the Apache service. The hosts file in /etc/hosts is the same for both clients:

    # Do not remove the following line, or various programs
    # that require network functionality will fail.
    127.0.0.1   localhost.localdomain localhost

    # Local LAN
    190.0.0.1   client_1.Example.com client_1
    190.0.0.2   client_2.Example.com client_2

    # HR LAN
    10.1.74.2   client_1hr peer
    10.1.74.3   client_2hr

    # ESP LAN
    10.69.69.1  client_1esp
    10.69.69.2  client_2esp

    Any help will be appreciated. Thanks in advance, Jack F

  • Cannot copy anything onto WD Elements 1TB External USB HDD

    - by Aashish Vaghela
    I have a Western Digital 1023 Elements 1TB external USB HDD. Recently, an unusual problem has started: I cannot copy any file of any size onto the drive, even though it has more than 400 GB free (out of 931 GB actual size). I tried copying movies from a friend's laptop, which did not work. I also tried another desktop to copy some study-material e-books (PDFs), which also did not work. I get the same CRC error whenever I try to copy anything from a computer's hard drive onto the WD drive. Vice versa it works: I can copy any file from the USB HDD onto a local machine's HDD on any computer. It's like one-way traffic. The HDD is only a year old. What are my options? Any suggestions? Regards, Aashish.V
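
    A hedged first pass before suspecting the hardware (the drive letter E: is a placeholder):

        rem Scan the external drive's filesystem and surface for errors
        chkdsk E: /f /r

    If chkdsk comes back clean and the CRC errors persist, WD's own Data Lifeguard Diagnostics can test the drive itself; consistent write-only failures on a one-year-old drive are also a warranty case.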

  • Uralelektrostroy Improves Turnaround Times for Engineering and Construction Projects by Approximately 50% with Better Project Data Management

    - by Melissa Centurio Lopes
    LLC Uralelektrostroy was established in 1998 to meet the growing demand for reliable energy supply, which included the deployment and operation of a modern power grid system for Russia's booming economy and industrial sector. To rise to the challenge, the country required a company with a strong reputation and the ability to strategically operate energy production and distribution facilities. As a renowned energy expert, Uralelektrostroy successfully embarked on the mission, focusing on the design, construction, and operation of power grids, transmission lines, and generation facilities. Today, Uralelektrostroy leads the Russian utilities industry with operations across the country, particularly in the Ural, Western Siberia, and Moscow regions.

    Challenges:
    - Track work progress through all engineering project development stages with ease (from planning and start-up operations to onsite construction and quality assurance) to enhance visibility into complex projects, such as power grid and power-transmission-line construction
    - Implement and execute engineering projects faster (for example, designing and building power generation and distribution facilities) by better monitoring numerous local subcontractors
    - Improve alignment of project schedules with the requirements of project owners (the awarding federal and regional authorities) to avoid incurring fines for missed deadlines

    Solutions:
    - Used Oracle's Primavera P6 Enterprise Project Portfolio Management 8.1 to streamline communication with customers and subcontractors through better data management and harmonized reporting, reducing construction project implementation and turnaround times by approximately 50% on average
    - Enabled fast generation of work-in-progress reports that track project schedules, budgets, materials, and staffing, from approval and material procurement to construction and delivery
    - Reduced the number of construction sites by nearly 30% (from 35 to 25) by identifying unprofitable sites, streamlining operations across the company's construction site network and increasing profitability
    - Improved project visibility by enabling managers to efficiently track project status, ensuring on-time reporting and punctual project deliveries to federal customers to reduce delay penalties to zero

    "Oracle's Primavera P6 Enterprise Project Portfolio Management 8.1 drastically changed the way we run our business. We've reduced the number of redundant assets, streamlined project implementation and execution, and improved collaboration with our customers and contractors. Overall, the Oracle deployment helped to increase our profitability." - Roman Aleksandrovich Naumenko, Head of Information Technology, LLC Uralelektrostroy

    Read the complete customer snapshot here.

  • JiglibX addition to existing project questions

    - by SomeXnaChump
    I've got a very simple existing project that basically contains a lot of cubes. Now I want to add a physics system to it, and JiglibX seemed like the simplest one with some tutorials out there. My main problem is that the physics don't seem to be working as I imagined: I expected my tower of cubes to come crashing down, but they don't seem to do anything. I think the problem is that my cubes do not inherit DrawableGameComponent; they are managed by a world object that updates and renders them, so they are at no point put into the game's component list. I am not sure if this means that JiglibX cannot interact with them: in all the tutorials there are no explicit calls to add the Body objects to the physics system, so I can only presume that they use a static/singleton under the hood which automatically hooks everything in, or that they use the game's component list somehow. I also noticed that a lot of the tutorials use the following when stepping the physics system:

    float timeStep = (float)gameTime.ElapsedGameTime.Ticks / TimeSpan.TicksPerSecond;
    PhysicsSystem.CurrentPhysicsSystem.Integrate(timeStep);

    Would it not be better to keep a local instance of the created PhysicsSystem object and just call myPhysicsSystem.Integrate(timeStep)?

  • How to install IIS in Windows 7?

    - by prateeksaluja20
    Hello experts, I have tried to install IIS on my local machine running Windows 7. My goal is to change IIS's session timeout property, because I want to increase the session timeout period. So I went to Control Panel > All Control Panel Items > Programs and Features, then clicked "Turn Windows features on or off". After that I expanded Internet Information Services > Web Management Tools > IIS Management Console, ticked it, and pressed OK. It installed, but when I looked in inetpub\wwwroot there was nothing, and when I browse to http://localhost/ I get a blank page. I tried searching Google but still haven't found the answer, so please help me sort out this problem. Thanks.
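
    Ticking only "IIS Management Console" installs the admin tool but not the web server itself. A hedged fix is to enable the server role as well, either back in the same dialog (under Internet Information Services > World Wide Web Services) or from an elevated prompt; the DISM feature names below are the Windows 7 ones and are case-sensitive:

        dism /online /enable-feature /featurename:IIS-WebServerRole
        dism /online /enable-feature /featurename:IIS-WebServer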

  • Snow Leopard, PHP, and MySQL

    - by Peter
    I have just installed Snow Leopard, and now my PHP/MySQL program dies with a "Segmentation fault". I have been searching the web for a solution; I realize there are some issues with SL/PHP/MySQL, but I have not found anything that works yet. I downloaded the binary MySQL package mysql-5.1.42-osx10.6-x86_64 and updated the php.ini file as suggested in various posts. When I run PHP and connect to the MySQL server, the behavior is a bit random: in many cases it works fine to connect and read data. In my specific case the PHP program constructs a LOAD DATA LOCAL INFILE ... statement to load data from a text file, and it should run several such queries one after another in a loop. It works once but then halts with a "Segmentation fault". The program worked fine on Leopard, but not now. My versions are: OS 10.6.2, PHP 5.3.0, MySQL 5.1.42.
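
    A hedged check for the usual Snow Leopard suspects: the bundled PHP often looks for the MySQL socket in the wrong place, and a 32/64-bit mismatch between PHP and the MySQL client library can produce exactly this kind of intermittent segfault. The socket path below is the MySQL default; confirm it with mysqladmin variables | grep socket.

        ; php.ini: point every MySQL extension at the socket mysqld actually uses
        mysql.default_socket = /tmp/mysql.sock
        mysqli.default_socket = /tmp/mysql.sock
        pdo_mysql.default_socket = /tmp/mysql.sock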

  • GitHub team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure the whole fork-per-developer concept is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories; I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (the remote fork) and upstream (used to "sync" changes from the main repository). I'm not sure that's an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow

    If we don't use forks, we can probably get by fine with a central repo, creating a branch for each task we're working on and merging it into the development branch of the same repository. That means we won't be able to restrict the merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?
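
    For scale, a sketch of the shared-repository variant (branch names are made up):

        # One shared remote; one short-lived branch per task
        git checkout -b feature/signup develop
        git push -u origin feature/signup      # publish for review
        # after review, merge back and clean up
        git checkout develop
        git merge --no-ff feature/signup
        git push origin develop
        git branch -d feature/signup
        git push origin --delete feature/signup

    This keeps a single remote per developer; the trade-off, as noted above, is that merges into develop rely on team convention rather than repository permissions.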

  • Visual Studio SP1 fatal installation error

    - by user39593
    I have Visual Studio 2008 Professional installed and want to install SP1. When I try to install it, the following happens:

    MSI (s) (20:E4) [15:40:00:165]: Product: Microsoft Visual Studio 2008 Professional Edition - ENU - Update 'KB945140' could not be installed. Error code 1603. Additional information is available in the log file C:\Users\bjbell\AppData\Local\Temp\Microsoft Visual Studio 2008 SP1_20100609_151708728-Microsoft Visual Studio 2008 Professional Edition - ENU-MSP0.txt.

    My machine is running Windows 7 Enterprise 64-bit.

  • Why are my DNS Lookups so long (300+ms) when accessing my web site?

    - by Travis
    I'm running a Fedora 11 server with Apache 2. I'm trying to optimize so things are as fast as possible from the server side, and I'm noticing (via Firebug for Firefox) that upon loading the homepage of one of the sites on the web server, for every file it loads (HTML, CSS, JavaScript, GIF, PNG, JPG, etc.) it does a DNS lookup. All of the files are local to the server, so I'm surprised to see it do a DNS lookup at all. Also, each of these lookups is in the 150-450 ms range, which is way too high for my liking. I've tried adjusting /etc/resolv.conf to use Google's Public DNS servers, restarted the network service, and hit the page again, but the numbers didn't go down; I've since reverted to the default DNS servers as I didn't see any gain. Any ideas on what is causing it to (a) do the DNS lookup in the first place, and (b) take so long when doing the actual lookup? Thanks in advance.
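
    A hedged way to see where the latency lives (dig is in Fedora's bind-utils package; the hostname is a placeholder):

        # "Query time" is the resolver latency Firebug is reflecting
        dig www.example.com | grep "Query time"
        # Compare against a specific resolver
        dig www.example.com @8.8.8.8 | grep "Query time"

    If the configured resolver is slow for repeated queries, a local caching resolver such as dnsmasq typically cuts repeat lookups to near zero. Note also that Firebug measures lookups done by the browser's machine, not the server, so the numbers may say more about the client's resolver than about the Fedora box.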

  • Advice on off-site backup of Hyper-V Failover Cluster

    - by Paul McCowat
    We are currently setting up a Server 2008 R2 machine which will be off-site, reached over a leased line with VPN. At the main site are two Hyper-V hosts in a failover cluster with a PowerVault M3000i iSCSI SAN. We are using BackupAssist for local backups: each host backs up itself and its guests nightly, creating a 500 GB backup which is copied to a 2 TB rotated NAS drive. Files and SQL DBs are also backed up / log-shipped, etc.

    We're looking for the best way to back up the Hyper-V VMs and copy them off-site so that the OSes are at most a month old and the data a day old. The main backups are too large to transfer between backup runs, so the options discussed so far are:

    - Take rotating individual backups of the VMs each day and copy them over (day 1 the SQL VM, day 2 the Exchange VM, etc.); this would require more storage
    - Look into Hyper-V snapshots; however, I don't believe these are supported in clustering
    - Third-party replication tools

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the ZFS uberblock, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it dedups internally at the block level, it may reduce your redundant metadata to a single block on the non-volatile storage. When that block is corrupted, you essentially have three corrupted copies: three hit with one bullet. This is indeed an interesting problem. A device doing deduplication doesn't know whether a block is important metadata or just a data block, and this is the reason why I like deduplication as it's done in ZFS: it's an integrated part, so important parts don't get deduplicated away. A disk accessed through a block-level interface knows nothing about the importance of a block; a metadata block is no different to its inner mechanisms than a normal data block, because there is no way to tell it that this block is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you use a device with block-level deduplication. The catch is that most implementations require you to activate it explicitly, whereas certain devices do it by default or by design, and you don't know about it. However, I'm not perfectly sure about that: given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anyone, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device dedups internally, it may remove that redundancy before it hits the non-volatile storage. You've won nothing: you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy with the good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason to spend some extra thought before putting your zpool on a single LUN, especially when that LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when the deduplicating device holds the pool itself, and in the specifically mentioned case of SSDs that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and SSDs serve as L2ARC/sZIL. And there it simply doesn't matter. When you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you fall back to the last known-good transaction group. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in hybrid-storage-pool implementations is the already-mentioned rust. In conjunction with ZFS this is more interesting when using a storage array that can dedup and where you use its LUNs for your pool; but as mentioned before, on those devices activating dedup is a user decision, so it's less probable that you deduplicate your redundancies without knowing it. Other filesystems, lacking a capability similar to hybrid storage pools, are more haunted by SSDs using dedup-like mechanisms internally, because they really store the data on the SSD instead of using it just as an accelerating device.

    At the end, though, Robin is correct: it's yet another reason why protecting your data by dispersing redundant copies across several disks (by mirror or parity RAID) is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
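
    For readers who want to poke at the ditto-block behavior themselves, a minimal sketch (pool and dataset names are made up):

        # Ask ZFS to keep two copies of every block in a dataset (ditto blocks)
        zfs set copies=2 tank/important
        zfs get copies tank/important
        # Dump the uberblocks, whose rootblock pointers the article discusses
        zdb -uu tank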

  • Postfix not delivering mails

    - by Sotocan
    I have problems with a recently configured Postfix MTA. When Postfix starts, the following warning appears:

    postfix/qmgr[5078]: warning: connect to transport private/filter: No such file or directory

    I have amavisd-new as a content filter, but the warning appears even if I comment out the relevant line. As a result (I think) of the above, I get errors like the one below for every virtual domain I have:

    postfix/error[5080]: 254851834107: to=, relay=none, delay=13082, delays=13082/0.01/0/0.01, dsn=4.3.0, status=deferred (mail transport unavailable)

    The good news for me is that somehow I managed to fix that (don't ask me how!). The problem is that I now have 50 or so mails in the queue that were affected by the aforementioned problem. If I run postqueue -f, I get the same style of error as before (mail transport unavailable), yet new mails are delivered to their final destinations properly. Any suggestions? Kind regards. P.S. Local mail delivery from/to Unix and virtual users was OK right from the beginning!
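
    That warning is Postfix saying that main.cf references a transport named "filter" which has no matching service entry in master.cf. A hedged sketch of such an entry (the service name comes from the warning; the script path is an assumption):

        # /etc/postfix/master.cf
        filter    unix  -       n       n       -       10      pipe
          flags=Rq user=filter null_sender=
          argv=/usr/local/bin/content-filter.sh -f ${sender} -- ${recipient}

    After adding or fixing the entry, postfix reload followed by postqueue -f should let the deferred mails drain from the queue.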
