Search Results

Search found 3171 results on 127 pages for 'manual manuel'.

Page 90/127 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • postfix smtpd rejecting mail from outside network match_list_match: no match

    - by Loopo
    My postfix (V: 2.5.5-1.1) running on an Ubuntu server (9.04) started rejecting mail arriving from outside the network about two weeks ago. Doing a "manual" session via telnet shows that the connection is always closed right after the MAIL FROM: [email protected] line is entered, with the message "Connection closed by foreign host." Doing the same from another client inside the LAN works fine. In the log files I get the line:
        lost connection after MAIL from xxxxx.tld[xxx.xxx.xxx.xxx]
    This comes after some lines like:
        match_hostaddr: XXX.XXX.XXX.XXX ~? [::1]/128
        match_hostname: XXXX.tld ~? 192.168.1.0/24
        ...
        match_list_match: xxx.xxx.xxx.xxx: no match
    which seem to suggest some kind of filter that checks for allowed addresses. I have been unable to locate where this filter lives, or how to turn it off. I'm not even sure that's what's causing my problem; connections from inside the LAN don't get disconnected even though they also show a "match_list_match: ... no match" line.
    I didn't change any configuration files recently. Below is my main.cf as it currently stands. I don't really know what all the parameters do and how they interact; I just set it up initially and it worked fine until recently.
        smtpd_banner = $myhostname ESMTP $mail_name (GNU)
        biff = no
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/server.crt
        smtpd_tls_key_file=/etc/ssl/private/server.key
        #smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_sasl_auth_enable = no
        smtp_use_tls=no
        smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth
        myhostname = XXXXXXX.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = XXXX.XXXX.com, XXXX.com, localhost.XXXXX.com, localhost
        relayhost = XXX.XXX.XXX.XXX
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        smtpd_sasl_local_domain =
        #smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_authenticated_header = yes
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_
    When checking the process list, postfix/smtpd runs as:
        smtpd -n smtp -t inet -u -c -o stress -v -v
    Any clues?
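    For reference, a manual SMTP test session of the kind described above looks roughly like this; the hostnames and addresses here are placeholders, not the poster's. On a healthy server each command gets a 2xx/3xx reply rather than a dropped connection after MAIL FROM:
        telnet mail.example.com 25
        EHLO client.example.com
        MAIL FROM:<sender@example.com>
        RCPT TO:<recipient@example.com>
        DATA
        Subject: test

        test body
        .
        QUIT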

    Read the article

  • Mac OS Leopard: SyncServer process constantly using 100% CPU

    - by macca1
    I am running Leopard, upgraded from Tiger. Every once in a while the SyncServer process starts up and eats all the CPU: the fans go to full blast and the laptop slows to a crawl. I have to force quit the process from Activity Monitor to get it under control. It disappears for a while, but eventually starts again. I do have an iPhone that I sync, so I'm wondering if SyncServer might be an Apple process checking for my phone being plugged in.
    Edit: Tried iSync and the manual resetsync as suggested, but got this output:
        Vince-2:~ vince$ /System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl full
        2010-03-12 08:03:50.230 perl[176:10b] SyncServer is unavailable: exception when connecting: connection timeout: did not receive reply
        PerlObjCBridge: NSException raised while sending reallyResetSyncData to NSObject object
            name: "ISyncServerUnavailableException"
            reason: "Can't connect to the sync server: NSPortTimeoutException: connection timeout: did not receive reply ((null))"
            userInfo: ""
            location: "/System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl line 16"
        ** PerlObjCBridge: dying due to NSException
        Vince-2:~ vince$
    And during that, SyncServer started spinning up to 95-100% CPU just like it always does.

    Read the article

  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Chrome is my favorite web browser, and one of its most powerful features is synchronizing its data into a Google account. Over the last years I have gained a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Synchronizing between my work and home PCs has freed me from manual sync. But for the recent months I have experienced strange glitches. I guess they may be caused by the large number of stored bookmarks (roughly 3K, but please don't ask why :)) and extensions (about 130 installed, but only 10-15 used daily). I can mention the following strange things:
        - Recently added bookmarks sometimes are not synchronized (e.g. I add a bookmark at work, but it's not guaranteed I'll see it at home that evening), despite about:sync indicating a good sync process.
        - Sometimes recently modified bookmarks appear in either (let's call them) "last at home" or "last at work" bookmark folders.
        - Sometimes bookmarks are not synced at all. (Moreover, Chromium versions may even crash.)
        - Extensions are not synced at all now. Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier do not show indicators of incoming e-mails and news.
    I'm not sure, but it looks like I might be exceeding Chrome's internal sync limits. Is that right? Are there any workarounds, or should I do a massive bookmarks/extensions cleanup (I really don't want to :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance.
    Update #1 (2011-04-19): I removed about 50 extensions that I'm not interested in (or that I consider trash), and got some results:
        - The extension count is below 100 (exactly 97);
        - The chrome://extensions page no longer gets slow (or even frozen) when enabling/disabling/uninstalling extensions;
        - The extensions seem to be synchronized again.

    Read the article

  • VPN issue: SSTP Service service started and then stopped

    - by Ampersand
    When I was trying to set up a VPN connection on my laptop running Windows 7 Ultimate, I got this error:
        Network Connections
        Cannot load the Remote Access Connection Manager service. Error 711: The operation could not finish because it could not start the Remote Access Connection Manager service in time. Please try the operation again.
    I traced through some service dependencies and discovered that the Secure Socket Tunneling Protocol Service was set to Manual. However, when I try to start the service manually, I get:
        Services
        The Secure Socket Tunneling Protocol Service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.
    Setting all the services involved to Automatic did not help; SSTP just showed Automatic and Stopped in the Services panel. I found a solution that involved booting into Safe Mode and deleting the contents of C:\Windows\System32\LogFiles\WMI\RtBackup. This worked and I could set up a VPN connection, but only until I rebooted again.
    TL;DR: I'm looking for a way to permanently enable the Secure Socket Tunneling Protocol Service and other VPN-related services so I don't have to reboot into Safe Mode and delete files every time I need to connect to a VPN.
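    For what it's worth, the startup-type changes described above can also be scripted from an elevated command prompt using the stock service names (SstpSvc and RasMan); this is only a sketch of that step and does not by itself fix the start-then-stop behaviour:
        sc config SstpSvc start= auto
        sc config RasMan start= auto
        net start SstpSvc
        net start RasMan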

    Read the article

  • Netgear GS724Tv3 and link aggregation Mac OS X Server 10.6.8

    - by Manca Weeks
    I need to link-aggregate two sets of ports on the Netgear GS724T with my Apple server tower (latest generation). I have two built-in ports and two ports on a PCIe Ethernet card. It is not obvious to me how to properly configure the Netgear end. I have access to the Netgear box through its web interface, I just don't know how to set the settings properly. I tried going to Netgear for help, but they said my software support has expired. I bought this unit on their recommendation - they say it is compatible with the 802.3ad protocol. I cannot locate any references to this protocol in the manual, and I noticed some people in forums say that this device is actually not compatible with 802.3ad and that Netgear is misleading potential customers by saying it is. Any help will be appreciated. Thanks, M
    My own answer - posted as an edit because of restrictions on my user: OK folks, it turns out one must use a Windows machine on this one or nothing makes sense. I was unable to get much farther than viewing the default inactive LAGs, because in Firefox and Safari on Mac things don't make much sense - i.e. the Apply buttons (supposedly JavaScript) don't work. You can view the configurations, but none of the modifications you make stick. Then, in Switching - LAGs, choose the ports to include and make sure you switch the LAG type from Static to LACP, and all is well. I haven't tested the performance of the config yet, but both sides appear to be happy with it: the Apple server says the link is active and so does the Netgear. Will report any other discoveries. Thanks to all who read and to user84104 for responding. M

    Read the article

  • GA 8KNXP Rev1.0: 4 GB installed, only 3.5 GB recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should add up to 4 GB. The specification from the manual says: maximum memory support 4 GB; if all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules. And so I did. Anyway: the BIOS counts up to 3.5 GB and stops there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory-mapping issue or a hardware issue.
    I've tried removing just one of the 512 MB modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?
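    As a rough, generic way (not specific to this board) to see how much of the 4 GB range the BIOS actually hands to the OS and what it reserves for PCI/AGP address space, the kernel's e820 map and the DMI tables can be compared against what the kernel ends up using:
        dmesg | grep -i e820           # BIOS-provided physical memory map
        sudo dmidecode -t memory       # per-slot module sizes as reported by the BIOS
        grep MemTotal /proc/meminfo    # what the kernel actually sees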

    Read the article

  • What privilege level is required on a Windows client workstation on an ActiveDomain to break file lo

    - by Mike Burton
    I'm not sure if I should be asking this here or on Stack Overflow, but here goes: I'm part of a team maintaining a document management application, and I'm trying to figure out Windows file-locking permissions. We use a utility somebody downloaded years ago called psunlock to remotely close all locks on a file. We recently discovered that this does not work across different domains on our VPN. A little bit of digging led me to the Samba manual's discussion of file locking, but I still don't really "get it".
    Does anyone have any insight to share into how the process of locking and breaking locks on files works in a network context? My thinking is that privileges are required both on the file appliance and on the client workstations which hold the locks. Is that accurate? Can anyone give a more specific version? Ideally I'm looking for something along the lines of "a user must have privilege level X in order to break locks held from a client workstation." In practice I'd be happy with a hotlink to a good white paper on the subject.
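    For context, Windows also exposes the server-side list of open files and locks through built-in tools; closing an entry this way needs administrative rights on the machine hosting the share. A sketch with placeholder names (FILESERVER and the ID 42 are made up), not a policy answer to the privilege-level question:
        rem list files opened remotely on the server hosting the share
        openfiles /query /s FILESERVER
        rem on the server itself: list open files with their IDs, then force-close one
        net file
        net file 42 /close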

    Read the article

  • Remote search system for samba shares

    - by fostandy
    I have several shares residing on a Samba server in a small business environment that I would like to provide search facilities for. Ideally this would be something like Google Desktop with some extra features (see below), but lacking that, the idea is to take what I can get, or at least get an idea of what is out there. Using Google Desktop Search as a reference model, the principal additional requirement is that it is usable from clients over the network. In addition there are some other notes (none of these are hard requirements):
        - The content is always files, residing on a single server, accessible from Samba shares.
        - Standard MS Office document fare, plus a lot of rars and zips which it is necessary to search inside.
        - Permissions support, allowing user-based control that reflects the current permission access in the Samba shares.
        - The userbase will remain fairly static, so manual management of users is fine.
        - The majority of users will be Windows based.
    I know there are plenty of search indexers out there: Beagle and Tracker seem to be the most popular. Most do not seem to offer access control, and web-based/remote search does not seem to be a high priority. I've also seen a recent post on the Samba mailing list asking for pretty much the exact same thing. (They mention a product called IBM OmniFind Yahoo! Edition, and while their initial reception seems positive, I am pretty skeptical. RHEL 4? Firefox 2? Updated much?)
    edit: similar question here
    What else is out there? Are you in a similar situation? What do you use?

    Read the article

  • External USB HD with -optional- mains?

    - by Stephen
    Hi, I'm Christmas-present buying, and I'd appreciate recommendations for a USB HD with an optional mains power input. I've hunted, but can't find all the information I want (partially due to sketchy product specifications).
    Background: this is for a digital TV which I do not own, so I'd like to get it correct the first time. The TV has a USB port to allow recording straight to disk, but the manuals don't say how much power can be drawn through the USB port. The manual's instructions state, possibly generically, to plug the drive in before connecting it to the TV.
    Ideally I'd like a small (2.5"?) drive which can draw power over USB, with a mains power input in case it turns out the USB port on the TV doesn't offer enough juice. The ideal is to use one cable, two max. A powered USB hub would introduce too much clutter. I've spotted that the LaCie Petit drives have what appears to be an additional power input, but I'm not even sure from the specs what that is. And the device doesn't ship with a mains adapter. Suggestions?

    Read the article

  • Can I autoregister my clients/servers in local DNS?

    - by Christian Wattengård
    Right now I have a W2k12 server at home that I run as a domain controller. This has the extra benefit of registering every "subordinate" computer's name in its DNS, so that I don't have to go around remembering IPs all the time. (And it lets me easily run DHCP on my servers as well.) I need to rework my home network for several odd reasons, and in this new scenario there is no place for a big honking W2k12 server box. I have a RasPi, and I have other smallish Linux boxen I can use. (In a worst-case scenario I'll use my NUC, but then I'll be forced to use my home cinema's UPnP client for media... The HORROR!!)
    Is it possible to set up a DNS-server "appliance" that somehow autoregisters its own hostname? Scenario:
        - Router (N66u) on 172.20.20.1. Runs DHCP on the 172.20.20.100-200 range.
        - Server [verdant] of a *nix flavor on 172.20.20.2
        - Laptop [speedy] of W8 flavor, DHCP assigned
        - Laptop [canary] of W8 flavor, DHCP assigned
        - Desktop [lianyu] of Ubuntu flavor, DHCP assigned
    What I would like is that all of the above machines (except possibly the router) would be available as verdant.starling.lan, canary.starling.lan and so on. This is how it works right now (except the Ubuntu box... I haven't cracked that one yet) because Windows just does this for you. I would also like to do this without any manual labor on the server: when I tell my box its name is smoak, it should "immediately" be available as smoak.starling.lan without any extra configuration on my part. How can I do this in a Linux (Ubuntu) environment? (Bonus comment upvote for naming the naming scheme :P )
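    One common way to get this behaviour on a small Linux box is dnsmasq, which automatically adds the hostnames of its DHCP clients to DNS. The sketch below assumes dnsmasq running on verdant takes over DHCP from the router; the addresses and the starling.lan domain come from the scenario above, everything else is an illustrative assumption:
        # /etc/dnsmasq.conf (minimal sketch)
        domain=starling.lan
        local=/starling.lan/
        expand-hosts
        dhcp-range=172.20.20.100,172.20.20.200,12h
        dhcp-option=option:router,172.20.20.1
        # clients that send their hostname in the DHCP request become <name>.starling.lan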

    Read the article

  • Network problems with Ubuntu on VMware

    - by svick
    I'm running Ubuntu 10.04 inside VMware Player running under Windows Vista, and I can't connect to the internet or the host computer from Ubuntu. I have set all the VMware services to “manual” (like VMware DHCP Service), but starting them manually doesn't help. In VMware, the network seems to work (there is a green dot beside the network icon) and I have tried both Bridged and NAT settings. ifconfig doesn't show the eth1 interface unless I give it as a parameter (or use -a). I think this means that Ubuntu thinks the network isn't connected at all. How do I fix this?
        vadmin@ubu1004:~$ ifconfig
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:56 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:4192 (4.1 KB)  TX bytes:4192 (4.1 KB)
        vadmin@ubu1004:~$ ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr 00:0c:29:2d:a0:6f
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:18 Base address:0x2000
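    Since eth1 only shows up with -a and is missing the UP flag, one generic Ubuntu-side step worth trying (an assumption, not a VMware-specific fix) is declaring the interface and bringing it up with DHCP:
        sudo dhclient eth1             # quick one-off test
        # or make it persistent by appending to /etc/network/interfaces:
        auto eth1
        iface eth1 inet dhcp
        # then: sudo ifup eth1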

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user with several aliases that all point to a single account (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely).
    I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to mangle the sender information once the mail reaches my GMail account and show everything as coming from me instead of the original senders (can anyone confirm whether that's accurate?).
    An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. That leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I could set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me.
    Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy without removing their account entirely? Or is this just wishful thinking? Thanks in advance. Scott

    Read the article

  • PHP Output buffer flush issue on Apache/Linux

    - by Iiro Vaahtojärvi
    Hi, I'm running into issues with the PHP output buffer flushing on my Linux web server. The output buffer is maintained correctly and all the right data is pushed to it in my code, but the usual flushing mechanisms won't flush it to the browser. I have tried everything posted here: http://php.net/manual/en/function.flush.php but no success so far. I got a small script from php.net to test it:
        <?php
        ob_start();
        for ($i = 0; $i < 70; $i++) {
            echo 'printing...<br />';
            ob_get_flush();
            flush();
            usleep(300000);
        }
        ?>
    This should print "printing..." to the browser 70 times, one line roughly every 0.3 seconds (usleep takes microseconds). It works fine in my other testing environment, which is Windows based (still Apache, the XAMPP package), but on my Linux server it doesn't: it waits for the script to finish before sending anything to the browser, basically ignoring the whole flush call. If anyone has experienced this before or knows of anything that could help (be it server configuration or an adjustment to the code), it would be greatly appreciated!
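    The pattern described (Windows flushes line by line, Linux buffers until the end) is often down to server-side buffering rather than the PHP code itself; a quick, generic way to check the usual suspects on the Linux box is something like:
        php -i | grep -E 'output_buffering|zlib.output_compression|implicit_flush'
        apachectl -M | grep -i deflate    # apache2ctl on Debian/Ubuntu; mod_deflate compresses (and buffers) whole responses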

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short
    I am working with tar archives that contain PNG images in base64 encoding. I would like to use BASH (or whatever else works) to hook into the extraction function of tar, so that the PNG images are decoded from base64 back to standard PNG encoding after the files are unpacked. A simple
        cat $input-file | base64 -d > $output-file
    will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or only minimal) extra work to decode the images?
    In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I would like hooked into various moments of tar program execution. However, the documentation explains that these variables, along with other variables that configure tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.
    Background information not included in the Long Story Short
    I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to GitHub yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64 encoding.
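    Short of the backup-specs machinery (which is aimed at GNU tar's backup scripts rather than plain extraction), a small wrapper script is one way to keep the extra work away from users. This is only a sketch and assumes every file in the archive is a base64-encoded payload:
        #!/bin/bash
        # untar_and_decode.sh ARCHIVE [DEST] - extract, then decode each extracted file in place
        set -euo pipefail
        archive="$1"
        dest="${2:-.}"
        mkdir -p "$dest"
        tar -xf "$archive" -C "$dest"
        find "$dest" -type f -print0 | while IFS= read -r -d '' f; do
            base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done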

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup-script installation, in order to save myself a lot of error-prone point-and-click manual labor.
    I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:
        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup
    However, if you create such a script and then delete its registry key, the script remains listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts there (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance).
    Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold:
        - Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
        - Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?
    Thanks in advance for any suggestions, I've been banging my head against this one for a bit...
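    On the Task Scheduler route specifically, the creation step itself can at least be scripted with schtasks, which is available on both XP and 7; this is a rough sketch with a placeholder task name and path, and it does not address the separate question of blocking logins until the task finishes:
        schtasks /Create /TN "LocalStartupScript" /TR "C:\Scripts\startup.cmd" /SC ONSTART /RU SYSTEM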

    Read the article

  • pppd disconnects from 3G, doesn't reconnect, w/ persist set

    - by bytenik
    I am trying to configure pppd to connect to a 3G network (Sprint, in this case) and then stay connected, reconnecting automatically if the remote connection is terminated. I have enabled the persist option. My configuration file is as follows:
        hide-password
        noauth
        connect "/usr/sbin/chat -v -f /etc/chatscripts/cellular"
        debug
        /dev/cell
        921600
        defaultroute
        noipdefault
        user " "
        persist
        maxfail 0
        lcp-echo-failure 10
        lcp-echo-interval 60
        holdoff 5
    However, when the peer disconnects the connection, pppd often waits a long time (substantially more than my holdoff) to reconnect the modem -- if it ever reconnects at all! An example log showing this:
        May 23 05:17:24 00270e0a8888 pppd[2408]: rcvd [LCP TermReq id=0x26]
        May 23 05:17:24 00270e0a8888 pppd[2408]: LCP terminated by peer
        May 23 05:17:24 00270e0a8888 pppd[2408]: Connect time 60.1 minutes.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Sent 0 bytes, received 0 bytes.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down started (pid 2456)
        May 23 05:17:24 00270e0a8888 pppd[2408]: sent [LCP TermAck id=0x26]
        May 23 05:17:24 00270e0a8888 pppd[2408]: Script /etc/ppp/ip-down finished (pid 2456), status = 0x0
        May 23 05:17:24 00270e0a8888 pppd[2408]: Hangup (SIGHUP)
        May 23 05:17:24 00270e0a8888 pppd[2408]: Modem hangup
        May 23 05:17:24 00270e0a8888 pppd[2408]: Connection terminated.
        May 23 05:17:24 00270e0a8888 pppd[2408]: Terminating on signal 15
        May 23 05:17:24 00270e0a8888 pppd[2408]: Exit.
        May 23 06:08:07 00270e0a8888 pppd[2500]: pppd 2.4.5 started by root, uid 0
        May 23 06:08:10 00270e0a8888 pppd[2500]: Script /usr/sbin/chat -v -f /etc/chatscripts/cellular finished (pid 2530), status = 0x0
        May 23 06:08:10 00270e0a8888 pppd[2500]: Serial connection established.
        May 23 06:08:10 00270e0a8888 pppd[2500]: using channel 11
    The disconnect at the request of the peer occurs at 5:17, but the reconnect didn't happen until 6:08. I had a friend monitoring the server, so I'm not certain this wasn't a manual reconnection. Either way, it either took almost an hour to reconnect or never reconnected. Shouldn't persist + holdoff 5 cause it to reconnect automatically 5 seconds after the link terminates?

    Read the article

  • Excel Help: Data Input Help

    - by B-Ballerl
    Every day I download data from a site that has one row of data per client. I'm able to import the data into Excel as a whole, but after that I'm having trouble figuring out how to put it into a chart. For example, web visit time: say Client 1 stayed for 5 min, increasing his total time on the site to 20 min; Client 2 stayed for 0 min, keeping his total of 10 min; both were registered on New Year's Eve; R1's last login was today and R2's was yesterday (R for some reason represents Client, no idea why...). Client 3 hasn't been on since he registered, keeping his total at 4 min. So my data would look something like this for today (20110104):
        R1,20101231,20110104,20
        R2,20101231,20110103,10
        R3,20101231,20101231,4
    And this for the day before (20110103):
        R1,20101231,20110102,15
        R2,20101231,20110103,10
        R3,20101231,20101231,4
    I get about 200+ client rows each day, where even the names in the client list change. Is it possible to import the data each day and fill it into an Excel sheet where the client number runs down the left-hand side of a table, and the amount of time (a whole number, e.g. 4) spent on the site each day extends to the right under its specific date (see picture)? I've managed to create such a sheet manually, but have been unsuccessful at getting Excel to do any of it for me. Here are two pictures:

    Read the article

  • Uninstall php5 installed from source

    - by diegomichel
    I tried to install PHP 5 from source, and it worked. Then for some reason I needed to install the official packages, so I tried a make uninstall and to my surprise there is no such target. So I tried deleting all the installed files by hand. Then I installed the official Debian packages and it worked fine... until I needed to install the sqlite module, which gives me the following error:
        php --version
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/pdo_sqlite.so' - /usr/lib/php5/20090626/pdo_sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/sqlite.so' - /usr/lib/php5/20090626/sqlite.so: undefined symbol: php_pdo_register_driver in Unknown on line 0
        PHP 5.3.1-5 with Suhosin-Patch (cli) (built: Feb 22 2010 22:46:05)
        Copyright (c) 1997-2009 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2009 Zend Technologies
    So I remembered that manual install I did, and I think some old lib left behind is causing the problem. The bad thing is that there is no make uninstall in the PHP 5 source tree:
        php-5.2.13 > make uninstall
        make: *** No rule to make target `uninstall'.  Stop.
    I have tried reinstalling and purging all PHP-related packages via aptitude with no success.
    OS: Debian Squeeze.
        uname -a
        Linux desktop 2.6.32-trunk-amd64 #1 SMP Sun Jan 10 22:40:40 UTC 2010 x86_64 GNU/Linux
    Any idea how to fix that?

    Read the article

  • How to diagnose website performance/app pool recycling with Windows 2008/IIS7

    - by ilasno
    Ok, so there are various symptoms here (clients and our own employees complaining of intermittent slowdowns, getting 'kicked out' to the login page, or having a save request not properly save the submitted data). The environment:
        - Windows Server 2008 (Datacenter), Service Pack 2, 64-bit, 2x 2.8 GHz processors, 7.5 GB RAM
        - MS SQL Server 2008 (running on the same machine)
        - IIS 7
    There are ~10 websites running on the server, each in its own application pool - most of these pools are running in Integrated mode, 2 are in Classic, all are on .NET 2.0 and all run as ApplicationPoolIdentity. I'm trying to analyze, diagnose, and troubleshoot, and am struggling with where to get more info about what could be happening. Here are some steps I have already taken:
        - Set each application pool to recycle once per day, and removed any other automatic recycling
        - Set a Virtual Memory Limit for each to 1024000 KB (1 GB)
        - Enabled ALL 'Generate Recycle Event Log Entry' entries (Config Changes, Isapi Reported Unhealthy, Manual Recycle, Private Memory Limit Exceeded, Regular Time Interval, Request Limit Exceeded, Specific Time, Virtual Memory Limit Exceeded)
    I have seen the app pool processes recycle (in Task Manager) - a new one will start up, and then the first one dies off - and this has happened without the memory or time limits being exceeded. This is a fairly new server, and most of these sites came from Windows Server 2003/IIS 6. Any 'next steps' for setting up information gathering, logging, diagnosing, etc. would be much appreciated! j
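    As a first data-gathering step, the recycle events enabled above are written to the System event log, and appcmd can map worker processes to their pools so a recycling pool can be matched to a PID seen in Task Manager; a generic sketch, nothing site-specific:
        %windir%\system32\inetsrv\appcmd list apppool
        %windir%\system32\inetsrv\appcmd list wp
        rem recycle-event entries land in the System event log (source: WAS)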

    Read the article

  • PC can't detect second RAM installed

    - by kulwinder
    I have a PC with 512 MB RAM installed (motherboard manufacturer MICRO-STAR, chipset P4M800). The PC was running very slowly, so I decided to upgrade the RAM. I installed CPU-Z to check the RAM installed in the machine and also had a look at the stick itself: 512 MB PC3200 400 MHz DDR. My motherboard only supports 200 MHz, but it was working OK. So I bought 2 GB, having checked in the manual that the board supports up to 2 GB of RAM. I installed the 2 GB PC3200 400 MHz module, the same type as the old one, and plugged in both, even though the motherboard only supports up to 2 GB. But the system spec only shows 512 MB (minus 64 MB of shared VGA memory).
    I checked in CPU-Z: it detects both modules, slot 1 512 MB, slot 2 2048 MB. Comparing the screens for both slots, they are the same - voltage 2.5, frequencies 166 MHz and 200 MHz. The only difference is that the 2 GB module shows 133 MHz, 166 MHz and 200 MHz in the timings table, while the 512 MB module shows only 166 MHz and 200 MHz. I checked on Google and can't seem to figure out what's wrong. If I plug in only the 2 GB module, the PC doesn't boot up, as if the RAM isn't working. With only the 512 MB plugged in it seems OK. Please help.

    Read the article

  • How can I set the CD audio volume in Linux?

    - by user1296362
    In the Windows 7 Control Panel - Sound - Sound Properties window there's a slider for setting CD Audio volume, and it's pretty strange that I can't find a corresponding one in the generic Linux mixers, alsamixer or amixer. I connected a CD drive to try to set the CD audio volume with cdcd (CD Player):
        $ cdcd setvol 0
        Invalid volume
    It isn't actually an invalid volume; it is because the ioctl() call fails. I found that out after searching and changing the source code of the utility (in libcdaudio) a bit:
        --- cdaudio.c.orig      2004-09-09 06:26:20.000000000 +0600
        +++ cdaudio.c   2012-05-30 21:34:34.167915521 +0600
        @@ -578,8 +578,10 @@
            cdvol_data.CDVOLCTRL_BACK_RIGHT_SELECT = CDAUDIO_MAX_VOLUME;
         #endif
        -   if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0)
        -     return -1;
        +   if(ioctl(cd_desc, CDAUDIO_SET_VOLUME, &cdvol) < 0) {
        +     printf("*** cd_set_volume: ioctl() returned error\n");
        +     return -1;
        +   }
            return 0;
         }
    By the way, cdcd's get volume command yields rather weird output:
        Left Right
        Front 1281734864 32767
        Back 0 0
    I also tried aumix:
        $ aumix -c 0
    But all with no success. I read in this manual - http://tldp.org/HOWTO/Alsa-sound-6.html (section 6.2, The mixer) - that a CD channel can be present in amixer output. Maybe some drivers for the sound card are missing in my Ubuntu 12.04 LTS installation, though I don't think that's the case:
        $ lsmod | grep snd
        snd_mixer_oss          22602  0
        snd_hda_codec_hdmi     32474  1
        snd_hda_codec_realtek 223867  1
        snd_hda_intel          33773  4
        snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        snd_hwdep              13668  1 snd_hda_codec
        snd_pcm                97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_seq_midi           13324  0
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    78855 19 snd_mixer_oss,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        soundcore              15091  1 snd
        snd_page_alloc         18529  2 snd_hda_intel,snd_pcm
    All I need is to mute, or set to 0, the volume level of the CD Audio channel, as I did in Windows 7, to get rid of the sibilant noise in the speakers.

    Read the article

  • Have an unprivileged non-account user ssh into another box?

    - by Daniel Quinn
    I know how to get a user to ssh into another box with a key:
        ssh -l targetuser -i path/to/key targethost
    But what about non-account users like apache? As this user doesn't have a home directory to which it can write a .ssh directory, the whole thing keeps failing with:
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -l targetuser -i path/to/key targethost
        Could not create directory '/var/www/.ssh'.
        Warning: Permanently added '<hostname>' (RSA) to the list of known hosts.
        Permission denied (publickey).
    I've tried variations using -o UserKnownHostsFile=/dev/null and setting $HOME to /dev/null, and none of these have done the trick. I understand that sudo could probably fix this for me, but I'm trying to avoid having to require manual server config since this code will be deployed on a number of different environments. Any ideas? Here are a few examples of what I've tried that don't work:
        $ sudo -u apache export HOME=path/to/apache/writable/dir/ ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=path/to/apache/writable/dir/.ssh/known_hosts -l deploy -i path/to/key targethost
        $ sudo -u apache ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l deploy -i path/to/key targethost
    Eventually, I'll be using this solution to run rsync as the apache user.

    Read the article

  • When I restart my LXC environment, the container does not re-bind to the IP address

    - by RoboTamer
    The IP no longer responds to a remote ping. With "restart" I mean:
        lxc-stop -n vm3
        lxc-start -n vm3 -f /etc/lxc/vm3.conf -d
    -- /etc/network/interfaces
        auto lo
        iface lo inet loopback
            up route add -net 127.0.0.0 netmask 255.0.0.0 dev lo
            down route add -net 127.0.0.0 netmask 255.0.0.0 dev lo
        # device: eth0
        auto eth0
        iface eth0 inet manual
        auto br0
        iface br0 inet static
            address 192.22.189.58
            netmask 255.255.255.248
            gateway 192.22.189.57
            broadcast 192.22.189.63
            bridge_ports eth0
            bridge_fd 0
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off
            post-up ip route add 192.22.189.59 dev br0
            post-up ip route add 192.22.189.60 dev br0
            post-up ip route add 192.22.189.61 dev br0
            post-up ip route add 192.22.189.62 dev br0
    -- /etc/lxc/vm3.conf
        lxc.utsname = vm3
        lxc.rootfs = /var/lib/lxc/vm3/rootfs
        lxc.tty = 4
        #lxc.pts = 1024                           # pseudo tty instance for strict isolation
        lxc.network.type = veth
        lxc.network.flags = up
        lxc.network.link = br0
        lxc.network.name = eth0
        lxc.network.mtu = 1500
        #lxc.cgroup.cpuset.cpus = 0               # security parameter
        lxc.cgroup.devices.deny = a               # Deny all access to devices
        lxc.cgroup.devices.allow = c 1:3 rwm      # dev/null
        lxc.cgroup.devices.allow = c 1:5 rwm      # dev/zero
        lxc.cgroup.devices.allow = c 5:1 rwm      # dev/console
        lxc.cgroup.devices.allow = c 5:0 rwm      # dev/tty
        lxc.cgroup.devices.allow = c 4:0 rwm      # dev/tty0
        lxc.cgroup.devices.allow = c 4:1 rwm      # dev/tty1
        lxc.cgroup.devices.allow = c 4:2 rwm      # dev/tty2
        lxc.cgroup.devices.allow = c 1:9 rwm      # dev/urandom
        lxc.cgroup.devices.allow = c 1:8 rwm      # dev/random
        lxc.cgroup.devices.allow = c 136:* rwm    # dev/pts/*
        lxc.cgroup.devices.allow = c 5:2 rwm      # dev/pts/ptmx
        lxc.cgroup.devices.allow = c 254:0 rwm    # rtc
        # mount points
        lxc.mount.entry=proc /var/lib/lxc/vm3/rootfs/proc proc nodev,noexec,nosuid 0 0
        lxc.mount.entry=devpts /var/lib/lxc/vm3/rootfs/dev/pts devpts defaults 0 0
        lxc.mount.entry=sysfs /var/lib/lxc/vm3/rootfs/sys sysfs defaults 0 0

    Read the article

  • Setting MSN or Yahoo! Messenger status to Invisible or Offline when idle for an hour

    - by Jian Lin
    Where, or how, do I set it up in MSN Messenger or Yahoo! Messenger to automatically switch my status to either "Invisible" or "Offline" when idle for half an hour, or an hour? I know how to set my status to "Away" or "Busy" after 10 minutes, but can't seem to find a way to set the offline status options without manual intervention.
    Back story: as a software developer, I am very used to turning the computer on for the whole day and not turning it back off (for example, checking email for urgent fixes, fixing the issue and pushing it to the web server). It's not even turned off when I head to sleep, in case I find it hard to fall asleep and come back to check on the computer, or so it's ready in the morning to check that everything is okay. If I'm seen as being online 24 hours a day, some people see me as weird. Their perception of my value decreases because I'm always there (hard to get = high value; always there = low value). Leaving it on makes everyone in my contacts list think I have nothing better to do all day than sit in front of the computer, even though it's my job and I do admittedly spend more time online than other people. That's why I'd like to find a way to set my status to Invisible or Offline automatically.

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 database mirroring failover. We have a pair of mirrored databases running in high-availability mode, and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover from the principal to the mirror. However, after the failover the new principal was in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in multi-user mode.
    Question: does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular, we once had an incident with the database on the original principal (the one I was failing over to) where, while trying to detach the database, it was put into single-user mode. I'm wondering if that setting was cached somewhere and is the reason SQL Server put it back into single-user mode after the failover.
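    On the "where is this stored" part: the restrict-access setting is recorded per database and is visible in the sys.databases catalog view, so it can be checked (and reset) from the command line on the current principal; the server and database names below are placeholders, offered only as a sketch:
        sqlcmd -S myserver -E -Q "SELECT name, user_access_desc FROM sys.databases"
        sqlcmd -S myserver -E -Q "ALTER DATABASE [MyMirroredDb] SET MULTI_USER WITH ROLLBACK IMMEDIATE"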

    Read the article
