Search Results

Search found 23084 results on 924 pages for 'jeff main'.


  • Two audio streams - headphones and speakers

    - by Sylvester
    What I want (this is probably hard for most to answer, as this is a very unique setup) is to have two different streams of audio - one through the headphones and one through the main speakers. Two streams means an audio splitter is not an option, as that would still carry only one stream. I can do the audio rerouting using Virtual Audio Cables, but the problem is this: I cannot get both headphones AND speakers to play even just one stream, let alone two separate ones. Splitting front and back audio into separate streams is not an option either, as the motherboard's F_PANEL header itself is faulty (nothing to do with the case's front panel, just so you know - that works fine).

    So, first things first: I need Windows to recognise the headphones as a separate audio device, so that Virtual Audio Cables will detect it and allow me to route the necessary audio to the headphones only. I also need sound to be able to play through speakers and headphones together.

    What I want to achieve overall is this: have the ENTIRE computer's sound picked up by VAC and streamed to Line 1, then have Line 1 stream to the headphones. That way, whatever is being streamed is heard through the headphones, while the entire system's sounds (including those not streamed) are played through the speakers.

    Read the article

  • GlassFish v3: Security related updates?

    - by chris_l
    I've used GlassFish v3.0 as my main development application server for a few weeks now. Now that I want to install it on my VPS, I'd like to get the latest security updates, because GlassFish v3 Release 3.0 (Open Source Edition or not) is already a few months old, and v3.1 is only available as "early access" nightlies (see https://glassfish.dev.java.net/public/downloadsindex.html).

    GlassFish offers an update mechanism (via pkg or updateTool), but when I simply try to get the latest updates (pkg image-update), it finds nothing. However, when I change the preferred publisher to dev.glassfish.org, I get a list with lots of updates.

    The interesting thing is that I haven't been able to find any description of the exact meaning of the various publishers/repositories (release, stable, contrib and dev) anywhere on the web, most importantly answering the question: am I supposed to use the "dev" repository for security updates, or is it (probably more likely) for unstable updates? Where do I get security updates from then? Or are there simply no security updates yet?

    Asking on the GlassFish forum resulted in 56 views, but 0 answers.
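
    For reference, this is roughly how I've been poking at the repositories; treat it as a sketch, since the publisher names are my guesses from what updateTool lists, and older pkg(5) clients may spell these subcommands "authority"/"set-authority" instead:

        pkg publisher                               # list configured publishers and their origin URLs
        pkg set-publisher -P stable.glassfish.org   # hypothetical: make the "stable" repository preferred
        pkg image-update -nv                        # dry run: show what would be updated, and from where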

    Read the article

  • ffmpeg - creating DNxHD MXF files with alphas

    - by Hugh
    Hi all, I'm struggling with something in FFmpeg at the moment... I'm trying to make DNxHD 1080p/24, 36 Mb/s MXF files from a sequence of PNG files. My current command line is:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf

    To which ffmpeg gives me the output:

        Input #0, image2, from '/tmp/temp.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mxf, to '/tmp/temp.mxf':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        [mxf @ 0x1005800]unsupported video frame rate
        Could not write header for output file #0 (incorrect codec parameters ?)

    There are a few things in here that concern me:

    - The output stream is insisting on being yuv422p, which doesn't support alpha.
    - 24 fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing.

    I then tried the same thing, but writing to a QuickTime (still DNxHD, though) with:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov

    This gives me the output:

        Input #0, image2, from '/tmp/1274263259.28098.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mov, to '/tmp/1274263259.28098.mov':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        frame= 39 fps= 9 q=1.0 Lsize= 7177kB time=1.62 bitrate=36180.8kbits/s
        video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636%

    Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
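
    One thing I noticed while experimenting (an observation, not a confirmed fix): with the image2 demuxer, -r placed after -i only sets the output rate, while the input is still ingested at the 25 fps default, which matches the "25 tbr" in the input stream above. Moving the flag before -i declares the PNG sequence itself as 24 fps:

        # hypothetical variant: set the input rate before -i
        ffmpeg -y -f image2 -r 24 -i /tmp/temp.%04d.png -s 1920x1080 \
               -vcodec dnxhd -b 36Mb -f mxf /tmp/temp.mxf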

    Read the article

  • Identifying Httpd error log in Fedora 16

    - by Cerin
    How do you find the cause of httpd errors in Fedora 16? The new systemctl command in Fedora 16 seems to horribly obscure any useful logging info.

        [root@host ~]# systemctl start httpd.service
        Job failed. See system logs and 'systemctl status' for details.
        [root@host ~]# systemctl status httpd.service
        httpd.service - The Apache HTTP Server (prefork MPM)
            Loaded: loaded (/lib/systemd/system/httpd.service; enabled)
            Active: failed since Thu, 21 Jun 2012 16:26:56 -0400; 1min 23s ago
           Process: 2119 ExecStop=/usr/sbin/httpd $OPTIONS -k stop (code=exited, status=0/SUCCESS)
           Process: 2215 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=1/FAILURE)
          Main PID: 1062 (code=exited, status=0/SUCCESS)
            CGroup: name=systemd:/system/httpd.service

    So the first command fails... and it tells me to run another command... which simply tells me that the command returned an error code. Where's the actual error? Even more frustrating is that nothing seems to have been written to the logs:

        [root@host ~]# ls -lah /var/log/httpd/
        total 8.0K
        drwx------.  2 root root 4.0K Jun 21 16:19 .
        drwxr-xr-x. 21 root root 4.0K Jun 20 16:33 ..
        -rw-r----- 1 root root 0 Jun 21 16:19 modsec_audit.log
        -rw-r----- 1 root root 0 Jun 21 16:19 modsec_debug.log
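
    For what it's worth, the only two ways I've found so far to surface the real error, as a sketch (the syslog path assumes the stock rsyslog setup):

        httpd -t                       # syntax-check the config; the failing directive prints to stderr
        grep httpd /var/log/messages   # systemd forwards the unit's startup errors to syslog here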

    Read the article

  • Nginx + PHP5-FPM repeated 502 cut-outs

    - by James
    I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all those questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am doing this across three 1 GB machines, each with the same configuration and the same problems.

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REDIRECT_STATUS 200;

    /etc/php5/fpm/main.conf:

        ; FPM Configuration ;
        ;include=/etc/php5/fpm/*.conf

        ; Global Options ;
        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        ;log_level = notice
        ;emergency_restart_threshold = 0
        ;emergency_restart_interval = 0
        ;process_control_timeout = 0
        ;daemonize = yes

        ; Pool Definitions ;
        include=/etc/php5/fpm/pool.d/*.conf

    /etc/php5/fpm/pool.d/www.conf:

        [www]
        listen = 127.0.0.1:9000
        ;listen.backlog = -1
        ;listen.allowed_clients = 127.0.0.1
        ;listen.owner = www-data
        ;listen.group = www-data
        ;listen.mode = 0666
        user = www-data
        group = www-data
        ;pm.max_children = 50
        pm.max_children = 15
        ;pm.start_servers = 20
        pm.min_spare_servers = 5
        ;pm.max_spare_servers = 35
        pm.max_spare_servers = 10
        ;pm.max_requests = 500
        ;pm.status_path = /status
        ;ping.path = /ping
        ;ping.response = pong
        request_terminate_timeout = 30
        ;request_slowlog_timeout = 0
        ;slowlog = /var/log/php-fpm.log.slow
        ;rlimit_files = 1024
        ;rlimit_core = 0
        ;chroot =
        chdir = /var/www
        ;catch_workers_output = yes
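
    For completeness, the nginx side of my setup looks roughly like the sketch below (the timeout value is an assumption I'm testing, not copied from the servers):

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass 127.0.0.1:9000;   # must match listen= in www.conf
            fastcgi_read_timeout 60s;      # kept above request_terminate_timeout (30s) so FPM,
                                           # not nginx, kills runaway requests and logs them
        }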

    Read the article

  • Sync Two Exchange accounts or Read-Only access to subfolders

    - by cpgascho
    This is two questions, kind of. The situation is as follows: I am running SBS 2008 with Exchange 2007. There is a shared account with subfolders that keep track of the progress of jobs coming into the company (i.e. sales). I need to give other people in the company read access to this mailbox, not full control. When I give read-only access to the root, other users can only see the Inbox and not the subfolders; permissions have to be applied to each folder individually.

    One solution I have considered is creating a secondary mailbox that everyone could have full access to, with a one-way sync from the sales mailbox to the secondary mailbox. Then people could see what was happening without messing up the main mailbox by accident (at worst they would mess up the secondary mailbox). Ideally, though, I could find a way to propagate the read-only permissions to all the subfolders. I have tried using PFDavAdmin to do this, but have not been able to get it to connect successfully from Windows 7 to Exchange 2007.

    Any idea on how to:

    1. Propagate permissions (get PFDavAdmin to work?!)
    2. Sync mailboxes
    3. Some other solution?

    Thanks
    Chris

    Read the article

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter and drive ID questions

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, and so forth), or is it absolutely a read-only, direct correlation to the physical status of a drive?

    Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (I'm also curious whether SMART's bad-sector count is zeroed afterwards, or whether it includes factory-marked sectors.)

    The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking those sectors as bad? If so, I'm wondering whether SpinRite's sector recovery could bring about firmware corruption on these drives.

    Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true?

    (Apologies if this is more appropriate for SuperUser.)
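
    For the first question, this is how I've been reading the raw counters so far (a sketch, assuming smartmontools is installed):

        smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrectable'
        # Reallocated_Sector_Ct and Current_Pending_Sector are maintained by the
        # drive's own firmware; host-side tools can trigger reallocations but, as
        # far as I know, can't rewrite the counters directly.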

    Read the article

  • Possible DNS issue after a reinstall of Windows Server 2000 (get off my lawn)

    - by cop1152
    I just replaced a drive on a Win2000 server that replicates AD and issues DHCP at one of our offices. I successfully joined it to the domain, set up a range of IPs, etc., but am still having issues. I cannot RDC to it by name or IP. I can ping it, browse to it with Windows Explorer, and remote to it with some other software, but not RDC. The other issue is this: users are unable to authenticate on it. They receive the message "username or password incorrect" (or something like that). Changes made on the main domain controller seem to take forever to trickle down.

    The most significant entry in the DNS Server log is Event ID 7062: "The DNS server encountered a packet addressed to itself." At least, I think it's significant. The Directory Services log shows numerous Event ID 1265 entries: "The attempt to establish a replication link with parameters failed with the following status: The DSA operation is unable to proceed because of a DNS lookup failure."

    Does this make any sense to anyone? I feel like it's something very simple that I am overlooking. Thanks in advance.
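
    Next I'm planning to run the checks below from the Windows 2000 Support Tools on the rebuilt DC, in case the output helps anyone spot the issue (a sketch):

        dcdiag /v              REM verifies DNS registration, replication links and SYSVOL
        netdiag /test:dns      REM confirms the DC's own A/SRV records resolve correctly
        repadmin /showreps     REM lists replication partners and the last error per link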

    Read the article

  • Exchange server intermittently not receiving or delivering emails to a few addresses?

    - by Gary Willoughby
    This is a strange problem. We are using an Exchange 2007 server to handle the email to and from the company. There are two main problems, which are probably related.

    1. None of our mails sent to one single customer are ever received. When we send any type of mail to this particular customer, they never get it. We have confirmed the address and tried sending to other mail addresses on the same domain, and they still don't receive it. No error (email or otherwise) is ever issued. (Domain related? Blacklisted?)

    2. Sometimes (intermittently) a mail sent to our company (to any address on our domain) is never received. I tried this the other day from home and sent a mail to my work address. It was never received. But then a day later I sent another and it was received fine (so the mail address is fine). No error (email or otherwise) is ever issued.

    Any ideas where to start looking for causes?
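
    One low-level test I intend to try is speaking SMTP to the relevant MX host by hand and watching where it stalls or rejects (a sketch; the hostnames and addresses below are placeholders):

        telnet mail.example.com 25
        EHLO test.example.org
        MAIL FROM:<sender@example.org>
        RCPT TO:<customer@example.com>
        QUIT

    If the RCPT line is accepted but the message still never arrives, the drop is happening after acceptance (content filtering or routing) rather than at the gateway.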

    Read the article

  • Apache Tomcat Server Error

    - by Sam....
    I'm trying to install Tomcat, but I get this error every time, whether it is the binary or the exe install:

        SEVERE: Begin event threw exception
        java.lang.ClassNotFoundException: org.apache.catalina.core.AprLifecycleListener
            at java.net.URLClassLoader$1.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at org.apache.commons.digester.ObjectCreateRule.begin(ObjectCreateRule.java:204)
            at org.apache.commons.digester.Rule.begin(Rule.java:152)
            at org.apache.commons.digester.Digester.startElement(Digester.java:1286)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown Source)
            at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(Unknown Source)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
            at org.apache.commons.digester.Digester.parse(Digester.java:1572)
            at org.apache.catalina.startup.Catalina.start(Catalina.java:451)
            at org.apache.catalina.startup.Catalina.execute(Catalina.java:402)
            at org.apache.catalina.startup.Catalina.process(Catalina.java:180)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:202)

    Can anyone please solve this? I need an urgent reply.
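
    The class in the trace normally ships inside Tomcat's own catalina.jar, so a sanity check worth running (a sketch; the jar lives under lib/ on Tomcat 6+ and server/lib/ on 5.x, so adjust the path to the actual install):

        # hypothetical path; verify the listener class is actually on disk
        unzip -l "$CATALINA_HOME/lib/catalina.jar" | grep AprLifecycleListener
        # if it's missing, or server.xml points at a stale install, the digester
        # fails while parsing the <Listener> element exactly like this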

    Read the article

  • OpenSwan (IPSEC) on Fedora 13 with Snow Leopard as a client

    - by sicn
    I recently installed Openswan on my Fedora 13 machine. I want to use it to connect from Mac OS X with L2TP over IPsec; unfortunately I am already stuck on the IPsec negotiation part.

    My server is running behind a NATted firewall, so my external IP differs from the server's IP. The server has a fixed IP on the network, and the same is almost always valid for the clients (they are usually behind a NATted firewall too). I installed Openswan on Fedora 13 and have the following configuration:

        config setup
            protostack=netkey
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            nhelpers=0

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=my.servers.external.ip
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/0

    IPsec starts fine and listens on UDP 500 and 4500. These two ports are opened in the firewall and are forwarded fine to the server. In my /etc/ipsec.secrets file I have:

        my.servers.external.ip %any: "LongAndDifficultPassword"

    And finally in my sysctl.conf (the redirect entries are there because Openswan was strongly protesting about send/accept_redirects being active) I have:

        net.ipv4.ip_forward = 1
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_redirects = 0

    Running "ipsec verify" gives me all greens (except Opportunistic Encryption Support, which is DISABLED). However, when trying to connect, my Mac gives me the following in the logs:

        Nov 1 19:30:28 macbook pppd[4904]: pppd 2.4.2 (Apple version 412.3) started by user, uid 1011
        Nov 1 19:30:28 macbook pppd[4904]: L2TP connecting to server 'my.servers.ip.address' (my.servers.ip.address)...
        Nov 1 19:30:28 macbook pppd[4904]: IPSec connection started
        Nov 1 19:30:28 macbook racoon[4905]: Connecting.
        Nov 1 19:30:28 macbook racoon[4905]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        Nov 1 19:30:31 macbook racoon[4905]: IKE Packet: transmit success. (Phase1 Retransmit).
        Nov 1 19:30:38: --- last message repeated 2 times ---
        Nov 1 19:30:38 macbook pppd[4904]: IPSec connection failed

    Any ideas at all?
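
    Since the racoon log shows Main Mode message 1 going out and nothing ever coming back, my next step is to confirm the packets actually reach the server and that its replies make it back out through the NAT (a sketch, run on the Fedora box during a connection attempt):

        tcpdump -ni eth0 'udp port 500 or udp port 4500'   # watch for the Mac's IKE packets arriving
        tail -f /var/log/secure                            # pluto logs its negotiation attempts here on Fedora

    If tcpdump never sees message 1 arrive, the firewall's port forward is the problem rather than the Openswan config.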

    Read the article

  • Backup hardware and strategy on distributed Windows Server 2008 network

    - by CesarGon
    This question is a follow-up to this. We have a Windows Server 2008 R2 domain over a network that spans two different buildings, linked by a 100-Mbps point-to-point line. Over 60 users work in the organisation. We are planning to use DFS folders and DFS replication for file serving across the organisation. The estimated data volume is over 2 TB, and will grow at approximately 20% annually. The idea is to set up a DFS file server in each building and use DFS so that all the contents stay replicated over the 100-Mbps link.

    We are now considering backup hardware and strategies. We are Dell customers and, after browsing the online Dell catalogue, I can see a number of backup hardware options. My main doubts are the following:

    - Would you go for a tape library, disk backup, or are there other options worth considering?
    - Would you perform batch backups (i.e. nightly), or would you use continuous backup (i.e. while users are working)?
    - Would you use a dedicated backup server to which the tape library (or any other backup device) is attached, or is there some other alternative way of doing things?

    My experience with backup hardware and overall setup is limited, so I appreciate any good piece of advice that you may have. Thanks.

    Read the article

  • Multiple Internet connections, multiple networks and split access in Linux

    - by Swapneel Patnekar
    I am having trouble setting up multiple internet connections for split access in Linux. We have three internet connections from three different ISPs, and we want to configure our Linux gateway machine so that our three internal networks 10.2.1.0/24, 192.168.20.0/24 and 192.168.2.0/24 use ISP1, ISP2 and ISP3 respectively, in a split-access manner. Outlined below are the layout/settings.

    Interfaces of the Linux gateway connected to the routers:

        eth0: 10.1.1.2     <----> 10.1.1.1     (internal interface of ADSL router) [ISP1]
        eth1: 192.168.15.2 <----> 192.168.15.1 (internal interface of 3G router)   [ISP2]
        eth3: 192.168.1.2  <----> 192.168.1.1  (internal interface of ADSL router) [ISP3]

    Kindly note that none of the interfaces on the Linux gateway has a public static IP address. The routers of ISP1 and ISP2 get assigned a dynamic public IP address when connected to the Internet; the router of ISP3 has been assigned a public static IP address.

    Interfaces of the Linux gateway connected to a switch:

        eth4:   10.2.1.1     (LAN interface for ISP1)
        eth4:0  192.168.20.1 (LAN interface for ISP2)
        eth4:1  192.168.2.1  (LAN interface for ISP3)

    eth4:0 and eth4:1 are virtual interfaces, with eth4 being the physically connected interface. Based on http://linux-ip.net/html/adv-multi-internet.html I've set the following routes:

        ip route flush table 4
        ip route show table main | grep -Ev ^default | while read ROUTE ; do
            ip route add table 4 $ROUTE
        done
        ip route add table 4 default via 192.168.15.1
        ip rule add fwmark 4 table 4
        ip route flush cache

    Additionally, I am using the following iptables rules to mark and route packets as per the guide mentioned above: http://pastebin.com/KzWHFGJA

    At this point, computers in the 192.168.2.0/24 network are successfully able to reach the Internet through ISP3, but 192.168.20.0/24 and 10.2.1.0/24 are unable to access the Internet through ISP1 and ISP2 respectively. Any inputs will be much appreciated!
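
    For reference, the marking side of the technique is supposed to look roughly like the sketch below (my own minimal version, not the actual pastebin contents):

        # mark traffic from the ISP2 LAN so the "fwmark 4 -> table 4" rule catches it
        iptables -t mangle -A PREROUTING -i eth4 -s 192.168.20.0/24 -j MARK --set-mark 4
        # each source network needs its own mark, table and default route, e.g. for ISP1:
        #   ip route add table 3 default via 10.1.1.1
        #   ip rule add fwmark 3 table 3
        #   iptables -t mangle -A PREROUTING -s 10.2.1.0/24 -j MARK --set-mark 3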

    Read the article

  • How can I run Gnome or KDE locally in Cygwin?

    - by John Peter Thompson Garcés
    Apparently it is possible to do this using Cygwin Ports, as can be seen in screenshots. I followed this how-to to get apt-cyg and the ports set up, and I used it to install gnome-session. The same how-to supposedly gives the commands needed to run Gnome or KDE, but whenever I try to run Gnome, a blank X window pops up and then quickly disappears. Here is the terminal output:

        $ startx /usr/bin/dbus-launch gnome-session
        xauth: file /home/jpthomps/.serverauth.4168 does not exist

        Welcome to the XWin X Server
        Vendor: The Cygwin/X Project
        Release: 1.10.3.0
        OS: Windows 7 Service Pack 1 [Windows NT 6.1 build 7601] (WoW64)
        Package: version 1.10.3-12 built 2011-08-22

        XWin was started with the following command line:
        /usr/bin/X :0 -auth /home/jpthomps/.serverauth.4168

        (II) xorg.conf is not supported
        (II) See http://x.cygwin.com/docs/faq/cygwin-x-faq.html for more information
        LoadPreferences: /home/jpthomps/.XWinrc not found
        LoadPreferences: Loading /etc/X11/system.XWinrc
        LoadPreferences: Done parsing the configuration file...
        winDetectSupportedEngines - DirectDraw installed, allowing ShadowDD
        winDetectSupportedEngines - Windows NT, allowing PrimaryDD
        winDetectSupportedEngines - DirectDraw4 installed, allowing ShadowDDNL
        winDetectSupportedEngines - Returning, supported engines 0000001f
        winSetEngine - Using Shadow DirectDraw NonLocking
        winScreenInit - Using Windows display depth of 32 bits per pixel
        winFinishScreenInitFB - Masks: 00ff0000 0000ff00 000000ff
        Screen 0 added at virtual desktop coordinate (0,0).
        MIT-SHM extension disabled due to lack of kernel support
        XFree86-Bigfont extension local-client optimization disabled due to lack of shared memory support in the kernel
        (II) AIGLX: Loaded and initialized /usr/lib/dri/swrast_dri.so
        (II) GLX: Initialized DRISWRAST GL provider for screen 0
        winPointerWarpCursor - Discarding first warp: 637 478
        (--) 5 mouse buttons found
        (--) Setting autorepeat to delay=500, rate=31
        (--) Windows keyboard layout: "00000409" (00000409) "US", type 4
        (--) Found matching XKB configuration "English (USA)"
        (--) Model = "pc105" Layout = "us" Variant = "none" Options = "none"
        Rules = "base" Model = "pc105" Layout = "us" Variant = "none" Options = "none"
        winBlockHandler - pthread_mutex_unlock()
        winProcEstablishConnection - winInitClipboard returned.
        winClipboardProc - DISPLAY=:0.0
        winClipboardProc - XOpenDisplay () returned and successfully opened the display.
        xinit: XFree86_VT property unexpectedly has 0 items instead of 1
        xinit: connection to X server lost

        waiting for X server to shut down
        winClipboardProc - winClipboardFlushWindowsMessageQueue trapped WM_QUIT message, exiting main loop.
        winClipboardProc - XDestroyWindow succeeded.
        winClipboardProc - Clipboard disabled - Exit from server
        winDeinitMultiWindowWM - Noting shutdown in progress
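
    One variant I still want to try, as an untested sketch: start the X server on its own, then launch the session against the already-running display, instead of going through startx/xinit (which is where the connection is lost above):

        startxwin &                                  # Cygwin's X server in multiwindow mode
        export DISPLAY=:0.0
        dbus-launch --exit-with-session gnome-session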

    Read the article

  • postfix says mail sent ok, message does not arrive in ISP's inbox? no reject in log?

    - by Nick
    When I send a test message from my mail server to my @bellsouth.net address, the Postfix log shows it was sent OK, but the message never arrives in my BellSouth inbox. Shouldn't I get a failure notice or a bounce if AT&T is blocking the messages? I'm trying to troubleshoot why some customers aren't getting emails, but if there's nothing in mail.log to say a message was rejected, how do I know which messages were delivered successfully? The log shows:

        Feb 27 09:02:36 MyHOSTNAME postfix/pickup[26175]: D53A72713E5: uid=0 from=<root>
        Feb 27 09:02:36 MyHOSTNAME postfix/cleanup[26487]: D53A72713E5: message-id=<[email protected]>
        Feb 27 09:02:36 MyHOSTNAME postfix/qmgr[5595]: D53A72713E5: from=<[email protected]>, size=878, nrcpt=1 (queue active)
        Feb 27 09:02:37 MyHOSTNAME postfix/smtp[26490]: D53A72713E5: to=<[email protected]>, relay=gateway-f1.isp.att.net[204.127.217.16]:25, delay=0.57, delays=0.11/0.03/0.23/0.19, dsn=2.0.0, status=sent (250 ok ; id=20120227140036M0700qer4ne)
        Feb 27 09:02:37 MyHOSTNAME postfix/qmgr[5595]: D53A72713E5: removed

    The AT&T server accepted the message, right? I happen to have an AT&T/BellSouth email address, but I don't have an account with every ISP we send to. I need some way of knowing whether a message is getting to its destination or not. Is there any setting in my main.cf file that would affect whether or not we get reject/bounce notices?
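
    Since the remote MTA answered "250 ok", the drop is happening inside AT&T (a silent discard or spam-foldering), so one thing I want to rule out is a blacklist listing of my server's public IP. A sketch, with 203.0.113.5 standing in for the real address:

        # DNSBL lookups reverse the IP's octets; any A-record answer means "listed"
        dig +short 5.113.0.203.zen.spamhaus.org
        dig +short 5.113.0.203.bl.spamcop.net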

    Read the article

  • MediaWiki migrated from Tiger to Snow Leopard throwing an exceptions

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6, installed MacPorts, and installed the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc., all the same versions as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, I get the following exception thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else got MediaWiki 1.15 working on OS X 10.6 with MacPorts? Is there anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?
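
    The NULL being passed to Language::loadLocalisation() in frame #0 makes me suspect the wiki's language code is never set before the OpenID extension asks for its localized messages, so my first check is LocalSettings.php (a sketch, not a confirmed fix):

        grep -n 'wgLanguageCode\|OpenID' LocalSettings.php
        # expected: $wgLanguageCode = "en"; defined *before* the
        # require_once line for extensions/OpenID/OpenID.setup.php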

    Read the article

  • Port forwarding on D-Link DIR-615 super-slow, useless

    - by Jaroslav Záruba
    Hello. I have replaced my old router with a DIR-615 from D-Link, and now port forwarding is so slow that it makes the router practically useless for requests coming from outside my network. Accessing the router itself (its admin UI) from outside works without any issues, no delay whatsoever. But when I try to access a service residing on any of the computers in my network from outside, the requests take minutes and minutes. (E.g. I can see the source of my GWT app's main page, but loading the additional CSS and JS files takes years.) If anyone could recommend any further diagnostics I should do to figure out what is happening, it would be great. A few notes:

    - it happens with multiple services (a web app on Tomcat, viewing a directory index via Apache)
    - it makes no difference whether the service is hosted on a wired or wireless PC
    - accessing the service on localhost works fine, as does any 'inner' communication
    - turning off the firewall on the target PC makes no difference either (makes sense)
    - when I replace this router with the old one (both 192.168.1.1), everything works fine
    - I see nothing suspicious in the router's log
    - I believe I have the latest firmware (4.11)
    - the DIR-615 sucks; it already died once completely

    Regards
    Jarda Z.
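
    One theory I can't yet test myself: large assets stalling while small pages load fine is a classic path-MTU symptom, and a router swap can introduce one. From an outside host I'd try (a sketch; IP and port are placeholders):

        time curl -so /dev/null http://203.0.113.10:8080/   # hypothetical forwarded service
        ping -M do -s 1472 203.0.113.10                     # 1472 + 28 header bytes = 1500; failures here point at MTU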

    Read the article

  • Software mirroring (RAID1) versus "Fake Raid" for new Windows 7 install

    - by kquinn
    I've just ordered two new hard drives for my main desktop and a copy of Windows 7 Professional 64-bit. I'd like to do a clean install of Win7 onto the new drives (leaving my old XP Pro boot partition around for a while in case something goes disastrously wrong, etc.). I want to have them set up in mirrored (RAID-1) mode. My understanding is that Win7 Pro can do software mirroring, but can I set this up directly at install time? If so, how? Note that I'd like the disk to be split into three partitions (OS / apps & data / bulk data), all of which should be mirrored.

    Would it be better (more reliable or faster) to use my motherboard's hardware RAID support? My motherboard is an older nVidia nForce 680i SLI, which is not the most stable of motherboards, and I'm not sure how trustworthy its RAID1 configuration might be (or if Win7 could even detect and install onto a hardware-mirrored volume).

    Also, the performance characteristics of RAID1 are rather different than RAID0 or RAID5, and I'm wondering if Win7's software mirroring might actually be faster than hardware RAID1 (for example, I'm more of a Unix admin when I have to wear the sysadmin hat, and I've had great success deploying ZFS; most hardware RAID1 implementations have to read both disks and compare results to look for data errors, but ZFS can read from only one disk in the mirror and just use the built-in checksum, meaning it can have up to 2x the number of reads in-flight, as long as there's no data corruption).

    Edit: Okay, my question about whether Windows 7 can do software mirroring has been answered, and it can. I'm still unsure whether Windows software RAID or my motherboard's hardware "fake RAID" function is a better choice, though. Remember, I'm only interested in mirroring - not the more complicated striping or parity operations that generally show the poor performance of crappy motherboard RAID solutions.
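
    From what I can tell, Windows software mirroring is set up after installation rather than from Setup itself; the diskpart flow would be roughly this sketch (assuming Windows lands on disk 0 and disk 1 is the empty twin):

        diskpart
        DISKPART> select disk 0
        DISKPART> convert dynamic
        DISKPART> select disk 1
        DISKPART> convert dynamic
        DISKPART> select volume C
        DISKPART> add disk=1

    (convert dynamic is needed because Windows mirroring only works on dynamic disks; add disk attaches the mirror to the volume in focus, repeated for each of the three volumes. The same operations exist in Disk Management as "Convert to Dynamic Disk" and "Add Mirror...".)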

    Read the article

  • Cyrus: In practical terms, how do end users administer their shared mailboxes?

    - by Nick
    Let's say we have four customer service reps: Billy, Bob, Joe, and Tom. Tom is the department manager. There's a shared Customer Service mailbox on the Cyrus server that they all have access to. Tom, as the manager, also has administrative privileges for the shared mailbox.

    They decide they want to create subfolders a certain way, and Tom creates them. They're all running Thunderbird, so Tom right-clicks the main folder and chooses "New Subfolder". Now Tom has the subfolders he needs and the other reps have... nothing! Because Cyrus created the subfolders giving Tom "Full Access" permissions, and everyone else gets no access.

    So how does Tom give the other reps in his department access to the new folders? As far as Cyrus is concerned, Tom has permission to grant others access to his new mailboxes - but as far as I can tell, there's no option in Thunderbird for granting mailbox permissions.

    An IT staff member should not have to receive a support request every time someone wants to add a subfolder to a shared mailbox. That's why we make certain users into mailbox admins in the first place! But asking (non-technical) users to SSH into an IMAP server to run cyradm seems like a bad idea too. Certainly someone has found a solution for this dilemma. Perhaps a Thunderbird extension for setting Cyrus permissions? Or something like umask that forces subfolders to have identical permissions to their parents on creation?

    And related, what about Sieve configuration? Is there any way that can be done from the client machine too?

    Thanks, Nick
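
    For reference, the server-side operation Tom would need is a one-liner per user in cyradm; a sketch (the mailbox name and ACL string here are illustrative, not our real hierarchy):

        cyradm --user tom localhost
        localhost> setaclmailbox "shared.customerservice.NewFolder" billy lrs
        localhost> setaclmailbox "shared.customerservice.NewFolder" bob lrs

    ("lrs" is lookup/read/seen, i.e. read-only.) The open question is exactly as stated above: how to expose this through the mail client instead of a shell.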

    Read the article

  • Skip Corrupt Revisions During SvnAdmin Load

    - by cisellis
    I have a dump file that I am generating from VSS with the use of the VSS2SVN script. I've tested the generated dump file before, and some of the revisions are corrupt for one reason or another (binary data or long path strings seem to be the main culprits). This is fine.

    In the past I have used svndumpfilter to split the dump file, remove the corrupt revisions, and continue to load the repository. It worked, but took a lot of manual effort to start the load, hit the bad revision, split the dump file, continue loading the repo, etc. This dump file is pretty large (~5 GB) and takes several hours to load.

    I think I know the answer to this, but is there any way to simply tell svnadmin load to keep going and skip corrupt revisions? I know how to verify, back up, etc. the dump file and don't need any of that. I don't care about recovering corrupt revisions. I just want to start the load, walk away, and not worry about checking it every few hours to manually remove the corrupt revisions. Is that possible? Thanks.
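
    The closest I've come to automating my current manual process is scripting the split at revision boundaries; a rough sketch (r1234 stands in for a known-bad revision, and later chunks may need the dump-format header lines from the top of the file prepended before they will load):

        # dump records begin with "Revision-number: N"; cut the file around the bad one
        csplit -z vss.dump '/^Revision-number: 1234$/' '/^Revision-number: 1235$/'
        svnadmin load /path/to/repo < xx00   # everything before r1234
        svnadmin load /path/to/repo < xx02   # everything after it (xx01 holds the bad revision)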

    Read the article

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello. At the moment I have one Ubuntu 9.10 server running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have two other servers with exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. I also tend to find that if people are downloading a large amount from the file server, no-one can access their email - especially in the morning when everyone is signing in at once.

    Would it be more beneficial for me to have all three servers running the same services, doing the same thing, in some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about such a setup where three servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Firefox 29 - how do I delete history entries visited fewer than x times

    - by lousyuser
    Context: I've been using my Firefox profile for a couple of years now, and my history file has naturally become huge. I have Firefox Sync set up between my main desktop PC and my laptop.

    Hardware configs:

    - PC: i5-3450, 8 GB DDR3 RAM, Crucial M4 128 GB SSD
    - laptop: Pentium SU4100, 4 GB DDR3 RAM, WD 5400 rpm HDD

    Accessing history entries when typing into the Awesome Bar on my desktop takes quite a long time despite the decent config, and the laptop is even slower. The experience is quite unresponsive. I figured that if I cleared the history up a little, I might avoid having to create a new profile to speed things up.

    The question itself, to illustrate: is there a way to delete all history entries that have been visited fewer than x (let's say 5) times, where at the same time the most recent visit is fewer than y (let's say 120) days old?

    As far as I know, the history file is some kind of SQL database, but I'm not really sure how the data is saved, whether there's a "safe way" to edit it, and what the query to do what I need would look like. Thanks in advance for any help.

    I kept browsing through previous SuperUser questions to see if I could find relevant information:

        "In my Firefox profile directory, there is a file named places.sqlite. Opening it with sqlite reveals (amongst others) the tables moz_places and moz_historyvisits. It seems that moz_historyvisits uses the primary key of moz_places to refer to the URLs."

    As I'm unfamiliar with databases, I don't really understand how the two tables mentioned in the quote are related. [screenshot of a part of the tables] I've noticed the visit_count column is in a standard format, making it easy to work with. The last_visit_date column looks encrypted to my naked eye, but I can't see in which way. Hope that helps; I'm at my wits' end.
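
    To make the question concrete, this is the shape of query I believe I'm asking for, as a sketch (run against a copy of places.sqlite with Firefox closed; from what I've read, last_visit_date is microseconds since the Unix epoch, not encrypted):

        sqlite3 places.sqlite <<'SQL'
        -- x = 5 visits, y = 120 days; visit rows reference moz_places via place_id,
        -- so they are deleted first
        DELETE FROM moz_historyvisits WHERE place_id IN (
          SELECT id FROM moz_places
          WHERE visit_count < 5
            AND last_visit_date > (strftime('%s','now') - 120*86400) * 1000000
        );
        DELETE FROM moz_places
        WHERE visit_count < 5
          AND last_visit_date > (strftime('%s','now') - 120*86400) * 1000000;
        SQL

    (Flip the > to < if the intent was "older than 120 days" rather than "within the last 120 days".)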

    Read the article

  • Is there any way to send Outlook meeting requests from a non-default calendar?

    - by rbeier
    Hi, we have a user with two Outlook accounts: [email protected] is of type Exchange, and [email protected] is of type IMAP/SMTP. Both are actually on our Exchange server, but since an Outlook profile can only have one Exchange account, the second one is set up as IMAP. The user would like to send a meeting request from her xyz.com account, so that the "from" address appears as [email protected]. Unfortunately that doesn't work: if she creates the meeting in her xyz.com calendar, the meeting request still goes out through her Exchange account, [email protected]. The meeting request's compose window has an Account dropdown below the Send button, but it has no effect. Before she sends the invitation, a warning appears: "Responses to this meeting request will not be tallied because this meeting is not in your main Calendar folder. Is this OK?" Is there any workaround for this? We're using Outlook 2007 and Exchange 2003 SP2. Thanks, Richard

    Read the article

  • How to restrict Postfix to sending a limited amount of email with Policyd v2?

    - by Shalini Tripathi
    I have installed cluebringer-2.0.7 (Policyd v2) for Postfix and enabled the lines below in Postfix's main.cf, but I cannot see any policy being applied:

        smtpd_recipient_restrictions =
            permit_mynetworks
            permit_sasl_authenticated
            reject_unauth_destination
            check_policy_service inet:127.0.0.1:10031
        smtpd_end_of_data_restrictions = check_policy_service inet:127.0.0.1:10031

    To check further, I enabled logging in Policyd, but it only shows the lines below, and nothing new gets logged when I send new emails:

        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: Process Backgrounded
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Policyd v2 / Cluebringer - v2.0.7
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Initializing system modules.
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: System modules initialized.
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Module load started...
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = AccessControl: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = CheckHelo: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = CheckSPF: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Greylisting: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Quotas: enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Protocol(Postfix): enabled
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: = Protocol(Bizanga): enabled
        [2012/06/12-21:18:50 - 13949] [CBPOLICYD] NOTICE: Module load done.
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: 2012/06/12-21:18:50 cbp (type Net::Server::PreFork) starting! pid(13949)
        [2012/06/12-21:18:50 - 13949] [CORE] NOTICE: Binding to TCP port 10031 on host *
        [2012/06/12-21:18:50 - 13949] [CORE] WARNING: Group Not Defined. Defaulting to EGID '0 10 6 4 3 2 1 0'
        [2012/06/12-21:18:50 - 13949] [CORE] WARNING: User Not Defined. Defaulting to EUID '0'

    Do I need any more settings for Postfix to talk to Policyd? Please help.
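
    To isolate things, the policy daemon can be tested by hand, independent of Postfix; the policy-delegation protocol is plain key=value lines terminated by a blank line (a sketch, with placeholder addresses):

        printf 'request=smtpd_access_policy\nprotocol_state=RCPT\nprotocol_name=SMTP\nsender=test@example.com\nrecipient=test@example.net\nclient_address=192.0.2.1\n\n' | nc 127.0.0.1 10031

    A healthy daemon replies with an "action=..." line. One thing I still need to rule out myself: with permit_mynetworks listed before check_policy_service, mail submitted from the server itself or the local network is permitted before the policy server is ever consulted, which would explain the silent log (and main.cf changes also need a "postfix reload" to take effect).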

    Read the article

  • Trying to understand Wireless N vs Wireless AC

    - by EGHDK
    Whenever a new wireless standard gets approved, you expect faster speeds and longer range. From everything that I've read, AC will only transfer over the 5 GHz band, at up to 3 Gbps. Studying the new AC routers on the market, though, it seems that they transfer over both 5 GHz and 2.4 GHz, and that 5 GHz only transfers at 1.3 Gbps, which isn't what AC is supposed to be. I know there is a difference between what the standard actually says and what products actually do, but is there any reason for this? Are there any other main differences between AC and N? I've heard people discussing AC and saying that it's finally "fixing" what N was supposed to fix; what do they mean by that? Any security benefits? I have seen this image online: will AC really do that? Will that require an AC network card in my laptop for it to actually happen? Lastly, will the router only be able to communicate with AC devices if I have beamforming technology on? I know it's a ton of questions, but most articles online seem to be outdated and don't provide much reliability.
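
    On the 1.3 Gbps vs 3 Gbps point, the arithmetic (standard figures, not from this thread): first-wave AC routers advertise 1.3 Gbps because they run 3 spatial streams at roughly 433 Mbps each (80 MHz channels, 256-QAM), and 3 x 433 Mbps is about 1.3 Gbps. The multi-gigabit numbers in the 802.11ac spec assume 160 MHz channels and up to 8 spatial streams, which early hardware doesn't implement. The 2.4 GHz radio on those routers is an 802.11n radio sold alongside the AC one, not part of 802.11ac itself.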

    Read the article
