Search Results

Search found 28818 results on 1153 pages for 'main loop'.

Page 318/1153

  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:

        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed'
        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = {
            NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache";
            NSSQLiteErrorDomain = 11;
        }
        [2 messages repeated several times]
        Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]

    This entire sequence is repeated many times throughout the log. file(1) said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those:

        blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
        blackl% sqlite3 /tmp/Calendar\ Cache
        SQLite version 3.7.12 2012-04-03 19:43:07
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> pragma integrity_check ;
        *** in database main ***
        Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
        On page 21 at right child: 2nd reference to page 863

    This is followed by a few dozen lines like these:

        rowid <number> missing from index <name>

    and then:

        wrong # of entries in index <name>

    I'm at a bit of a loss as to what to do now; I couldn't find anything on how to fix the errors I found. It would also probably be a good idea to disable CalendarAgent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it). How do I disable CalendarAgent and fix its cache?
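    For what it's worth, the standard salvage path for a "database disk image is malformed" SQLite file is to dump whatever is still readable into a fresh database. A minimal sketch, with the caveat that the launchd plist name is an assumption (check /System/Library/LaunchAgents for the exact CalendarAgent label on your OS X version):

        # stop CalendarAgent so it stops touching the database (plist name assumed)
        launchctl unload /System/Library/LaunchAgents/com.apple.CalendarAgent.plist

        # export everything .dump can read, rebuild it into a new file
        cd ~/Library/Calendars
        sqlite3 "Calendar Cache" .dump | sqlite3 "Calendar Cache.new"

        # keep the corrupt original around and swap in the rebuilt copy
        mv "Calendar Cache" "Calendar Cache.corrupt"
        mv "Calendar Cache.new" "Calendar Cache"

        launchctl load /System/Library/LaunchAgents/com.apple.CalendarAgent.plist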


  • Symbolic link not allowed or link target not accessible

    - by TK Kocheran
    I can't seem to get a symlink working in my Apache VirtualHost, no matter what I try, and I see the following error in the error log:

        Symbolic link not allowed or link target not accessible: /var/www/carddesigner

    I can browse the actual symlink from Linux with no problems whatsoever:

        $ ls -l /var/www | grep "carddesigner"
        lrwxrwxrwx 1 rfkrocktk rfkrocktk 64 2011-02-28 16:52 carddesigner -> /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/

    Additionally, I've made sure that my VirtualHost allows the FollowSymLinks option. /etc/apache2/sites-enabled/000-localhost:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin ##########
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Deny from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            RewriteEngine On
            RewriteLog "/var/log/apache2/mod_rewrite.log"
            RewriteLogLevel 9
        </VirtualHost>

    I can't seem to find any other configuration files that override this and/or prevent symlinks from being loaded. Any ideas? Here are my permissions on the actual referenced files:

        $ ls -l ~/Documents/Projects/Work/carddesigner/build/main
        total 12
        drwxrwxrwx 5 rfkrocktk rfkrocktk 4096 2011-02-28 16:11 advanced
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 core
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 simple

    Seems like the permissions are good to go, right?
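    One thing the ls output above can't show: with FollowSymLinks (and especially SymLinksIfOwnerMatch), Apache's worker user also needs execute (search) permission on every directory between / and the link target, and home directories are frequently mode 700 or 750. A hedged check-and-fix sketch:

        # show owner and mode of every path component leading to the target
        namei -mo /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/

        # any directory missing an execute bit for "other" blocks www-data;
        # the home directory itself is the usual culprit
        chmod o+x /home/rfkrocktk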


  • How do I prevent infinite recursion in X11 start-up process?

    - by chrisaycock
    I wasn't able to run X11 or Terminal after rebooting my Mac. After digging around, I got them to work when I commented out this line in my .cshrc:

        xset b off

    It appears that xset will attempt to launch X11 if it isn't running already, and since X11 will launch the default shell through xterm and thus encounter the xset line above, we get an infinite loop. I would like to keep the above line in my .cshrc. Is there a way to prevent X11 from launching itself?
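    One common guard, sketched here for csh/tcsh since the file in question is a .cshrc (whether this fully matches your startup sequence is an assumption), is to call xset only when a display is already available:

        # in ~/.cshrc: only touch the X server when DISPLAY is already set
        if ( $?DISPLAY ) then
            xset b off
        endif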


  • Flash alternative for iBook Mac?

    - by Hunter Dolan
    I have an old Apple iBook G4 that I decided to hook up to my main TV. I like the setup because I can surf the internet on my TV now. The only thing that I can't seem to do is watch Flash videos. Apparently Flash Player 10 doesn't play nice with the iBook's graphics card's GPU, leaving all the graphics processing to the CPU, which is a disaster. Others suggested downgrading to Flash Player 9; I did that, and YouTube worked fine, but Hulu (the main reason I wanted to hook it up to the TV in the first place) did not. Does anyone know of a Flash alternative or a Flash 10 fix for the iBook? Or even a Hulu client that doesn't require Flash? Here are my iBook's specs:

        Model Name: iBook G4
        Model Identifier: PowerBook6,5
        Processor Name: PowerPC G4 (1.2)
        Processor Speed: 1.2 GHz
        Number Of CPUs: 1
        L2 Cache (per CPU): 512 KB
        Memory: 512 MB
        Bus Speed: 133 MHz
        Boot ROM Version: 4.8.7f1
        Mac OS X Version: 10.5.8

    PS: Don't tell me that I need to buy a new computer. I know that I would have better results with a new computer, but I don't want to buy a new computer just for Hulu.


  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All,

    I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
            Here is an example of a repository with a groups file. Note that the
            groups file should be in the same directory as the rpm packages
            (i.e. /path/to/rpms/comps.xml).

            createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    Lots of output, no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
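    A hedged place to look: yum reads group data from the repomd.xml in the repodata/ directory of whatever baseurl the clients actually use, and with "metadata = repomd" mrepo regenerates that metadata itself, which can silently drop the group file you added under Packages/. Two quick checks (the URL and repo id are placeholders for your setup):

        # does the repomd.xml the client downloads reference a group file at all?
        curl -s http://your-mrepo-server/mrepo/SL6-x86_64/os/repodata/repomd.xml \
            | grep -c 'type="group"'

        # if not, put the comps file at the repo root and regenerate there,
        # i.e. against the directory that holds repodata/, not Packages/
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/
        yum --disablerepo='*' --enablerepo=SL6 clean all && yum grouplist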


  • Two NICs, 2 Internet Connections, 1 Windows Server 2008 R2, Routing help required

    - by PJZ
    Hello,

    I have a Windows 2008 server and 4 other client machines on my home network. I have two internet connections. The main connection is set up with a home router and DHCP on that for all the clients on the network. The secondary connection is just a cable modem which is plugged directly into the server.

        Local Area Connection: this NIC has an external IP and is connected to the cable modem.
        Local Area Connection 2: this NIC has an internal IP (192.168.0.102) and allows access to all the internal computers. It also has internet access via the local router.

    So here lies the problem: I want to use the cable connection on the server for the internet traffic (so that the traffic for server/clients is separated), but I also need to maintain local access. I am wondering how to make it so that all the internet traffic goes via that NIC, because at the moment it goes through the local NIC.

    As a secondary problem, I would also like to forward the connection of one application used by the clients via the server and the cable/server internet, because of poor routing for it on the main connection. This perhaps is something for another question, though.

    Thanks for any help you can offer me.

    Regards
    PJ
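    In broad strokes this is a default-route question: keep the LAN NIC for the 192.168.0.0/24 subnet only and point the default route at the cable modem's gateway. A hedged sketch using Windows' route command (the LAN gateway 192.168.0.1 and the cable gateway 203.0.113.1 are placeholders; read the real values off "route print"):

        :: inspect interfaces and the two competing 0.0.0.0 routes
        route print

        :: drop the default route that points at the home router
        route delete 0.0.0.0 mask 0.0.0.0 192.168.0.1

        :: add a persistent default route out the cable NIC
        route -p add 0.0.0.0 mask 0.0.0.0 203.0.113.1

    The same effect can be had in the GUI by clearing the Default Gateway field in the LAN adapter's IPv4 settings and leaving a gateway only on the cable NIC.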


  • Windows Media Sharing not 'always' being detected by PS3

    - by Ahmad
    I'm having a weird problem with Windows Media Sharing on Windows 7. I have the following hardware in my network:

        PC 1 --- my main PC --- runs Windows 7 Ultimate x64
        PC 2 --- my backup PC --- runs Windows 7 Ultimate x32
        PS3

    PC 1 is my main PC, which has all my data/media on it. PC 2 is a backup PC I have, but I use it like once in 2 months; it has nothing installed on it apart from some very, very basic software.

    The problem is, my PS3 always sees the media sharing service coming from PC 2, but it never sees the media sharing service coming from PC 1 initially. Both PC 1 and PC 2 have the same media sharing configuration (allow everything on all devices on all networks). But when I restart both PCs, the PS3 will only detect PC 2's media sharing service, not PC 1's.

    However, here's the twist. When PC 1 is restarted, and if I view my 'Network' on PC 2, I do see PC 1's media sharing service, and I'm able to play from it too on PC 2. To get my PS3 to also see PC 1's media sharing service, I have to do either of the following 2 things:

        1. Play something from PC 1's media sharing service on PC 2. The PS3 will then magically also detect PC 1's media sharing service.
        2. Go into the Services area on PC 1 and restart the 'Windows Media Player Network Sharing Service'. After this, the PS3 also instantly starts to see PC 1's media sharing service.

    Since my PS3 is like a month old and is properly detecting PC 2's media sharing service, I think the problem is somewhere in the configuration of PC 1's media sharing service. Also, on PC 1 I have Norton Internet Security 2012 installed, but I've disabled it completely, and have also disabled Windows Firewall (on PC 1 only).

    Can someone shed some light onto this?
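    Since restarting the sharing service reliably wakes it up, a stopgap while hunting the root cause is to script that restart (the service's internal name on Windows 7 should be WMPNetworkSvc, but treat that as an assumption and confirm it in services.msc):

        :: run from an elevated prompt, or as a scheduled task
        net stop WMPNetworkSvc
        net start WMPNetworkSvc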


  • DNS Does Not Register at Off-site Locations

    - by Russ Warren
    First of all, let me give you the specifics of our setup:

        - Windows Small Business Server 2008 domain w/ all applicable updates on the DC
        - The DC does DHCP for the main site
        - The DC does DNS for all sites
        - 3 sites, including our headquarters where the DC is located
        - All sites are connected through OpenVPN SSL tunnels terminated by an Untangle box at each site
        - The 2 remote sites use the Untangle box as a DHCP server for their subnet, which assigns the DC as the primary DNS server
        - A collection of Windows XP and Windows 7 workstations connected to the domain

    Here's the issue: all of the workstations at the main site register with the DNS server on the domain controller fine. As they grab an IP from the DHCP server, it updates the DNS server with the new host record. I have 2 systems (each at a different remote site) that fail to register with the DNS server. I've attempted the following troubleshooting steps:

        - Confirmed the network adapter is using the DC as a DNS server
        - Confirmed 2-way traffic is possible between DC and workstation
        - Verified the "Register with DNS server" setting was checked in the adapter properties
        - Attempted ipconfig /registerdns and received no errors

    For the time being, I have set up a DHCP reservation for these systems and manually created a host record. This seems to work fine, but I need a solution for any new systems that go out there.
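    Two hedged things to check, since self-registration depends on more than the DNS server address: a client only registers a connection-specific suffix that matches the zone, and the Untangle DHCP scopes may not be handing out the AD domain as DHCP option 15. From a failing workstation (the domain name below is a placeholder):

        :: does the connection-specific DNS suffix match the AD zone?
        ipconfig /all | findstr /i "suffix"

        :: force a registration attempt, then check the DNS event log on the DC
        ipconfig /registerdns

        :: on the DC: confirm the zone's dynamic-update setting (2 = secure only)
        dnscmd /ZoneInfo yourdomain.local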


  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache.

    This call will start a background process, and it waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run from Nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to true in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes.

    Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and other from within the kernel.
            # More efficient than read() + write(), since the requires transferring data to and from the user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size Limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, freqently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration, allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                #
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
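    Two hedged suspects are worth ruling out, since both serialize PHP requests no matter what jQuery asks for: a single php-cgi/FPM worker behind 127.0.0.1:9100 can only execute one script at a time, and PHP's default file-based session handler holds an exclusive lock per session until the long request finishes (calling session_write_close() early in the long-running script releases it). A quick shell test that takes the browser out of the picture entirely (both URLs are placeholders for your routes):

        # start the long request in the background, then time the status poll;
        # if curl blocks here too, the serialization is server-side, not client-side
        curl -s "http://test.dev/long-running-route" &
        time curl -s "http://test.dev/process-status?process=123"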


  • MySQL fails to start

    - by John Naegle
    I'm running an Ubuntu 12.04 LTS virtual machine. Last week, the VM stopped unexpectedly; now MySQL will not start on the VM. These two events may be related, they may not be.

    When I try to connect:

        $ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    Then:

        $ sudo service mysql start
        start: Job failed to start

    And:

        $ dmesg
        [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser"
        [ 1838.358656] init: mysql main process (18477) terminated with status 1
        [ 1838.358695] init: mysql main process ended, respawning
        [ 1839.269303] init: mysql post-start process (18478) terminated with status 1

    And:

        $ service mysql status
        mysql stop/waiting

    I think this means MySQL is crashing when it starts:

        $ sudo mysqld start
        130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83
        InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        02:51:24 UTC - mysqld got signal 6 ;

    Per the manual, I went to the data directory (/var/lib/mysql) and ran this:

        myisamchk --silent --force */*.MYI

    Then:

        $ sudo mysqld
        ...
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: for more information.
        ...

    Is my database corrupt? What can I do to recover? Re-install MySQL? Something less drastic? I'm fine with losing the database; I just want a working system.
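    The assertion is inside InnoDB, so myisamchk (which only checks MyISAM tables) can't repair it. A hedged sketch of the forced-recovery route the error message links to, followed by the scorched-earth option since losing the data is acceptable (paths assume Ubuntu 12.04's MySQL 5.5 packaging; verify on your system):

        # 1) try to get mysqld up read-only: in /etc/mysql/my.cnf under [mysqld],
        #    add "innodb_force_recovery = 1" and raise it toward 6 only if needed
        sudo service mysql start
        mysqldump --all-databases > /root/all-databases.sql   # salvage what you can

        # 2) if recovery fails and the data is expendable, reinitialize outright
        sudo service mysql stop
        sudo mv /var/lib/mysql /var/lib/mysql.bad
        sudo mkdir /var/lib/mysql && sudo chown mysql:mysql /var/lib/mysql
        sudo mysql_install_db --user=mysql
        sudo service mysql start

    Remember to remove innodb_force_recovery afterwards; set above 3 it blocks writes. Ubuntu also keeps a maintenance login in /etc/mysql/debian.cnf that will need recreating if you wipe the data directory.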


  • New AD-DC in a new Site is refusing cross-site IPv4 connections

    - by sysadmin1138
    We just added a new Server 2008 (SP2) domain controller in a new site, our first such config. It's over a VPN gateway WAN (10Mbit). Unfortunately it is displaying a strange network symptom: connections to the SMB ports (TCP/139 and TCP/445) are being actively refused... if the connection is coming in on pure IPv4. If the incoming connection comes by way of the 6to4 tunnel, those connections establish and work just fine.

    It isn't the firewall, since this behavior can be replicated with the firewall turned off. Also, it's actually issuing RST packets to connection attempts, something that only happens with Windows Firewall if there is a service behind a port and the service itself denies access. I doubt it's some firewall device on the wire, since the server this one replaced was running Samba and access to it from our main network functioned just fine.

    I'm thinking it might have something to do with the subnet lists in AD Sites & Services, but I'm not sure. We haven't put any IPv6 addresses in there, just v4, and it's the v4 connections that are being denied. Unfortunately, I can't figure this out. We need to be able to talk to this DC from the main campus. Is there some kind of site-based SMB-level filtering going on? I can talk to the DCs on campus just fine, but that's over that v6 tunnel. I don't have access to a regular machine on that remote subnet, which limits my ability to test.
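    A couple of hedged, low-level checks that can separate "filtered on the wire" from "refused by the box" (addresses below are placeholders):

        :: on the new DC: confirm SMB is actually listening on the v4 addresses
        netstat -ano | findstr ":445 :139"

        :: from campus: a plain-IPv4 TCP probe; an immediate refusal matches the
        :: RST behavior you describe, while a timeout would suggest filtering
        telnet 10.1.2.3 445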


  • USB device goes to "Unknown device" randomly

    - by ILovePaperTowels
    We have a kiosk that uses a bio scanner from M2SYS (a USB device). It scans your palm to recognize you. Every so often, maybe 1-3 times a day, the bio scanner will become an unknown device. We are unable to see any patterns or commonalities. When we unplug it and plug it back in, it becomes available again.

    We have custom software that uses the bio scanner's software to communicate with it. We've added a crap load of logging on everything, but there doesn't seem to be any pattern to when the thing shuts off. We have these devices deployed to multiple locations (100+) and they are all seeing the issue, but we can not reproduce it here at the main office. I've evaluated the software and I don't see anything. I'm thinking it's a driver or hardware issue (but we can't reproduce the issue here in the main office), or maybe environmental interference of some sort, like from scan guns, automatic doors, microwaves or something else.

    Any ideas would be welcome. I'm looking for possible causes of the USB device becoming unknown, or ways to figure out what the cause is.

        - No other USB devices have this issue, only the scanner
        - We've contacted the manufacturer and they blame our software
        - We're getting help from Microsoft, but they haven't found anything
        - OS is embedded XP

    http://www.m2sys.com/palm-vein-reader.htm
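    For the remote sites, it may help to script the equivalent of the unplug/replug with devcon, the command-line Device Manager from the Windows Driver Kit; whether your embedded XP image can run it, and the scanner's VID/PID, are assumptions to verify:

        :: find the scanner's hardware ID first
        devcon find USB\*

        :: cycle the device the way a physical replug does (ID is a placeholder)
        devcon restart "USB\VID_1234&PID_5678*"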


  • Help: Setup Outgoing Mail Server Only for Multiple Domains Using Postfix?

    - by user57697
    I want an outgoing mail server ONLY, for multiple domains. I plan to use Postfix, as that seems to be the easiest to set up, being very new to Ubuntu/Linux. The setup I plan to have is as follows:

        1. I want to use virtual domains with Postfix, i.e. my multiple websites must be able to send email from each of their respective domains: [email protected] is sent from my domain1.com website and [email protected] is sent from my domain2.com website.
        2. This is an outgoing mail server only, i.e. I don't want any returned (or otherwise) email sent to my Postfix server. Incoming mail is handled by Google Apps/Gmail and is already set up.
        3. I have already set my SPF record to designate my MX records and Postfix server IP as valid email servers, i.e. "v=spf1 mx include:mydomain.com -all"

    How can I achieve this? I'm frankly a little confused, so some help would be appreciated. I attempted to follow these guides, but it doesn't seem right (and it isn't clear what all the settings mean):

        How to configure Postfix virtual domains
        http://www.sysdesign.ca/guides/postfix_virtual.html

        Postfix Installation
        *.slicehost.com/2008/7/29/postfix-installation

        Basic Postfix settings (main.cf)
        *.slicehost.com/2008/7/31/postfix-basic-settings-in-main-cf

    I can only post one link, but those articles above can be found by replacing * with articles in the hyperlink.
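    For a send-only box, the full virtual-domain machinery from those guides may not be needed: Postfix will submit mail for any From: domain your websites use, as long as it accepts mail only from the machine itself and delivers nothing locally. A hedged sketch using postconf (the hostname is a placeholder; DNS/SPF must already point at this box, as described above):

        # listen only on loopback so the box can never be an open relay
        sudo postconf -e 'inet_interfaces = loopback-only'
        sudo postconf -e 'mynetworks = 127.0.0.0/8 [::1]/128'

        # accept no inbound/local delivery; Google Apps owns the inboxes
        sudo postconf -e 'mydestination ='

        # identify ourselves consistently for HELO/EHLO
        sudo postconf -e 'myhostname = mail.domain1.com'

        sudo service postfix restart

    Each website then just hands messages to localhost:25 (or the sendmail binary) with its own From: address.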


  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting the vhost configuration to work correctly. Below is the vhost conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /hosting/Client/site.com/www
            ServerName site.com
            ServerAlias www.site.com
            <Directory "/hosting/Client/site.com/www">
                Options +Indexes +FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
            DirectoryIndex index.html
        </VirtualHost>

    There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 Forbidden error. The www-data group is the group on the www folder, which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default "It works!" page, so I know that it's working. The error log says "client denied by server configuration".

    apache2ctl -S dump:

        nick@server:~$ apache2ctl -S
        /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
        VirtualHost configuration:
        *:80    is a NameVirtualHost
                default server site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                        alias www.site.com
                port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                        alias www.site.com
        ServerRoot: "/etc/apache2"
        Main DocumentRoot: "/var/www"
        Main ErrorLog: "/var/log/apache2/error.log"
        Mutex watchdog-callback: using_defaults
        Mutex default: dir="/var/lock/apache2" mechanism=fcntl
        Mutex mpm-accept: using_defaults
        PidFile: "/var/run/apache2.pid"
        Define: DUMP_VHOSTS
        Define: DUMP_RUN_CFG
        Define: ENALBLE_USR_LIB_CGI_BIN
        User: name="www-data" id=33 not_used
        Group: name="www-data" id=33 not_used

    Output of namei -mo /hosting/Client/site/www/index.html:

        f: /hosting/Client/site.com/www/index.html
        drwxr-xr-x root root     /
        drwxr-xr-x root root     hosting
        drwxr-xr-x root root     Client
        drwxr-xr-x nick www-data site.com
        drwxr-xr-x nick www-data www
        -rw-rwxr-x nick www-data index.html
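    Worth checking before anything else: Apache 2.4 replaced the 2.2-era Order/Allow access control with mod_authz_host's Require directive, and 2.2 syntax without the mod_access_compat module loaded produces exactly this "client denied by server configuration" message. A sketch of the 2.4-style block (same directory, only the access-control lines change):

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            AllowOverride None
            # 2.4 replacement for "Order allow,deny" + "Allow from all"
            Require all granted
        </Directory>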


  • need help with logparser on iis logs

    - by user36440
    I am using Log Parser 2.2 and need a script that does two things:

        1. find URLs that contain a value within the referer
        2. loop over 30 folders

    This is what I have so far:

        logparser -rpt:-1 "select count(*) INTO feeds.txt from u_ex100302*.log where to_lowercase(cs(Referer)) like '/feeds%'"
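    A hedged sketch of the looping half as a cmd batch file (the folder layout under the IIS log root is an assumption; note the doubled percent signs that batch files require):

        @echo off
        rem run the query once per site folder, writing one result file per folder
        for /d %%F in (C:\inetpub\logs\LogFiles\W3SVC*) do (
            logparser -rpt:-1 "select count(*) INTO %%F\feeds.txt from %%F\u_ex100302*.log where to_lowercase(cs(Referer)) like '/feeds%%'"
        )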


  • Automate GUI tasks?

    - by overtherainbow
    I have a Windows application which is deadware, and it lacks the option of exporting data to a file. The only way to extract data is to copy each line into the clipboard and paste it into an editor. As a work-around, I'm thinking of recording the action once and looping through it until it reaches the last line in the application. I know there are quite a few such utilities for Windows, so I'd appreciate it if you could recommend one for this task. Thank you.


  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology considering that:

        - 6 x Exchange 2010 Standard licenses
        - 2 x separate locations that are supposed to provide redundancy in case of link problems
        - 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security
        - Multiple locations worldwide using those Exchange servers; most locations will be connected with a VPN tunnel (the ones hosting Exchange for sure)

    I was thinking something like this:

    Location MAIN (about 70-100 people):

        - 2x TMG 2010 in NLB
        - 1x Exchange 2010 CAS/HUB role
        - 2x Exchange 2010 Mailbox role (Active + Passive)

    Location SUPPORT (about 20 people):

        - 2x TMG 2010 in NLB
        - 1x Exchange 2010 CAS/HUB role
        - 2x Exchange 2010 Mailbox role (Active + Passive)

    Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations and more coming up (not big ones, but like 10+ people per location). I do know that CAS/HUB is a single point of failure (and no NLB), but I simply lack more licenses to do some redundancy on that. What do you think about this approach? What would be a better approach according to you?


  • ssh use with netcat to forward connections via bastion host to inside machine

    - by Registered User
    Hi,

    I have a server in a corporate data centre whose sysadmin is me. There are some virtual machines running on it. The main server is accessible from the internet via SSH. There are some people within the LAN who access the virtual machines, whose IPs on the LAN are:

        192.168.1.1
        192.168.1.2
        192.168.1.3
        192.168.1.4

    The main machine, which is a bastion host for the internet, has IP 192.168.1.50, and only I have access to it. I have to give people on the internet access to the internal machines whose IPs I mentioned above. I know a tunnel is a good way, but the people are fairly non-technical and do not want to get into tunnels and similar jargon. So I came across a solution as explained on this link. On the gateway machine, which is 192.168.1.50, in the .ssh/config file I add the following:

        Host securehost.example.com
            ProxyCommand ssh [email protected] nc %h %p

    Now my question is: do I need to create separate accounts on the bastion host (gateway) for those users who can SSH to the inside machines, and do I need to make the above entry in each of those users' .ssh/config, or where exactly do I put the .ssh/config on the gateway? Also, in "ssh [email protected]", user1 exists only on the inside machine 192.168.1.1 and not on the gateway; is that the right syntax? Because the internal machines are accessible to the outside world as:

        site1.example.com
        site2.example.com
        site3.example.com
        site4.example.com

    But SSH is only for example.com and only one user. So how should I go about .ssh/config?

        1. What is the correct syntax for ProxyCommand in the gateway's .ssh/config: should I use "ProxyCommand ssh [email protected] nc %h %p", or "ProxyCommand ssh [email protected] in nc %h %p"?
        2. Should I create new user accounts on the gateway, or is adding them to AllowUsers in sshd_config sufficient?
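    For what it's worth, the ProxyCommand stanza normally lives in each outside user's own ~/.ssh/config on their workstation, not on the gateway; the gateway just needs an account they can log into. A hedged sketch of a client-side config (the gateway account name is a placeholder):

        # ~/.ssh/config on the outside user's workstation
        Host site1.example.com site2.example.com site3.example.com site4.example.com
            # hop through the bastion; "gatewayuser" must exist on example.com
            ProxyCommand ssh [email protected] nc %h %p

    With that in place, "ssh [email protected]" first connects to the bastion and then uses nc to reach the inside machine, so user1 only has to exist on 192.168.1.1, and your first syntax (without the stray "in") is the correct one.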


  • Problems with X11GraphicsDevice on Suse 11

    - by Daniel
    Hi,

    On servers running Suse 11 I'm experiencing hangups in sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) when connecting via Citrix (and setting DISPLAY to localhost:11.0). Running exactly the same code in exactly the same environment, except through Exceed (with DISPLAY set to my workstation's IP), it runs like clockwork.

        - The error is not intermittent; it happens every time
        - Reinstalling the OS does not help
        - I can not reproduce it on Suse 10

    This is what the main thread stack looks like:

        [junit] "main" prio=10 tid=0x0000000040112000 nid=0x6acc runnable [0x00002b9f909ae000]
        [junit]    java.lang.Thread.State: RUNNABLE
        [junit]     at sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method)
        [junit]     at sun.awt.X11GraphicsDevice.makeDefaultConfiguration(X11GraphicsDevice.java:208)
        [junit]     at sun.awt.X11GraphicsDevice.getDefaultConfiguration(X11GraphicsDevice.java:182)
        [junit]     - locked <0x00002b9fed6b8e70> (a java.lang.Object)
        [junit]     at sun.awt.X11.XToolkit.<clinit>(XToolkit.java:92)
        [junit]     at java.lang.Class.forName0(Native Method)
        [junit]     at java.lang.Class.forName(Class.java:169)
        [junit]     at java.awt.Toolkit$2.run(Toolkit.java:834)
        [junit]     at java.security.AccessController.doPrivileged(Native Method)
        [junit]     at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:826)
        [junit]     - locked <0x00002b9f94b8ada0> (a java.lang.Class for java.awt.Toolkit)
        [junit]     at java.awt.Toolkit.getEventQueue(Toolkit.java:1676)
        [junit]     at java.awt.EventQueue.invokeLater(EventQueue.java:954)
        [junit]     at javax.swing.SwingUtilities.invokeLater(SwingUtilities.java:1264)
        ...

    Has anyone experienced something similar? Could this be a problem in Suse 11's display handling? I'm thankful for any input at this point - I'm fresh out of ideas :)
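    If the code under test doesn't actually need to render anything (the stack suggests the Toolkit is only being initialized as a side effect of SwingUtilities.invokeLater), a hedged workaround is to run the JVM headless so AWT never touches X11GraphicsDevice at all; whether your tests tolerate headless mode is an assumption:

        # for a direct invocation
        java -Djava.awt.headless=true -cp build/classes MyTestSuite

        # for ant-driven junit runs in the same JVM
        ANT_OPTS=-Djava.awt.headless=true ant test

    If the junit task forks its own JVM, pass the property with a <jvmarg> in the build file instead.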


  • How can I parse/ transform text log data before it gets captured in SCOM 2007 R2?

    - by Abs
    I'm pretty much a noob with System Center Operations Manager 2007, and I'm probably missing something pretty basic, but I'm stumped anyway. We're setting up monitoring on some of our servers, and we'd like to capture data from some plain text log files (e.g. DNS debug logs, DHCP logs).

    It looks to me like I can set up a generic text file monitoring rule and get events captured into the main Ops Manager database, but my understanding is that the whole line of text from the plain text log gets captured as one field. In an ideal world, we'd be able to parse or transform that log file data to make it easier to query later. Is this possible? Is it easy? Do I have to buy expensive 3rd-party software to do it?

    One more thing: it would be even better if there was a way to stuff this data into the Audit Collection Services (ACS) database instead of the main one, but I'll take what I can get. Any help would be greatly appreciated.


  • Outlook 2007, how to stop replies from being sent if they meet criteria

    - by CChriss
    I have my college's email account auto-forwarded (they don't offer POP3 or IMAP) to my main Gmail account so I don't have to go to their website to check for mail. That Gmail account, and others, use POP3 down to my Outlook 2007. In my Outlook, I have a rule that puts mail sent to my @college address into a folder named Collegemail. Since the college email system lacks POP3 or IMAP, I have to go to their site to send a reply, so that the reply will be sent from my @college email address.

    The problem is that I cannot seem to get into the habit of going to my college's website to reply to those emails, so quite often I reply to them while in Outlook, which means my replies are sent "from" my main Gmail address. How can I set up Outlook to either 1) disable replying to emails in the Collegemail folder or to emails where my college email address is in the original To: line, or else 2) set it to send those replies from a non-functional dummy account? The dummy account would be unable to send the email, so the email would be stuck in the Outbox.

    (I already have the non-functional dummy account set up as Outlook's "Default" account. When I compose a new mail I have to deliberately pick one of my good accounts to send the new mail with, or else Outlook uses the default dummy account and the email stays in the Outbox with a send error.)

    Thanks.

    Update: Anybody? Please help.


  • Make isolinux 4.0.3 chainload itself in VMWare

    - by chainloader
    I have a bootable iso which boots into isolinux 4.0.3, and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu 10.10 Live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version. It either freezes or shows a "checksum error" message. I'm using VMWare to test the iso.

    Things I have tried (chainload self):

        .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin

    This shows:

        Loading the boot file...
        Booting...
        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: Main image LBA = 53F00100

    ...and the machine freezes. Then I've tried this (chainload GRUB4DOS 0.4.5b):

        chainloader /boot/isolinux/isolinux-debug.bin

    Result:

        Error 13: Invalid or unsupported executable format

    Next try (chainload GRUB4DOS 0.4.5b):

        chainloader --force /boot/isolinux/isolinux-debug.bin
        boot

    Result:

        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: No boot info table, assuming single session disk...
        isolinux: Spec packet missing LBA information, trying to wing it...
        isolinux: Main image LBA = 00000686
        isolinux: Image checksum error, sorry...

        Boot failed: press a key to retry...

    I have tried other things, but all of them failed miserably. Any suggestions?
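    A hedged pointer: the "No boot info table" line in the last transcript suggests the isolinux.bin being chainloaded was never patched with a boot info table, which ISOLINUX needs in order to locate itself on the disc; that patching happens at ISO mastering time via the -boot-info-table flag. A sketch of building the image that way (paths are placeholders):

        mkisofs -o output.iso \
            -b boot/isolinux/isolinux.bin \
            -c boot/isolinux/boot.cat \
            -no-emul-boot -boot-load-size 4 -boot-info-table \
            iso_root/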


  • What to do when launchpad is down?

    - by Jon
    As I am writing this (Friday, November 8, 2013 at 9:59:18 PM EST), Launchpad is down. Apparently there is a power failure (https://twitter.com/launchpadstatus/status/398980619880775680).

    I tried running sudo apt-get update on my Ubuntu install. However, I simply get stuck on this:

        Ign http://ppa.launchpad.net precise InRelease
        100% [Waiting for headers]

    Being an Ubuntu newbie, I tried to point my sources.list file to a different source. I backed up the original sources.list and then deleted the entire file to start afresh. I then added the following lines to it:

        deb http://mirror.anl.gov/pub/ubuntu/ precise main
        deb-src http://mirror.anl.gov/pub/ubuntu/ precise main

    I figured that since I have a different mirror, there would be no problem updating. I was wrong. I get stuck at the same place. I have several questions:

        1. Why do I need to hit Launchpad? I do not reference it in my sources.list file at all. Is this something where the mirror redirects me to Launchpad?
        2. Is there a good article out there that I can read on how exactly this whole apt-get update thing works, that will help me understand why it is hitting Launchpad?
        3. Is there any way to get my Ubuntu to update while Launchpad is down?
        4. Isn't there any redundancy for the Launchpad servers?
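    On question 1: apt reads more than /etc/apt/sources.list; any PPA you have added lives in its own file under /etc/apt/sources.list.d/, which is why replacing sources.list alone doesn't help. A quick hedged check:

        # find every apt source that still points at launchpad
        grep -rn launchpad /etc/apt/sources.list /etc/apt/sources.list.d/

        # comment those lines out (or move the matching .list files aside), then
        sudo apt-get update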


  • Need help with MS Access 07 & Reports

    - by Moe
    Hey there,

    I'm finding it difficult to get MS Access reporting working the way I'd like. What I'm trying to do:

    a) In my database I store a URL (an external HTTP file) that is a .jpeg. I'd like to use that URL to display the image on the report sheet. I have tried to use 'Control Source' on the data panel, but with no success. Is there any way I can get dynamic images to show up on each record?

    Also, I have a couple of relational tables. One defines values, for example:

        DefinePets('petID', 'Name of Pet')

    The other one links the main table with the 'DefinePets' table, e.g.:

        connect('petID', 'mainID', 'extraField')

    I'd like my report to go into the "connect" table, where the currently viewed record value = mainID, then find petID and return Name of Pet. There is a many-to-many link between DefinePets and the main table (therefore connect is joining them up). Or is that too much to ask from a simple package like Access?

    Thanks.


  • Toshiba External Hard Drive freezes computer

    - by Ephraim
    I bought a Toshiba Canvio Basics E05A032BAU2XK portable external 320GB 2.5" hard drive.

    My computer has two OSs on it, Windows 7 and Windows XP. I need both; the main one I use is XP. When booting my computer into either OS, the computer and hard drive work fine. The same holds true for plugging in the hard drive while running Windows 7. However, when running Windows XP, if the hard drive gets plugged in, the computer freezes (my main point is that the HD is portable, so it is essential that it does not do this; as I said, I usually run XP).

    After reading some online forums, I was informed that there is a compatibility issue with the newest version of ESET Smart Security (I still don't understand this, because it works fine in Windows 7 or when connected at boot...). I disabled the AV and plugged in the HD... voila! The computer did not freeze. However, the disk is not recognized in Explorer or Disk Management. In Device Manager I removed the device and did a scan, and installation of the device failed.

    It pretty much sounds like a driver issue, but I cannot find any drivers for this HD. In fact, Toshiba claims that there are no downloadable drivers for it and that XP should take care of the drivers itself. What to do? As far as I can tell, all other USB devices work just fine on both OSs.

    Please help!

