Search Results

Search found 6169 results on 247 pages for 'future proof'.

Page 177/247 | < Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >

  • SpamAssassin 2010 Bug still active on my mailserver despite the offending rule being fixed - where t

    - by Ibrahim
    The SpamAssassin 2010 bug was supposed to be fixed not long after it became widely known, and indeed the offending rule in my /usr/share/spamassassin/72_active.cf has been updated. However, incoming messages are still being tagged by it, e.g.:

        X-Spam-Status: No, score=3.188 tagged_above=-999 required=6.31
            tests=[BAYES_50=0.001, FH_DATE_PAST_20XX=3.188, SPF_PASS=-0.001]

    Here is the relevant rule:

        ##{ FH_DATE_PAST_20XX
        header   FH_DATE_PAST_20XX  Date =~ /20[2-9][0-9]/ [if-unset: 2006]
        describe FH_DATE_PAST_20XX  The date is grossly in the future.
        ##} FH_DATE_PAST_20XX

    I'm on spamassassin/3.2.5-2+lenny1.1~volatile1 on Debian Lenny, completely up to date. Any pointers on where to look to figure out what's going on? I don't know anything about SpamAssassin; someone else usually manages this, but I'm free right now and am trying to figure out what the problem is, because it has been annoying us for a while and we only just realized this bug was still affecting us. Update: I've lowered the score for the FH_DATE_PAST_20XX rule to 0.1, both in /etc/spamassassin/local.cf and /usr/share/spamassassin/50_scores.cf, and it's still giving 3.188 points for this rule. Any idea what's going on? This really has me stumped. Update 2: It seems that after restarting amavisd, it's been fixed. What's the difference between amavisd and spamd? It seems like both should not be running, or something.
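
    (A minimal sketch of the usual fix, assuming amavisd-new is the daemon invoking SpamAssassin; the init script name and the override score are placeholders to adapt.) Score overrides belong in /etc/spamassassin/local.cf, but amavisd loads the rules once at startup, so an override only takes effect after restarting it:

        # /etc/spamassassin/local.cf
        score FH_DATE_PAST_20XX 0.01

        # pull the corrected rule set, then restart the daemon that embeds SpamAssassin
        sa-update
        /etc/init.d/amavis restart

    That matches the behaviour described in Update 2: amavisd uses the SpamAssassin libraries directly, so if it is doing the scanning, spamd is not involved at all and restarting spamd changes nothing.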

    Read the article

  • Advice on migrating from a Samba PDC

    - by pgb
    When we started our software development company, we decided to use Samba as a PDC for the few Windows workstations we had. We use Samba with OpenLDAP, and it has been a good replacement for AD for almost 6 years now (using Windows XP workstations). Now I'm facing a few problems with our setup: the Linux server where the PDC runs is very outdated (and is a Gentoo install, don't ask why!); we started using Windows 7 on some of the workstations, and these can't join the Samba domain (there's a workaround, I know); and our company has grown a bit, so we now have about 20 workstations (and plan to have more in the near future). I have to reinstall our PDC, and was thinking of updating to another Linux distro and the latest Samba 3.4. However, I started having second thoughts, and now I think going to a Windows Server for the PDC is the way to go. The main drivers for opting for a Windows Server would be its easy administration and the ability to use Windows 7 out of the box, without any registry hacks. My questions then are: How should I do this migration? Can I keep the same domain name? What will happen to the users? Will they be recreated, and therefore not be identified by the workstations as the same users, even if the actual usernames are the same? What steps would you recommend for migrating from Samba to Windows Server? Bonus question: if you think staying on Samba is the way to go with my current setup, I'm also interested in your thoughts.
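
    (For reference, a hedged sketch of the Windows 7 "registry hack" alluded to above, applied on the Windows 7 workstation before joining an NT4-style Samba 3 domain:)

        reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DomainCompatibilityMode /t REG_DWORD /d 1 /f
        reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v DNSNameResolutionRequired /t REG_DWORD /d 0 /f

    On the user question: workstations identify users by SID (domain SID plus RID), not by username, so identically named accounts in a brand-new domain are treated as different users unless profiles are migrated (for example with the User State Migration Tool or a profile migration utility).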

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are: Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243) and System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]. Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail and when the PC is restarted, the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

        Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
        Installation date: 6/29/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
        More information: http://go.microsoft.com/fwlink/?LinkId=216803

        System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
        Installation date: 6/28/2011 3:00 AM
        Installation status: Failed
        Error details: Code 1
        Update type: Important
        This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
        More information: http://support.microsoft.com/kb/947821

    About My System: I'm running Windows 7 Home Premium x64 Edition. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about 4 months. Windows Updates aside, the system is usually quite stable. Thanks in advance for your help!
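
    (A hedged next step, not a guaranteed fix.) Since one of the failing items is itself the servicing-store repair tool, it is worth checking the component store and its logs directly:

        sfc /scannow
        notepad %WINDIR%\Logs\CBS\CheckSUR.log
        notepad %WINDIR%\WindowsUpdate.log

    Specific corruption entries in CheckSUR.log, or a repeated error code in WindowsUpdate.log, usually narrow a generic "Code 1" failure down to a particular package or servicing problem.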

    Read the article

  • Still about SSD potentials...write and read speed

    - by Macroideal
    I have been working with SSDs (solid state disks) for several months, and problems and questions keep hitting me unexpectedly, since I am new to SSDs. Lately I have been testing the read/write speed of the SSD, which is what I care about most, but the results were not as good as I expected, or even worse. Three kinds of read/write were implemented in my test:

        1. Read and write directly from and to the SSD, opening it as a whole device (in Windows: _open("\\:g", ***)). This can be very tricky and hairy: you have to write data in multiples of 512 bytes, at disk positions that are multiples of 512 bytes. So if you want to write just one byte or 4 bytes, you have to write at least a whole sector at a time.
        2. Read and write data from and to files located on the SSD.
        3. Read and write data from and to files on a mechanical disk.

    I compared these approaches, and I found that the SSD sucks: it performs worse than the mechanical disk. So I am wondering how I can get at the potential performance of the SSD, since SSDs are said to be a substitute for mechanical disks in the future. Nevertheless, when I test the SSD with a professional hard-disk tool, it is about twice as fast as the mechanical disk. So, why?

    Read the article

  • New 3TB HDD, can see full 2.7TB in Linux and Windows, but shows up as 801.6GB in BIOS

    - by Ben Lee
    I recently purchased a Seagate Barracuda 3TB drive (ST3000DM001). After installing it, my BIOS recognized it but reported the size as 801.6GB. I went ahead and booted into Linux anyway (Ubuntu 11.10 64-bit). Linux saw it as 2.7TB. Following some online instructions (don't have the link handy, unfortunately), it looked like converting this drive to GPT was recommended. So I used GParted to do that, then formatted it to NTFS, also using GParted. (I'm using NTFS because my machine is dual-boot and I want to have access to the drive in Windows too.) I rebooted to Windows (Windows 7 64-bit), and Windows also sees the drive with 2.7TB free. Everything seems to be working fine. The only issue is that my BIOS is still reporting the drive as 801.6GB. My motherboard is an ASRock 770 Extreme3 and the BIOS is the latest version. Since everything seems to be working with the new drive anyway, I'm hoping that the fact that the BIOS is reporting the wrong size is not an actual problem. But honestly, I don't really know. Anyone out there more familiar with this know if this could potentially cause any problems in the future? Any way to get the BIOS to report the correct size?
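
    (A hedged sanity check; /dev/sdb is a placeholder device name.) As long as you are not booting from this drive, the OS addresses it directly and the BIOS value is cosmetic; you can confirm the partition table type and the full capacity from Linux with:

        sudo parted /dev/sdb print
        sudo blockdev --getsize64 /dev/sdb

    The 801.6GB figure is the classic symptom of a BIOS that only understands 32-bit LBA: 3TB minus the 2.2TB addressing limit leaves roughly 801GB, so the report is a wraparound artifact rather than a sign the drive is misconfigured.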

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off (Header unset ETag, FileETag None). I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expires far in the future without knowing (with a good certainty) that most/all browsers will check to see if content has changed. What I do not want to happen is someone calling me to say the website is broken because they replaced an image and it's not showing up. But I do want to take the most advantage I can of caching and expires while still ensuring that nearly all browsers will check with the server to see if components have changed. I have access to both the .htaccess and Apache .conf file and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals on this client's server? Thanks for your help.
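
    (A hedged sketch, not a definitive answer; it assumes mod_expires and mod_headers are enabled, and the two-day window is a placeholder.) One common compromise for this situation is a short max-age combined with Last-Modified-based conditional requests, so browsers cache components for a couple of days and then re-check with the server:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png  "access plus 2 days"
            ExpiresByType image/jpeg "access plus 2 days"
            ExpiresByType text/css   "access plus 2 days"
            ExpiresByType application/javascript "access plus 2 days"
        </IfModule>
        <IfModule mod_headers.c>
            <FilesMatch "\.(ico|pdf|flv|jpe?g|png|gif|js|css|swf)$">
                Header set Cache-Control "public, max-age=172800"
            </FilesMatch>
        </IfModule>

    Keeping Last-Modified (Apache sends it by default for static files) means a replaced image is picked up on the next conditional request once the max-age window passes, which matches the behaviour described above without far-future expiry.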

    Read the article

  • nginx + passenger + static websites = problems

    - by Eugene K
    I've got a Rails app that nginx serves through Passenger. I'd also like to serve some static content for a different domain name. But when I add another server block to my config, both websites become unavailable, returning HTTP 204. What have I done wrong? What can I do to fix it? Here's the http block of my nginx.conf: https://gist.github.com/4243256 Here's what I added:

        server {
            listen 80;
            server_name website2;
            root /var/www/website2;
            location / {
                index index.html;
            }
        }

    It's going to be a Rails app as well at some point in the future (though I'm not really sure about that; maybe I'm going to use a different back-end solution). Either way, I don't want anything dynamic to eat away the resources just yet. As of now, this website consists of nothing but index.html and stylesheet.css files in the root directory. What should I do? Thank you in advance. Sincerely yours, Eugene.

    Read the article

  • Staying anonymous while hosting your site?

    - by jamesCroft
    I don't mean anonymous surfing; I mean hosting and having your own domain and such. The reason is that my blog is about religious/political topics which may cause me trouble in the future. This is the domain I am working on: www.james-croft.com I know that using a Whois search my name can come up: http://www.networksolutions.com/whois-search/james-croft.com The solution to that, as far as I understood, is to buy a privacy package from the domain registrar; in my case it is Lucky Register: http://i.stack.imgur.com/uvOdc.png Also, hosting is a concern. I use the same hosting service for multiple websites. My question is this: Can my hosting be tracked and be used to identify me? Also: Are there other methods of finding out my identity, from either Google AdSense or Amazon affiliate programs? I couldn't find any relevant articles online. If there is anything else that is relevant, please let me know. I appreciate any response.

    Read the article

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage there, and because it's not unlikely that I will have to manage multiple hosts in the future. So, I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the 127.0.0.1 host: ucd/net - CPU Usage - Nice, ucd/net - CPU Usage - System, ucd/net - CPU Usage - User. Then I created a new graph from the ucd/net - CPU Usage template and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty. No data has been collected. Under Console - Devices my SNMP host is listed as up and running:

        System: Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
        Uptime: 929267 (0 days, 2 hours, 34 minutes)
        Hostname: ip-xx-xx-xxx-xxx
        Location: Sitting on the Dock of the Bay
        Contact: Me [email protected]

    In SNMP Options I left everything as it is: SNMP Version: Version 1, SNMP Community: public, SNMP Timeout: 500 ms, Maximum OID's Per Get Request: 10. In Console - Utilities - Cacti Log I have multiple warnings (two for each data source) every 5 minutes:

        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
        [...]

    I have the feeling I'm missing something, but I can't work out what...
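
    (A hedged guess at the cause, since the exact snmpd.conf contents aren't shown.) Debian's stock snmpd.conf only exposes the "system" subtree, so the UCD OIDs Cacti is polling (.1.3.6.1.4.1.2021.*) time out exactly as in the log above. A minimal sketch of a wider read-only config for local polling:

        # /etc/snmp/snmpd.conf
        rocommunity public 127.0.0.1

        # then restart the daemon and re-test one of the failing OIDs:
        /etc/init.d/snmpd restart
        snmpwalk -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.11

    If the snmpwalk returns values, Cacti should start filling the graph within the next couple of poller runs.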

    Read the article

  • How to deal with DELL support system?

    - by Nishant Kumar
    We have purchased a Dell OptiPlex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not been working properly. I had to press those keys twice: the first press did nothing, and the second press printed two characters (as if it were buffering the first character). The two keys that were not working properly: the grave accent (below the Esc key: `) and the double quote (left of the Enter key: "). We registered our complaint with Dell and they suggested (with a hard-to-understand accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well on a different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested reinstalling the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't reinstall the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver. Arguments: I am a software engineer and don't think it is a feasible solution to reinstall an entire system for a simple problem. This problem has been present since the fresh system installation, so I don't think reinstalling will solve it. In the end, I had to find the solution myself (and got it here); now I want to express my disappointment to Dell, or at least tell them that they should improve their support system so as not to advise reinstalling the entire system for such simple problems. Notes: We have purchased 5 years of next-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). It is the Dell India support system. So can anyone tell me how to tackle the Dell support system officially, so that they will pay more attention in the near future? Thanks

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390 and http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824 In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Workaround: Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled." Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.
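
    (A hedged note for anyone in a similar spot, assuming the pool is named vol1 as above.) The destroy time is dominated by how much of the dedup table (DDT) fits in RAM, so it is worth checking the DDT size before enabling dedup or destroying a deduped dataset:

        zpool status -D vol1
        zdb -DD vol1

    Roughly 320 bytes of RAM per DDT entry is the usual planning figure; with only 8GB of RAM and no L2ARC, a large DDT has to be read back from the 7200 RPM disks entry by entry, which is why the destroy and the subsequent pool import can take many hours.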

    Read the article

  • Do I need liquid cooling?

    - by Mrrvomun
    I'm building a computer mainly for gaming and developing games. It's going to be a three screen system with two GeForce GTX 460's and a quad-core i7. The newegg wattage calculator says I need around 900W. The case I intend to get is this one. Full specs if you need 'em are at the end of this post. I have no intentions to overclock the system at the moment, but this may change in the future. I've done a lot of research on the subject, and the answers I've found indicate that it takes a heck of a lot of power to require liquid cooling, and most non-overclocked systems don't need it. But I haven't seen a question about a system with two GPU's, so I ask you the following two questions: Assuming that the system is used for gaming for very extended periods of time (say 4-6 hours at a time, nonstop) with all three screens running at full 1080p, would the fans installed in the system suffice? Or would I need more fans and/or liquid cooling? If the system is used under the same circumstances as above, and is overclocked to a reasonable level, would the fans installed in the system suffice? Or would I need more fans and/or liquid cooling? Specs: Intel Core i7 16GB DDR3 Two nVIDIA GeForce GTX 460 3TB HDD Two DVD-RW writers Thermaltake 1050W power supply Case is linked above

    Read the article

  • What's hogging my CPU?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the resources tab shows it at 100% continuously, too. If I go to processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. I try killing lots of things, but it continues at 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, that normally only hits 100% CPU when I'm doing something that requires it (like loading 32 Firefox tabs) after which it goes back to a normal idle level. It's not a new install or anything. It shouldn't be maxed out. I'm not sure when it started or if I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading. Update: I tried killing any process I saw active again, and killing vino-server finally fixed the problem, even though it never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). How did it manage to use 100% CPU while top only showed it as 5% or so? How do I identify the culprit in the future? Looks like I'm not the only one: Still a problem in both jaunty & karmic. Interestingly, both System Monitor & htop do not show the sum of individual processes being anywhere near 100% cpu.
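
    (A hedged checklist for next time, not specific to vino-server.) When the overall CPU graph and the per-process numbers disagree, looking at threads and at the CPU time breakdown usually finds the culprit:

        top -H                                    # show individual threads instead of whole processes
        ps -eo pid,%cpu,comm --sort=-%cpu | head  # processes sorted by cumulative CPU since start
        pidstat 1 5                               # per-process CPU sampled every second (sysstat package)
        iostat -c 1 5                             # is the time user, system, or iowait?

    A process that wakes up thousands of times a second can burn CPU largely as system/interrupt time that per-process views attribute poorly, which may explain the "100% total, nothing above 5%" symptom seen here.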

    Read the article

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join it to the domain, etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so the machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc.; this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?
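
    (A hedged outline of the minimal built-in route; no deployment servers required, and the imaging tool is interchangeable.) Set up the reference PC, generalize it, and capture the disk with something like Clonezilla Live on a USB stick, which works well for identical hardware:

        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

    After sysprep shuts the machine down, boot it from the Clonezilla (or similar) media, capture the disk image, and restore that image onto the other seven boxes. Each one then runs the out-of-box mini-setup, where you give it its computer name; joining the domain is a one-time step per machine afterwards (or can be scripted in an unattend.xml answer file, though for eight machines answering the prompts by hand is usually quicker than learning WAIK).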

    Read the article

  • How to use Windows mini-dump files?

    - by ekaj
    I have a Mini-ITX Intel DH61AG mobo w/ an Intel i3 processor and 8GB of 1600MHz DDR3 RAM. Anyways, this computer has been crashing kind of frequently. It is not an OS problem, as I have used Ubuntu (and had kernel panics), Windows 7, and Windows 8 (BSODs aren't going to keep me from tinkering =p) Anyways, each of these OSes have had problems, so I ran a HDD check, and I know it is not a heat issue because I tested the processor for a few days when I first put the computer together. When I ran memtest86+, however, I got an error - so I did individual testing, and both chips came back good, did a really intense test with both of them again (took half a day), and no errors. So, I still think the problem could be RAM, but I am not sure - I tested it pretty extensively (might let it run all night again tonight)... which brings me to my point. Could someone explain to me (in simple terms if possible) how to READ the minidump files of Windows computers? I've tried before with a guide I found online, but failed miserably (can't remember guide, either =/). I'm fine with installing the software, I will probably need it sometime in the future as well. I have seen a few other posts on SU that just ask people to post minidump logs, but I feel as if that is too localized. Would someone be able to explain this? Note: If someone knows how to do this, but doesn't want to explain and is still willing to help me, this is the link for the minidump file =p Make sure to click
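
    (A hedged sketch of the usual workflow; the .dmp filename is a placeholder.) Install the Debugging Tools for Windows (part of the Windows SDK), open the minidump, and let the analyzer summarize the crash:

        windbg -z C:\Windows\Minidump\012345-67890-01.dmp
        .symfix
        .reload
        !analyze -v

    !analyze -v prints the bugcheck code, the probable faulting driver or module, and a stack trace. If the same driver shows up across several dumps it points at software; varied, random-looking faults are more consistent with marginal RAM even when memtest86+ passes individual sticks. NirSoft's BlueScreenView is a simpler point-and-click alternative that reads the same files.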

    Read the article

  • One nginx rule for lots of subdomains

    - by komase
    I have lots of subdomains on a server. Every subdomain has its own Drupal Boost rules, like in the code below:

        server {
            server_name subdomain1.website.com;
            location / {
                root /var/www/html/subdomain/subdomain1.website.com;
                index index.php;
                set $boost "";
                set $boost_query "_";
                if ( $request_method = GET ) { set $boost G; }
                if ($http_cookie !~ "DRUPAL_UID") { set $boost "${boost}D"; }
                if ($query_string = "") { set $boost "${boost}Q"; }
                if ( -f $document_root/cache/normal/$host$request_uri$boost_query.html ) { set $boost "${boost}F"; }
                if ($boost = GDQF) { rewrite ^.*$ /cache/normal/$host/$request_uri$boost_query.html break; }
                if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/subdomain/subdomain1.website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I have been adding subdomain rules manually from time to time, and the size of nginx.conf has become too big. So I need one nginx rule which does the following: subdomain1.website.com points to /var/www/html/subdomain/subdomain1.website.com, subdomain2.website.com points to /var/www/html/subdomain/subdomain2.website.com, subdomain3.website.com points to /var/www/html/subdomain/subdomain3.website.com, and so on, so that I don't have to add more per-subdomain rules for website.com in the future.
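
    (A hedged sketch; it assumes an nginx build with PCRE named-capture support, roughly 0.8.25 or newer.) A single catch-all server block can derive the document root from the host name with a regex server_name capture:

        server {
            listen 80;
            server_name ~^(?<sub>.+)\.website\.com$;
            root /var/www/html/subdomain/$sub.website.com;
            index index.php;
            # ... the shared Boost/fastcgi rules go here, written against
            # $document_root and $sub instead of hard-coded per-subdomain paths ...
        }

    Inside the block, $sub expands to whatever matched before .website.com, so the same Boost rules serve every subdomain; the fastcgi_param SCRIPT_FILENAME line would then use $document_root$fastcgi_script_name rather than a literal path.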

    Read the article

  • Cannot access SMC8014WG-SI provided by TimeWarner/RoadRunner administrative interface...

    - by Matt Rogish
    I just received installation of RoadRunner internet/TV/Voice and I was given a wi-fi router from the TimeWarner folks. The model is a SMC SMC8014WG-SI. Unfortunately, the password it uses is WEP and that is, as we all know, completely insecure. The tech that was here didn't know how to change it to something like WPA2 w/TKIP, and I was on hold for 20 minutes with the TimeWarner folks before I gave up. My problem is that the default web interface (http://192.168.0.1) isn't responding. I can ping it, I can access the internet through it, but I can't get to the admin interface. I did a "hard reset" of the device but still no dice. My suspicion is that the wi-fi admin interface is disabled (a common setting) but the wired interface isn't working on either of my two laptops (I've tried two laptops with two different cables, no link light activated). Am I SOL? Did they lock this down so I can't do what I want to do? Worst-case is I just hook up my go-to WRT54G router to the other modem and leave this one turned off, but I'd rather use their hardware to avoid any "It's not our problem" in the future. Any thoughts? Thanks!!

    Read the article

  • Upgrading from SQL2000 database to SQL Express 2008 R2

    - by itwb
    Hi, we have a web application which uses a MSSQL 2000 backend database. We are currently paying a ridiculous amount for shared hosting, with the database costs alone running us $150 per month (an extra 100MB of MSSQL space is $40 per month). Our database size is 896.38 MB. I am looking at getting a Virtual Private Server and upgrading the database to MSSQL 2008 Express. I am aware that the Express version is limited to a 10GB database (with R2) and is constrained to a single CPU. I have also been offered SQL Server 2008 Web Edition for $19/month, but I cannot find many details on the difference between Express and Web. Any suggestions here? What I would also like to know is: if we upgrade the database to MSSQL 2008, are there any issues with possible data transformations in the future? I.e., is it possible to download and mount it with SQL Server 2008 Standard edition? I'm more concerned about how to get data in and out of the database through SQL management tools. Are there any other issues that I might face? Thanks, Mike
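
    (A hedged sketch; the database name, logical file names and paths are placeholders.) Moving between editions is just a backup and restore, since the on-disk format depends on the version rather than the edition, so a database restored onto 2008 Express can later be backed up and restored onto 2008 Standard without any transformation:

        -- on the source server
        BACKUP DATABASE MyAppDb TO DISK = 'D:\backups\MyAppDb.bak';

        -- on the Express (or, later, Standard) server
        RESTORE DATABASE MyAppDb FROM DISK = 'D:\backups\MyAppDb.bak'
        WITH MOVE 'MyAppDb'     TO 'D:\data\MyAppDb.mdf',
             MOVE 'MyAppDb_log' TO 'D:\data\MyAppDb_log.ldf';

    The one-way part is the version jump: once a SQL 2000 backup has been restored on 2008 it cannot be taken back to 2000, so keep the original .bak until you're committed.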

    Read the article

  • syslog-ng and nginx logs to MySQL

    - by Katafalkas
    A couple of days ago I asked how to log PHP and nginx logs to a centralized MySQL database, and m0ntassar gave a perfect answer :) cheers! The problem I am facing now is that I cannot seem to get it working. syslog-ng version:

        # syslog-ng --version
        syslog-ng 3.2.5

    This is my nginx log format:

        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

    syslog-ng source:

        source nginx {
            file("/var/log/nginx/tg-test-3.access.log" follow_freq(1) flags(no-parse));
        };

    syslog-ng destination:

        destination d_sql {
            sql(type(mysql) host("127.0.0.1") username("syslog") password("superpasswd")
                database("syslog") table("nginx")
                columns("remote_addr","remote_user","time_local","request","status","body_bytes_sent","http_referer","http_user_agent","http_x_forwarded_for")
                values("$REMOTE_ADDR", "$REMOTE_USER", "$TIME_LOCAL", "$REQUEST", "$STATUS", "$BODY_BYTES_SENT", "$HTTP_REFERER", "$HTTP_USER_AGENT", "$HTTP_X_FORWARDED_FOR"));
        };

    MySQL table for testing purposes:

        CREATE TABLE `nginx` (
            `remote_addr` varchar(100) DEFAULT NULL,
            `remote_user` varchar(100) DEFAULT NULL,
            `time` varchar(100) DEFAULT NULL,
            `request` varchar(100) DEFAULT NULL,
            `status` varchar(100) DEFAULT NULL,
            `body_bytes_sent` varchar(100) DEFAULT NULL,
            `http_referer` varchar(100) DEFAULT NULL,
            `http_user_agent` varchar(100) DEFAULT NULL,
            `http_x_forwarded_for` varchar(100) DEFAULT NULL,
            `time_local` text,
            `datetime` text,
            `host` text,
            `program` text,
            `pid` text,
            `message` text
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now the first thing that goes wrong is when I restart syslog-ng:

        # /etc/init.d/syslog-ng restart
        Stopping syslog-ng:  [  OK  ]
        Starting syslog-ng: WARNING: You are using the default values for columns(), indexes() or values(), please specify these explicitly as the default will be dropped in the future;  [  OK  ]

    I have tried creating a file destination and it all works fine, and then I have tried replacing my destination with:

        destination d_sql {
            sql(type(mysql) host("127.0.0.1") username("syslog") password("kosmodromas")
                database("syslog") table("nginx")
                columns("datetime", "host", "program", "pid", "message")
                values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSGONLY")
                indexes("datetime", "host", "program", "pid", "message"));
        };

    This did work and it was writing stuff to MySQL. The problem is that I want to write the data in exactly the same format as the nginx log format. I assume that I am missing something really simple, or that I need to do some parsing between the source and the destination. Any help will be much appreciated :)
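
    (A hedged sketch of the missing parsing step; the column names and parser layout are assumptions, and the field order must match your log_format exactly.) Because the source uses flags(no-parse), the whole nginx line lands in $MSG and the $REMOTE_ADDR-style macros are never populated. syslog-ng 3.2 can split the line itself with a csv-parser placed between the source and the SQL destination:

        parser p_nginx {
            csv-parser(columns("NGX.REMOTE_ADDR", "NGX.DASH", "NGX.REMOTE_USER", "NGX.TIME_LOCAL",
                               "NGX.REQUEST", "NGX.STATUS", "NGX.BODY_BYTES_SENT",
                               "NGX.HTTP_REFERER", "NGX.HTTP_USER_AGENT", "NGX.HTTP_X_FORWARDED_FOR")
                       delimiters(" ")
                       quote-pairs('""[]')
                       flags(escape-none));
        };

        log {
            source(nginx);
            parser(p_nginx);
            destination(d_sql);
        };

    The values() list in d_sql would then reference the parsed fields, e.g. "${NGX.REMOTE_ADDR}", "${NGX.REQUEST}" and so on, instead of the built-in macros that only exist for syslog-formatted messages.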

    Read the article

  • How to tackle dell support system? [closed]

    - by Nishant Kumar
    We have purchased a Dell OptiPlex 9010 SSFV for our organization's work. Since the first installation, two of the USB keyboard keys have not been working properly. I had to press those keys twice: the first press did nothing, and the second press printed two characters (as if it were buffering the first character). The two keys that were not working properly: the grave accent (below the Esc key: `) and the double quote (left of the Enter key: "). We registered our complaint with Dell and they suggested (with a hard-to-understand accent) some tests and tricks, such as switching to different ports and checking the keyboard on a different PC, and it worked well on a different PC (with Windows 7 Home Premium installed). It was clear that it is an OS fault, hence they suggested reinstalling the OS. The problem began here: we have a project on the run and currently a video editing project set up on our system, so we can't reinstall the system in a hurry, and the Dell people were not providing any other solution, such as updating the keyboard driver. Arguments: I am a software engineer and don't think it is a feasible solution to reinstall an entire system for a simple problem. This problem has been present since the fresh system installation, so I don't think reinstalling will solve it. In the end, I had to find the solution myself (and got it here); now I want to express my disappointment to Dell, or at least tell them that they should improve their support system so as not to advise reinstalling the entire system for such simple problems. Notes: We have purchased 5 years of next-business-day support from Dell for around 8000 INR (not for that kind of solution from Dell). So can anyone tell me how to tackle the Dell support system officially, so that they will pay more attention in the near future? Thanks

    Read the article

  • GRE Tunnel over IPsec with Loopback

    - by Alek
    I'm having a really hard time trying to estabilish a VPN connection using a GRE over IPsec tunnel. The problem is that it involves some sort of "loopback" connection which I don't understand -- let alone be able to configure --, and the only help I could find is related to configuring Cisco routers. My network is composed of a router and a single host running Debian Linux. My task is to create a GRE tunnel over an IPsec infrastructure, which is particularly intended to route multicast traffic between my network, which I am allowed to configure, and a remote network, for which I only bear a form containing some setup information (IP addresses and phase information for IPsec). For now it suffices to estabilish a communication between this single host and the remote network, but in the future it will be desirable for the traffic to be routed to other machines on my network. As I said this GRE tunnel involves a "loopback" connection which I have no idea of how to configure. From my previous understanding, a loopback connection is simply a local pseudo-device used mostly for testing purposes, but in this context it might be something more specific that I do not have the knowledge of. I have managed to properly estabilish the IPsec communication using racoon and ipsec-tools, and I believe I'm familiar with the creation of tunnels and addition of addresses to interfaces using ip, so the focus is on the GRE step. The worst part is that the remote peers do not respond to ping requests and the debugging of the general setup is very difficult due to the encrypted nature of the traffic. There are two pairs of IP addresses involved: one pair for the GRE tunnel peer-to-peer connection and one pair for the "loopback" part. There is also an IP range involved, which is supposed to be the final IP addresses for the hosts inside the VPN. My question is: how (or if) can this setup be done? Do I need some special software or another daemon, or does the Linux kernel handle every aspect of the GRE/IPsec tunneling? Please inform me if any extra information could be useful. Any help is greatly appreciated.

    Read the article

  • Is it worth using a load balancer on a web server/website?

    - by user427969
    I have a website, and a while ago the web server of the company hosting my website was down for about a day. I consulted the company for a solution to stop this from happening in the future, and they suggested having a second machine which will be connected to my current website/web server by a "load balancer" (at an additional huge cost!!!). The second machine will be a replica of the first one, so if one goes down, the other will still be running. ---- Explanation ----- My hosting company suggested that it would be a good idea to have a second machine running at the same time, with both machines connected by a load balancer, which reduces the risk of downtime. The second machine will be a mirror of the first, and any changes to the first must be replicated on the second. I don't mind spending money if it really saves my website from going down. I want to know: is it worth having this "load balancer" for my purpose? My website is a 24/7 service. I cannot afford an outage of 24 hours, or even 1 hour. I don't mind using this "load balancer" as long as it is really worth it. I am not sure if it's just a marketing trick by my hosting company or really the "best" solution. Thanks for help. Regards

    Read the article

  • Simple network between XP and 7 with crossover cable problem...

    - by LostLord
    Hi, my dear friends. I have a simple network between an XP and a Windows 7 machine over a crossover cable (a 2-PC home network).
    =====================================================================
    The machine with Windows 7 is the "mother" and has 2 LAN devices (onboard + PCI).
    A. The onboard adapter's TCP/IPv4 properties (for the ADSL internet connection) are: obtain an IP address automatically; preferred DNS server: 81.91.129.67; alternate DNS server: 4.2.2.4; shared, with no permission to change, so everything is OK for internet on Windows 7.
    B. The other LAN (PCI) card, connected to the XP PC, is configured as: 192.168.2.11 / 255.255.255.0 / gateway 0.0.0.0 / DNS empty. Computer name: cougar. Workgroup: nethome. HomeGroup is disabled (I think that is only for two PCs both running Windows 7, not XP). Everything is off in the sharing options except file & printer sharing in the public area.
    =====================================================================
    The PC with XP is configured as: 192.168.2.12 / 255.255.255.0 / 192.168.2.11 (the gateway) / DNS 4.2.2.4 and 8.8.8.8. Computer name: tiger. Workgroup: nethome.
    =====================================================================
    At last my little network is mostly OK: both machines have internet, and both can see each other by IP (\\192.168.2.11 or \\192.168.2.12). My problem is that when I type \\cougar on the XP PC, it shows an error about the network path, but on the Windows 7 PC \\tiger works perfectly. What is the problem on the XP system? A few days ago this network was fine (searching by computer name) when both machines were running XP, so there is no problem with my cable or devices. Another problem: I cannot find tiger in my network list on the 7 PC. Why? Is something wrong with my network? Thanks in advance. Best regards
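
    (A hedged suggestion based on the addresses quoted above.) \\cougar failing while \\192.168.2.11 works is a name-resolution problem, not a cabling one; the quickest fix on a two-machine network is a static hosts entry on the XP box (and the mirror entry on the 7 box if you want \\tiger to resolve there):

        # C:\WINDOWS\system32\drivers\etc\hosts   (on the XP machine, "tiger")
        192.168.2.11    cougar

    Beyond that, browsing by name between XP and 7 usually also needs "NetBIOS over TCP/IP" enabled on the Windows 7 adapter that faces the XP machine, and File and Printer Sharing allowed through both firewalls, which is also the likely reason tiger does not show up in the 7 PC's network list.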

    Read the article

  • Recommendation for Document Management Solution

    - by BillN
    We've just been informed by our software vendor that the custom document management system they'd written is no longer in development and will not be supported in the future. So we are looking at new document management systems. Requirements:

        Multiple input vectors: we receive documents via e-mail, fax, scanning, and from the originating application.
        Ability to redact or obscure data. Customers may fax an order with CC data; we want to attach the image of the order form to the order record, but the CC data needs to be protected. Same with tax IDs. Certain users should be able to see the redacted data, but access should be logged.
        Version control on documents. We'd like Product Development and Marketing to be able to track various versions of documents like packaging designs, but ensure that users have the latest approved version.
        AD integration; my users don't need another password.
        Ability to integrate with other apps. Our current system offers function keys in the order-entry system that will spawn the viewer application and open the correct document.
        Mass import facility: we have half a terabyte of existing documents in the old system that we would like to import.
        Retention policy: I'd like a way to have the system comply with the corporate retention policy, so that when a document of a certain type reaches a certain age, it gets deleted, or at least marked for manual deletion.

    We are a Windows Server and HP-UX shop. Does anybody have any experience with document management systems that they would like to share? Thanks.

    Read the article

  • How to set up ProxMox 1.9 on VPN?

    - by Gnudiff
    Disclaimer: I have only rudimentary knowledge of VPNs. I would love to learn about them properly, however, at the moment I really need to make stuff work on short notice. I am trying to set up a ProxMox virtualization platform in an existing network. The network currently consists of several servers which have VMWare free edition. There is some sort of VPN defined in switch. In order for VMWare management interface to be accessible, there needs to be ticked a checkbox in the network settings for VPN and entered the VPN id. I didn't notice any such configuration option during ProxMox installation, so my Proxmox VE on the same physical server, using same manual IP settings (ip/nm/gw), is not accessible. As I understand I should touch the Proxmox's underlying Debian config in /etc/network/interfaces, but I have no idea, what should I aim for: do I specify the settings for eth0, do I make a virtual interface? How to make it accessible for both ProxMox VE and underlying future VMs? I read the ProxMox installation guide, but unfortunately it presumes better understanding of VPNs than I have. A config template or similar would be appreciated. Thanks in advance.

    Read the article
