Search Results

Search found 20785 results on 832 pages for 'idea'.

Page 603/832

  • What's the best way to get a stored POP3 password out of Outlook 2007?

    - by Tom Morris
    If you have a password for a POP3 account in Outlook 2007 (Windows 7 Home Premium) and you then forget the password, how do you retrieve it? I tried copy-and-paste. No go. I downloaded Mail PassView, but upon installing it, AVG said it was malware, so I removed it. I eventually found the account details by opening up RegEdit, and found it in HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows Messaging Subsystem\Profiles\Outlook\ (...) but it was encoded in REG_BINARY. I Googled around and found various Visual Basic routines for decoding it but being a Unix dork I had absolutely no idea what to do with said scripts. By this point, I gave up and managed to get hold of the password by another means (it was written down on a piece of paper in the briefcase of the owner of the account - I know, it makes the inner sysadmin rage). I also attempted to write a simple POP3 server in Python and then get Outlook to log on to it, but that didn't really work out (it was about 4am at that point). For future reference, is there an easy and sensible way of doing this? Is Mail PassView actually evil spyware or was AVG just giving me a false positive? (Any chance of Windows 8 having something like OS X's Keychain?)
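    The "write a throwaway POP3 server and point Outlook at it" route the asker mentions is workable; here is a minimal, hedged sketch in Python (the port/host handling is simplified, and you would point the Outlook account at the machine running this so it sends its stored credentials):

        # Minimal fake POP3 server: speak just enough POP3 for a client to
        # send USER/PASS, and print whatever arrives. For recovery on your
        # own account only; running on port 110 may require admin rights.
        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 110))
        srv.listen(1)
        conn, addr = srv.accept()
        conn.sendall(b"+OK fake POP3 server ready\r\n")
        while True:
            line = conn.recv(1024).decode("ascii", "replace").strip()
            if not line:
                break
            print("client sent:", line)      # USER and PASS show up here
            if line.upper().startswith("QUIT"):
                conn.sendall(b"+OK bye\r\n")
                break
            conn.sendall(b"+OK\r\n")          # accept everything else
        conn.close()
        srv.close()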

    Read the article

  • Raspberry Pi slows down my entire network

    - by gnusouth
    Whenever my Raspberry Pi is connected to the network (via Ethernet) the entire network is slowed to a crawl. On my main computer, ping times for google.com go from ~10ms to ~200ms and it takes forever to load web pages. Connections are also slow on the Pi, with an apt-get update showing pathetic speeds in the order of 1KB/s. Turning off the Pi completely removes the drag from the network. I've tried static and dynamic IP addresses for the Pi, but both have the same problems. I'm currently using Raspbian (downloaded today), but also had this problem with Arch Linux. I've checked the connection's duplex with dmesg | grep -i duplex, which shows that the Pi's connection is running at 100Mbps, full-duplex, as expected. My modem/router is a Billion 7404VNPX (an Australian thing); relatively high-end, albeit a bit buggy at times (it will occasionally delete all its firewall settings). It assigns IPs in the range 192.168.1.1 to 192.168.1.20 and has 192.168.1.254 as its own IP. When I assign static IPs I tend to use the 192.168.1.200 area. Does anyone have any idea as to what could be causing this weird slowdown? Or any tests I could try? Thanks
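    One test worth trying (a hedged sketch; the IP below is a placeholder for the Pi's actual address) is measuring raw throughput between the Pi and another machine with iperf, both while the network is otherwise idle and while the Pi is connected but doing nothing, to see whether the Pi's NIC or the router port is dragging everything down:

        # On the Pi (assumes iperf is installed, e.g. apt-get install iperf)
        iperf -s

        # On the desktop, replace 192.168.1.200 with the Pi's address
        iperf -c 192.168.1.200 -t 30

        # Then swap the roles to test the other direction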

    Read the article

  • Why did you start with Linux? And why did you continue using it?

    - by Stefano Borini
    I'd like to know the reasons that moved you towards Linux. Personally, I started because we had to use a Digital for the Fortran 77 exercises during my first year at the university. Linux was installed on many university computers, and I got interested in it. I always liked to code (on the C64) in BASIC and assembler, but I knew nothing about other languages. I soon discovered a chat engine called NUTS, and the idea of becoming proficient in C appealed to me, so I started hacking the code. To do so, I needed a Unix at home, so I bought a Slackware 3.4 and installed it on my Pentium 166. I then continued using it for many years, the reason being that I took pleasure in learning new things and in the openness of information about the internals. It was a great learning platform. I then moved to OS X because I enjoy the power of Unix with the beauty and efficiency of its interface. I am interested in your answer because I believe that the landscape has changed somewhat. Although I still expect to find many "hackers" interested in Linux for the sake of knowledge, I also believe that there are other reasons (work, friends, having bought a netbook).

    Read the article

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files are ready for rollforward

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem with the databases.
    1. Copy the full backup from server A to server B (10+ GB)
    2. Open SQL Server Management Studio on server B
    3. Right-click on Databases > Restore Database
    4. Type in the new DB name
    5. Choose "From Device", browse to the backup file and click OK. This restores the original "full" backup.
    6. Test the new DB with the dev application - everything works :)
    7. On the original database, right-click on the DB > Tasks > Back Up... Backup type = Differential, back up to disk, add a new file and remove the old one (it needs to be a small file to transfer for the smallest amount of outage)
    8. Copy the diff backup onto the new server
    9. Right-click on the DB > Tasks > Restore Database
    This is where I get stuck. If I add both the new differential file and the original backup to the restore process I get an error: The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally. But if I try to restore using just the differential file I get: System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo) Any idea how to do it? Is there a better way of restoring backups with limited downtime?
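    For reference, the usual way to layer a differential on top of a full backup is to restore the full backup WITH NORECOVERY first, then apply the differential WITH RECOVERY. A hedged T-SQL sketch (database name and file paths are placeholders, and MOVE clauses may be needed if the data file paths differ between servers):

        -- Restore the full backup but leave the database waiting for more restores
        RESTORE DATABASE MyDB
            FROM DISK = 'M:\path\to\backup\full.bak'
            WITH NORECOVERY, REPLACE;

        -- Then apply the differential and bring the database online
        RESTORE DATABASE MyDB
            FROM DISK = 'M:\path\to\backup\diff.bak'
            WITH RECOVERY;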

    Read the article

  • Routing public IPs (each a /32) through a VPN to another server

    - by Lee S
    Hopefully the title makes sense; I have a server currently in a colo facility, with many IP addresses routed to it. They are individual IPs and not in a contiguous block. Due to vastly improved connectivity (fibre) at home I am slowly bringing my infrastructure in-house for manageability and, eventually, cost savings. What I would like to do, though, is use the IP addresses allocated to my existing server at home. I have an IP block allocated to me on my new ISP connection, but for a couple of reasons I'd like to make use of the colo ones for now: 1) ease of transition - lots of domains, DNS, hard-coded IPs in programs, etc.; 2) connectivity fallback - if my primary line goes down and switches to fallback 1 (DSL) or fallback 2 (4G), I lose access to the ISP-allocated block of IPs that are only presented on the primary WAN interface. What I'd like to achieve is that my home virtualisation server (Proxmox/Debian-based) "dials in" to the colo server (also Proxmox/Debian) via VPN or similar, and gets to make use of the IP addresses that currently terminate on the colo box. If the primary connection to my ISP goes down and one of the fallback routes kicks in, the VPN tunnel will just time out and then be re-established on the backup connection instead. I'm sure this is doable, but I have no idea how. I'm not afraid to get my hands dirty, I just don't really know where to start.

    Read the article

  • Configuring VMware virtual machines to run under different IPs and PC specs

    - by Alex
    Right now I'm using a simple VMware virtual machine with preinstalled Win 7. The IP is assigned automatically (it's the same as the main OS IP). Is it possible to create several virtual machines that have different hardware specifications and different IP addresses? Here is what I mean regarding these issues: Specs: Certainly, you can easily change some specifications in the Settings menu (RAM size, HDD size), but what about advanced settings? For example: the processor - is it AMD (2500+, 4000+, etc.) or Intel (Core 2, Pentium, etc.)? RAM - is it Corsair 4GB 1333MHz or Kingston 2 x 2GB 866MHz or something else? HDD - is it a Seagate Barracuda 80GB 5400rpm, a Samsung 500GB 7200rpm, or some random SSD? Programs that work under a virtual machine shouldn't have a clue whether it's VMware or not. IPs: Every program that's launched under the main OS uses the real IP: 93.56.xx.xx. All programs that are launched under virtual machine A use IP 1: 74.78.xx.xx. All programs that are launched under virtual machine B use IP 2: 84.159.xx.xx. I believe that you have to use either a VPN or a proxy to solve this problem. The sum up: the idea is to create 2-3 independent virtual machines with different hardware specifications and IP addresses. Programs that work under a certain virtual machine shouldn't have a clue whether it's VMware or the real PC. Any ideas/tips or experience regarding configuration will be appreciated!

    Read the article

  • nginx giving a 404 when using set in an if-block

    - by ba
    I've just started using nginx and I'm now trying to make it play nice with the Wordpress plugin WP-SuperCache, which adds static files of my blog posts. To serve the static file I need to make sure that some cookies aren't set, that it's not a POST request, and that the cached/static file exists. I found this guide and it seems like a good fit. But I've noticed that as soon as I try to set something inside an if, my site starts giving 404s on an URL that isn't rewritten. The location block of the configuration:
        location /blog {
            index index.php;
            set $supercache_file '';
            set $supercache_ok 1;
            if ($request_method = POST) { set $supercache_ok 0; }
            if ($http_cookie ~* "(comment_author_|wordpress|wp-postpass_)") { set $supercache_ok '0'; }
            if ($supercache_ok = '1') { set $supercache_file '$document_root/blog/wp-content/cache/supercache/$http_host/$1/index.html.gz'; }
            if (-f $supercache_file) { rewrite ^(.*)$ $supercache_file break; }
            try_files $uri $uri/ @wordpress;
        }
    The above doesn't work, and if I remove all the ifs above and add if ($http_host = 'mydomain.tld') { set $supercache_ok = 1; } then I get the exact same message in the errors.log, namely:
        2010/05/12 19:53:39 [error] 15977#0: *84 "/home/ba/www/domain.tld/blog/2010/05/blogpost/index.php" is not found (2: No such file or directory), client: <ip>, server: domain.tld, request: "GET /blog/2010/05/blogpost/ HTTP/1.1", host: "domain.tld", referrer: "http://domain.tld/blog/"
    Remove the if and everything works as it should. I'm stymied, no idea at all where I should start searching. =/
        ba@cell: ~> nginx -v
        nginx version: nginx/0.7.65
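    One commonly suggested workaround (a hedged sketch, not tested against this exact setup; the cache path follows the one quoted above, minus the .gz suffix) is to avoid setting variables inside if-blocks entirely and let try_files probe the supercache path first, falling back to the existing @wordpress location:

        # Serve the WP-SuperCache file if it exists, otherwise fall through
        location /blog {
            index index.php;
            try_files /blog/wp-content/cache/supercache/$http_host/$uri/index.html
                      $uri $uri/ @wordpress;
        }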

    Read the article

  • Slow upload, fast download on Windows 7 64bit system

    - by Malik
    I've got a weird problem: the download speeds on my desktop PC (Windows 7 Home Premium 64bit) are consistently fast (approx. 400kB/s) but uploads are very slow (around 6-10kB/s). This has been going on for the last 3 weeks or so. I am a very competent user and troubleshooter, and have searched online for 2 weeks for a solution, to no avail. Part of the problem is that internet is provided by WiFi by my landlord and I have no access to the router (BT Home Hub router), although I know for sure he wouldn't have the first idea of how to restrict my usage :) (rules that out). Anyway, I've tried: various drivers (my WiFi card is a TP-Link TL-WN851N, and I've tried TP-Link + Atheros + Qualcomm Atheros drivers, as suggested by Microsoft); various tweaks to network parameters (e.g. as suggested by SpeedOptimser); various tweaks to Windows 7 services (e.g. disabling or setting unnecessary services to manual); raising and lowering my head onto a reasonably firm surface at moderate frequency (jk :D). None of the above have helped, and I'm officially asking for help now!! Thanks for your time and effort in advance!

    Read the article

  • Anyone have real world experience with Rackspace Cloud Sites at high scale?

    - by Allara
    I have a pure web service application layer using .NET. I was originally planning to use Amazon EC2, but rolling my own autoscaling procedures is a bit intimidating, and the scaling isn't very granular from a cost perspective. If the app is successful, we could be looking at relatively high scale (millions of requests per month). The app uses Amazon SimpleDB as the database layer. As a test, I have the app running successfully in Rackspace Cloud Sites. Performance seems to be equal to (if not better than) a standard EC2 instance, even with the added latency of the SimpleDB requests travelling to the Rackspace network. However, testing at this stage is at a very low scale. My question is this: has anyone had real-world experience running a high scale application on Rackspace Cloud Sites? Moreover, once you pass the "included" 10,000 compute cycles per month, does the overall cost seem to be lower than rolling lots of EC2 instances? My assumption would be that with completely smooth scaling (i.e. only adding compute resources as needed), the cost could be lower on average. However, their stated goal of calibrating 10,000 CCs as a single 1.2 Ghz CPU seems on average to be much more expensive than EC2. I like the idea of no-touch scaling, but is it too good to be true?

    Read the article

  • Faster secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit), at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat 6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic so long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or the network parts? If so what would you recommend? I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load - moving to Linux at the client end may be a better idea but would be quite disruptive. Am I missing a trick? Thanks in advance.
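    One thing that often helps single-stream SCP/SFTP throughput (a hedged sketch; whether these ciphers are available depends on the OpenSSH build at both ends, and arcfour in particular trades cryptographic strength for speed) is explicitly choosing a cheaper cipher and comparing timings:

        # Baseline with the default cipher
        time scp bigfile.bin user@server:/data/

        # Same transfer with lighter-weight ciphers (older OpenSSH builds)
        time scp -c arcfour bigfile.bin user@server:/data/
        time scp -c blowfish-cbc bigfile.bin user@server:/data/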

    Read the article

  • USB drive was bootable, but no longer boots

    - by i-g
    I'm trying to install a new OS onto a computer from a bootable USB stick. I previously installed Ubuntu Linux and it was a piece of cake -- I downloaded the ISO image, used UNetbootin to copy it to the USB drive and make it bootable, and that was that. Now, however, no matter what I try, I can't make the same USB drive bootable again! I've tried formatting it as FAT32 and NTFS. I've tried several different Linux distributions and Windows 7. I've tried using UNetbootin, Windows 7 USB Download Tool, WinToFlash, and manually making it bootable with diskpart/bootsect/bootrec. (Yes, I've tried bootsect /nt60 x: /force.) None of this seems to be working! When I try to boot from the drive, the machine reads from it (I can see the drive's LED blinking) and then gives me the same "Insert system disk and press Enter" message. (I've disabled booting from the hard drive.) Am I missing something I need to do to make the USB drive bootable again? I think it lost some pixie dust when I formatted it with the standard Windows formatting tool (it was quicker than deleting files), but I have no idea what it was or how to get it back. The USB drive in question is a SanDisk Cruzer 8GB SDCZ6. The computer I'm working on is running Windows Vista SP1.
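    For reference, the usual manual diskpart/bootsect sequence for making a stick bootable again looks like this (a hedged sketch from an elevated command prompt; the disk number and drive letter are placeholders, and selecting the wrong disk will wipe it):

        :: From an elevated command prompt
        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        select partition 1
        active
        format fs=ntfs quick
        assign
        exit
        :: Copy the OS installation files onto the stick, then:
        bootsect /nt60 X: /force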

    Read the article

  • CPU operating temperature ranges

    - by osij2is
    I have an AMD Phenom II 960T with 2 cores unlocked for a total of 6 cores. I don't overclock at all. I have an Arctic Cooling ACALP64 heatsink/fan installed. I'm currently running ESXi 5.0, so I have to go into the BIOS to read the CPU temperatures, which at idle seem to be in the 71-74C range. To me, this is pretty high, but I cannot find any official temperature ranges that AMD says the CPU can work well within. There seem to be a lot of questions on Super User and numerous forums around CPU temperatures, but no one seems to have a clear consensus as to what the manufacturer temperature ranges are for specific CPUs. I've tried searching through AMD's site to no avail. At this point, I'd be willing to shut off the 2 extra cores if it keeps the heat down, but until I get some sort of tolerance or range for temperature, I have no idea if the CPU is being damaged or not. Can anyone point to a direct source, article, or FAQ from AMD that specifically states their CPUs' temperature ranges? Or are CPU temperature ranges so varying that there's no possible baseline? Am I being too paranoid about this? To me, anything over 65C is a bit much, and if I'm in the low-mid 70s range with NO VMs running, what can I expect if I have several VMs running?

    Read the article

  • Does leaving the laptop cord plugged in break the cord?

    - by firedrake
    I recently got a new cord for my laptop because the cord I had before broke. I'm not sure how it got messed up, but one moment it charged my laptop and the next moment it didn't. I then got a new cord. The new cord worked perfectly fine until now, and it has only been a month. The cord won't work unless I am pressing it into the socket, which I am currently doing. The only thing I can find in common with the cords is that I leave them plugged in 24/7. My brother says that is the problem, but I do not think it is. Any tips or hints? I'm going to use duct tape to keep the pressure on the cord till I can find a better solution (I'm also thinking it could be the hole I plug it into on the back of my computer, but I'm focusing on the other idea for now; if I wiggle the plug part in the socket of my computer it will stop charging unless I'm pressing it in or to the side). Any help or ideas are appreciated. Not sure if this will help, but I use a Gateway with Windows Vista. Thanks.

    Read the article

  • Windows 8 Install Hanging at first white-font boot splash

    - by Omega
    I'm trying to install the Windows 8 preview on my Samsung Series 9 (2012, Ivy Bridge). I've done a bit of a custom scheme with this one: I'm using EFI/UEFI on this system; I've seen no indication that this system supports secure boot (yay!); my SSD is set up with GPT; Ubuntu is already installed and working great via UEFI; I'm trying to boot the Windows 8 install from a USB stick via UEFI; and I don't have access to a CD drive. The problem is that the boot seems to hang at the very first splash screen that looks like this: white Windows font, and the little beads don't show up. My USB stick has an activity light and it does blink for the first few seconds, but then goes back to its "nobody is talking to me" idle pulse. What I know: UEFI booting is definitely working; Windows 8 for those few seconds seems to have some kind of access to the USB drive; my Series 9 is running the latest BIOS/firmware. Any idea what I might be able to do to get Windows 8 installed??

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL Certificate on my server. I use a web hosting panel called ZPanel that is an open source project. It then set up a redirect for all traffic on my domain on Port 80 to redirect it to Port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache Virtual Hosts file with something like this...
        RewriteEngine on
        ReWriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
    My question is, are there any drawbacks to using SSL? Since this is not a 301 Redirect, will I lose link juice/ranking in search engines by switching to https? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not eCommerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning. UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
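    If the concern is specifically that the rule above is not a 301, the flags on the RewriteRule control that. A hedged sketch of the same rule issuing a permanent redirect (based on the directives quoted above; R without an argument defaults to a temporary 302):

        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R=301,L]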

    Read the article

  • Nginx Ubuntu Postfix Config - Can't connect to incoming IMAP server 'server not responding' but can send mail via outgoing using same details?

    - by daveaspinall
    I'm pretty new to server admin and especially nginx, but I seem to be getting on fine apart from accessing my mail via my iPhone. I've changed my domain to 'domain.com' in the details below. The thing is, I can send mail via my outgoing server but can't connect to the incoming IMAP one? I just get the message "the mail server at mail.domain.com is not responding". /etc/postfix/main.cf:
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        home_mailbox = Maildir/
        inet_interfaces = all
        inet_protocols = all
        mailbox_command =
        mailbox_size_limit = 0
        mydestination = domain.com, mail.domain.com, localhost.com, , localhost, localhost.localdomain
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = /etc/mailname
        recipient_delimiter = +
        relayhost =
        smtp_tls_note_starttls_offer = yes
        smtp_tls_security_level = may
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt
        smtpd_tls_key_file = /etc/ssl/private/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_security_level = may
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
    telnet localhost 25 followed by ehlo localhost gives:
        250-mail.domain.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
    I'm using the following details to connect: username, password, hostname mail.domain.com, port 25. iptables --list shows:
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination
    I also sent mail to the server as a test and got this message, if it helps: Technical details of temporary failure: [mail.domain.com. (10): Connection refused]. I also looked in /var/log/mail.log and it has multiple entries of:
        postfix/smtpd[12239]: connect from 5acefc9a.bb.sky.com[90.206.252.xxx]
        Mar 23 06:47:09 new-domain postfix/smtpd[12239]: lost connection after CONNECT from 5acefc9a.bb.sky.com[90.206.252.154]
    Notice new-domain, which is incorrect, but the server hostname and the hostname in the configs are correct? I recently moved servers and the host has set the primary domain on the service as new-domain.com, so this may be the issue? Like I said, it works to connect to the outgoing server, but incoming gets the not responding error. Any ideas would be much appreciated!
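    Worth noting as a sanity check (a hedged sketch; the service in question would be a Dovecot-style IMAP daemon, which the config above doesn't show): Postfix only handles SMTP, so the "incoming" IMAP side needs its own daemon listening on port 143/993. Something like this shows whether anything is actually listening and reachable:

        # Is anything listening on the IMAP ports locally?
        sudo netstat -tlnp | grep -E ':143|:993'

        # Can the port be reached from outside? (run from another machine)
        telnet mail.domain.com 143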

    Read the article

  • What does Firefox do when "scanning for viruses" after download?

    - by Joey
    Never mind the fact that Firefox is a browser and not an AV tool, but what exactly does it do after a download? Even on systems that have an up-to-date AV this generates a pause of several seconds after the download (where I can't open the file from within the DL manager) and I have no idea what FF might be trying there. I know I can turn it off (using FF only at work anyway) but I'm wondering. I can think of some things it might be:
    - FF itself is an AV scanner and it loads signatures in the background and whatnot. Sounds highly unlikely and shouldn't need tens of seconds for 20 KiB files.
    - FF tries to talk with the installed AV to munch the file. Sounds unneeded, given that most AV programs feature real-time protection anyway and therefore will have caught a virus already, and also because FF does that on systems without AV installed too.
    - FF uploads the file to some online virus checker. Unlikely and stupid.
    - FF instructs some online virus checker to download the file and check it. Unlikely and would be a nice target for DoSing that service.
    - FF generates a hash of the file and sends that somewhere (presumably Google) to check for. They then respond with either "Whoa, that hash is totally a virus" or "Nope, that MD5 doesn't look very virus-y to me".
    I'm running out of better ideas. Anyone have a clue?

    Read the article

  • .htaccess with GoDaddy not working in subdomain

    - by explorex
    Hi, I have a site uploaded to a shared subdomain (which is inside a folder), and .htaccess is not working. Please get details from here. EDIT: copied from Stack Overflow: I uploaded a website to a subdomain, and every page is not working except the front page (please check it here). What could be the possible reason? I should have 8 pages at the front level and many more at the admin level, but I am getting a 404 error as you can see. Does anyone have an idea or suggestion? UPDATE: the .htaccess file is:
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]
    UPDATE on URL routing: I do have a few URL routers like the one below, BUT I don't have any default router:
        $router->addRoute(
            'get-destination',
            new Zend_Controller_Router_Route('destination/get/:id/:dest-name', array(
                'controller' => 'destination',
                'action' => 'get',
                'id' => 'id',
                'dest-name' => 'dest-name'
            ))
        );
    just to make the URLs look cooler, and in my navigation (which is loaded from XML) I have something like:
        <nav>
            <home>
                <label>HOME</label>
                <controller>index</controller>
                <action>index</action>
                <route>default</route>
            </home>
    since I was getting a URL problem from where the URL was routed. Please also check the phpinfo at http://websmartus.com/demo/globaltours/public_html/phpinfo.php
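    One thing commonly suggested when rewrites live inside a subdirectory on shared hosting (a hedged sketch; the RewriteBase path below is a guess taken from the phpinfo URL above and needs to match wherever the subdomain actually points) is to add a RewriteBase so the rewritten index.php resolves against the right directory:

        RewriteEngine On
        RewriteBase /demo/globaltours/public_html/
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]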

    Read the article

  • Get the Windows Scheduler's Location Or Code?

    - by Ram
    Today I got a task to find the scheduled job that runs once a month; actually, I have to change its schedule to once a week. I have searched for it in the Windows scheduled tasks, but I didn't find it. It is sending a mail containing a link. Now I am confused about where else I can look, as the place I know the scheduler could be has already been checked by me. Can someone suggest where else I can search for this scheduler? As per the previous developer's comment, "It is a normal Windows method of scheduling tasks". UPDATE: The task runs periodically, once a month. As far as I know it could be a Windows Task Scheduler job or a Windows Service created by the old developer. As the previous developer is not available and I do not have any documentation, I need to change the time period from monthly to weekly. I have checked the Task Scheduler on the server and checked the services running on the server and was unable to find the scheduler. Now I have two questions: 1) is there any other approach by which I can schedule an automated email periodically? 2) any idea how to find this one?
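    A hedged sketch of command-line queries that dump every registered task and every service on the box, which can be easier to search than the GUI (run in an elevated command prompt; the output file names are just placeholders):

        :: List all scheduled tasks with full details
        schtasks /query /fo LIST /v > all_tasks.txt

        :: List all installed services (the job might be a service instead)
        sc query type= service state= all > all_services.txt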

    Read the article

  • Amavisd-new(2.6.4-3) failing to do "lookup_sql_dsn" when large number of emails are need to be accessed

    - by sandip
    Amavis is failing to do the SQL lookup when a large number of emails are sent to it. It throws an error after scanning 40 to 50 emails. The error looks like:
        (!!)TROUBLE in process_request: sql exec: err=7, 57P01, DBD::Pg::st bind_param failed: FATAL: terminating connection due to administrator command\nSSL connection has been closed unexpectedly at (eval 103) line 164, <GEN50> line 5. at (eval 104) line 280, <GEN50> line 5.
    As soon as this error appears in the logs, Amavis stops and port 10024 is closed. Thinking it was an error due to the SSL connection to the database (PostgreSQL 8.4), I stopped SSL in Postgres, but it was of no use. I have tried to configure Amavis on another server, but I got the same error again. This is happening on a production server, so I am not able to scan emails as per user settings. Does anybody have any idea what may be the source of this error? Please help. Thanks in advance.

    Read the article

  • Permission denied (publickey,gssapi-with-mic,password) ssh error

    - by zentenk
    Heads up, I'm a noob with Linux and networking. I set up an Ubuntu server and I have a static IP for my network. When I try to connect to the server at home (external), it prompts me to log in. When I supply the correct password (or an incorrect one), I get the error "Permission denied, please try again." and after 3 times I get "Permission denied (publickey,gssapi-with-mic,password)". I am, however, able to connect with SSH from another computer in the same network with ssh <internal ip of server>. I'm connecting with Mac OS X and my config file is vanilla. Note: during installation of Ubuntu it said I don't have a default route or something while doing automatic network configuration, but I ignored it and continued the installation; could this be the problem? EDIT: I have tried the below: I have nothing in hosts.allow, and iptables shows the ports that I have allowed, which is 22. I checked auth.log, and there is nothing when I connect to it remotely (even when it says permission denied). I have tried connecting to it internally and the correct authentication logs show. Any idea what's wrong?
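    A couple of hedged debugging steps that often narrow this down (the external hostname is a placeholder): run the client verbosely to see which auth methods are actually attempted, and watch the server's auth log at the same time; if nothing appears in auth.log during an external attempt, the connection is probably terminating at the router/NAT or another device rather than at this sshd.

        # On the client, verbose output shows which methods are offered and tried
        ssh -v user@external.example.com

        # On the server, watch for the attempt in real time
        sudo tail -f /var/log/auth.log

        # And confirm password auth is actually enabled in the server config
        grep -Ei 'passwordauthentication|permitrootlogin' /etc/ssh/sshd_config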

    Read the article

  • Does Guest WiFi on an Access Point make any sense? [migrated]

    - by Jason
    I have a Belkin WiFi router which offers a secondary Guest Access WiFi network. Of course, the idea is that the Guest network doesn't have access to the computers/devices on the main network. I also have a Comcast-issued cable modem/router device with multiple wired ports, but no WiFi capabilities. I prefer to run only one router/DHCP/NAT instead of both the Comcast router and the Belkin router, so I can disable the routing functions of the Belkin and let the Comcast router handle them. But if I disable the routing functions of the Belkin device, the Guest WiFi network is still available. Is this configuration just as secure as when the Belkin acts as a router? I guess the question comes down to this: do Guest WiFi networks provide security by 1) only allowing requests to IPs found in front of the device, or do they work by 2) disallowing requests to IPs on the same subnet? 1) would mean that Guest WiFi on an access point provides no benefit; 2) would mean that the Guest WiFi functionality can work even if the device is just an access point. Or maybe something else entirely?

    Read the article

  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now, for some reason, for some users it doesn't return the page but only a 200 OK message. Here is the flow (from Wireshark):
        GET /issues/142 HTTP/1.1
        Host: debian:3000
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive
        Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
        Cache-Control: max-age=0

        HTTP/1.1 200 OK
        Connection: Keep-Alive
        Date: Wed, 12 Dec 2012 16:30:16 GMT
        Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
        Content-Length: 0
    As you can see, I get nothing from the server. This is mostly random, because this blank page happens sometimes for some users, and for other users it almost never returns the page... I'm absolutely lost here. Any idea about what can be the cause? Thanks in advance!

    Read the article

  • Setting umask for all users

    - by Yarin
    I'm trying to set the default umask to 002 for all users including root on my CentOS box. According to this and other answers, this can be achieved by editing /etc/profile. However the comments at the top of that file say: "It's NOT a good idea to change this file unless you know what you are doing. It's much better to create a custom.sh shell script in /etc/profile.d/ to make custom changes to your environment, as this will prevent the need for merging in future updates." So I went ahead and created the following file: /etc/profile.d/myapp.sh with the single line: umask 002. Now, when I create a file logged in as root, the file is born with 664 permissions, the way I had hoped. But files created by my Apache wsgi application, or files created with sudo, still default to 644 permissions...
        $ touch newfile (as root): Result = 664 (Works)
        $ sudo touch newfile: Result = 644 (Doesn't work)
        Files created by Apache wsgi app: Result = 644 (Doesn't work)
        Files created by Python's RotatingFileHandler: Result = 644 (Doesn't work)
    Why is this happening, and how can I ensure 664 file permissions system wide, no matter what creates the file? UPDATE: I ended up finding a cleaner solution to this on a per-directory basis using ACLs, which I describe here.
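    Since the asker's eventual fix was per-directory ACLs, here is a hedged sketch of what that usually looks like (the directory and group names are placeholders): default ACL entries on a directory make group write permission apply to new files created inside it, regardless of the creating process's umask.

        # Grant the group rwX on everything that already exists
        setfacl -R -m g:mygroup:rwX /srv/shared

        # Default entries: new files/dirs created inside inherit group rw
        setfacl -R -d -m g:mygroup:rwX /srv/shared

        # Verify the effective and default ACLs
        getfacl /srv/shared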

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to have more than one server, and we need to share the sessions between the servers. We have been looking into different solutions; one is memcached, using memcached as the session handler in PHP. That will probably work. And the idea would be to run memcached on every machine and let all webservers access all the other servers' memcached instances, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet, that's a later project. We need this running, and we need this running now, and we will load-balance with DNS for a starter.) But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever reason, I don't want the users to just lose their sessions and have to start from the beginning... That's why we need some kind of replication, which memcached does not support. Then I found repcached (http://repcached.lab.klab.org/) which has multi-master replication of memcached, which is great, and is what I want. But does it work with 2 machines? Say 3, 5, 10? For future scaling. I also looked into redis (http://redis.io/) which also seems great, but is a bit more "shaky" with the PHP session handler support, and has no multi-master replication. The thing is that I like to use memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
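    For what it's worth, a hedged sketch of the php.ini side when using the PECL memcache extension as the session handler across two boxes (the IPs are placeholders; the redundancy directive only exists in some 3.x versions of the extension, so treat it as something to verify rather than a given):

        ; php.ini - store sessions in memcached on both web servers
        session.save_handler = memcache
        session.save_path   = "tcp://10.0.0.1:11211, tcp://10.0.0.2:11211"

        ; write each session to both servers (pecl/memcache 3.x only - verify)
        memcache.allow_failover = 1
        memcache.session_redundancy = 2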

    Read the article
