Search Results

Search found 7897 results on 316 pages for 'generate'.

Page 229 of 316

  • Ubuntu Postfix Gmail SMTP Relay Not Working

    - by Nick DeMayo
    I currently have postfix set up to relay messages from my websites through Gmail, and up until recently it was working perfectly. However, within the last week or so (not really sure when) I started getting the error below whenever attempting to send an email:

        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[2001:4860:800a::6c]:587: Network is unreachable
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.109]:587: Connection refused
        Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.108]:587: Connection refused

    Here is my configuration file:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        #readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = [my domain name]
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        #myorigin = /etc/mailname
        mydestination = [my host name], localhost.localdomain, localhost
        relayhost = [smtp.gmail.com]:587
        mynetworks = 127.0.0.0/8
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        inet_protocols = all

        ##########################################
        ##### non debconf entries start here #####

        ##### client TLS parameters #####
        smtp_tls_loglevel=1
        smtp_tls_security_level=encrypt
        smtp_sasl_auth_enable=yes
        smtp_sasl_password_maps=hash:/etc/postfix/sasl/passwd
        smtp_sasl_security_options = noanonymous

        ##### map username@localhost to [email protected] #####
        smtp_generic_maps=hash:/etc/postfix/generic

    Nothing changed on my server, as far as I know... Any ideas what could have caused it to stop working?
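
    Since the log shows Postfix trying Gmail's IPv6 address first ("Network is unreachable"), one thing worth trying, as a sketch and assuming the box has no working IPv6 route, is restricting Postfix to IPv4 and verifying that port 587 is reachable at all outside of Postfix:

        # force IPv4 only (inet_protocols is currently "all")
        sudo postconf -e 'inet_protocols = ipv4'
        sudo service postfix restart

        # check that the relay port is reachable at all
        telnet smtp.gmail.com 587

    If the telnet connection is also refused, the problem is likely outside Postfix entirely (an outbound firewall rule, or the hosting provider blocking port 587) rather than anything in main.cf.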

    Read the article

  • AjaxControlToolkit JavaScript is not pointing correctly on IIS7 running behind Apache mod_proxy

    - by sohum
    So here's my setup. I've got a DynDNS account since I have a dynamic IP. I have Apache listening on port 80 and IIS7 on port 8080. I don't want users to have to enter mydyndns.dyndns.com:8080 to get to IIS7, so I've added the following to my Apache httpd.conf file to enable a proxy/reverse proxy:

        <VirtualHost *:80>
            ProxyPass / http://localhost:8080/myASPSite/
            ProxyPassReverse / http://localhost:8080/myASPSite/
            ServerName myaspsite.mydomain.com
        </VirtualHost>

    I've got a CNAME record set up on my DNS so that myaspsite.mydomain.com redirects to mydyndns.dyndns.com. When I type myaspsite.mydomain.com into my browser, everything works beautifully... mostly. IIS7 serves up the ASPX pages and visitors to the site don't know any better. A problem arises, however, when I add Ajax Control Toolkit controls to my ASPX website, because these generate JavaScript and apparently mod_proxy_html isn't geared to handle the JS URIs properly. Sure enough, when I open up the source of my ASPX page, it has script elements as follows:

        <script src="/myASPSite/WebResource.axd?xyz" type="text/javascript"></script>
        <script src="/myASPSite/ScriptResource.axd?xyz" type="text/javascript"></script>

    These scripts are attempting to resolve at http://myaspsite.mydomain.com/myASPSite/WebResource..., which through the proxy translates to localhost:8080/myASPSite/myASPSite/... How can I solve this problem? The couple of websites I found suggested turning on ProxyHTMLExtended, but when I tried doing that the server did not start; I'm guessing I didn't know how to do it properly. Does anyone have a handy couple of config lines that I can add to my Apache conf file to get this working as I need? I'm using Apache 2.2.11. Thanks!
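
    For reference, a minimal mod_proxy_html setup for this kind of path rewrite usually looks something like the sketch below. Module and library paths depend on how mod_proxy_html was built for Apache 2.2, so treat this as a starting point rather than a known-good config:

        # mod_proxy_html 3.x is a third-party module on 2.2 and needs libxml2 loaded first
        LoadFile   /usr/lib/libxml2.so.2
        LoadModule proxy_html_module modules/mod_proxy_html.so

        <VirtualHost *:80>
            ServerName myaspsite.mydomain.com
            ProxyPass / http://localhost:8080/myASPSite/
            ProxyPassReverse / http://localhost:8080/myASPSite/

            # rewrite links in proxied HTML from /myASPSite/... back to /...
            SetOutputFilter proxy-html
            ProxyHTMLURLMap /myASPSite/ /
            # also rewrite URLs inside script and style blocks
            ProxyHTMLExtended On
        </VirtualHost>

    ProxyHTMLExtended requires mod_proxy_html 3.x, and a refusal to start is often just the LoadFile/LoadModule lines missing or pointing at the wrong libxml2 path; the error log from the failed start should say which.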

    Read the article

  • puppetca never returns anything

    - by mrisher
    Hi: I'm trying to configure Puppet on Ubuntu, and strangely I am never able to generate a certificate, because my server never shows any pending certificate requests. Put differently: on the server I am running puppetmasterd, and on the client I am able to connect to the server, but the client keeps printing

        notice: Did not receive certificate
        warning: peer certificate won't be verified in this SSL session

    and yet the server never sees the request:

        mrisher@lab2$ puppetca --list
        [nothing shows up]
        mrisher@lab2$ puppetca --sign clientname.domain.com
        clientname.domain.com
        err: Could not call sign: Could not find certificate request for clientname.domain.com

    Edit: There was a suggestion that autosign was happening, but that does not seem to be it. There is no autosign.conf file, and when I run puppetmasterd --no-daemonize -d -v I receive the following output every time the client says "notice: Did not receive certificate":

        info: Could not find certificate for 'clientname.domain.com'

    I checked the certs on the server and there don't seem to be any:

        mrisher@lab2:~$ puppetca --list --all
        mrisher@lab2:~$ sudo puppetca --list --all
        + lab2.domain.com    // this is the server (master)
        mrisher@lab2:~$ sudo puppetca --list
        [blank line]
        mrisher@lab2:~$

    Note: this is mostly the default install from Ubuntu, if that gives any leads. Thanks for any help out there.
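
    A common cause of this symptom is stale SSL state on the client left over from an earlier run against a different CA. A sketch of the usual reset, assuming the default Ubuntu paths and the pre-2.6 command names that match the puppetca output above, with lab2.domain.com as the master:

        # on the client: throw away any half-created certs and re-request
        sudo rm -rf /var/lib/puppet/ssl
        sudo puppetd --test --server lab2.domain.com --waitforcert 60

        # on the master: the request should now appear and can be signed
        sudo puppetca --list
        sudo puppetca --sign clientname.domain.com

    Note also that when run as an unprivileged user, puppetca operates on a per-user SSL directory rather than the master's, which would explain why only the sudo invocations above listed anything; always run it under sudo when checking real state.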

    Read the article

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
    Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email: this version of Rails and later requires TLS for sending emails. The message in the production log file says:

        hostname was not match with the server certificate

    I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com; if I run the command 'hostname' at the command line I get: ohlalaweb.com. Postfix seems to work fine: I can send emails from the command line to my Gmail, Yahoo, and Google Apps Gmail accounts with no problems. Here is the result of cat /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname

        smmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem
        smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem
        smtpd_use_tls=yes
        # SA created next line to force postfix to use self create certificate
        smtpd_tls_auth_only=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = ohlalaweb.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = localhost.localdomain, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    I have regenerated the SSL keys with the ohlalaweb.com host name. Any ideas or suggestions?
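
    One way to see exactly which name the SMTP client is being offered, as a diagnostic sketch assuming the cert is served on port 25 of this box:

        openssl s_client -connect ohlalaweb.com:25 -starttls smtp < /dev/null 2>/dev/null \
            | openssl x509 -noout -subject

    If the subject CN printed there is anything other than the hostname ActionMailer connects to (e.g. localhost versus ohlalaweb.com), the TLS verification will fail with exactly this error even though Postfix itself sends mail happily from the command line.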

    Read the article

  • Is this SPF record correct for me?

    - by DT
    I'm completely new to Stack Overflow, so hi! I need to add an SPF record to my site "main.com" (not the real address) to allow an email publishing company "emailpublishers.com" (not the real address) to send emails on my behalf. However, I'm nervous about adding an SPF record because of the havoc it could wreak if done incorrectly. I use Google Apps. I also use "auxiliary.com" to send mail from "main.com", and, of course, I use "main.com" to send mail as well. "auxiliary.com" doesn't have an SPF record. I used Microsoft's and OpenSPF's wizards to generate the following SPF entry. Does it seem correct for me?

        v=spf1 a mx ip4:55.55.555.55 mx:alt1.aspmx.l.google.com mx:alt2.aspmx.l.google.com mx:aspmx.l.google.com mx:aspmx2.googlemail.com mx:aspmx3.googlemail.com mx:aspmx4.googlemail.com mx:aspmx5.googlemail.com a:auxiliary.com include:_spf.google.com include:auxiliary.com mx:auxiliary.com include:emailpublishers.com mx:emailpublishers.com ~all

    However, my host MediaTemple says in a knowledge base article to use:

        v=spf1 a:main.com/20 ~all

    So that added to my confusion. Thanks a lot!
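
    For what it's worth, the generated record can usually be collapsed quite a bit: include:_spf.google.com already covers all of the aspmx*.l.google.com/googlemail.com MX hosts, and include:auxiliary.com is only meaningful if auxiliary.com publishes an SPF record of its own (it doesn't, per the question). A simpler near-equivalent, sketched with the placeholder IP kept from the question, would be:

        v=spf1 mx a ip4:55.55.555.55 a:auxiliary.com mx:auxiliary.com include:_spf.google.com include:emailpublishers.com ~all

    Ending with ~all (softfail) rather than -all is also a sensible way to limit the havoc while testing.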

    Read the article

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an Xvnc session, but I cannot work out how to apply the UK keyboard layout. A bit of googling yielded some instructions which let me change the keymap; however, using the keymap file I downloaded I lose the ability to use the arrow keys, Home/End, etc. Comparing that file with the standard one, there are substantially more differences than I would expect given the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per those instructions; maybe it's a local-console thing? Also, I can't change either the RDP client or the RDP server, as they are my only access to the system, and I don't have local console access. I do have root privileges on the OS, however. Any thoughts?

    Edit: I have found documentation on the XRDP SourceForge page (http://xrdp.sourceforge.net/documents/keymap/newkeymap.html) which describes the keymap file format. It indicates the values in the keymap files are Unicode, 0x64 etc., but the files I already have on my system seem to use a different format, 0:0 or 65307:27 etc. Does anybody know what the difference is?
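
    The tool those instructions rely on is xrdp-genkeymap, which dumps the layout of whatever X display it runs against, so it does not strictly need a local console, just a session where the GB layout is already active. A sketch, assuming the 12.04 package layout and that the UK keymap file is km-0809.ini (0x809 is the RDP layout code for en-gb):

        # inside the Xvnc session started by xrdp
        setxkbmap gb
        sudo xrdp-genkeymap /etc/xrdp/km-0809.ini
        sudo service xrdp restart

    As for the file format: the 0:0 / 65307:27 pairs appear to be keysym:unicode values written in decimal (65307 is XK_Escape and 27 is ESC), so they are the same information the documentation describes in hex, just printed differently.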

    Read the article

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of two instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. The setup is to support an online web application written in PHP. My questions:

    - I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store it on an attached EBS volume. (From what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI; is this view correct?)
    - For the AMI I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the web app?
    - I was going to launch a second instance, or attach a second volume in the second availability zone, to act as a standby for the database. I'm just looking for some suggestions about how to get the two DBs to talk; will this be a big task?
    - Regarding security updates: is it best to create a recent snapshot and just relaunch, letting Amazon install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch rather than install updates that force a restart?
    - I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I figure it is a better policy than including unnecessary applications in an AMI I intend to use.
    - My plan for backup is to create periodic snapshots of the web app and DB instances. (If I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)

    Thanks in advance for suggestions and advice. I am new to EC2 and may have described unnecessary overkill, but I want to implement what can be considered a best-practice solution, so all advice is appreciated.
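
    On the backup point, snapshots are easy to script. A sketch using the legacy EC2 API tools of that era (the volume ID is a placeholder and credentials are assumed to be configured):

        # nightly EBS snapshot from cron
        ec2-create-snapshot vol-12345678 -d "webapp backup $(date +%F)"

    EBS snapshots are incremental and stored in S3, which also covers the persist-after-termination concern, so long as the volume is not flagged to delete on termination.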

    Read the article

  • SQL 2000 Backup/Export Process - Can't find SQL 2000 Enterprise Manager, Can't use Mgmt Studio Express

    - by 1nsane
    I need to make a backup of a client's SQL 2000 database, but a few issues prevent me from doing so using the traditional methods. I've tried using SQL Server Management Studio Express, but the host doesn't grant sufficient privileges to create a backup, and I'm getting some strange error messages. I've also tried using "Generate Scripts" to recreate the schema and then the DTS Wizard to migrate the data, but the IDs set up with the identity specification property are not consistent with the live database once copied over, which results in some foreign-key breakage. If I remember correctly, I was able to use Microsoft SQL 2000 Enterprise Manager to perform the task before, but I can't find it anywhere... it seems Microsoft has pulled most SQL Server 2000 material from their site. Does anyone know where I can find a copy of Enterprise Manager (or a trial of SQL Server 2000, which I believe includes the component)? Or, conversely, does anyone know of any other tools (preferably non-commercial) capable of mirroring remote SQL Server 2000 DBs? Thanks!
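
    If the host allows plain T-SQL, the backup itself doesn't need Enterprise Manager at all. A sketch, assuming you have BACKUP DATABASE permission and a writable path on the server (both of which a restrictive host may well deny, and the database/path names are placeholders):

        BACKUP DATABASE ClientDb
        TO DISK = 'C:\backups\ClientDb.bak'
        WITH INIT;  -- overwrite any existing backup set in the file

    This runs fine from Management Studio Express or osql against a SQL 2000 instance; on shared hosting the catch is usually getting the resulting .bak file off the server, not producing it. For the DTS identity problem, SET IDENTITY_INSERT <table> ON during the copy is the usual way to preserve the original ID values.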

    Read the article

  • Windows 7: Image thumbnails fail to appear

    - by Fopedush
    All right, I've got a pretty strange one here. Since I installed Windows 7 on this machine some time ago, image thumbnails have never worked properly. For the vast majority of images they completely fail to show up, showing the icon of the default image-viewing application instead. Please note that the "Always show icons, never thumbnails" option in Folder Options is not checked. Sometimes a few image thumbnails will show up correctly, maybe about one in ten, with the rest failing as well. Windows Explorer does not appear to make any effort to populate the missing thumbnails, and there is no appreciable CPU usage; I can leave a window with missing thumbnails open all night and they will still never appear. Newly created images never generate thumbnails; only images that have been on the system since day one will occasionally show them. This leads me to believe that Explorer is set to show thumbnails, but whatever process is supposed to be in charge of actually generating them has failed somehow. In previous versions of Windows, explorer.exe itself was responsible for thumbnail generation; has this changed? Any suggestions at all, even if you aren't sure they will work, are welcome. I'd hate to have to wipe and reinstall this machine for such a minor annoyance.
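
    Thumbnails on Windows 7 are generated into a central per-user cache, so one low-risk thing to try is forcing that cache to be rebuilt. A sketch, with paths assuming a default Windows 7 profile:

        rem close Explorer first so the cache files are not locked
        taskkill /f /im explorer.exe
        del /f /q "%LocalAppData%\Microsoft\Windows\Explorer\thumbcache_*.db"
        start explorer.exe

    The same files can also be cleared via Disk Cleanup's "Thumbnails" entry. If a rebuilt cache still never populates, the thumbnail extraction component itself is the more likely suspect rather than the cache.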

    Read the article

  • Easiest way to send encrypted email?

    - by johnnyb10
    To comply with Massachusetts's new personal information protection law, my company needs to (among other things) ensure that anytime personal information is sent via email, it's encrypted. What is the easiest way to do this? Basically, I'm looking for something that will require the least amount of effort on the part of the recipient. If at all possible, I really want to avoid them having to download a program or go through any steps to generate a key pair, etc. So command-line GPG-type stuff is not an option. We use Exchange Server and Outlook 2007 as our email system. Is there a program that we can use to easily encrypt an email and then fax or call the recipient with a key? (Or maybe our email can include a link to our website containing our public key, that the recipient can download to decrypt the mail?) We won't have to send many of these encrypted emails, but the people who will be sending them will not be particularly technical, so I want it to be as easy as possible. Any recs for good programs would be great. Thanks.

    Read the article

  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all new machines. I recently performed a load test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup:

    - Firewall server running MS Forefront TMG 2010 on Win 2k8 Server
    - Request routing done by IIS Application Request Routing on the firewall machine
    - Web server is a Hyper-V VM on the database server (which is the host OS)
    - These machines are hefty, dual-CPU with six cores each (12 cores total)
    - Web server running IIS 7.5
    - Web applications built in ASP.NET 2.0, with one ISAPI filter (URL Rewrite) in front

    What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 at a time. The performance monitor shows nearly all of the counters moving through this pattern: a burst of requests comes in, req/sec jumps to 70, queued requests jump to 500, current requests jump up, the CPU jumps up, everything. Then, once that group of requests has been handled, there is a lull of nearly 10 seconds where almost nothing happens: 0-5 req/sec, 0 queued requests, minimal CPU usage. After 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests come through in bursts when I know the load is not being generated that way, especially considering the various load-generating clients send traffic at different intervals with random think times between requests. Is there something in the layers between Hyper-V, or perhaps in the hardware, that might cause requests to coalesce like this? The key metric I am watching is Requests/sec, along with the other critical counters that move with it, particularly Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

    Read the article

  • Working with barcode fonts in Word

    - by Bob Rivers
    I need to create labels in Microsoft Word 2010 with numbers encoded as barcodes. The barcode format (EAN, Code 39, UPC, etc.) does not matter. I have downloaded a barcode conversion font that I found at this site. When I type the number I want and then format it with my new font, it produces a barcode. I then print it on an OKI laser printer (1200 dpi). The result looks fine, at least to the eye. But when I try to scan it, nothing happens. I tried both a barcode scanner and a data collector, but neither of them reads the barcode. My barcode scanner is working fine, because I can read commercial barcodes printed on products. Does anybody have any advice? How do I do this kind of thing? I want to do it in Word because I will generate the labels using Mail Merge, so external programs aren't an option for me.
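
    One detail worth checking, assuming the download is a Code 39 font (the most common kind of free barcode font): Code 39 requires explicit start/stop characters, which these fonts usually map to the asterisk, so the text has to be typed as *1234567* rather than 1234567; a scanner will silently ignore a symbol without them. In a mail merge the same applies around the field, something like:

        *{ MERGEFIELD ItemNumber }*

    where ItemNumber stands in for whatever your merge field is called. EAN/UPC are different: they need a calculated check digit and special character encoding that a bare font cannot produce, so plainly typed numbers generally will not scan with those fonts at all.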

    Read the article

  • Configuring OS X L2TP VPN to use Certificate for IPSEC layer instead of Pre Shared Key

    - by Matthew Savage
    I'm trying to set up an L2TP VPN on an OS X Snow Leopard Server, and have had success using a pre-shared key. However, I would rather not rely on a simple string, and want to use a certificate instead. Setting this up on the server side is seemingly easy: you simply select a certificate you have generated from the list and hit Apply. However, when I try to use the certificate on the client side, it fails. I have exported the certificate into a P12 file, transferred it to the client, and imported it into the login keychain. But when I try to choose the certificate (from Network preferences, clicking Authentication Settings, then selecting Certificate and pressing Select) I am shown the following error:

        No machine certificates found
        Certificate authentication cannot be used because your keychain does not
        contain any suitable certificates. Use Keychain Access to import the
        certificate into your keychain. If you do not have the certificates required
        for authentication, contact your network administrator.

    Unfortunately, even when I generate a certificate where I override the defaults and ensure the DNS name etc. are set properly, this doesn't change. When I select Certificate Authentication for the user auth and click Select, the certificate for the server does show up there, but obviously this isn't where I need it to be available.
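
    The "machine certificates" wording in the error is the clue: for the IPSEC layer the client looks in the System keychain, not the login keychain. A sketch of importing the identity where the VPN client can see it (path and filename are placeholders):

        sudo security import /path/to/client.p12 \
            -k /Library/Keychains/System.keychain \
            -f pkcs12

    After the import (or an equivalent drag into the System keychain in Keychain Access), the certificate should appear in the Select dialog, provided the P12 contains the private key and not just the certificate.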

    Read the article

  • MySQL fails to start

    - by John Naegle
    I'm running an Ubuntu 12.04 LTS virtual machine. Last week the VM stopped unexpectedly, and now MySQL will not start on it. These two events may be related, they may not be. When I try to connect:

        $ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    Then:

        $ sudo service mysql start
        start: Job failed to start

    And:

        $ dmesg
        [ 1838.218400] type=1400 audit(1374633238.253:50): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=18473 comm="apparmor_parser"
        [ 1838.358656] init: mysql main process (18477) terminated with status 1
        [ 1838.358695] init: mysql main process ended, respawning
        [ 1839.269303] init: mysql post-start process (18478) terminated with status 1

    And:

        $ service mysql status
        mysql stop/waiting

    I think this means mysql is crashing when it starts:

        $ sudo mysqld start
        130723 21:51:24 InnoDB: Assertion failure in thread 3064211200 in file fut0lst.ic line 83
        InnoDB: Failing assertion: addr.page == FIL_NULL || addr.boffset >= FIL_PAGE_DATA
        InnoDB: We intentionally generate a memory trap.
        InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
        InnoDB: If you get repeated assertion failures or crashes, even
        InnoDB: immediately after the mysqld startup, there may be
        InnoDB: corruption in the InnoDB tablespace. Please refer to
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: about forcing recovery.
        02:51:24 UTC - mysqld got signal 6 ;

    Per the manual, I went to the data directory (/var/lib/mysql) and ran this:

        myisamchk --silent --force */*.MYI

    Then:

        $ sudo mysqld
        ...
        InnoDB: Your database may be corrupt or you may have copied the InnoDB
        InnoDB: tablespace but not the InnoDB log files. See
        InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
        InnoDB: for more information.
        ...

    Is my database corrupt? What can I do to recover? Reinstall MySQL? Something less drastic? I'm fine with losing the database; I just want a working system.
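
    The assertion points at InnoDB, so myisamchk (which only checks MyISAM tables) cannot fix it. A sketch of the usual path, on the stated assumption that the data is expendable (the drop-in filename and database directory are placeholders):

        # /etc/mysql/conf.d/recover.cnf  (hypothetical drop-in file)
        [mysqld]
        innodb_force_recovery = 1   # raise stepwise toward 6 until mysqld stays up

        # once it starts, either dump what is salvageable...
        mysqldump --all-databases > salvage.sql
        # ...or, since losing the data is acceptable, wipe and re-initialize:
        sudo service mysql stop
        sudo rm -rf /var/lib/mysql/ib* /var/lib/mysql/<dbdir>
        sudo mysql_install_db
        sudo service mysql start

    Remove the innodb_force_recovery line again afterwards; higher values progressively restrict what InnoDB will allow and are meant only for getting a dump out.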

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little over a year. Of course, it breaks two months after the warranty runs out. http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/ I have been using the box as a home media center, running mostly 24/7 and often pausing a video overnight. Since last week the fan has been running extremely loudly. After some digging I found that the Intel Atom CPU in it is overheating, and the built-in sensor is reporting temperatures way over 105°C. This got me worried, so I took the unit apart, completely vacuumed the heat sink, and oiled the fan, but the unit still shows the same behaviour. After turning it on and just observing the hardware monitor in the BIOS, the temperature slowly rises from 40°C to over 95°C in approximately 5 minutes. I am running the newest BIOS and a lightweight Linux OPENELEC OS with XBMC on it. Now I am wondering if it could be a faulty heat sensor in the Atom. The recommended running temperature is up to 85°C, but I have not detected any performance hit when running at the above-mentioned 105°C, and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at zero load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What should I try in order to fix this? I would prefer not to damage the CPU, since it is fused onto the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is properly transferring from the CPU to the heat pipe; the fan is running at full capacity and recently oiled, and warm air is making it out of the exhaust. One more note: the north bridge (or whatever it is called nowadays) is on the same heat pipe.
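
    If the sensor itself is in question, it may be worth cross-checking the BIOS reading against what lm-sensors reports from the board's own monitoring chip. A sketch, assuming a shell with lm-sensors is available (OpenELEC is minimal, so this may mean booting a stock live Linux from USB):

        sudo sensors-detect   # probe for monitoring chips, load modules
        watch -n 2 sensors    # watch the readings while the box idles

    A genuine 105°C at idle with a clean heat sink and a working fan would normally make the CPU throttle or shut down, so two sources disagreeing by 40-50°C would support the faulty-sensor theory.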

    Read the article

  • Does fast typing influence fast programming?

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. With some experience one realizes that this is not the case: you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night), so I had to learn to touch type, and I experienced an actual improvement in programming skill. Most surprisingly, the improvement was not due to sheer typing speed but to a change in mindset: I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Have any of you had a similar experience? I have now been training my touch typing a little with KTouch, and I find the auto-generated lessons the best. I can use this program to create new lessons out of text files, but that is only verbatim training, not auto-generation based on a language model. Do you know of any touch-typing program that allows creation of custom but randomized lessons?

    Read the article

  • Which database to use and system/db administration by layman [closed]

    - by blah
    So my friend and I got a brilliant ;) idea for a business. Since it is not predictable whether it will work out or not, we decided to keep costs as low as possible to start with, and in particular not to hire anyone. If it works out as expected, it will generate enough profit to hire professionals within a few months, but for the first few months we'll be doing everything ourselves. He's a business/finance major and I'm a software developer, so obviously I have to take care of IT :) It will be a webapp, written in Python/Django. My questions regarding this project:

    1) What database should I choose? I'm experienced with Oracle and have been working with SQL Server for a while, but both of them are too expensive (at least for now). Mine is developer experience; I've never done any DBA work. I'm looking for something free (as in beer). It looks like MySQL and PostgreSQL are the most popular in this sector; I would appreciate any comments on which to choose, and I'm open to other suggestions (it doesn't have to be MySQL or PostgreSQL). Here's what I know about the data: it will be almost all dates and numbers with a little bit of text, searched mainly by dates; data will almost never be updated, mostly inserted and browsed; from 30k to 300k new records per month.

    2) Servers. My idea is to rent two dedicated servers. During normal operation one would be the web server (Debian/Apache) and the other the DB server (Debian/?). My recovery plan is to install everything on both and, in case of trouble with one machine, just run everything on the other. Does that even make sense? Any other tips appreciated. Thanks.

    Read the article

  • JBoss 5 on AIX 5.3

    - by jess
    I am a complete newbie with AIX and system monitoring. Our application currently runs in production on JBoss 5.1 on AIX 5.3. The configuration and system settings are below.

    AIX system configuration:

        OS level: 5.3.9.0 (oslevel -g)
        Physical memory: 24 GB (svmon -G)
        Page space: 4 GB (lsps -s)
        Processors: 3 cores, PowerPC_POWER6, 4704 MHz (prtconf | grep Processor)

    Java version:

        JRE 1.6.0 IBM AIX build pap6460sr10fp1-20120321_01 (SR10 FP1) (java -fullversion)

    JBoss configuration:

        JBoss 5.1 / JBoss ESB 4.11
        HornetQ messaging with consumer flow control
        Java opts: -d64 -Xms2g -Xmx4g -XX:MaxPermSize=1024m

    Sometimes we observe very strange behavior: JBoss freezes without any error logs, and the server log stops without any further trace. We are also unable to get a thread dump (kill -3); it is not generated at that point, although kill -3 <pid> works in normal circumstances. The only option available to us was to restart the JBoss server, and it seems all the messages that were in the queues during the freeze are processed after restarting. We tried tweaking some HornetQ settings in JBoss, thinking the issue was there ("HornetQ Stuck By Default"), but we had no luck and have been unable to isolate the issue to any one point. We are looking at a tool like nmon for monitoring, but have no clue whether that is good enough. Please give us some pointers for investigating this issue. Thanks.
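
    One thing worth knowing on the IBM JDK: kill -3 does not print the dump to stdout the way HotSpot does; it writes a javacore*.txt file, normally into the process's working directory (or wherever IBM_JAVACOREDIR points). A sketch of checking for that during a freeze:

        ps -ef | grep [j]boss              # find the JVM pid
        kill -3 <pid>
        find / -name 'javacore*' -mtime -1 2>/dev/null

    If no javacore appears even then, the JVM itself is likely wedged (for example, threads stuck in native code or the box paging heavily), which would itself be a useful data point; running 'vmstat 5' during a freeze would show whether paging is involved.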

    Read the article

  • copying folder and file permissions from one user to another after switching domains [closed]

    - by emptyspaces
    Please excuse the title; this was the best way I could think of to describe this scenario without an entire paragraph. I am using C#. I currently have a file server running Windows Server 2003, set up on a domain we will call oldDomain, and I have about 500 user accounts with various permissions on this server. Because of restrictions out of my control, we are abandoning this domain and using another one that is more dominant within the organization, which we will call newDomain. All of the users that have accounts on oldDomain also have accounts on newDomain, but the usernames are completely different and there is no link between the two. What I am hoping to do is generate a list of all user accounts and their SIDs from AD on oldDomain; I already have this part done using dsquery and dsget. Then I will have someone go through and match all of the accounts from oldDomain to the correct usernames on newDomain, ultimately leaving me with a list of SIDs from oldDomain and the appropriate usernames from newDomain. I am then hoping to copy the file and folder permissions from each old user from oldDomain to the corresponding new user on newDomain once I join the server to newDomain. Can anyone tell me the best way to copy permissions from a SID to the user on newDomain? There are plenty of articles out there about copying permissions from user A to user B, but I wanted to check what the recommended practice is here, since there are a ton of directories.
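
    Since the tooling is C#, here is a minimal sketch of the core translation step using System.Security.AccessControl. Names like oldSid and newAccount are placeholders, and error handling and audit rules are omitted:

        using System.IO;
        using System.Security.AccessControl;
        using System.Security.Principal;

        static void MapAce(string path, SecurityIdentifier oldSid, NTAccount newAccount)
        {
            DirectoryInfo dir = new DirectoryInfo(path);
            DirectorySecurity acl = dir.GetAccessControl(AccessControlSections.Access);

            // copy every explicit (non-inherited) rule held by the old SID;
            // inherited rules come from the parent directory anyway
            foreach (FileSystemAccessRule rule in
                     acl.GetAccessRules(true, false, typeof(SecurityIdentifier)))
            {
                if (rule.IdentityReference == oldSid)
                {
                    acl.AddAccessRule(new FileSystemAccessRule(
                        newAccount, rule.FileSystemRights,
                        rule.InheritanceFlags, rule.PropagationFlags,
                        rule.AccessControlType));
                }
            }
            dir.SetAccessControl(acl);
        }

    Run over each directory from the dsquery list, this adds the newDomain account alongside the old SID rather than replacing it, which keeps the migration reversible; the orphaned oldDomain SIDs can be swept up later. (subinacl with its /replace option is the classic non-C# alternative for exactly this kind of SID swap.)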

    Read the article

  • OpenVPN Client timing out

    - by Austin
    I recently installed OpenVPN on my Ubuntu VPS. Whenever I try to connect to it, I can establish a connection just fine; however, everything I try to connect to times out. If I try to ping something, it resolves the IP but times out after resolving it (so the DNS server seems to be working correctly). My server.conf is the stock sample file; its active (non-comment) lines are:

        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key  # This file should be kept secret
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 3

    I've tried from multiple computers, by the way, with the same result on all of them. What could be wrong? Thanks in advance, and if you need other information I'll gladly post it.

    Information for comments:

        root@vps:~# iptables -L -n -v
        Chain INPUT (policy ACCEPT 862K packets, 51M bytes)
         pkts bytes target prot opt in  out  source       destination

        Chain FORWARD (policy ACCEPT 3 packets, 382 bytes)
         pkts bytes target prot opt in  out  source       destination
            0     0 ACCEPT  all  --  *   *   0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
         4641  298K ACCEPT  all  --  *   *   10.8.0.0/24  0.0.0.0/0
            0     0 REJECT  all  --  *   *   0.0.0.0/0    0.0.0.0/0    reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 1671K packets, 2378M bytes)
         pkts bytes target prot opt in  out  source       destination

        root@vps:~# iptables -t nat -L -n -v
        Chain PREROUTING (policy ACCEPT 17937 packets, 2013K bytes)
         pkts bytes target prot opt in  out  source       destination

        Chain POSTROUTING (policy ACCEPT 8975 packets, 562K bytes)
         pkts bytes target prot opt in  out  source       destination
         1579  103K SNAT    all  --  *   *   10.8.0.0/24  0.0.0.0/0    to:SERVERIP

        Chain OUTPUT (policy ACCEPT 8972 packets, 562K bytes)
         pkts bytes target prot opt in  out  source       destination
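
    Given that the SNAT rule is matching (its packet counter is non-zero) and the FORWARD chain accepts the 10.8.0.0/24 traffic, the usual remaining suspect is kernel forwarding itself. A diagnostic sketch (the external interface is assumed to be eth0):

        # is forwarding enabled? should print 1
        sysctl net.ipv4.ip_forward
        # if not, enable it now and persist it in /etc/sysctl.conf
        sysctl -w net.ipv4.ip_forward=1

        # watch whether client traffic actually traverses the box
        tcpdump -ni tun0 icmp    # pings arriving from the VPN client
        tcpdump -ni eth0 icmp    # the same pings leaving after SNAT

    If packets show up on tun0 but never on the external interface, forwarding is off; if they leave eth0 but no replies come back, the SNAT target address (SERVERIP above) is the thing to double-check.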

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories, and subdirectories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: if folder "A" is seven (or more) days old, it will be deleted even if its subfolders and files are less than seven days old. What I would like this script to do is this: delete all files from the root and in all subfolders of "$dump_path" that are seven or more days old, but maintain the parent folder(s) of files and folders that are less than seven days old, even if that means the parent folders are more than seven days old. If all subfolders and files are seven days or older, then the parent can be deleted. A slightly obscure problem, I know, but the intention is to have a 7-day retention period for all data in a 'sandbox' location of our shared areas. As an added bonus, it would be great if it could generate a log of what it deletes and e-mail it out post-deletion. Thank you for reading and I hope that all makes sense! Mark

        # set folder path
        $dump_path = "c:\temp"

        # set minimum age of files and folders
        $max_days = "-7"

        # get the current date
        $curr_date = Get-Date

        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)

        # delete the files and folders
        Get-ChildItem $dump_path |
            Where-Object { $_.CreationTime -lt $del_date } |
            Remove-Item -Recurse
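
    A sketch of one way to get the desired behavior, compatible with PowerShell 2.0: delete old files first, then prune only the directories left completely empty. The logging and e-mail step is included, but the SMTP server and addresses are placeholders:

        $log = "$env:TEMP\sandbox_cleanup.log"

        # 1. remove old files only (not directories), logging each path
        Get-ChildItem $dump_path -Recurse |
            Where-Object { -not $_.PSIsContainer -and $_.CreationTime -lt $del_date } |
            ForEach-Object { $_.FullName | Out-File $log -Append; $_ } |
            Remove-Item -Force

        # 2. remove directories that are now empty, deepest first,
        #    so parents of recent files are never touched
        Get-ChildItem $dump_path -Recurse |
            Where-Object { $_.PSIsContainer } |
            Sort-Object { $_.FullName.Length } -Descending |
            Where-Object { @(Get-ChildItem $_.FullName -Force).Count -eq 0 } |
            Remove-Item -Force

        # 3. mail the log (server and addresses are placeholders)
        Send-MailMessage -SmtpServer "mail.example.com" -From "cleanup@example.com" `
            -To "mark@example.com" -Subject "Sandbox cleanup" `
            -Body (Get-Content $log | Out-String)

    The deepest-first sort matters: Sort-Object buffers the whole directory list, so by the time each parent's emptiness is tested, any deletable children are already gone, which is exactly the "parent goes only if everything under it went" rule described above.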

    Read the article

  • What web-based tool would allow a non-technical user to manage authorized_keys files on a Linux (Fedora/CentOS/Ubuntu/Debian) server?

    - by Tom H
    (Edit: clarification below.) We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding id_dsa.pub to their authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords is not designed for key-based auth, and hence is not easy for non-technical users. The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts. Obviously I could deploy LDAP or another centralized authentication system, but that seems overblown when Webmin worked well for the simple case. It is easy to achieve synchronized users, groups, and passwords across a bunch of low-security development boxes using Webmin clustered users and groups. However, in the currently installed Webmin it is not as easy to create the authorized_keys entries as it is to create user accounts and passwords (it's possible, but not easy; some of the functionality is in the Usermin module, or would require some tedious steps). Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
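
    Since Chef is already syncing the accounts, one workable split is to let the web tool manage nothing but a data bag (or even a flat file of pasted keys) and have Chef render authorized_keys everywhere. A sketch of the recipe side, assuming a hypothetical data bag "users" whose items carry an "id" and an array "ssh_keys":

        search(:users, "*:*").each do |u|
          directory "/home/#{u['id']}/.ssh" do
            owner u['id']
            mode  '0700'
          end

          file "/home/#{u['id']}/.ssh/authorized_keys" do
            content u['ssh_keys'].join("\n") + "\n"
            owner   u['id']
            mode    '0600'
          end
        end

    With that in place, the web tool only needs to edit JSON items, which opens up far simpler options than a full account-management UI.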

    Read the article

  • Unable to get squid working for remote users

    - by Sean
    I am trying to set up Squid 3.2.4, but I have not been able to get it working for remote users. It works fine locally. I am unable to figure out what I am doing wrong...

        http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem

        refresh_pattern ^ftp:             1440  20%  10080
        refresh_pattern ^gopher:          1440   0%   1440
        refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
        refresh_pattern .                    0  20%   4320

        acl localnet src 10.0.0.0/8       # RFC 1918 possible internal network
        acl localnet src 172.16.0.0/12    # RFC 1918 possible internal network
        acl localnet src 192.168.0.0/16   # RFC 1918 possible internal network
        acl localnet src fc00::/7         # RFC 4193 local private network range
        acl localnet src fe80::/10        # RFC 4291 link-local (directly plugged) machines

        acl SSL_ports port 443
        acl Safe_ports port 80            # http
        acl Safe_ports port 21            # ftp
        acl Safe_ports port 443           # https
        acl Safe_ports port 70            # gopher
        acl Safe_ports port 210           # wais
        acl Safe_ports port 1025-65535    # unregistered ports
        acl Safe_ports port 280           # http-mgmt
        acl Safe_ports port 488           # gss-http
        acl Safe_ports port 591           # filemaker
        acl Safe_ports port 777           # multiling http
        acl CONNECT method CONNECT

        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access allow localhost
        http_access allow localnet
        http_access allow all

        cache deny all
        via off
        forwarded_for off

        header_access From deny all
        header_access Server deny all
        header_access WWW-Authenticate deny all
        header_access Link deny all
        header_access Cache-Control deny all
        header_access Proxy-Connection deny all
        header_access X-Cache deny all
        header_access X-Cache-Lookup deny all
        header_access Via deny all
        header_access Forwarded-For deny all
        header_access X-Forwarded-For deny all
        header_access Pragma deny all
        header_access Keep-Alive deny all

        acl ip1 localip 1.1.1.90
        acl ip2 localip 1.1.1.91
        acl ip3 localip 1.1.1.92
        acl ip4 localip 1.1.1.93
        acl ip5 localip 1.1.1.94

        tcp_outgoing_address 1.1.1.90 ip1
        tcp_outgoing_address 1.1.1.91 ip2
        tcp_outgoing_address 1.1.1.92 ip3
        tcp_outgoing_address 1.1.1.93 ip4
        tcp_outgoing_address 1.1.1.94 ip5
        tcp_outgoing_address 1.1.1.90
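
    One likely culprit, offered as a hypothesis: in Squid 3.2 a port marked transparent (the deprecated spelling of intercept) only accepts traffic redirected to it by firewall rules and refuses ordinary forward-proxy requests, which is exactly what remote users with a configured proxy send. A sketch of splitting the two roles onto separate ports:

        # plain forward proxy: what remote clients configured for a proxy will hit
        http_port 3128

        # interception + ssl-bump for locally redirected traffic only
        http_port 3129 intercept ssl-bump generate-host-certificates=on \
            dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem

    Separately, header_access was replaced by request_header_access/reply_header_access in Squid 3.x, so those lines may be rejected outright; running squid -k parse will show any such startup errors.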

    Read the article

  • IIS7 - Web Deployment Tool - SetParam/SetParamFile to set http and https bindings + Cert

    - by Andras Zoltan
    Hi, we're currently using the MS Web Deployment Tool to sync a live website and some web services from a staging box to two live servers. The staging box hosts the site on any IP on port 17000, whereas the two live servers are load-balanced and have a different IP each. At present, I generate two separate packages for deployment, one for each machine, using the sync operation and specifying a DestinationBinding parameter as follows (split across multiple lines for readability):

        msdeploy -verb:sync -source:WebServer,computerName=localhost
                 -dest:package="machinename.zip"
                 -setParam:type="DestinationBinding",scope="SiteName",value="ip_address:port:"

    I run this twice, with a different target filename and IP address for each of the two machines. When it comes to deployment, I simply sync from each package to its respective live site. I know, I know: I should be able to do it by generating one parameterised package and then perhaps using the SetParamFile switch for each of the two servers. Believe me, I'd like to, but the documentation on doing this is frankly non-existent. Now I need to configure and deploy both HTTP and HTTPS bindings for this site, including the SSL cert to be used. I've added an SSL binding for the site on the staging box, which uses a development cert (which will need to be replaced; or should the staging box be using the live cert?), and now the above command line has the effect of replacing the target IP on both the http and https entries. It appears that I cannot specify multiple bindings plus the cert information in the DestinationBinding value in the -setParam above, so does anyone know how I would go about doing this? Any help greatly appreciated.
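
    For the record, the parameter-file route looks roughly like the sketch below; parameter names must match what the package declares, and the IP/server names are placeholders:

        <!-- live1.xml, one file per target server -->
        <parameters>
          <setParameter name="DestinationBinding" value="10.0.0.1:80:" />
        </parameters>

        msdeploy -verb:sync -source:package="site.zip"
                 -dest:auto,computerName=LIVE1
                 -setParamFile:live1.xml

    For the HTTP-plus-HTTPS problem, declaring a separate parameter per binding (one scoped to the http binding, one to the https binding) is the usual approach, since a single DestinationBinding value cannot carry two bindings. The certificate itself typically still needs to be installed and bound on each live server outside the package (e.g. via the IIS manager or netsh http add sslcert), as the SSL cert association is held by HTTP.sys rather than in the site binding string that msdeploy rewrites.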

    Read the article
