Search Results

Search found 785 results on 32 pages for 'cf maintainer'.

Page 19/32 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • How do I deny all requests not from cloudflare?

    - by phillips1012
    I've recently been hit by denial-of-service attacks from multiple proxy IPs, so I put the site behind Cloudflare to prevent this. Then I noticed the attackers were bypassing Cloudflare by connecting directly to the server's IP address and forging the Host header. What is the most performant way to return 403 for connections that don't come from the 18 IP ranges used by Cloudflare? I tried denying all and then explicitly allowing the Cloudflare addresses, but that doesn't work because I've configured nginx to take CF-Connecting-IP as the address the allow/deny rules test against. I'm using nginx 1.6.0.
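
    One approach that sidesteps the CF-Connecting-IP interaction entirely is to filter at the firewall rather than in nginx. A minimal sketch, assuming iptables and using two illustrative ranges (the authoritative list is published by Cloudflare and should be substituted in):

        # Illustrative only -- substitute the current list from https://www.cloudflare.com/ips/
        for range in 173.245.48.0/20 103.21.244.0/22; do
            iptables -A INPUT -p tcp -m multiport --dports 80,443 -s "$range" -j ACCEPT
        done
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP

    With the raw TCP peer checked before nginx ever sees the request, CF-Connecting-IP can still be used for logging and application-level checks.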

    Read the article

  • Embedded linux Development learning

    - by user1797375
    I come from a Windows background and I am proficient with the .NET platform. For work, I need to bring up a custom embedded system platform. We have bought the PandaBoard ES as the test platform. The application is to stream images over Wi-Fi. If you think about it, we are building something similar to a NETGEAR router, the only difference being that when you log into the device it serves images. Because my background is in Windows, I am not quite sure how to start off with embedded Linux development. Reading through various sites, I have come to the conclusion that using Linux as the development host is the best option. Can someone point me in the right direction regarding the setup? I have a Windows machine that will be used for development; I can either use VirtualBox or set up a partition for Linux, but the finer details are what's throwing me off. What I need to know is: 1) once I install Linux, what other software do I need - Code::Blocks? 2) what about the toolchain? 3) how do I debug - through the serial port? 4) is there a way to send the built image directly to the CF card? Thanks
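
    For question 4, a minimal sketch of the usual approach, assuming the card shows up as a block device on the development host (the device name and image path below are hypothetical; confirm the device with lsblk before writing):

        lsblk                                                # find the card reader's device node
        sudo dd if=output/sdcard.img of=/dev/sdX bs=4M conv=fsync   # /dev/sdX is a placeholder!
        sync

    The image path is whatever the build system produces; the point is only that no intermediate copy step is needed.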

    Read the article

  • Postfix configuration problem

    - by dhanya
    Can anyone help by sharing your Postfix configuration file as a reference so that I can find my mistakes? I'm working on SUSE Linux Enterprise Server. My goal is to set up a mail server in a campus network. Postfix shows it is running, but no mail arrives in /var/spool/mail when I send mail with the mail command at the terminal. Here is my main.cf file; please help me find a solution:

        readme_directory = /usr/share/doc/packages/postfix-doc/README_FILES
        inet_protocols = all
        biff = no
        mail_spool_directory = /var/mail
        canonical_maps = hash:/etc/postfix/canonical
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_alias_domains = hash:/etc/postfix/virtual
        relocated_maps = hash:/etc/postfix/relocated
        transport_maps = hash:/etc/postfix/transport
        sender_canonical_maps = hash:/etc/postfix/sender_canonical
        masquerade_exceptions = root
        masquerade_classes = envelope_sender, header_sender, header_recipient
        myhostname = cmail.cetmail
        delay_warning_time = 1h
        message_strip_characters = \0
        program_directory = /usr/lib/postfix
        inet_interfaces = all
        #inet_interfaces = 127.0.0.1
        masquerade_domains = cetmail
        mydestination = cmail.cetmail, localhost.cetmail, cetmail
        defer_transports =
        mynetworks_style = subnet
        disable_dns_lookups = no
        relayhost = postfix
        mailbox_command = cyrus
        mailbox_transport =
        strict_8bitmime = no
        disable_mime_output_conversion = no
        smtpd_sender_restrictions = hash:/etc/postfix/access
        smtpd_client_restrictions =
        smtpd_helo_required = no
        smtpd_helo_restrictions =
        strict_rfc821_envelopes = no
        smtpd_recipient_restrictions = permit_mynetworks,reject_unauth_destination
        smtp_sasl_auth_enable = no
        smtpd_sasl_auth_enable = no
        smtpd_use_tls = no
        smtp_use_tls = no
        alias_maps = hash:/etc/aliases
        mailbox_size_limit = 0
        message_size_limit = 10240000
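
    A few generic first checks, offered as a sketch rather than a diagnosis of this particular file (SUSE normally logs Postfix to /var/log/mail):

        postconf -n                          # settings that differ from the compiled-in defaults
        postqueue -p                         # is the test message stuck in the queue?
        tail -f /var/log/mail                # watch the log while sending a test mail
        echo test | mail -s test someuser@localhost

    Deferred entries in the queue plus the matching log lines usually say exactly which map or transport Postfix is stumbling over.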

    Read the article

  • Postfix relay to multiple servers and multiple users

    - by Frankie
    I currently have Postfix configured so that all users get relayed by the local machine, with the exception of one user who gets relayed via Gmail. To that end I've added the following configuration:

        # /etc/postfix/main.cf
        # default options to allow relay via gmail
        smtp_use_tls=yes
        smtp_sasl_auth_enable = yes
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_sasl_security_options = noanonymous
        # map the relayhosts according to user
        sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_maps
        # keep a list of user and passwords
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

        # /etc/postfix/relayhost_maps
        user-one@localhost    [smtp.gmail.com]:587

        # /etc/postfix/sasl_passwd
        [smtp.gmail.com]:587    [email protected]:user-one-pass-at-google

    I know I can map multiple users to multiple passwords using smtp_sasl_password_maps, but that would mean that all relaying would be done by Gmail, whereas I specifically want all relaying to be done by localhost with the exception of some users. Now I would like user-two@localhost (etc.) to relay via Google with their own respective passwords. Is that possible?
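
    A minimal sketch of one way this is commonly done, assuming a Postfix version that supports sender-dependent authentication (the addresses and passwords below are placeholders, not real credentials):

        # /etc/postfix/main.cf -- let the password lookup be keyed by sender as well
        smtp_sender_dependent_authentication = yes
        sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_maps
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

        # /etc/postfix/relayhost_maps
        user-one@localhost    [smtp.gmail.com]:587
        user-two@localhost    [smtp.gmail.com]:587

        # /etc/postfix/sasl_passwd -- per-sender entries take precedence over the relayhost entry
        user-one@localhost    GMAIL_ADDRESS_ONE:PASSWORD_ONE
        user-two@localhost    GMAIL_ADDRESS_TWO:PASSWORD_TWO

    Remember to run postmap on both files and reload Postfix after editing them.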

    Read the article

  • Bootstrap a debian build environment and build source packages with no root privileges

    - by Erwan Queffélec
    On Debian squeeze, I am trying to do the following: fetch source packages from the wheezy source repository, bootstrap a squeeze chroot for several architectures, and build the packages for several architectures (i386, amd64, plus all and any). I want the fetching, bootstrapping and building to be scriptable, repeatable, and run as a normal user. For the environment setup, I want to make as little use of the root account as possible (install the necessary dependencies, and maybe some visudo stuff). If possible I would like to avoid using a VM (pbuilder with user-mode Linux). So far I have tried several things with pbuilder (requires root) and debootstrap (requires root) with little success. Here is an example script of what I want to do (it does not work):

        #!/bin/bash
        set -e
        set -x
        this=`readlink -f $0`
        this_dir=`dirname $this`
        archs='i386 amd64 any all'
        pushd $this_dir/src
        # I actually want the following line to work with a repo that
        # is not in /etc/apt/sources.list but that is another question
        apt-get source cyrus-imapd-2.4
        popd
        for arch in $archs
        do
            build_dir=$this_dir/build/$arch/
            pbuilder --create --configfile $build_dir/pbuilderrc --buildresult $build_dir/
            pbuilder --build --configfile $build_dir/pbuilderrc --buildresult \
                $build_dir/ $this_dir/src/*.dsc
            # of course I want to use the .dput.cf in /home/myuser/
            # and not in /root/
            dput $build_dir/$arch/*.changes
        done
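
    One commonly suggested compromise for the root requirement, sketched here as an assumption rather than a tested recipe: have root grant the build user passwordless sudo for pbuilder only, so the rest of the workflow stays unprivileged (the user name and path are illustrative):

        # /etc/sudoers.d/pbuilder  (edit with: visudo -f /etc/sudoers.d/pbuilder)
        builduser ALL=(root) NOPASSWD: /usr/sbin/pbuilder

    The script can then call sudo pbuilder while apt-get source and dput keep running as the normal user, which also keeps ~/.dput.cf in play.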

    Read the article

  • Can someone recommend a Compact Flash card to be used as a boot disk

    - by Hamish Downer
    I have an early Acer Aspire One netbook, and its flash drive is really slow at writing. I've taken it apart to add more RAM, but I've pretty much stopped using it. I've read about people replacing the SSD with a Compact Flash card and a CF-to-ZIF adapter, but I've also read about some Compact Flash cards where the manufacturer has permanently disabled the boot flag to stop people doing this kind of mod (can't find the link any more, though). So my most specific question is: can someone recommend a Compact Flash card that does allow the boot flag to be set? Please say whether you've done it yourself or just heard about it from someone else. Beyond that, is this generally a problem?

    Read the article

  • postfix destination concurrency

    - by Hamlet
    Hi, I have a website sending email using PHPMailer on CentOS with Postfix, PHP, MySQL etc. Emails are delivered to all hosts correctly except one:

        Mar 30 14:38:22 server postfix/qmgr[15467]: 7237D218852D: to=, relay=none, delay=0.04,
            delays=0.04/0.01/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended:
            lost connection with mail06.indamail.hu[91.83.45.46] while sending MAIL FROM)

    They told me to limit concurrent connections to their server to 2. I did, but they still say I connect more than twice. main.cf:

        default_destination_concurrency_limit = 1
        fallback_relay =
        smtp_destination_concurrency_limit = 1
        initial_destination_concurrency = 1

    Any ideas? Thanks, Hamlet
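
    A minimal sketch of a per-destination throttle, offered as an assumption about what the receiving host wants rather than a verified fix (the transport name is arbitrary):

        # /etc/postfix/master.cf -- a dedicated smtp transport capped at one process
        slow  unix  -  -  n  -  1  smtp

        # /etc/postfix/main.cf
        transport_maps = hash:/etc/postfix/transport
        slow_destination_concurrency_limit = 1
        slow_destination_rate_delay = 1s

        # /etc/postfix/transport
        indamail.hu    slow:

    Run postmap /etc/postfix/transport and postfix reload afterwards; this keeps the stricter limits scoped to the one problematic destination instead of slowing down all outbound mail.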

    Read the article

  • Bypass spam check for Auth users in postfix

    - by magiza83
    I would like to know if there is any option to "FILTER" authenticated users in Postfix. Let me explain better: I have the amavis and dspam services between postfix(25) and postfix(10026), but I would like to skip those checks when the user is authenticated. The mail path is:

        postfix(25) -> policyd(10031) -> amavis(10024) -> postfix(10025) -> dspam(dspam.sock) -> postfix(10026) -> cyrus

    and I want authenticated users to go from postfix(25) straight to postfix(10026). My configuration is:

        # main.cf
        ...
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_sasl_path = smtpd
        smtpd_recipient_restrictions =
            permit_sasl_authenticated,
            reject_unauth_destination,
            check_policy_service inet:127.0.0.1:10040,
            reject_invalid_hostname,
            reject_rbl_client multi.uribl.com,
            reject_rbl_client dsn.rfc-ignorant.org,
            reject_rbl_client dul.dnsbl.sorbs.net,
            reject_rbl_client list.dsbl.org,
            reject_rbl_client sbl-xbl.spamhaus.org,
            reject_rbl_client bl.spamcop.net,
            reject_rbl_client dnsbl.sorbs.net,
            reject_rbl_client cbl.abuseat.org,
            reject_rbl_client ix.dnsbl.manitu.net,
            reject_rbl_client combined.rbl.msrbl.net,
            reject_rbl_client rabl.nuclearelephant.com,
            check_policy_service inet:127.0.0.1:10031,
            permit_mynetworks,
            reject
        ...

    I would like something like "FILTER smtp:localhost:10026" when they are authenticated, because with my current configuration I am only bypassing policyd, not amavis and dspam. Thanks.
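
    A hedged sketch of one approach often used for this (an assumption, not a confirmed answer for this particular chain): let authenticated clients submit on port 587 and override the content filter for that service only:

        # /etc/postfix/master.cf
        submission inet n - n - - smtpd
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
          -o content_filter=smtp:localhost:10026

    Mail accepted on the submission service then goes straight to the postfix(10026) hop, while port 25 keeps the full policyd/amavis/dspam path.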

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Thanks, Chris
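
    Worth remembering when scripting this (a general note, not specific to this card): -c rewrites the bad-blocks inode even when the scan finds nothing, which by itself counts as a modification, and e2fsck's exit status is a bit mask rather than a plain pass/fail:

        fsck.ext3 -c /dev/sdx1
        echo $?      # 0 = clean, 1 = errors corrected, 2 = corrected + reboot needed,
                     # 4 = errors left uncorrected, 8 = operational error (see man e2fsck)

    So a script usually wants to treat exit codes 0 and 1 as success rather than failing on any non-zero value.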

    Read the article

  • Windows CE and the Compact Framework are dead?

    - by Valter Minute
    This is one of the questions I've been asked more and more frequently at my public speeches and each time I meet customers. The announcement of the new Windows Phone 7 platform and the release of Visual Studio 2010 generated a bit of confusion around Windows CE and some of the technologies it supports. Windows CE is still alive, and a lot of good programmers are working on the new releases (I had a chance to meet some of them during the MVP summit in February). Here's a blog post from Olivier Bloch that describes the situation and provides some good news about the OS: http://blogs.msdn.com/obloch/archive/2010/05/03/windows-ce-is-not-dead.aspx As you can read there, Windows Phone 7 keeps its "roots" inside Windows CE. Regarding the .NET Compact Framework, this article from Abhinaba's excellent "I know the answer (it's 42)" blog (it seems we share a passion for photography, Douglas Adams and embedded development) explains that the .NET CF is the foundation of the XNA and Silverlight implementations on the WP7 platform: http://blogs.msdn.com/abhinaba/archive/2010/03/18/what-is-netcf.aspx So Windows CE is here to stay, powering one of the most interesting smartphone platforms and ready to power your devices too. Add those blogs to your RSS reader and stay tuned for more good news about CE and the Compact Framework!

    Read the article

  • how do I remove the last connected users from the lightdm greeter list

    - by Christophe Drevet
    With gdm3, I was able to remove the last connected users from the greeter list by removing the file /var/log/ConsoleKit/history. With lightdm, the last users still appear even after removing /var/log/ConsoleKit/history and /var/lib/lightdm/.cache/unity-greeter/state. Where does lightdm store this list?

    Edit: It seems to use the same data as the last command, so purging /var/log/wtmp is enough to remove previously connected users from the list:

        # > /var/log/wtmp

    But after doing this I get the unwanted side effect that users logging in via lightdm no longer appear in the list at all. I should mention that I'm in an enterprise network environment using NIS.

    Edit2: It seems that lightdm uses wtmp to display the recent network users list, but does not update it. So lightdm will only show a network user if that user has logged in some other way (ssh, login), as I had done on this computer before. cf: https://bugs.launchpad.net/lightdm/+bug/871070 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=648604

    Edit3: I just added the following line to /etc/pam.d/lightdm to force lightdm to record users in wtmp:

        session optional pam_lastlog.so silent

    Read the article

  • Cannot run setups from a vboxsvr mapped network drive on Windows 7 within VirtualBox

    - by Dimitri C.
    I'm trying to run an application setup by double-clicking the setup.exe from within Windows Explorer. The file is located on a mapped network drive, and I'm using Windows 7. This results in the following error message: The specified path does not exist. Check the path, and then try again. The workaround I found is to copy the installer to the main hard drive (c:) and run it from there; however, this is rather inconvenient. The same action did work on Windows 2000, Windows XP and Windows Vista. I have the impression that the problem only occurs with installers, as everything seemed to work fine with regular exe's. Is there anyone who can explain this odd behavior? Update: After some extended tests I noticed that the problem only occurs with a mapped drive of VirtualBox's "shared folders" (cf. vboxsvr). Mapping an SMB drive works fine.

    Read the article

  • How to setup heartbeat for IP fail over on SSH failure

    - by Tony
    I wonder if anyone can help me. I am trying to set up Heartbeat on Red Hat 5 to fail over an IP address when SSH stops responding on a server. Basically, you SSH to a VIP (192.168.0.100) and get put through to whichever server currently holds the floating IP: Server 01 has eth0 = 192.168.0.1 and currently carries the VIP on eth0:0, while Server 02 has eth0 = 192.168.0.2 with eth0:0 down. If SSH stops responding on the first machine, I want eth0:0 to be brought up on the second machine so SSH connections carry on being served. I have tried to follow some documents I found online; here is my current configuration:

        # ha.cf
        bcast eth0
        keepalive 2
        warntime 10
        deadtime 30
        initdead 120
        udpport 694
        auto_failback off
        node vm-bal01
        node vm-bal02
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log

        # authkeys
        auth 1
        1 sha1 sshhhsecret1234

        # haresources
        server01 192.168.0.100/24/eth0:0/192.168.0.255

    Hope someone can help as this is driving me nuts...
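
    A hedged aside: Heartbeat on its own only fails over when the node (not an individual service) is considered dead, so an SSH failure needs something extra watching sshd. A minimal sketch of one way to do that, with hypothetical paths (hb_standby ships with Heartbeat, but its location varies by package):

        #!/bin/bash
        # run periodically (e.g. from cron) on the node that currently holds the VIP
        if ! nc -z -w 5 127.0.0.1 22; then
            /usr/share/heartbeat/hb_standby     # hand resources over to the peer
        fi

    Tools like mon, or a Pacemaker resource agent, are the more usual route to real service-level monitoring.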

    Read the article

  • How can I limit CloudFront downloads

    - by Alex Crouzen
    I'm looking to use Amazon's CloudFront to host some content in the near future. Currently I'm keeping it very simple: I just upload my content to S3 and then make a distribution available via CloudFront. However, because I have a limited budget, I'd like to be able to limit the number of downloads or the money spent on bandwidth. As far as I can see, I can't set quotas or budgets the way you can in Google's App Engine, so I'm looking for another way of doing this. Has anyone had any experience with this? One approach I'm thinking of is placing a webserver with redirects in between, but that rather defeats the simplicity of CF for me.

    Read the article

  • postfix header_checks using regexp proper setup

    - by Philip Rhee
    I just can't seem to figure out why header_checks are not being evaluated. I'm on Ubuntu 12.04 with Postfix 2.7, Dovecot, SpamAssassin, ClamAV and amavis. I added the following line to /etc/postfix/main.cf:

        header_checks = regexp:/etc/postfix/header_checks

    And here is /etc/postfix/header_checks:

        /From: .*/ REPLACE From: [email protected]

    To test the regexp:

        # postmap -q "From: <werwe>" regexp:/etc/postfix/header_checks

    which evaluates correctly and gives me the output:

        REPLACE From: [email protected]

    However, when I try to send email from the command line or from a PHP web page, Postfix does not replace the From header. I'm stumped.
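
    Two generic things worth ruling out first (a sketch, not a diagnosis of this box):

        postconf -n | grep header_checks     # is the running configuration actually carrying the setting?
        postfix reload                       # cleanup(8) only picks up main.cf changes after a reload

    If an after-queue filter such as amavis re-injects mail, a receive_override_options setting (e.g. no_header_body_checks) on the re-injection listener can also suppress header_checks, so the path the test message takes matters.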

    Read the article

  • Apache in front of tomcat on Railo proxy with ajp

    - by user1468116
    I'm trying to set up Apache in front of the Tomcat embedded in Railo. I have these settings:

        <VirtualHost *:80>
            DocumentRoot "/var/www/myapp"
            ServerName www.myapp.test
            ServerAlias www.myapp.test
            ProxyRequests Off
            ProxyPass /app !
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPassReverse / ajp://%{HTTP_HOST}:8009/
            RewriteEngine On
            # If it's a CFML (*.cfc or *.cfm) request, just proxy it to Tomcat:
            RewriteRule ^(.+\.cf[cm])(/.*)?$ ajp://%{HTTP_HOST}:8009/$1$2 [P]

    My server.xml:

        <Host name="www.myapp.test" appBase="webapps"
              unpackWARs="true" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
            <Context path="" docBase="/var/www/myapp" />
            <Alias>myapp.test</Alias>
        </Host>

    The index page loads, but if I try to load an internal page I get:

        The proxy server could not handle the request GET /report/myreportname.
        Reason: DNS lookup failure for: localhost:8009report

    Could you help me?
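
    The "localhost:8009report" in the error suggests the proxied URL is being built without a slash between the port and the path. A hedged sketch of the pattern commonly used for Railo behind mod_proxy, as an alternative to the RewriteRule (the backend host and port are assumptions for this setup):

        ProxyPassMatch ^/(.+\.cf[cm])(/.*)?$ ajp://localhost:8009/$1$2
        ProxyPassReverse / ajp://localhost:8009/

    Note that ProxyPassReverse does not normally expand %{HTTP_HOST}, so a literal backend host is usually used there.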

    Read the article

  • "Recipient address rejected" when sending an email to an external address with sendgrid

    - by WJB
    In Postfix, I'm using relayhost to send mail to external addresses through SendGrid, but I get an error about the local recipient table when sending from my PHP code. This is my main.cf in /etc/postfix/:

        ## -- Sendgrid
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = static:username:password
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may
        header_size_limit = 4096000
        relayhost = [smtp.sendgrid.net]:587

    This is the error message from the log:

        postfix/smtpd[53598]: [ID 197553 mail.info] NOQUEUE: reject: RCPT from localhost[127.0.0.1]:
            550 5.1.1 Recipient address rejected: User unknown in local recipient table;
            from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<localhost.localdomain>

    One interesting thing: when I use "sendmail [email protected]" from the command line, the email is delivered successfully through SendGrid. I think that's because it goes through postfix/smtp instead of postfix/smtpd; the log for that case says:

        postfix/smtp[18670]: [ID 197553 mail.info] AAF7313A7E: to=, relay=smtp.sendgrid.net[50.97.69.148]:587,
            delay=4.1, delays=3.5/0.02/0.44/0.18, dsn=2.0.0, status=sent (250 Delivery in progress)

    Thank you
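
    The rejection comes from the local smtpd (the PHP mailer submits over SMTP to localhost), not from SendGrid, so the usual suspects are the local/relay domain classes rather than the relayhost. Generic checks, offered as a sketch:

        postconf mydestination relay_domains mynetworks
        # "User unknown in local recipient table" means Postfix thinks the recipient's domain is
        # local; if the destination is really external, that domain should not appear in
        # mydestination, and the connecting client must be permitted to relay (mynetworks or
        # permit_sasl_authenticated in smtpd_recipient_restrictions).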

    Read the article

  • How to troubleshoot suspend and hibernate in Ubuntu

    - by Aerik
    I have Ubuntu Lucid installed on a Panasonic Toughbook CF-29. Most things work well, but under GNOME, suspend and hibernate do not work. Interestingly, hibernate does work in Xubuntu. So my question is twofold: how do I troubleshoot the hibernate function in the GNOME desktop (since I know the laptop can hibernate under Ubuntu), and how do I go about troubleshooting the suspend function? I got as far as looking at /var/log/pm-suspend.log, but that just tells me the things that ran successfully... I'm kind of stuck there.
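
    A generic way to narrow it down (a sketch, not Toughbook-specific advice) is to trigger suspend and hibernate from a console, outside the desktop stack, and then look at what the kernel reports:

        sudo pm-suspend          # exercises only pm-utils and the kernel
        sudo pm-hibernate
        dmesg | tail -50         # driver or swap errors usually show up here after resume

    If these work from the console but not from GNOME, the problem is in the session/power-manager layer rather than in the kernel.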

    Read the article

  • postfix/postdrop Issue with Solaris 10 (sparc) - permissions

    - by Zayne
    I am trying to get Postfix (installed from Blastwave) working on a Solaris 10 server, but only root is allowed to send mail. The problem appears to be permission-related, in postdrop:

        postdrop: warning: mail_queue_enter: create file maildrop/905318.27416: Permission denied

    I've checked that /var/opt/csw/spool/postfix/maildrop and /var/opt/csw/spool/postfix/public are both in the 'postdrop' group, and main.cf contains setgid_group = postdrop. Running postdrop under ppriv as a non-root user reports:

        postdrop[27336]: missing privilege "file_dac_write" (euid = 103, syscall = 5) needed at ufs_iaccess+0x110

    I'm at a loss as to what to do next. I don't have much experience with Solaris; I use Linux daily. Any suggestions?
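
    A hedged first check (the binary path is an assumption for a Blastwave install): the postdrop binary itself has to be set-group-ID to the setgid_group, otherwise unprivileged users cannot create files in the maildrop queue at all:

        ls -l /opt/csw/sbin/postdrop      # expect something like -rwxr-sr-x ... root postdrop
        postfix set-permissions           # lets Postfix restore its own queue and binary permissions
        postfix check

    The ppriv output is consistent with a missing setgid bit, e.g. after a restore or a package upgrade.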

    Read the article

  • Can't send using postfix from an external IP address

    - by daniel
    I have Postfix set up as a satellite system listening on port 587. I can send email to the outside just fine through the Postfix (Ubuntu) box from the local network. When I try to connect to the box from an external IP and send mail, it spits back a "554 5.7.1 Relay access denied" error. I can telnet to it fine, I just can't send mail. This is my main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_use_tls = no
        myhostname = cotiso-desktop
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mydomainname.com, cotiso-desktop, localhost.localdomain, localhost
        relayhost = smtp.mydomainname.com
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    There is no security set up yet, I'm just trying to get it working first. Any ideas? Thanks in advance
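
    A hedged sketch of the usual missing piece: "Relay access denied" for outside clients is expected until they authenticate, because only $mynetworks is allowed to relay by default. The settings shown are all smtp_sasl_* (client-side); relaying for remote users normally needs the smtpd_* (server-side) equivalents plus a SASL backend, for example:

        smtpd_sasl_auth_enable = yes
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination

    This assumes a SASL implementation (Cyrus or Dovecot) is configured for smtpd; simply opening the relay to everyone instead would create an open relay.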

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise, combining R with the Oracle Database, enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations, but that application was not fast enough, and the code that did the IRR calculation was a challenge to write and maintain.

    IRR - a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly; rather, approximation techniques have to be used. In this blog post we discuss calculating the money-weighted rate of return, but in the actual customer proof of concept we used R to calculate both money-weighted and time-weighted rates of return. You can learn more about money-weighted rates of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps - Calculating IRR in R: We start by calculating the IRR in standalone/desktop R; in our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you have balances and cash flows; the customer provided several accounts' worth of sample data in Microsoft Excel, giving balances and cash flows for each sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value). Once we had the sample spreadsheet, the next step was to read the Excel data into R, which is something R does well. R offers multiple ways to work with spreadsheet data; for instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet containing multiple sheets, each holding data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet by sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function: At this point, it was time to write the money-weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during this proof of concept is that by using R to write our IRR functions we could easily incorporate the customer's specific variations and business rules into the calculation.) The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique: you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function of r whose inputs are bmv (the beginning market value), cf (a vector of cash flows), t (a vector of times relative to the beginning), emv (the ending market value), and tend (the ending time). Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum, passing low and high scalars that bound the range to search for an answer. To test this out, we set values for bmv, cf, t, emv, tend, low, and high, with low and high at some reasonable defaults. For example, one sample account had a negative 2.2% money-weighted rate of return.

    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range and then, if we did not find one, call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point we could write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for five different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In the actual customer proof we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package containing our IRR functions and sample data; for this blog post we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. After creating the package skeleton, you need, at a minimum, to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory and provide a value for the "title" section. Once that is done, you can change to the IRR directory and build the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from the original post.

    Testing the myIRR package: Once converted to an installable package, the IRR function can be installed, loaded, and tested against the sample data.

    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is how to calculate IRR for all of the accounts. This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data fits in memory, one easy approach is to use R's "by" function to calculate the money-weighted rate of return for each account in our sample data set. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

    Recap and Next Steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:
    - R could easily work with the spreadsheets of sample data we were given.
    - R's optimize() function provided a nice way to solve for IRR; it was fast and let us avoid coding our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform IRR calculations for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:
    - The actual data lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor allows.
    - The actual data needs to be kept secure, another reason not to move it out of the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh process.

    In our next blog post in this series, we will show how Oracle R Enterprise solved these challenges.
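
    The code in the original post was embedded as images and is not reproduced in this excerpt; the following is a hedged reconstruction of the approach described above, following the prose rather than the author's actual source (function and argument names mirror the text, and the exact discounting convention is an assumption):

        npv <- function(r, bmv, cf, t, emv, tend) {
          # present value of the opening balance and cash flows, net of the discounted ending value
          bmv + sum(cf / (1 + r)^t) - emv / (1 + r)^tend
        }

        SimpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 10) {
          # find the rate r that drives the NPV closest to zero
          optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)),
                   lower = low, upper = high)$minimum
        }

        # Split-Apply-Combine across all sample accounts with by(), schematically:
        # by(SimpleMWRRData, SimpleMWRRData$account, function(acct) SimpleMWRR(...))

    None of this is the code shipped in the myIRR package; it is only meant to make the narrative above concrete.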

    Read the article

  • My card reader doesn't show up at all, but previously did in 10.10

    - by Nathan J. Brauer
    I bought a pin-based, USB-powered internal media card reader and it worked perfectly when I first installed Ubuntu 10.10 a month ago. I've used it a few times since, but today I booted the computer and it's not working. Here's how it used to work: in Nautilus -> Computer, five "drives" would display even when no card was inserted, one for each slot (SD, XD, CF/MS, etc.). Opening one without a card would bring up a "Please insert card" dialog, and inserting a card would automatically open Nautilus. Now no drives display at all, whether cards are inserted or not. lsusb lists the following (which seems to indicate that the reader is not being detected):

        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 002: ID 046d:c52e Logitech, Inc. (my keyboard/mouse)
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 002: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    (A side note: all of my USB ports are 2.0, plus one USB port which is 3.0, so why does everything say 1.1/2.0?) I tried using ubuntu-bug, but for USB devices it expects you to be able to remove and insert them while the computer is running, which is obviously something you probably shouldn't be doing with devices plugged straight into the USB pins. Thanks in advance!
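
    A couple of generic checks, sketched here without assuming anything about this particular motherboard header:

        dmesg | tail -30     # kernel messages from the last (re)detection attempt
        lsusb -t             # tree view: shows which controller and port each device hangs off

    If nothing at all appears for the reader, the cause is usually hardware-level (header seating, or the port disabled in the BIOS) rather than Ubuntu; the 1.1/2.0 labels in lsusb refer to the controllers' root hubs, not to the physical ports.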

    Read the article

  • configure Heartbeat on Centos Linux - error message

    - by Elad Dotan
    I installed Heartbeat on my CentOS Linux box and it seems to partially work, but I'm trying to monitor a service with no success: only when I reboot the main server does the backup server take over. In the logs I get:

        heartbeat[30476]: 2012/03/20_18:51:57 WARN: string2msg_ll: node [node1] failed authentication
        heartbeat[30476]: 2012/03/20_18:51:58 WARN: string2msg_ll: node [node02] failed authentication

    The authkeys file is identical (copied from one host to the other). This is my ha.cf:

        logfile /var/log/ha-log
        logfacility local0
        keepalive 2
        deadtime 30
        initdead 120
        bcast eth0
        udpport 694
        auto_failback on
        node server01.com
        node server02.com

    haresources:

        server01.com 38.108.117.3 aim chat

    Any idea how to fix the problem so that if a service stops, the other server takes over? Thanks! E.
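
    Two hedged observations worth checking first: the node names in the warnings (node1, node02) don't match the node names in ha.cf (server01.com, server02.com), which often means another Heartbeat cluster on the same broadcast segment and UDP port is being overheard; and authentication failures also happen when authkeys differs or is world-readable. Quick checks:

        md5sum /etc/ha.d/authkeys      # run on both nodes -- the output must match exactly
        chmod 600 /etc/ha.d/authkeys
        uname -n                       # must match the node entries in ha.cf

    Also note that Heartbeat v1 (ha.cf + haresources) only fails over on node death; monitoring individual services like aim/chat needs mon or a Pacemaker-style resource manager on top.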

    Read the article

  • why do i get an SPF Softfail?

    - by johnlai2004
    I installed SPF checking on my LAMP server with Postfix, but for some reason I get this error:

        Received-SPF: softfail (mta1070.mail.re4.yahoo.com: domain of transitioning [email protected]
            does not designate 1.1.1.1 as permitted sender)

    I have two questions: 1) how do I troubleshoot this error, and 2) I've been looking through my configuration files in an attempt to change [email protected] to [email protected], because anotherurl.com has the correct SPF TXT records. Where do I go to change this? I tried editing myhostname under /etc/postfix/main.cf, but it didn't do anything.
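
    A hedged note on where to look: SPF is evaluated against the envelope-sender domain and the connecting IP, so either the sending domain's SPF TXT record has to list the server's IP, or the envelope sender itself has to change (myhostname mainly affects the HELO name). Generic checks, with a placeholder domain:

        postconf myorigin myhostname                  # myorigin is the domain appended to unqualified local senders
        dig +short TXT example-sending-domain.com     # inspect the SPF record actually published

    The From: header alone is not what the receiving side checks, which is why editing myhostname had no visible effect.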

    Read the article

  • postfix, TLS and rapidssl - "verify error:num=19:unable to get local issuer certificate"

    - by technobuddha
    I have been googling for days! I have a cert from RapidSSL. I read that error num=20 indicates the verifier doesn't know the issuer or the root cert, right? I run this command:

        openssl s_client -showcerts -connect smtp.server.com:465

    and I get this error:

        verify error:num=19:self signed certificate in certificate chain

    Here is what I have in my Postfix main.cf, and what I have done:

        smtpd_tls_key_file = /etc/postfix/ssl/smtp.server.com.rsa.key        (this is the private key)
        smtpd_tls_cert_file = /etc/postfix/ssl/smtp.server.com.PUBLIC.key    (this is the public cert given to me by RapidSSL)
        smtpd_tls_CAfile = /etc/postfix/ssl/combo.csr.key

    The CAfile has the intermediate certs on top and the root cert on the bottom. Here are the intermediate certs: https://knowledge.geotrust.com/library/VERISIGN/ALL_OTHER/geotrust%20ca/GT_QuickSSL_and_Premium_and_Trial_intermediate_bundle.pem and here is the root cert: http://www.geotrust.com/resources/root_certificates/certificates/Equifax_Secure_Certificate_Authority.cer Does anyone know how to use RapidSSL certs?
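
    Two hedged checks that usually settle this (file names as above; the CA bundle path varies by distro):

        # verify the chain locally, independent of any client trust store
        openssl verify -CAfile /etc/postfix/ssl/combo.csr.key /etc/postfix/ssl/smtp.server.com.PUBLIC.key

        # when testing with s_client, give it a trust store; otherwise error 19 only means
        # this client does not trust the root, not that the server's chain is wrong
        openssl s_client -showcerts -CAfile /etc/ssl/certs/ca-bundle.crt -connect smtp.server.com:465

    It is also common advice to leave the self-signed root out of the file the server sends (smtpd_tls_CAfile) and ship only the intermediates, since clients are expected to have the root already.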

    Read the article
