Search Results

Search found 6079 results on 244 pages for 'define'.

Page 170/244

  • emacs ORG-mode "headless" export-as commands?

    - by Seamus
    When I use org-export-as-latex or org-export-as-html, org-mode turns my buffer into a .tex or .html file. But I don't want all the extra junk that it adds to the file: I want to handle the documentclass and everything myself and just \input the org-mode-generated file (or the analogous thing for HTML with PHP). So if my org file just has "* Section - Stuff - Things", I want the org-mode command to output just "\section{Section} \begin{itemize} \item Stuff \item Things \end{itemize}", without any of the extra \tableofcontents junk that Org adds to it. I know I could define my own kind of #+LaTeX_CLASS that could add the packages I want and so on, but I don't want to do things that way (and that wouldn't remove the \maketitle or the spurious \vspace* that Org insists on inserting). Is there a command to do this "headless" parsing and converting? I had a look, but it's not obvious from the documentation. Presumably some low-level Org command is doing the parsing and converting I want, but I couldn't find what it was called from looking at the docs and C-h pages... This is not a question about HTML or LaTeX but about emacs Org mode, so don't kick it off to some other site...
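
    A minimal sketch of the kind of call involved, assuming the older Org 7.x exporter, where org-export-as-latex takes a BODY-ONLY argument; the function name, argument order and behaviour should be checked against the Org version actually installed:

        ;; Sketch only: assumes Org 7.x, where `org-export-as-latex' accepts a
        ;; BODY-ONLY argument that skips the preamble, \maketitle and friends.
        (defun my/org-export-latex-body-only ()
          "Export the current Org buffer to a .tex file without any preamble."
          (interactive)
          ;; arguments: (arg hidden ext-plist to-buffer body-only pub-dir)
          (org-export-as-latex nil nil nil nil t))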

    Read the article

  • ACDSee alternatives for batch editing images

    - by Oxwivi
    I am looking for free, preferably open, alternatives to ACDSee for batch editing work. While I can do much of the work well in ACDSee, it's not entirely satisfactory despite having to pay for it. I need at least the following batch editing functions: resize using either height or width while maintaining the aspect ratio; auto contrast; text overlays; and occasionally cropping. Oh, and I make extensive use of the renaming features as well. A couple of issues with ACDSee: I always need to highlight the Exposure section or auto contrast will not be applied despite it being saved in the preset, and I can't define or move around the cropping box, forcing me to manually crop tons of images. I'm not an advanced or "power" photo editor; I only need the basic stuff I described to be automated. My personal feature wish list (I'm pretty sure something so niche doesn't exist) would be text overlays based on the image names (images are named image-1_1, image-1_2 or image-2_c1_1, image-2_c1_2, and the text overlay would be Image-1, Image-2 C1 and Image-2 C2). I tried digiKam, but damn that thing is huge. It runs very slowly on my Pentium 4 with 1.5 GB RAM. On top of being a program with over 1 GB of files, the KDE library it uses is always slow, regardless of whether it runs on Windows or Linux.

    Read the article

  • Apple: Bind a key to a commandline command?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open up Terminal.app and run the command /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine, which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up global keyboard shortcuts, but it seems that this won't allow me to bind a key to an arbitrary shell command: "Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications." I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to count as an application. Note that this screensaver/lock-screen command is just one example; I have come across other nifty commands which I might want to bind to a key combination as well. I can do this in Gnome and Windows (with varying success). How about with Leopard?
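
    One commonly suggested workaround, sketched here as an assumption rather than a built-in Leopard feature: wrap the command in a tiny AppleScript application and assign that a global hotkey with a third-party launcher (Quicksilver, Spark, FastScripts or similar), since the built-in Keyboard Shortcuts pane only targets menu items:

        -- Save this with Script Editor as an application, then give it a
        -- global hotkey in a launcher utility of your choice.
        do shell script "/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine"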

    Read the article

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or just partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will all also tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not too atypical for FHS directories such as usr/, tmp/, etc/, home/ or var/ to be mounted separately on other partitions. Several questions I am unable to find the answers to, purely for my own edification: (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PCs are limited to 4 primaries or 3 primaries + 1 extended. (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount one partition, and then have, say, a ReiserFS file system mounted on another partition? How? (3)(a) What about creating swap partitions? Is there too much of a good thing with swap partitioning? If I have 4 GB of RAM and a 320 GB disk, what should my swap partition size be, and why? (3)(b) Are swap files the only way to create swap space? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only? (4) Why are partitions limited to being "mounted" by just OSes and file systems? Why couldn't I write a program to take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition? Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....
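
    On (3), a quick sketch of how a dedicated swap partition is normally set up once it exists; the /dev/sda5 device name is just a placeholder:

        # mark the partition as swap, enable it now, and enable it at boot
        sudo mkswap /dev/sda5
        sudo swapon /dev/sda5
        echo '/dev/sda5  none  swap  sw  0  0' | sudo tee -a /etc/fstab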

    Read the article

  • how to set up a bridge with 2 NICs and a few virtual machines

    - by Bond
    Here is my situation. I have a server with 2 NICs. I have installed VirtualBox and created a few guest operating systems on it. I want these virtual machines to use a bridge: NIC2 would be used to set up this bridge, and NIC1 would be connected to the corporate network. I am not clear on how I should go about doing this. /etc/network/interfaces is the file I am trying to modify. My approach is the following: 1) define a configuration in /etc/network/interfaces; 2) create iptables rules for how NIC1 will forward packets to the bridge on NIC2. Now comes the problem: I do not understand the meaning of the following lines in the configuration file:

        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth2
        iface eth2 inet manual

        auto br0
        iface br0 inet static
            address 192.168.1.14
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            gateway 192.168.1.10
            # dns-* options are implemented by the resolvconf package, if installed
            dns-nameservers 192.168.13.2
            dns-search myserver.net
            bridge_ports eth2
            bridge_fd 9
            bridge_hello 2
            bridge_maxage 12
            bridge_stp off

    So, any pointers to what the entries of the /etc/network/interfaces file should be, and which parameter is used when and where, would help me.
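
    For reference, a rough sketch of what that br0 stanza asks ifupdown/bridge-utils to do at boot, shown as manual commands; the interface names and addresses are the ones from the question:

        sudo brctl addbr br0                  # create the bridge
        sudo brctl addif br0 eth2             # bridge_ports eth2
        sudo brctl setfd br0 9                # bridge_fd 9
        sudo brctl stp br0 off                # bridge_stp off
        sudo ip addr add 192.168.1.14/24 dev br0
        sudo ip link set eth2 up
        sudo ip link set br0 up
        sudo ip route add default via 192.168.1.10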

    Read the article

  • "could not find suitable fingerprints matched to available hardware" error

    - by Alex
    I have a ThinkPad T61 with a UPEK fingerprint reader. I'm running Ubuntu 9.10 with fprint installed. Everything works fine (I am able to swipe my fingerprint to authenticate any permission dialogues or "sudo" prompts successfully) except for actually logging onto my laptop when I boot up or end my session. I receive an error below the GNOME login that says "Could not locate any suitable fingerprints matched to available hardware." What is causing this? Here are the contents of the /etc/pam.d/common-auth file:

        #
        # /etc/pam.d/common-auth - authentication settings common to all services
        #
        # This file is included from other service-specific PAM config files,
        # and should contain a list of the authentication modules that define
        # the central authentication scheme for use on the system
        # (e.g., /etc/shadow, LDAP, Kerberos, etc.). The default is to use the
        # traditional Unix authentication mechanisms.
        #
        # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
        # To take advantage of this, it is recommended that you configure any
        # local modules either before or after the default block, and use
        # pam-auth-update to manage selection of other modules. See
        # pam-auth-update(8) for details.

        # here are the per-package modules (the "Primary" block)
        auth sufficient pam_fprint.so
        auth [success=1 default=ignore] pam_unix.so nullok_secure
        # here's the fallback if no module succeeds
        auth requisite pam_deny.so
        # prime the stack with a positive return value if there isn't one already;
        # this avoids us returning an error just because nothing sets a success code
        # since the modules above will each just jump around
        auth required pam_permit.so
        # and here are more per-package modules (the "Additional" block)
        auth optional pam_ecryptfs.so unwrap
        # end of pam-auth-update config
        #auth sufficient pam_fprint.so
        #auth required pam_unix.so nullok_secure

    Read the article

  • Users and Groups management on 7 Home Premium

    - by AviD
    Recently upgraded the home PC from XP Pro to Windows 7 Home Premium. I'm looking for a solution for a few things that seem to be missing from this edition... Since Local Users and Groups is blocked on Home Premium, I can't figure out how to manage groups, or even do anything even slightly advanced to users (basically, create/group/picture is it). net localgroup, net users, etc. don't seem to work - I'm getting "system error 5". While I'm on the topic, I can't activate (what was once) "Local Security Policy"... Looking for any help, advice, or even a new direction, cuz things is differ'nt on Winnows7... To clarify, I'm looking to do some of the following, which were simple back in XP-land: allow remote use only for a user (i.e. no local logon); grant special privileges to a specific user; grant access to e.g. the C$ share for a specific remote user; create custom groups for users, to be able to separate the privileges of, say, my wife's account from my kids'; define quite specifically what each user can do (beyond just standard users); harden the OS (hmm, I guess maybe what I'm looking for is a security hardening guide for 7...?)

    Read the article

  • Cookieless Domain redirect in WHM/cPANEL

    - by Patrick Lanfranco
    I am currently trying to get my head around how to set up a "cookieless" domain using WHM/cPanel - unfortunately without any success at the moment. I have a Magento store and I would like to use cookieless domains for my media, skin (template) and js files. Magento has a nice feature to define the URLs for those folders. My current setup is as follows: www.mydomain.com <- main store; media.mydomain.com <- subdomain to the media folder (mydomain.com/media/); skin.mydomain.com <- subdomain to the skin folder (mydomain.com/skin/); js.mydomain.com <- subdomain to the js folder (mydomain.com/js/). I think it's pointless to have these used as cookieless domains, since my Magento installation uses .mydomain.com as the cookie domain, so what I would like to achieve is to register a new additional domain and have it point via WHM/cPanel to those specific locations. I have tried to change the A and CNAME records, although without any success, as they were simply redirecting from one page to another in the browser (newdomain.com -> jump to old.com). What kind of records do I have to set to have this working properly? Some advice would be highly appreciated.

    Read the article

  • Debian 6: setting up FTP just for website editing

    - by David Oliver
    I have a VPS using Debian 6.0. Currently, SSH is set to accept only key-based logins, not passwords. A person who needs to work on one particular website (a vhost) wishes to use FTP. He doesn't need or want SSH. How can I set up FTP access for him, enabling him to have write permissions for all files in the relevant directory, and only the relevant directory? The directory is /srv/www/domainname.com/public_html. Currently, all directories and files in that directory belong to www-data:www-data and are 644/755. I've installed vsftpd and have been reading some guides, but they all seem to deal with allowing multiple users to have their own user-named directories, which isn't what I'm after. I can't seem to work out how to simply define one FTP user with a password that has access to one directory of my choosing. This is my first experience of setting up an FTP server. Thanks. Edit: I have also found this - maybe I should be using ProFTPd, or can vsftpd also do what I want?
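
    A minimal sketch of one way this is often done with vsftpd; the "siteftp" user name and the group arrangement are assumptions for illustration, not something from the question. The idea is a single dedicated local user whose home directory is the vhost root, chrooted there, with write access granted through the www-data group:

        # /etc/vsftpd.conf (excerpt)
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES

        # create the dedicated user and put it in www-data
        #   adduser --home /srv/www/domainname.com/public_html --no-create-home siteftp
        #   adduser siteftp www-data
        # then make the tree group-writable, e.g. chmod -R g+w on public_html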

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're facing is how to redirect the old pages to the new friendly URLs. I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below. 1 - Is there a limit to the number of redirects in IIS? 2 - Would having even a few thousand redirects affect IIS performance? 3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question - I can ask on SEO forums if nobody here is sure.) 4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out, or would I still need to define several thousand unique redirects in it? Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).
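
    On point 4, a sketch of how URL Rewrite 2 is often used for bulk legacy redirects: one rewrite map keyed on the old path, consulted by a single rule. The two map entries and the target URLs below are only placeholders:

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="LegacyAspRedirects">
              <add key="/123.asp" value="/products/widgets" />
              <add key="/124.asp" value="/about-us" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Legacy asp redirects" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{LegacyAspRedirects:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>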

    Read the article

  • MAMP Pro virtual hosts on Mountain Lion not being recognized

    - by user135242
    I'm running MAMP PRO 2.1.1 on OSX 10.8.1 and not having any luck getting Apache to recognize any hosts that have been set up in MAMP besides localhost. This only happens when trying to use port 80. When I switch to port 8888 everything runs fine. Mountain Lion's Apache has been disabled and I get no errors or warnings when starting MAMP and the servers. Any doc root I set for "localhost" runs without problem. Any other server name I define, however, results in "cannot connect" when viewing in Chrome (not to be confused with "cannot find" - the browser is in fact following /etc/hosts to 127.0.0.1, but MAMP's Apache is simply not responding) I was wondering if anyone else has run into this issue or knows how to solve it. I'm working on some WordPress development and it keeps wanting to redirect to the base URL (with no port reference) even during setup. I'm sure I could fix things from the WP side but I'd rather figure out what the root issue is with MAMP. Thanks in advance for any insight you can provide.

    Read the article

  • Goal setting/tracking packages for software projects

    - by Avi
    I'm a developer working by myself. I'm looking for a computerized tool to manage my goals and activities. I own Microsoft Project, but I don't like it: I've started many "projects" but could never keep using it. Too complex and heavyweight for me. I use MS Outlook tasks. They are not what I need - no planning capability, and tracking is not nice. I'm using the Pomodoro technique and I like it, but I'm looking for something more comprehensive and with better computerized support - something that would allow me to define goals with dependencies and time estimation, keep daily prioritized lists, etc. So, I'm looking for a solution. One I've found is GoalPro, but I'm uneasy because I could not find a cross-product "top ten"-style review. Are you using any goal-setting package such as GoalPro? Which? Does it help? Pros and cons?

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project for smartphones serving XML-RPC methods over HTTPS and using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method to provide registration to the system; obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim with the current situation: have a dummy username/password to be used when trying to register to the system, or have a separate Django/XML-RPC application on another URL (i.e. /register) that is not protected by basic auth. Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a defined POST request) even if all XML-RPC calls over /RPC2 are protected?
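
    One approach that is often suggested for this kind of carve-out, sketched here under two assumptions: the registration call can be exposed on its own path (e.g. /RPC2/register) rather than sharing the single /RPC2 endpoint, and Apache 2.2-style access directives are in use:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

        # the nested path overrides the parent: no credentials required here
        <Location /RPC2/register>
            Satisfy Any
            Allow from all
        </Location>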

    Read the article

  • Puppet write hosts using api call

    - by Ben Smith
    I'm trying to write a puppet function that calls my hosting environment (rackspace cloud atm) to list servers, then update my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY  = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN   = "..."

            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s|
                { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}}}
          end
        end

    And I have a hosts.pp that calls this and 'should' write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host { $name:
            ip           => $name[ip],
            host_aliases => $name[aliases]
          }
        }

    As you can imagine, this isn't currently working and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine, once I've realised what I'm currently doing wrong, there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?
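
    For what it's worth, a sketch of the shape the manifest side usually takes, assuming get_hosts is changed to return a hash of {hostname => {ip => ..., aliases => ...}}; the parameter names here are illustrative, not from the question:

        class hosts::us {
          $hosts = get_hosts("us")
          # expand each key of the hash into one hostentry resource
          create_resources('hostentry', $hosts)
        }

        define hostentry($ip, $aliases) {
          host { $name:
            ip           => $ip,
            host_aliases => $aliases,
          }
        }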

    Read the article

  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch, passing a number in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it. Follow-up question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages on a case-by-case basis. Is there a table that directly equates the two? Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings. I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than upload, I think scp saturates the link, so even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp: scp -l 1000 bigfile.zip titan: I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of scp bigfile.zip titan: I'd say scp bigfile.zip titan-upload. Or even set different caps depending on where I am: scp bigfile.zip titan-upload-from-home vs. scp bigfile.zip titan-upload-from-work. I'm generally on Mac and Linux.
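
    As far as I know there is no ssh_config keyword that corresponds to scp's -l, so one workaround is a small shell wrapper per "profile"; the function name and the 1000 kbit/s cap below are just illustrative:

        # in ~/.bashrc or ~/.zshrc
        scp-slow() {
            scp -l 1000 "$@"     # cap uploads at roughly 1000 kbit/s
        }
        # usage:  scp-slow bigfile.zip titan: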

    Read the article

  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the PageSpeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, and I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, and after that I've installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. Also, I've included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should show the cached page, but it doesn't. It's easy to check by reloading the page several times (Cmd+R in Chrome on OS X) and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to push performance even further. I'm thinking about Varnish, but I'm not sure whether it would just be another way of doing the same thing I'm already doing. Any other components worth considering? (I'll test a CDN for images, minifying JS, etc. and some other tricks as well, but I'm talking from the server perspective.)
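
    A quick way to test for the Batcache comment from the command line, sketched with a placeholder domain; note that Batcache deliberately skips visitors carrying WordPress login or comment cookies, which is one common reason the stats never show up in a browser you're logged in with:

        # request the page twice without cookies, then look for Batcache's
        # debug comment in the second response
        curl -s https://blog.example.com/ > /dev/null
        curl -s https://blog.example.com/ | grep -i batcache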

    Read the article

  • Implications of Multiple JobTracker nodes in a Hadoop cluster?

    - by Jim Dennis
    I get the impression that one can, potentially, have multiple JobTracker nodes configured to share the same set of MR (TaskTracker) nodes. I know that, conventionally, all the nodes in a Hadoop cluster should have the same set of configuration files (conventionally under /etc/hadoop/conf/ - at least for the Cloudera Distribution of Hadoop (CDH)). Can we define multiple JobTrackers in mapred-site.xml? Something like:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt01.mydomain.not:8021</value>
          </property>
          <property>
            <name>mapred.job.tracker</name>
            <value>jt02.mydomain.not:8021</value>
          </property>
          ...
        </configuration>

    Or is there some other allowed syntax for this? What are the implications of doing this? Does each JobTracker get information about the load on each TaskTracker node? In other words, can the two JobTrackers co-ordinate their scheduling across the TT nodes based only on the gossip information from the TTs, or would they need to talk to one another? Is this documented anywhere?

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. As I understand it, part of Debian needs to stay unencrypted on its own partition during install so the system can boot; most of the info I have seen calls this /boot and sets the boot flag. Next, I believe the best approach is to create another partition out of all the remaining disk space, encrypt it, then on top of that create an LVM, and within the LVM create my various partitions, name them, and select their size and file system type. Can I include swap in the encrypted LVM part? Is this approach sound? If so, what are the partitions I should use (this is going to be a minimal server install, with a view to installing things as and when I need them for a dev server)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments. Thanks.. Ian
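
    For orientation, a sketch of the layout that approach usually produces; the sizes, device names and volume names are placeholders, not recommendations:

        /dev/sda1   /boot   ext2, ~256 MB, unencrypted, bootable flag
        /dev/sda2   dm-crypt/LUKS container
            volume group "vg0" (LVM on top of the encrypted device)
                lv_root   /       e.g. 10 GB
                lv_var    /var    e.g. 5 GB
                lv_home   /home   remainder
                lv_swap   swap    e.g. 1-2x RAM (swap inside the container is
                                  encrypted along with everything else)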

    Read the article

  • How to access original files from before a symlink gets updated, which have since been moved to another dir

    - by Luke Cousins
    We have a website and our deployment process goes somewhat like the following (with lots of irrelevant steps excluded):

        echo "Remove previous, if it exists, we don't need that anymore"
        rm -rf /home/[XXX]/php_code/previous

        echo "Create the current dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/php_code/current

        echo "Create the var_www dir if it doesn't exist (just in case this is the first deploy to this server)"
        mkdir -p /home/[XXX]/var_www

        echo "Copy current to previous so we can use temporarily"
        cp -R /home/[XXX]/php_code/current/* /home/[XXX]/php_code/previous/

        echo "Atomically swap the symbolic link to use previous instead of current"
        ln -s /home/[XXX]/php_code/previous /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

        # Rsync latest code into the current dir, code not shown here

        echo "Atomically swap the symbolic link to use current instead of previous"
        ln -s /home/[XXX]/php_code/current /home/[XXX]/var_www/live_tmp && mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live

    The problem we are having, and would like help with, is this: the first thing any page load does is work out the base dir of the application and define it as a constant (we use PHP). If a deployment occurs during that page load, the system tries to include() a file using the original full path and gets the new version of that file. We need it to get the old one from the old dir, which has since moved, as in: the system starts a page load and determines the SYSTEM_ROOT_PATH constant to be /home/[XXX]/var_www/live, or, by using PHP's realpath(), it could be /home/[XXX]/php_code/current. The symlink for /home/[XXX]/var_www/live gets updated to point to /home/[XXX]/php_code/previous instead of /home/[XXX]/php_code/current, where it pointed originally. The system then tries to load /home/[XXX]/var_www/live/something.php and gets /home/[XXX]/php_code/current/something.php instead of /home/[XXX]/php_code/previous/something.php. I'm sorry if that is not explained very well. I'd really appreciate some ideas on how to get around this problem if someone can. Thank you.
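
    One pattern often used to sidestep this, sketched as an assumption rather than the asker's actual layout: give every deploy its own immutable directory and only move the symlink, so an in-flight request that resolved the real path keeps reading the release it started with:

        RELEASE=/home/[XXX]/php_code/releases/$(date +%Y%m%d%H%M%S)
        mkdir -p "$RELEASE"
        rsync -a ./build/ "$RELEASE/"
        # repoint the symlink atomically; old releases stay on disk until cleaned up
        ln -s "$RELEASE" /home/[XXX]/var_www/live_tmp
        mv -Tf /home/[XXX]/var_www/live_tmp /home/[XXX]/var_www/live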

    Read the article

  • Which steps are required to avoid my server being considered as spam sender?

    - by Cyril N.
    I'm looking to set up a webmail server that will be used by a lot of users who will receive and send emails. They will also have the ability to forward emails they receive. I'd like to know which steps are recommended/required to indicate to other mail services (GMail, Outlook, etc.) that my server is not a spam sender (disclaimer: it's NOT! :p) but a legitimate one. I know I have to define an SPF TXT record, for example, but what other steps would you recommend? For example, is there a formula like having a number of servers proportional to the amount of email sent (so as to have different IP addresses), something like sending a maximum of 1M emails per IP per day? Anything else I'm missing? I tried to search online, but I mostly found advice on keeping emails sent from scripts (like PHP) out of the spam folder. I'm looking for the server/DNS configuration side. Thanks a lot for your help/tips, I appreciate it!
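
    A sketch of the DNS records receivers typically check, with placeholder values: example.com, the IP and the "mail" DKIM selector are stand-ins, and the DKIM public key comes from whatever signing filter the MTA uses (e.g. opendkim):

        example.com.                  IN TXT "v=spf1 ip4:203.0.113.10 -all"
        mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public key data>"
        _dmarc.example.com.           IN TXT "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
        ; plus matching forward and reverse DNS (PTR) for the sending IP,
        ; so the HELO name, A record and PTR all agree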

    Read the article

  • In Outlook 2007 Rules and Alerts, EXACTLY what does "my name" mean?

    - by Cornan The Iowan
    I can't find any definition of "my name" in Outlook 2007's Rules and Alerts or on the Internet. In this case, our email system presents two email addresses for me to the outside world. I'd like BOTH of these addresses to be recognized as being "me". I thought that perhaps if I understood the definition of "my name" in the rules, I could set up my mailbox(es) appropriately. Of course, if "my name" actually means a single email address, then I won't be able to do so, but if it means "any email address on my account" or "any account meeting [some criteria]", then I might be successful. I'd like to note a subtlety in the rule definitions: while there is a rule named "where my name is in the To or Cc box", the only rule for explicit addresses is "sent to people or distribution list" (I'm assuming that "sent to" means "in the To: list" rather than "in the To: or Cc: lists"). Summing up, my preference is: 1) understanding the precise definition of "my name" so that I can use "where my name is in the To or Cc box" to capture both email addresses on my account; 2) learning whether "sent to people or distribution list" actually includes Cc: entries (I can test this myself, of course); 3) any other solution that will let me define a rule where my secondary email address is detected in EITHER the To: or Cc: box.

    Read the article

  • Serve web application error messages from Http server

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx and not Tomcat. For example, I have the resource https://domain.com/resource, which doesn't exist. If I [GET] that URL, I get a Not Found message from Tomcat and not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error status), nginx itself sends a page to the user: some HTML file accessible by nginx. The way I have my nginx server configured is very simple, just:

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
        }

    And I've configured port 8080, which is Tomcat, to not be accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because there are some resources that depend on the URL: https://domain.com/customer/<non-existent-customer-name>/ [GET] will always return 404 (or another error status), while https://domain.com/customer/<existent-customer>/ [GET] will return something other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? To check the status sent back through the proxy_pass directive and act upon it?
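
    This is roughly what nginx's proxy_intercept_errors directive is for; a minimal sketch, where the /errors/ path, the root and the file names are placeholders:

        location / {
            proxy_pass http://localhost:8080/<webapp-name>/;
            proxy_intercept_errors on;          # let nginx replace upstream error bodies
            error_page 404 /errors/404.html;
            error_page 500 502 503 504 /errors/50x.html;
        }

        location /errors/ {
            internal;                           # not directly requestable
            root /usr/share/nginx/html;         # expects .../html/errors/404.html etc.
        }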

    Read the article

  • Strategy for Incremental Datasource fetchings in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering about two approaches. Approach 1: is there a way to force Excel to append results on every update (the update being triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time keep growing. Imagine after 10 years: Excel would need to load 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? Thanks. EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...

    Read the article

  • Can mod-rewrite be used to set environmental variables?

    - by VLostBoy
    Hi, I've got an existing simple rewrite rule like so:

        <Directory /path>
            RewriteEngine on
            RewriteBase /
            # if the requested resource does not exist
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # route the uri to a front controller
            RewriteRule ^(.*)$ index.php/$1 [L]
        </Directory>

    This works fine, but I want to do one of two things. On the basis of detecting the client's Accept-Language header, I want to either (i) set the detected language as an environment variable that the script can use, or (ii) rewrite the request so that the URL begins with the language code (e.g. www.example.com/en/some/resource). In terms of implementing (i), I defined this rule:

        <Directory /path>
            RewriteEngine on
            RewriteBase /
            # if the requested resource does not exist
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            # if the users preferred language is supported...
            RewriteCond %{HTTP:Accept-Language} ^.*(de|es|fr|it|ja|ru|en).*$ [NC]
            # define an environmental variable PREFER_LANG
            RewriteRule ^(.*)$ - [env=PREFER_LANG:%1]
            # route the uri to a front controller
            RewriteRule ^(.*)$ index.php/$1 [L]
        </Directory>

    I've tried a few variations, but PREFER_LANG is not defined in $_SERVER nor retrievable by getenv. In terms of implementing (ii)... let's just say it's messy. I'll post it if I can't get an answer to (i). Can anyone advise me? Thanks!
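
    One thing worth checking (a guess about this particular setup, but a very common mod_rewrite gotcha): when the later RewriteRule internally redirects to index.php, Apache re-runs the rule set and prefixes variables set in the previous pass with REDIRECT_, so the script may need to look for both names:

        <?php
        // sketch: accept either name, whichever pass of the rewrite engine set it
        $preferLang = isset($_SERVER['PREFER_LANG'])
            ? $_SERVER['PREFER_LANG']
            : (isset($_SERVER['REDIRECT_PREFER_LANG']) ? $_SERVER['REDIRECT_PREFER_LANG'] : null);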

    Read the article
