Search Results

Search found 5121 results on 205 pages for 'foo'.

Page 64/205

  • Scope quandary with namespaces, function templates, and static data

    - by Adrian McCarthy
    This scoping problem seems like the type of C++ quandary that Scott Meyers would have addressed in one of his Effective C++ books. I have a function, Analyze, that does some analysis on a range of data. The function is called from a few places with different types of iterators, so I have made it a template (and thus implemented it in a header file). The function depends on a static table of data, AnalysisTable, that I don't want to expose to the rest of the code. My first approach was to make the table a static const inside Analyze. namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { static const int AnalysisTable[] = { /* data */ }; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace It appears that the compiler creates a copy of AnalysisTable for each instantiation of Analyze, which is wasteful of space (and, to a small degree, time). So I moved the table outside the function like this: namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace There's only one copy of the table now, but it's exposed to the rest of the code. I'd rather keep this implementation detail hidden, so I introduced an unnamed namespace: namespace MyNamespace { namespace { // unnamed to hide AnalysisTable const int AnalysisTable[] = { /* data */ }; } // unnamed namespace template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace But now I again have multiple copies of the table, because each compilation unit that includes this header file gets its own. If Analyze weren't a template, I could move all the implementation detail out of the header file. But it is a template, so I seem stuck. My next attempt was to put the table in the implementation file and to make an extern declaration within Analyze. // foo.h ------ namespace MyNamespace { template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { extern const int AnalysisTable[]; ... // implementation uses AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ------ #include "foo.h" namespace MyNamespace { const int AnalysisTable[] = { /* data */ }; } This looks like it should work, and--indeed--the compiler is satisfied. The linker, however, complains, "unresolved external symbol AnalysisTable." Drat! (Can someone explain what I'm missing here?) The only thing I could think of was to give the inner namespace a name, declare the table in the header, and provide the actual data in an implementation file: // foo.h ----- namespace MyNamespace { namespace PrivateStuff { extern const int AnalysisTable[]; } // namespace PrivateStuff template <typename InputIterator> int Analyze(InputIterator begin, InputIterator end) { ... // implementation uses PrivateStuff::AnalysisTable return result; } } // namespace MyNamespace // foo.cpp ----- #include "foo.h" namespace MyNamespace { namespace PrivateStuff { const int AnalysisTable[] = { /* data */ }; } } Once again, I have exactly one instance of AnalysisTable (yay!), but other parts of the program can access it (boo!). The inner namespace makes it a little clearer that they shouldn't, but it's still possible.
Is it possible to have one instance of the table and to move the table beyond the reach of everything but Analyze?
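
    A hedged sketch of one way to end up with a single definition that other code cannot legally name (my own construction, not the asker's; the name AnalysisDetail and the stubbed body are illustrative): keep the table as a private static member of a small helper class and befriend the Analyze template.

        // foo.h -- sketch only
        namespace MyNamespace {

        template <typename InputIterator>
        int Analyze(InputIterator begin, InputIterator end);

        class AnalysisDetail {
            static const int AnalysisTable[];   // defined exactly once, in foo.cpp
            template <typename InputIterator>
            friend int Analyze(InputIterator, InputIterator);
        };

        template <typename InputIterator>
        int Analyze(InputIterator begin, InputIterator end) {
            // ... implementation uses AnalysisDetail::AnalysisTable
            return 0;
        }

        } // namespace MyNamespace

        // foo.cpp
        // #include "foo.h"
        // namespace MyNamespace {
        //     const int AnalysisDetail::AnalysisTable[] = { /* data */ };
        // }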

    Read the article

  • With Monit, how do I restart a process when a directory timestamp check fails?

    - by Alterscape
    In my /etc/monit/monitrc I have the following lines: check process foo_server with pidfile /var/run/bwam_server.pid start program = "/Users/foo/foo_server.sh start" stop program = "/Users/foo/foo_server.sh stop" check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data" if timestamp > 1 minute then alert #if timestamp > 1 minute then restart foo_server I know I shouldn't have some of this stuff in my home directory, but this aside: if I uncomment the last line, Monit tells me syntax error on foo_server -- but I am, as far as I understand, correctly defining the process -- how else do I reference it?
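
    A hedged workaround sketch (assuming a reasonably recent Monit and that the binary lives at /usr/bin/monit): the restart action only applies to the service the check belongs to, so a directory check can instead shell out to monit itself with the exec action.

        check directory foo_data path "/Users/foo/Library/Application Support/foo_server/data"
          if timestamp > 1 minute then exec "/usr/bin/monit restart foo_server"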

    Read the article

  • Dealing with different usernames when mounting removable media in Linux

    - by dimatura
    I have a laptop in which my username is, say, "foo". I have an external drive, formatted with Ext4, for which all files are owned by "foo" (at a filesystem level). Now, I have a desktop in which my username is, say, "bar". If I mount this external drive on this computer the files are considered to not be owned by "bar". This makes sense, but it is annoying because their mode bits are set so that only the owner can modify/delete them. What's the cleanest way to deal with this? Create a group with "foo" and "bar" and add group modification permissions?
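
    One common approach, sketched under the assumption that the drive is mounted at /mnt/ext and that you keep the group's numeric GID identical on both machines (ext4 stores numeric IDs, not names): give the files to a shared group and make the directories setgid.

        # as root on the desktop (repeat the group setup on the laptop with the same GID)
        groupadd -g 1500 shareddrive
        usermod -aG shareddrive bar

        # hand the drive's contents to the group and allow group writes
        chgrp -R shareddrive /mnt/ext
        chmod -R g+rwX /mnt/ext
        # new files created below these directories inherit the group
        find /mnt/ext -type d -exec chmod g+s {} +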

    Read the article

  • Apache2 alias in virtual host

    - by 0x7c00
    I have multiple virtual hosts on one server and plan to have some aliases set up in one virtual host. So I added Alias /foo/ /path/to/foo/ in the VirtualHost directive, but it has no effect: a request for host1/foo/ returns 404. But if I add this to /etc/apache2/mods-available/alias.conf, it works. The problem is that host2 will also share this alias. Is there a way to make the alias work only for host1? By the way, when I run apache2ctl -l, there's no mod_alias.c listed, which is weird.
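
    A hedged sketch of how the host1 vhost could look (paths and names are placeholders). Note that apache2ctl -l only lists modules compiled into the binary; a shared (DSO) module such as mod_alias shows up with apache2ctl -M instead, and is enabled with a2enmod alias.

        <VirtualHost *:80>
            ServerName host1
            DocumentRoot /var/www/host1

            Alias /foo/ /path/to/foo/
            <Directory /path/to/foo/>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>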

    Read the article

  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88) reverse-proxying to a web app (port 5001). However, when the web app responds with a redirect like Location: /foo, apache2 rewrites this into Location: http://host.com:88/sub/foo, even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL Location: /sub/foo. Any ideas? My apache config (using mod_proxy_http, mod_proxy_html, mod_substitute): <Location /notes/> Allow from all ProxyPass http://127.0.0.1:5001/ SetOutputFilter proxy-html ProxyPassReverse / ProxyHTMLURLMap / /notes/ RequestHeader unset Accept-Encoding AddOutputFilterByType SUBSTITUTE application/atom+xml Substitute "s|127.0.0.1:5001|host.com/notes|" </Location>
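
    A hedged sketch, not a tested fix: the mod_proxy documentation says that inside a <Location> block ProxyPassReverse takes just the backend URL (the local path /notes/ is implied), so a ProxyPassReverse matching the quoted ProxyPass line would look like this.

        <Location /notes/>
            ProxyPass        http://127.0.0.1:5001/
            # map the backend's Location: headers back under /notes/ on this host
            ProxyPassReverse http://127.0.0.1:5001/
        </Location>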

    Read the article

  • Execute background program in bash without job control

    - by Wu Yongzheng
    I often execute GUI programs, such as firefox and evince, from the shell. If I type "firefox &", firefox is considered a bash job, so "fg" will bring it to the foreground and "hang" the shell. This becomes annoying when I have some background jobs such as vim already running. What I want is to launch firefox and dissociate it from bash. Consider the following ideal case with my imaginary runbg: $ vim foo.tex ctrl+z and vim is job 1 $ pdflatex foo $ runbg evince foo.pdf evince runs in the background and I get my bash prompt back $ fg vim goes to the foreground Is there any way to do this using an existing program? If not, I will write my own runbg.
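
    A minimal sketch of runbg as a shell function (nohup plus disown, with output discarded so it doesn't write over the prompt):

        runbg() {
            nohup "$@" >/dev/null 2>&1 &
            disown
        }

        # usage
        runbg evince foo.pdf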

    Read the article

  • How do I send email with sendmail to external hosts?

    - by Jake
    If I want to send an email to a user on the same Linux machine, I can run: echo -e "Subject: Foo\n\nBar\n" | sendmail -v jacob But if I run: echo -e "Subject: Foo\n\nBar\n" | sendmail -v [email protected] it gives me the error: 050 >>> MAIL From:<jacob@mu> SIZE=321 050 550 5.1.8 Cannot resolve your domain {mx-us011} If my machine has access to the internet but is behind a router and has no domain associated with it, can I use sendmail to send mail to this address? Do I need to connect through an SMTP server? Can I do that with sendmail? If I use sendmail's -f option and put my gmail account there it will work. Can (or should) I use my IP address? echo -e "Subject: Foo\n\nBar\n" | sendmail -v -f [email protected] [email protected] I'm a bit lost on how all this comes together in sending mail from the command line.
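
    For what it's worth, a heavily hedged sketch of the usual smarthost route (assuming a Debian-style sendmail layout and a relay you are allowed to use; smtp.example.com is a placeholder): route all outbound mail through the relay instead of delivering straight to the recipient's MX. If the relay requires SMTP authentication, that needs extra authinfo configuration on top of this.

        dnl /etc/mail/sendmail.mc -- rebuild the config afterwards (sendmailconfig on Debian) and restart sendmail
        define(`SMART_HOST', `smtp.example.com')dnl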

    Read the article

  • Wget - if / else download condition?

    - by Kai
    I want wget to prefer a certain filetype over another if the files have the same basename. For example: if foo.ogg is available, don't download foo.mp3. The way I use wget so far to crawl/automatically download (if anyone is interested): wget -Dfoo.com -I /folder/ -r -l 1 -nc -A.ogg,.mp3 -i http://www.foo.com/folder/ But this, of course, gets me .mp3 AND .ogg files. It often also gets me image files like .png which I didn't want in the first place, and discards them afterwards. Any ideas? (Syntax explanation: -D: download only from this domain -I: download only from this subfolder of the domain -r: recursive (follow links and directory structure) -l 1: follow only 1 link deep -nc: no clobber = download only if the file doesn't exist -A: accept/download only *.ogg and *.mp3 (discard the necessary html files) -i: download URL/starting point)
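
    One workaround sketch, assuming the crawl lands under ./www.foo.com/folder/ as usual: let wget fetch both types, then drop every .mp3 that has an .ogg twin.

        #!/bin/sh
        wget -Dfoo.com -I /folder/ -r -l 1 -nc -A.ogg,.mp3 http://www.foo.com/folder/
        # prefer .ogg: delete any .mp3 whose .ogg sibling exists
        for mp3 in www.foo.com/folder/*.mp3; do
            [ -e "${mp3%.mp3}.ogg" ] && rm -- "$mp3"
        done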

    Read the article

  • ${extension} empty after catch-all alias in Postfix

    - by Paul Wagener
    I want a setup where an e-mail address like [email protected] redirects mail to the folder foo. I've already got dovecot configured and tested. It is called by Postfix with this line in master.cf: dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -n -m ${extension} I expect ${extension} to expand to 'foo' but it is always empty. I've added recipient_delimiter = + to my main.cf. How can I get it to work? Update: I've got a catch-all alias that redirects @domain.com to [email protected]. It seems that the extension is empty because of this. So the question becomes: Can I have a catch-all so that [email protected] redirects to [email protected] without explicitly defining either the random or the ext part?

    Read the article

  • Apache: Setting up a reverse proxy configuration with SSL with url rewriting

    - by user1172468
    There is a host, secure.foo.com, that exposes a web service over HTTPS. I want to create a reverse proxy using Apache that maps a local HTTP port on a server internal.bar.com to the HTTPS service exposed by secure.foo.com. Since it is a web service, I need to map all URLs so that a path https://secure.foo.com/some/path/123 is accessible by going to http://internal.bar.com/some/path/123 Thanks. I've gotten this far: <VirtualHost *:80> ServerName gnip.measr.com SSLProxyEngine On ProxyPass / https://internal.bar.com/ </VirtualHost> I think this is working except for the URL rewriting. Some resources I've found on this are: Setting up a complex Apache reverse proxy Apache as reverse proxy for https server
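
    A minimal sketch of the mapping as described (note that in the quoted config the ServerName and the ProxyPass target look swapped relative to the description; the cookie directive is only needed if the service sets cookies):

        <VirtualHost *:80>
            ServerName internal.bar.com
            SSLProxyEngine On
            ProxyPass        / https://secure.foo.com/
            # rewrite redirects (Location: headers) from the backend onto this host
            ProxyPassReverse / https://secure.foo.com/
            ProxyPassReverseCookieDomain secure.foo.com internal.bar.com
        </VirtualHost>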

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is once the redirect is set it also affects /specialdir - even if I right click on that directory and select "content should come from ... local directory" that change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • After updating debian lenny, ps stops recognising my users

    - by William Coates
    Since upgrading my Debian lenny box with apt-get, ps seems to be behaving strangely, and if I run top I see the IDs, not the names, under the user column. whoami => foo ps -U foo => ERROR: User name does not exist. I get this output when I run "strace -e trace=open ps -U foo 2>&1 | less"; it seems just this /usr/lib/libnss_compat.so.2 doesn't exist: open("/etc/mtab", O_RDONLY) = 3 open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3 open("/proc/self/stat", O_RDONLY) = 3 open("/proc/uptime", O_RDONLY) = 3 open("/etc/nsswitch.conf", O_RDONLY) = 4 open("/etc/ld.so.cache", O_RDONLY) = 4 open("/lib/libnss_compat.so.2", O_RDONLY)=4 open("/usr/lib/libnss_compat.so.2", O_RDONLY)=-1 ENOENT (No such file or directory) open("/proc/self/stat", O_RDONLY)= 4

    Read the article

  • Run a local command after closing an SSH connection?

    - by James B
    I've set up my zsh to update the XTerm title whenever I change directories. It's neat! Unfortunately I have one common problem, which is this: % cd foo; # title changes to "host1:~/foo" % ssh host2; # title changes to "host2:~" % pwd /home/user/foo # title is still "host2:~" I need to run some command anytime an ssh connection terminates, either chpwd, or cd ., or something similar. I don't think I can use an alias, because I'd need something like alias ssh=ssh $*; cd . but AFAICT you can't pick where the arguments go in an alias.
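
    A shell function (rather than an alias) gives full control over where the arguments go; a minimal zsh sketch:

        ssh() {
            command ssh "$@"
            local ret=$?
            cd .    # re-runs the chpwd hook, so the title gets reset
            return $ret
        }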

    Read the article

  • What is wrong with my expect script?

    - by Bryan
    I'm trying to learn how to use the expect command, to help me automate deployment of some software via shell scripts, and figured I'd start with something simple to get me started. I've created a file in my home dir called 'foo' using: touch foo And I've created the following script saved as test.exp #!/usr/bin/expect spawn rm -i foo expect "rm: remove regular empty file `foo'?" send "y\r" When I run the script using ./test.exp, it spawns the rm command, but it doesn't appear to send the y and carriage return. I know I don't have a typo in the expect string, as I've used copy and paste to put it in the script. What am I doing wrong?
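
    One likely culprit (an educated guess, not a certainty): the script exits immediately after send, which can tear down the spawned rm before it ever reads the answer, and the backquote/quote/? characters in the pattern are easy to get subtly wrong. A sketch that sidesteps both:

        #!/usr/bin/expect
        spawn rm -i foo
        # match a stable substring instead of the full prompt
        expect "remove regular empty file"
        send "y\r"
        # wait for rm to finish before the script (and its pty) goes away
        expect eof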

    Read the article

  • Need help with an .htaccess URL rewriter

    - by AlexV
    I'm trying to do another SEO system with PHP/.htaccess... I need the following rules to apply: Must catch all URLs that do not end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch). Must catch all URLs that end with .php* (.php, .php4...) (these are the exceptions to rule #1). All rules must only apply in some directories and not in their subdirectories (/ and /framework so far). The .htaccess must send the typed URL in a GET value so I can work with it in PHP. Can any mod_rewrite wizard help me?
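
    A rough, untested sketch of the kind of rule set this needs (index.php and the url parameter are placeholder names). Because a per-directory RewriteRule pattern of [^/]* cannot match anything containing a slash, a copy of this .htaccess in / and in /framework only ever catches requests for those directories themselves, not their subdirectories.

        RewriteEngine On
        # never rewrite the dispatcher itself
        RewriteCond %{REQUEST_URI} !index\.php$
        # rule 1: no extension at all ...
        RewriteCond %{REQUEST_URI} !\.[^./]+$ [OR]
        # rule 2: ... or a .php/.php4/... extension
        RewriteCond %{REQUEST_URI} \.php[0-9]*$ [NC]
        RewriteRule ^([^/]*)$ index.php?url=$1 [QSA,L]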

    Read the article

  • How do I configure DFS replication using the command line?

    - by zneak
    Hello everyone, I'm in the process of making a script to automate DFS creation and replication for an exam I have next week. So, assuming I have a namespace: dfsutil root adddom \\Foo\bar 'My namespace' And I have a link: dfsutil link add \\Foo\Bar\CoolStuff \\Server2\CoolStuff 'Neat stuff' How can I use the command line to replicate \\Server2\CoolStuff over, say, \\Server3\CoolStuff? When I use dfscmd: dfscmd /add \\Foo\Bar\CoolStuff \\Server3\CoolStuff It says it ended correctly, but opening up the MMC shows that there are no replication groups for CoolStuff. Thanks!

    Read the article

  • bash shell script which adds output of commands

    - by John Kube
    Let's say I have a command called foo which prints a number to the screen when called: $foo 3 Let's also say I have another command called bar which prints another number to the screen when called: $bar 5 I'm looking to write a shell script which will add together the output of foo and bar. How would I do that? (The outputs from the commands are not known ahead of time. They just so happen to have been 3 and 5 the last time they were run. They could have been something else.) Thanks!
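
    A minimal sketch, assuming foo and bar print plain integers on stdout: capture each output with command substitution and add them with shell arithmetic.

        #!/bin/bash
        sum=$(( $(foo) + $(bar) ))
        echo "$sum"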

    Read the article

  • Apply SetEnvIf after Apache RewriteRule

    - by coneybeare
    I have a working apache rewrite rule: RewriteCond %{HTTP_HOST} ^.*foo.com RewriteRule (.*) http://bar.com$1 [R=301,QSA,L] and some working dontlog SetEnvIfs: SetEnvIf Request_URI "^/server-status$" dontlog SetEnvIf Request_URI "^/home/ping$" dontlog SetEnvIf Request_URI "^/haproxy-status$" dontlog SetEnvIf User-Agent ".*internal dummy connection.*" dontlog CustomLog /var/log/apache2/access.log combined env=!dontlog but I can't figure out how to stop the RewriteRule from logging a duplicate line. foo.com and bar.com are both on the same machine. I would expect this rule to work, but it did not: SetEnvIf Host "foo.com" dontlog I still get duplicates in the Apache Log: 10.250.18.97 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3" 68.194.30.42 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3" .... where 10.250.18.97 is the server's IP. How can I prevent that RewriteRule from logging?

    Read the article

  • Linux: Can I link multiple destinations via softlinks?

    - by kds1398
    Attempting to end up with something similar to this: $ ls -l lrwxrwxrwx 1 user group 4 Jun 28 2010 foo -> /home/bar lrwxrwxrwx 1 user group 4 Jun 29 2010 foo -> /etc/bar The intention is to be able to move a file to foo & have it go to both destination directories for now. The goal is to eventually unlink the /home/bar link after confirming there are no issues with moving the files to /etc/bar. I am restricted in that I am unable to change or add to the process that moves the files.
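
    A symlink can only have one target, so here is a hedged stopgap sketch instead (assuming inotify-tools is installed and /home/bar stays the primary drop directory): a small watcher that copies every new arrival into the second location as well.

        #!/bin/bash
        # mirror anything that lands in /home/bar into /etc/bar too
        inotifywait -m -e create -e moved_to --format '%f' /home/bar |
        while read -r name; do
            cp -a -- "/home/bar/$name" /etc/bar/
        done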

    Read the article

  • Is it possible to not trigger my normal search highlighting when using search as a movement?

    - by Nathan Long
    When I do a search in vim, I like to have my results highlighted and super-visible, so I give them a bright yellow background and a black foreground in my .vimrc. " When highlighting search terms, make sure text is contrasting colors :highlight Search guibg=yellow guifg=black (This is for GUI versions of vim, like MacVim or Gvim; for command-line, you'd use ctermbg and ctermfg.) But I sometimes use search as a movement, as in c/\foo - "change from the cursor to the next occurrence of foo." In that case, I don't want all the occurrences of foo to be highlighted. Can I turn off highlighting in cases where search is used as a movement?

    Read the article

  • Homedir inside homedir restricted access

    - by blid
    On my VPS I've installed Debian, Apache and PHP. I have 2 users: foo and bar. Apache is configured to execute PHP files from /home/foo/htdocs. I created the dir /home/foo/htdocs/bar/ and made it the home dir for user bar. However, I need to make a restriction: bar can't read, write or execute any files outside his own dir, but Apache has to be able to execute all PHP files from /htdocs. I tried to chown the bar dir to user bar only, and also experimented a lot with chmod, but without a result so far. If there's any better way to satisfy my needs, don't hesitate to write about it. Thanks in advance

    Read the article

  • Link to user directory displayed with wrong name in start menu

    - by wierob
    In the start menu the link to my user directory is displayed with a wrong name, e.g. foo. When I click on the link in the start menu, Explorer opens my correct user directory, but the address bar still names it foo. However, when I open a cmd from that directory the location is correctly shown as C:\Users\myUserName. Furthermore, there is no C:\Users\foo directory. How can I fix this (i.e. the link in the start menu should be named myUserName)?

    Read the article

  • How to pass parameters to a function?

    - by sbi
    I need to process an SVN working copy in a PS script, but I have trouble passing arguments to functions. Here's what I have: function foo($arg1, $arg2) { echo $arg1 echo $arg2.FullName } echo "0: $($args[0])" echo "1: $($args[1])" $items = get-childitem $args[1] $items | foreach-object -process {foo $args[0] $_} I want to pass $args[0] as $arg1 to foo, and $args[1] as $arg2. However, it doesn't work; for some reason $arg1 is always empty: PS C:\Users\sbi> .\test.ps1 blah .\Dropbox 0: blah 1: .\Dropbox C:\Users\sbi\Dropbox\Photos C:\Users\sbi\Dropbox\Public C:\Users\sbi\Dropbox\sbi PS C:\Users\sbi> Note: The "blah" parameter isn't passed as $arg1. I am absolutely sure this is something hilariously simple (I only just started with doing PS and still feel very clumsy), but I have banged my head against this for more than an hour now, and I can't find anything.
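
    A sketch of one likely explanation (inferred from the symptom, so treat it as an assumption): inside the ForEach-Object script block, $args refers to that block's own (empty) argument list rather than the script's, so $args[0] evaluates to nothing there. Copying the value into a named variable before the pipeline avoids the clash:

        function foo($arg1, $arg2) {
            echo $arg1
            echo $arg2.FullName
        }

        $first = $args[0]            # capture before entering the script block
        $items = Get-ChildItem $args[1]
        $items | ForEach-Object { foo $first $_ }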

    Read the article
