Search Results

Search found 6722 results on 269 pages for 'foo inc'.

Page 32/269

  • How to recursively move all files (including hidden) in a subfolder into a parent folder in *nix?

    - by deadprogrammer
    This is a bit of an embarrassing question, but I have to admit that this late in my career I still have questions about the mv command. I frequently have this problem: I need to move all files recursively up one level. Let's say I have a folder foo, and a folder bar inside it. bar has a mess of files and folders, including dot files and folders. How do I move everything in bar up to the foo level? If foo is empty, I simply move bar one level up, delete foo, and rename bar to foo. Part of the problem is that I can't figure out what mv's wildcard for "everything including dots" is. The other part of this question: is there an in-depth discussion somewhere of the wildcards that the cp and mv commands use? (Googling this only brings up very basic tutorials.)
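    One note a reader may find useful: the wildcard expansion is done by the shell, not by mv or cp, so the usual fix is to change the shell's globbing rather than look for an mv option. A minimal sketch, assuming bash and GNU findutils (commands not taken from the question):

        # Option 1: make * match dot files too, then move everything up (run from inside foo)
        shopt -s dotglob        # bash: globs now include hidden entries (but never . or ..)
        mv bar/* .
        rmdir bar

        # Option 2: let find enumerate every entry, hidden or not (GNU mv for -t)
        find bar -mindepth 1 -maxdepth 1 -exec mv -t . {} +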

    Read the article

  • Setting umask globally

    - by DevSolar
    I am using a private user group setup, i.e. a user foo's home directory is owned by foo:foo, not foo:users. For this to work, I need to set the umask to 002 globally. After a quick grep -RIi umask /etc/*, it seemed for a moment that modifying the UMASK entry in /etc/login.defs should do the trick. It does, too -- but only for console logins. If I log in to my desktop and open a terminal there, I still see the default umask 022. The same goes for files created from apps started through the menu. Apparently, the display manager (or whatever X11 component is responsible) sources some different setting than a console login does, and damned if I can tell which one it is. (I tried changing the setting in /etc/init.d/rc, and no, it did not help.) How / where do I set the umask globally, so that the X11 desktop environment gets the memo as well?
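    One approach often suggested for this (a sketch, assuming a PAM-based distribution such as Debian/Ubuntu; file names may differ elsewhere) is to let PAM set the umask for every session type, console and graphical alike, via pam_umask:

        # /etc/pam.d/common-session   (add to common-session-noninteractive too, if present)
        session optional pam_umask.so umask=002

    Because the display manager also opens its sessions through PAM, graphical logins then pick up the same umask as console logins, without hunting for the right X11 startup script.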

    Read the article

  • mod_rewrite for specific domains in a mappings file

    - by scott
    I have a bunch of domains that I want to go to one domain but various parts of that domain.

        # this is what I currently have
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^.*\.?foo\.com$ [NC]
        RewriteRule ^.*$ ${domainmappings:www.foo.com} [L,R=301]

        # rewrite map file
        www.foo.com www.domain.com/domain/foo.com.php
        www.bar.com www.domain.com/domain/bar.com.php
        www.baz.com www.domain.com/other/baz.php.foo

    The problem is that I don't want to have to have each domain be part of the RewriteCond. I tried

        RewriteCond %{HTTP_HOST} ^www\.(.*)
        RewriteRule (.*) http://%1/$1 [R=301,L]

    but that will do it for EVERY domain. I only want the domains that are in the mappings file to redirect, and then continue on to other rewrites if the host doesn't match any domain in the mappings file.
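    A pattern often used for this kind of "only redirect hosts that appear in the map" rule (a sketch, not from the question; it assumes the map is declared in the server or vhost config, since RewriteMap is not allowed in .htaccess) is to look the request's own host up in the map and let an empty lookup make the condition fail:

        RewriteMap domainmappings txt:/path/to/domainmappings.txt

        # Look the Host header up in the map; hosts missing from the map yield an
        # empty lookup, the regex fails, and the rule below is skipped.
        RewriteCond ${domainmappings:%{HTTP_HOST}} ^(.+)$
        RewriteRule ^ http://%1 [R=301,L]

    Hosts not listed in the map fall through to whatever rewrites follow.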

    Read the article

  • dedicated domain name vs. just folders under a single domain?

    - by Ben Keating
    I run WordPress Multisite for several sites. Each of these sites resolves under a single domain, e.g. example.com/foo/ and example.com/bar/. I also have domain names for these, e.g. foo.com and bar.com, which are currently redirects: if a user hits foo.com, they are redirected (301) to example.com/foo/. My question is: should it be the other way around? Should I use the dedicated domain names directly? What are the pros/cons of putting multiple sites under a single domain versus giving them their own dedicated domains? I guess I'm asking with SEO and findability in mind.

    Read the article

  • How to combine RewriteRule of index.php and queries rewrite and avoid Server Error 404?

    - by Binyamin
    Both RewriteRules work fine, except when used together.

    1. Remove all queries except query ?callback=.*:

        # /api?callback=foo has no rewrite
        # /whatever?whatever=foo has 301 redirect /whatever
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

    2. Rewrite index.php queries api and url=$1:

        # /api returns data index.php?api&url=
        # /api/whatever returns data index.php?api&url=whatever
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Is there any valid combination of these RewriteRules that keeps their functionality? The combination below returns Server Error 404 for /api/?callback=foo:

        # Remove all queries except query "callback"
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule .*$ %{REQUEST_URI}? [R=301,L]

        # Rewrite index.php queries
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        # Server Error 404 on /api/?callback=foo and /api/whatever?callback=foo
        RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L]
        RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.*
        RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Read the article

  • Abandonment to blame for the last JavaScript file not always being loaded?

    - by Larsenal
    I have a code snippet for an app that users are loading as a 3rd-party script on their site. The general sequence is as follows:

        1. Site loads http://www.example.com/foo.js
        2. foo.js does stuff
        3. 1 to 2 seconds later, foo.js loads bar.js

    Now in a perfect world, I'd want to see matching counts for the calls to foo.js and bar.js. However, bar.js loads only about 94% of the time. I'm wondering how much of this discrepancy might be attributable to site abandonment, given that bar.js is delayed by 1 or 2 seconds. I posted here instead of StackOverflow since I think it's more a question about the typical time on page when users abandon a page.

    Read the article

  • Windows domain full hostnames cannot be resolved resulting in intranet not working

    - by OpethR
    The domain is foo.bar.local, the full hostname is bla.foo.bar.local, and the short hostname is bla. I installed winbind.

    Here is the relevant line from my smb.conf:

        name resolve order = lmhosts host wins bcast

    and from my nsswitch.conf:

        hosts: files mdns4_minimal [NOTFOUND=return] dns wins mdns4

    When I try to ping the full hostname, I get "ping: unknown host". When I ping the short hostname it works and shows me:

        PING bla.foo.bar.local (10.11.20.135) 56(84) bytes of data.
        64 bytes from bla.foo.bar.local (10.11.20.135): icmp_req=1 ttl=62 time=49.7 ms

    Notice that it manages to resolve the full hostname there. The only reason I need this is that I'm trying to reach intranet websites. When I type the short hostname "bla" in the Firefox address bar, it automatically expands it to the full hostname (which is good, right?), but then it says: "Server not found - Firefox can't find the server at bla.foo.bar.local." What am I doing wrong? It's driving me nuts. In case you are wondering: yes, it is a company intranet I'm trying to reach from Ubuntu. If I use my old Windows XP machine, everything works perfectly well.
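    One detail a reader may want to check (an observation about the hosts line, not text from the question): the Active Directory domain here ends in .local, and mdns4_minimal sits before dns and wins with [NOTFOUND=return]. Multicast DNS claims the .local suffix, returns NOTFOUND for bla.foo.bar.local, and resolution stops before dns/wins are consulted - which matches the "full name fails, short name works" symptom. A commonly suggested change, as a sketch, assuming unicast DNS should win for this domain:

        # /etc/nsswitch.conf
        hosts: files dns wins mdns4_minimal [NOTFOUND=return] mdns4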

    Read the article

  • Why doesn't a variable work when its file is included inside a PHP function?

    - by John Smiith
    My PHP function is:

        function functionName() {
            include($_SERVER['DOCUMENT_ROOT']."/path/file.php");
        }

    The content of file.php is:

        $foo = 'bar';

    Calling the function (content of test.php):

        functionName();
        echo $foo;   // <- does not work

    But when I use the code below instead (content of test.php), it works:

        include($_SERVER['DOCUMENT_ROOT']."/path/file.php");
        echo $foo;   // <- works
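    The behaviour follows from PHP's variable scope: include() pulls the file's code into the scope of the caller, so inside functionName() the $foo it creates is a local variable that disappears when the function returns. A minimal sketch of one way around this (function name is illustrative, not from the question):

        function loadFoo() {
            include($_SERVER['DOCUMENT_ROOT'] . "/path/file.php"); // defines $foo in this function's scope
            return $foo;                                           // hand the value back explicitly
        }

        $foo = loadFoo();
        echo $foo;   // 'bar'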

    Read the article

  • Segment subdomains with Google Analytics?

    - by andrewpthorp
    So, when a website has multiple subdomains:

        www.example.com
        foo.example.com
        bar.example.com

    what is the best way to use Google Analytics to segment the data? I would prefer to have access to 'All Data', 'Data from foo.example.com', and 'Data from bar.example.com'. I tried setting up 3 different views, and setting a filter on the foo/bar views that says: "Include only traffic from the ISP domain that are equal to foo.example.com". However, I am not seeing any data collected in those views. I do see all data in the 'All Data' view, but I can't figure out how to segment it. I am including analytics.js in the application.haml layout, which is always loaded in this app. Thanks!

    Read the article

  • Flowchart with subroutine

    - by Jordy
    I am not really sure how to describe my question correctly, so please forgive me if this is a duplicate. I am creating a flowchart for my program, in which I implement a method. Let's assume I call this method someMethod. The C code could look something like this:

        bool someMethod(int foo, int bar) {
            foo += 5;
            bar -= 5;
            return (foo == bar);
        }

    This means that my flowchart will have a subroutine block where I call this function. But how do I correctly show the reader which integers I pass? And when I create the flowchart of someMethod itself, I face a similar problem: how do I correctly show the reader that foo and bar are passed parameters?

    Read the article

  • Using T[1] instead of T for functions overloaded for T(&)[N]

    - by Abyx
    The asio::buffer function has (void*, size_t) and (PodType(&)[N]) overloads. I didn't want to write ugly C-style (&x, sizeof(x)) code, so I wrote this:

        SomePacket packet[1]; // SomePacket is POD
        read(socket, asio::buffer(packet));
        foo = packet->foo;

    But that packet-> looks kind of weird - the packet is an array, after all. (And packet[0]. doesn't look any better.) Now I wonder whether it was a good idea to write such code. Maybe I should stick to the unsafe C-style code with void* and sizeof? Update: here is another example, for writing a packet:

        SomePacket packet[1]; // SomePacket is POD
        packet->id = SomePacket::ID;
        packet->foo = foo;
        write(socket, asio::buffer(packet));
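    A third option sometimes suggested (a sketch with a hypothetical helper, not part of Asio itself) is to keep a plain SomePacket variable and hide the (&x, sizeof x) pair behind a tiny wrapper, so the call sites stay clean without the fake array:

        // hypothetical helper, not part of Asio: wrap one trivially-copyable object.
        // (Older Asio releases return asio::mutable_buffers_1 from buffer(void*, size_t);
        //  newer ones use asio::mutable_buffer - adjust the return type accordingly.)
        template <typename Pod>
        asio::mutable_buffers_1 object_buffer(Pod& p) {
            return asio::buffer(&p, sizeof p);
        }

        SomePacket packet;                  // no [1] needed
        read(socket, object_buffer(packet));
        foo = packet.foo;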

    Read the article

  • Grepping through the results of apachectl -S

    - by CamelBlues
    I have a server with about 300 virtual hosts. When I want to make sure a specific httpd.conf file is loaded into the virtual host config and the syntax is correct, I run apachectl -S. The problem is that I get a ton of output. I've tried apachectl -S | grep 'foo' and apachectl -S > foo.txt to try to make this data a little more manageable, but the output of the command is not conducive to grepping or shoving into a text file. When I try apachectl -S | grep 'foo', it simply returns the entire output of apachectl -S. When I try apachectl -S > foo.txt, foo.txt is an empty file. This may have something to do with how the server is configured, because I am able to grep successfully on my local machine. Any suggestions?
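    Both symptoms (grep appearing to show everything, the redirected file coming out empty) are what you would see if this Apache build writes the -S report to stderr rather than stdout. A quick check worth trying (a sketch, not from the question):

        # send stderr into the same stream so the pipe and the redirect can see it
        apachectl -S 2>&1 | grep 'foo'
        apachectl -S > foo.txt 2>&1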

    Read the article

  • Naming the implementation version of an interface function

    - by bolov
    When I need to write an implementation version of an interface function, I put the implementation function in an impl namespace, but with the same name as the interface function. Is this bad practice? (I mean the same-name part; I am confident the namespace part is more than OK.) For me, as the person writing the code, there is no confusion between the two, but I want to make sure this isn't confusing for someone else. One other option would be to append an impl suffix to the function name, but since it already lives in a separate namespace named impl, that seems redundant. Is there an idiomatic way to do this? E.g.:

        namespace n {
        namespace impl {
            // implementation function (hidden from users)
            // same name, is it ok?
            void foo() {
                // ...
                // sometimes it needs to call recursively or to call overloads of the interface version:
                foo();      // calls the implementation version. Is this confusing?
                n::foo();   // calls the interface version. Is this confusing?
                // ...
            }
        } // namespace impl

        // interface function (exposed to users)
        void foo() {
            impl::foo();
        }
        } // namespace n

    Read the article

  • Purpose of building files using Make

    - by foo
    I am trying to understand the purpose of building files with commands such as cmake .. and make. I have looked online, but could not find a concise explanation of what they are for. Also, is it necessary to run make on a project folder that has not been built in order to use its (C++) source in other projects? I know this may be a simplistic question, but as I am fairly new to building files, any information or references would be much appreciated.
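    For readers with the same question, the usual shape of an out-of-source CMake build looks like this (a sketch, assuming a project that ships a CMakeLists.txt):

        mkdir build && cd build
        cmake ..     # reads CMakeLists.txt and generates native build files (e.g. a Makefile)
        make         # runs the generated build: compiles the sources and links libraries/executables

    cmake itself compiles nothing; it only generates the build system, and make then (re)builds whatever targets are out of date. Whether another project can use the sources directly or needs the built library depends on how that project expects to consume them.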

    Read the article

  • Google indexing pages with #! although we don't have any

    - by Benjamin Gruenbaum
    Our company has developed a Single Page Application using AngularJS and its routing. Google indexed our site decently with JavaScript, but it did not index some pages very well, so we have developed an HTML-only version. We have followed the AJAX Crawling Specification posted here, and have a <meta name='fragment' content='!'> tag and canonical URLs. We expect http://www.example.com/foo/bar to be fetched from http://www.example.com/?_escaped_fragment_=/foo/bar. However, we have found that since we rolled out the AJAX crawling specification we now have all pages indexed twice: once as the JavaScript version at http://www.example.com/foo/bar, and once as the new version at http://www.example.com/#!/foo/bar. This is harmful to us since it's duplicate content and also misrepresents our site. I have tried looking for similar questions here and in the Google product forum but could not come up with anything.

    Read the article

  • How should I learn to make a game in C++ [on hold]

    - by Foo
    I have been having a lot of trouble making a game by myself in C++. I know C++, and I know how to implement individual pieces, but I don't know how to make all the classes work together as a game; it just turns into a lot of useless code, and the game never gets past basic drawing, input, etc. I read SFML Game Development, and the choices the author makes work, but I say to myself "I would never have thought of making a scene-node class, or of doing X that way." I just can't seem to turn my thoughts into working classes that communicate the right way. Any help? Sorry, my English is bad and I am not a native speaker; grammar edits and tag fixes are welcome.

    Read the article

  • Using multiple servers for hosting [on hold]

    - by foo
    I need help understanding the concept of using multiple servers (for hosting at home), and more specifically multiple WAMP servers. (I have tried looking online, but have found no good resources; maybe I am searching for the wrong things.) Questions:

        - How do multiple servers work together?
        - Do they all have independent hard drives storing different information? If so, how does the server locate files dynamically?
        - How do shell commands get executed?
        - How do they share the "load", i.e. processing power and resources?

    Please include links to resources I can use as a reference. Cheers!

    Read the article

  • asp.net mvc, IIS 6 vs IIS7.5, and integrated windows authentication causing javascript errors?

    - by chris
    This is a very strange one. I have an ASP.NET MVC 1 app. Under IIS 6, with no anonymous access - only integrated Windows auth - everything works fine. I have the following on most of my Foo pages:

        <% using (Html.BeginForm()) { %>
            Show All: <%= Html.CheckBox("showAll", new { onClick = "$(this).parent('form:first').submit();" })%>
        <% } %>

    Clicking the checkbox causes a post, the page is reloaded, and everything is good. When I look at the access logs, that's what I see, with one oddity - the js libraries are requested during the first page request, but not for any subsequent page requests. The log looks like:

        GET  /                                          401
        GET  /                                          200
        GET  /Content/Site.css                          304
        GET  /Scripts/jquery-1.3.2.min.js               401
        GET  /Scripts/jquery-ui-1.7.2.custom.min.js     401
        GET  /Scripts/jquery.tablesorter.min.js         401
        GET  /Scripts/jquery-1.3.2.min.js               304
        GET  /Scripts/jquery-ui-1.7.2.custom.min.js     304
        GET  /Scripts/jquery.tablesorter.min.js         304
        GET  /Content/Images/logo.jpg                   401
        GET  /Content/Images/logo.jpg                   304
        GET  /Foo                                       401
        GET  /Foo                                       200
        POST /Foo/Delete                                302
        GET  /Foo/List                                  200
        POST /Foo/List                                  200

    This corresponds to: home page, click on "Foo", delete a record, click a checkbox (which causes the second POST). Under IIS 7.5 it sometimes fails - the click on the checkbox doesn't cause a postback - but there is no obvious reason why. I've noticed under IIS 7.5 that every single page request re-issues the requests for the js libraries - the first one a 401, followed by either a 200 (OK) or a 304 (not modified) - as opposed to the log extract above, where that only happened during the first request. Is there any way to eliminate the 401 requests? Could a timing issue have something to do with the click being ignored? Would increasing the number of concurrent connections help? Any other ideas? I'm at a bit of a loss to explain this.

    Read the article

  • Any way for a class to prevent outside code from declaring variables of its type?

    - by supercat
    Is it possible for a class to expose a type for function returns without allowing users of that class to create variables of that type? A couple of usage scenarios:

        1. A fluent interface on a large class: a statement like foo = bar.WithX(5).WithY(9).WithZ(19); would be inefficient if it had to create three new instances of the class, but could be much more efficient if WithX could create one instance and the other statements could simply use it.

        2. A class may wish to support a statement like foo[19].x = 9; even when foo itself isn't an array and does not hold the data in class instances that can be exposed to the public. One way to do that is to have foo[19] return a struct which holds a reference to foo and the value 19, and has a member property x which could call foo.SetXValue(19, 9);. Such a struct could have a conversion operator to convert itself to the "apparent" type of foo[19].

    In both of these scenarios, storing the value returned by a method or property into a variable and then using it more than once would cause strange behavior. It would be desirable if the designer of the class exposing such methods or properties could ensure that callers wouldn't be able to use them more than once. Is there any practical way to accomplish that?

    Read the article

  • Reversing Django URLs With Extra Options

    - by Justin Voss
    Suppose I have a URLconf like the one below, and 'foo' and 'bar' are valid values for page_slug:

        urlpatterns = patterns('',
            (r'^page/(?P<page_slug>.*)/', 'myapp.views.someview'),
        )

    Then I could reconstruct the URLs using the below, right?

        >>> from django.core.urlresolvers import reverse
        >>> reverse('myapp.views.someview', kwargs={'page_slug': 'foo'})
        '/page/foo/'
        >>> reverse('myapp.views.someview', kwargs={'page_slug': 'bar'})
        '/page/bar/'

    But what if I change my URLconf to this?

        urlpatterns = patterns('',
            (r'^foo-direct/', 'myapp.views.someview', {'page_slug': 'foo'}),
            (r'^my-bar-page/', 'myapp.views.someview', {'page_slug': 'bar'}),
        )

    I expected this result:

        >>> from django.core.urlresolvers import reverse
        >>> reverse('myapp.views.someview', kwargs={'page_slug': 'foo'})
        '/foo-direct/'
        >>> reverse('myapp.views.someview', kwargs={'page_slug': 'bar'})
        '/my-bar-page/'

    However, this throws a NoReverseMatch exception. I suspect I'm trying to do something impossible. Any suggestions on a saner way to accomplish what I want? Named URLs aren't an option, since I don't want other apps that link to these pages to need to know about the specifics of the URL structure (encapsulation and all that).

    Read the article

  • using sfDoctrineGuardPlugin for regular member login?

    - by fayer
    I want to create users for my web application. I'm using symfony, and I wonder whether I should do that with sfDoctrineGuardPlugin or with symfony's built-in credential methods:

        // Add one or more credentials
        $user->addCredential('foo');
        $user->addCredentials('foo', 'bar');

        // Check if the user has a credential
        echo $user->hasCredential('foo');                       // => true

        // Check if the user has both credentials
        echo $user->hasCredential(array('foo', 'bar'));         // => true

        // Check if the user has one of the credentials
        echo $user->hasCredential(array('foo', 'bar'), false);  // => true

        // Remove a credential
        $user->removeCredential('foo');
        echo $user->hasCredential('foo');                       // => false

        // Remove all credentials (useful in the logout process)
        $user->clearCredentials();
        echo $user->hasCredential('bar');                       // => false

    Or is the purpose of sfDoctrineGuardPlugin just to secure the admin page, and not the frontend login system? Thanks.

    Read the article

  • floating exception using icc compiler

    - by Hristo
    I'm compiling my code via the following command:

        icc -ltbb test.cxx -o test

    Then when I run the program:

        time ./mp6 100 > output.modified
        Floating exception
        4.871u 0.405s 0:05.28 99.8%     0+0k 0+0io 0pf+0w

    I get a "Floating exception". The following is the C++ code I had before the exception and after:

        // before
        if (j < E[i]) {
            temp += foo(0, trr[i], ex[i+j*N]);
        }

        // after
        temp += (j < E[i])*foo(0, trr[i], ex[i+j*N]);

    This is boolean algebra... so (j < E[i]) is either going to be a 0 or a 1, so the multiplication would result either in 0 or the foo() result. I don't see why this would cause a floating exception. This is what foo() does:

        int foo(int s, int t, int e) {
            switch(s % 4) {
                case 0: return abs(t - e)/e;
                case 1: return (t == e) ? 0 : 1;
                case 2: return (t < e) ? 5 : (t - e)/t;
                case 3: return abs(t - e)/t;
            }
            return 0;
        }

    foo() isn't a function I wrote, so I'm not too sure as to what it does... but I don't think the problem is with the function foo(). Is there something about boolean algebra that I don't understand, or something that works differently in C++ than I know of? Any ideas why this causes an exception? Thanks, Hristo
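    Worth noting for readers comparing the two forms (an observation about C++ evaluation, not text from the question): the if version only calls foo() when j < E[i], while the multiplication version always calls foo() - multiplying by 0 does not suppress evaluation of the other operand. Since foo() divides by e or by t, any call where those are 0 performs an integer division by zero, which the shell reports as "Floating exception" (SIGFPE). A form that keeps a single statement but preserves the short-circuit would be, as a sketch:

        temp += (j < E[i]) ? foo(0, trr[i], ex[i+j*N]) : 0;   // foo() runs only when j < E[i]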

    Read the article

  • Using Qt signals/slots instead of a worker thread

    - by Rob
    I am using Qt and wish to write a class that will perform some network-type operations, similar to FTP/HTTP. The class needs to connect to lots of machines, one after the other, but I need the application's UI to stay (relatively) responsive during this process, so the user can cancel the operation, exit the application, etc. My first thought was to use a separate thread for the network stuff, but the built-in Qt FTP/HTTP (and other) classes apparently avoid using threads and instead rely on signals and slots. So I'd like to do something similar, and was hoping I could do something like this:

        class Foo : public QObject {
            Q_OBJECT
        public:
            void start();
        signals:
            void next();
        private slots:
            void nextJob();
        };

        void Foo::start() {
            ...
            connect(this, SIGNAL(next()), this, SLOT(nextJob()));
            emit next();
        }

        void Foo::nextJob() {
            // Process next 'chunk'
            if (workLeftToDo) {
                emit next();
            }
        }

        void Bar::StartOperation() {
            Foo* foo = new Foo;
            foo->start();
        }

    However, this doesn't work, and the UI freezes until all operations have completed. I was hoping that emitting signals wouldn't actually call the slots immediately, but would somehow be queued up by Qt, allowing the main UI to still operate. So what do I need to do in order to make this work? How does Qt achieve this with the multitude of built-in classes that appear to perform lengthy tasks on a single thread?
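    A detail that may help here (Qt behaviour summarised by the editor, not text from the question): by default, a signal connected to a slot on an object living in the same thread is delivered as a direct call, so emit next() recurses straight into nextJob() and the event loop never runs. Forcing a queued connection makes Qt post the call as an event instead, giving the UI a chance to process input between chunks. A sketch using the same names as the question:

        // queued: emit next() returns immediately; nextJob() runs on the next pass
        // through the event loop instead of being called synchronously
        connect(this, SIGNAL(next()), this, SLOT(nextJob()), Qt::QueuedConnection);

        // an equivalent per-chunk alternative:
        QTimer::singleShot(0, this, SLOT(nextJob()));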

    Read the article

  • WCF contracts - namespaces and SerializationExceptions

    - by qntmfred
    I am using a third party web service that offers the following calls and responses:

        http://api.athirdparty.com/rest/foo?apikey=1234

        <response>
            <foo>this is a foo</foo>
        </response>

    and

        http://api.athirdparty.com/rest/bar?apikey=1234

        <response>
            <bar>this is a bar</bar>
        </response>

    This is the contract and supporting types I wrote:

        [ServiceContract]
        [XmlSerializerFormat]
        public interface IFooBarService
        {
            [OperationContract]
            [WebGet(
                BodyStyle = WebMessageBodyStyle.Bare,
                ResponseFormat = WebMessageFormat.Xml,
                UriTemplate = "foo?key={apikey}")]
            FooResponse GetFoo(string apikey);

            [OperationContract]
            [WebGet(
                BodyStyle = WebMessageBodyStyle.Bare,
                ResponseFormat = WebMessageFormat.Xml,
                UriTemplate = "bar?key={apikey}")]
            BarResponse GetBar(string apikey);
        }

        [XmlRoot("response")]
        public class FooResponse
        {
            [XmlElement("foo")]
            public string Foo { get; set; }
        }

        [XmlRoot("response")]
        public class BarResponse
        {
            [XmlElement("bar")]
            public string Bar { get; set; }
        }

    and then my client looks like this:

        static void Main(string[] args)
        {
            using (WebChannelFactory<IFooBarService> cf =
                new WebChannelFactory<IFooBarService>("thirdparty"))
            {
                var channel = cf.CreateChannel();
                FooResponse result = channel.GetFoo("1234");
            }
        }

    When I run this I get the following exception:

        Unable to deserialize XML body with root name 'response' and root namespace ''
        (for operation 'GetFoo' and contract ('IFooBarService', 'http://tempuri.org/'))
        using XmlSerializer. Ensure that the type corresponding to the XML is added to
        the known types collection of the service.

    If I comment out the GetBar operation from IFooBarService, it works fine. I know I'm missing an important concept here - just don't know quite what to look for. What is the proper way to construct my contract types, so that they can be properly deserialized?

    Read the article

  • operator<< cannot output std::endl -- Fix?

    - by dehmann
    The following code gives an error when it's supposed to output just std::endl:

        #include <iostream>
        #include <sstream>

        struct MyStream {
            std::ostream* out_;
            MyStream(std::ostream* out) : out_(out) {}

            std::ostream& operator<<(const std::string& s) {
                (*out_) << s;
                return *out_;
            }
        };

        template<class OutputStream>
        struct Foo {
            OutputStream* out_;
            Foo(OutputStream* out) : out_(out) {}

            void test() {
                (*out_) << "OK" << std::endl;
                (*out_) << std::endl;   // ERROR
            }
        };

        int main(int argc, char** argv){
            MyStream out(&std::cout);
            Foo<MyStream> foo(&out);
            foo.test();
            return EXIT_SUCCESS;
        }

    The error is:

        stream1.cpp:19: error: no match for 'operator<<' in '*((Foo<MyStream>*)this)->Foo<MyStream>::out_ << std::endl'
        stream1.cpp:7: note: candidates are: std::ostream& MyStream::operator<<(const std::string&)

    So it can output a string (see the line above the error), but not just the std::endl, presumably because std::endl is not a string, but the operator<< definition asks for a string. Templating the operator<< didn't help:

        template<class T>
        std::ostream& operator<<(const T& s) { ... }

    How can I make the code work? Thanks!
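    For readers hitting the same error: std::endl is not a string but a stream manipulator (a function taking and returning a std::ostream&), so a wrapper like MyStream needs an operator<< overload that accepts a manipulator and applies it. A minimal sketch of such an overload (an added member, not part of the original code):

        struct MyStream {
            // ... existing members and the std::string overload ...

            // accept manipulators such as std::endl and std::flush
            std::ostream& operator<<(std::ostream& (*manip)(std::ostream&)) {
                return manip(*out_);
            }
        };

    With that in place, (*out_) << std::endl; resolves to the manipulator overload and flushes the wrapped stream as expected.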

    Read the article
