Search Results

Search found 56300 results on 2252 pages for 'local working'.


  • Virtual Host under MacOSX not working

    - by David Casillas
    I have set up a virtual host for the Mac OS X Apache installation. These are my steps:

    1. Edit /private/etc/apache2/httpd.conf, removing the comment from: Include /private/etc/apache2/extra/httpd-vhosts.conf
    2. Edit /private/etc/apache2/extra/httpd-vhosts.conf, adding:

        <VirtualHost *:80>
            ServerName test.local
            DocumentRoot "/Users/myusername/Sites/Test/public"
            <Directory "/Users/myusername/Sites/Test/public">
                Options Indexes FollowSymLinks Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    3. Edit /private/etc/hosts, adding: 127.0.0.7 test.local
    4. Restart Apache.

    But the virtual host does not work. To isolate the problem further I tried the same configuration with MAMP, and there the virtual host worked right, so the configuration files should be fine. What can be wrong?

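    A hedged first check, assuming the Apache 2.2 build that ships with Mac OS X of that era: name-based virtual hosting must be switched on, and Apache will report which vhosts it actually parsed. (Also worth a second look: the hosts entry uses 127.0.0.7; the whole 127/8 block is loopback, but 127.0.0.1 is the safe, conventional choice.)

        # Apache 2.2 needs this once, before any <VirtualHost *:80> blocks
        # (usually already present near the top of httpd-vhosts.conf):
        NameVirtualHost *:80

        # Then verify what Apache sees, and that the hosts entry resolves:
        $ sudo apachectl -S
        $ ping -c 1 test.local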

  • Duplicate content in Top Level Domain and country specific website

    - by Ando
    I have myproduct.com, which is my master product page. For the UK I also own myproduct.co.uk, a copy of myproduct.com with some localized content: landing page, promotions, prices, and specific tags. But there is also duplicate content: myproduct.com/FAQs/ is the same as myproduct.co.uk/FAQs/. I don't want to redirect from myproduct.co.uk/FAQs/ to myproduct.com/FAQs/, as I don't want people to leave the localized website. myproduct.com/FAQs/ is my "go-to" FAQ page and the one most likely to be up to date, so I want that page to be indexed by search engines, whereas I don't care about myproduct.co.uk/FAQs/ being indexed (unless indexing that page would increase my page rank :) ). What should I do now to be SEO friendly and SEO optimal? Stop indexing of myproduct.co.uk/FAQs/ via robots.txt? Do some rel="alternate" hreflang="x" configuration on both /FAQs/ pages? Something else?

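    One hedged option is the hreflang route the poster mentions: keep both pages indexable and declare them as regional alternates of each other in the <head> of both. A sketch using the poster's URLs; the exact language codes are an assumption:

        <link rel="alternate" hreflang="en" href="http://myproduct.com/FAQs/" />
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/FAQs/" />

    The robots.txt alternative simply keeps myproduct.co.uk/FAQs/ from being crawled, while the hreflang pairing instead tells the engine the two URLs are deliberate regional variants of one page.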

  • What tools, libraries, or frameworks are needed to create a completely offline JavaScript application?

    - by makerofthings7
    I am interested in creating an HTML application that can run as disconnected from the server as possible. Two examples of this are OWA in Exchange 2013 and, to a lesser extent, the client available at www.ripple.com. With the focus on OWA in Exchange 2013, what is needed to replicate that offline functionality in a different application? A list of technologies, frameworks, etc. would be immensely helpful.

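    For the "run disconnected" part specifically, the standard building block of that era is the HTML5 Application Cache plus a client-side store (OWA 2013's offline mode reportedly builds on these same browser features). A minimal, hedged sketch with placeholder file names:

        <!-- index.html -->
        <html manifest="offline.appcache">

        # offline.appcache (must be served as Content-Type: text/cache-manifest)
        CACHE MANIFEST
        # v1
        CACHE:
        app.js
        app.css
        NETWORK:
        *

    Application data would then live in localStorage or IndexedDB so the app can keep reading and writing while offline.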

  • 301 Redirects for regional variants of a homepage

    - by Adam Jenkin
    I am planning to implement a website that has regional homepage variants. For example: mycompany.com/europe, mycompany.com/us. The rest of the site is region agnostic, with content such as mycompany.com/news, mycompany.com/about-us, etc. For homepage (.com) requests, I plan on redirecting users to the correct homepage variant (via 301). If I cannot determine the correct one, I will fall back to redirecting them to the US homepage (/us). From an SEO point of view, firstly, is this OK? Or should I be doing anything additional to make search engines aware of the regional differences? As crawlers are region agnostic, I plan on directing them to the US page with a 301, or should I have something on the .com page that they use instead? Given that the regional homepages will likely be the most visited pages, they should show up as sitelinks when searching for mycompany (which I think is a good thing). Apologies for the slightly open question; I know anything SEO related is more opinion/best practice than fact, but I am purely looking for advice.

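    Mechanically, the 301 side of this is straightforward. A hedged sketch for Apache with MaxMind's mod_geoip loaded; the module, the country list, and the 301-for-crawlers choice are all assumptions, not part of the question:

        GeoIPEnable On
        RewriteEngine On
        # Send bare homepage requests from European countries to /europe ...
        RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^(GB|DE|FR|ES|IT|NL)$
        RewriteRule ^/?$ /europe [R=301,L]
        # ... and default everyone else (crawlers included) to /us
        RewriteRule ^/?$ /us [R=301,L]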

  • Google Places SEO?

    - by sam
    I'm familiar with SEO and getting higher Google listings, but for a lot of services Google has recently been making its search results (where applicable) much more location orientated. For instance, searching for "accountant in london" or "accountancy firm london" will return the first half of page 1 as Google Places listings, and only under about 6 of these do you get the normal search results, so someone who used to rank #1 on page 1 now effectively ranks #7. What I was wondering is that I can't see any reason why the companies that rank high in the Places results get there; often they are not high up in the normal search results. Is there a way to optimise, on site or off site, to rise up the Google Places listings in your city?

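    On-page, one concrete (hedged) step for local results is marking the business up with schema.org LocalBusiness microdata, so the engine can tie the page to a city. A sketch with placeholder details; this supports, rather than guarantees, a Places ranking:

        <div itemscope itemtype="http://schema.org/LocalBusiness">
          <span itemprop="name">Example Accountancy Ltd</span>
          <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="streetAddress">1 Example Street</span>
            <span itemprop="addressLocality">London</span>
          </div>
          <span itemprop="telephone">+44 20 0000 0000</span>
        </div>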

  • Country specific content vs global content

    - by Ando
    I have a global product presentation website, myproduct.com. For certain countries I also own the country domain: myproduct.co.uk, myproduct.com.au, myproduct.es, myproduct.de, etc. The presentation website is translated into multiple languages and I have set up redirects: myproduct.es redirects to myproduct.com/es/, myproduct.de redirects to myproduct.com/de/, etc. The content so far is the same, just translated into different languages. The advantage is that it's easy to keep the content aligned: everything is managed from one centralized dashboard (I'm using WordPress with qTranslate). Now I'm running into trouble, as for different countries I want localized content. For the UK I want to run different promotions and use a different reseller than for .com.au, so I would like users coming from myproduct.co.uk to see something different from those coming from myproduct.com.au (and not be redirected to myproduct.com as they are right now). How can I achieve this? I could duplicate the whole main website and modify only certain parts, but then I would have a lot of duplicate content (e.g. the info about how the product works) and pages that are likely to change (the FAQ page) that I would have to keep updated across all the websites. Or I could duplicate the main website only partially: the localized website would have only the pages that are different, and all other links would point to the .com site. This would solve the duplication problem but would confuse users, who would navigate from .co.uk to .com without noticing and then wonder how to get back. Is there another, better option?

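    For the pages that must stay identical across regions (the FAQ example), a hedged sketch of the usual pattern: give every regional copy the full set of hreflang alternates, so the engines treat the copies as one localized cluster rather than as duplicates. The domains are the poster's; the path and language codes are placeholders:

        <!-- in the <head> of each regional copy of the page -->
        <link rel="alternate" hreflang="en" href="http://myproduct.com/faq/" />
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/faq/" />
        <link rel="alternate" hreflang="en-au" href="http://myproduct.com.au/faq/" />
        <link rel="alternate" hreflang="de" href="http://myproduct.de/faq/" />

    This would allow the partial-duplication approach (localize only the promotional pages) without the cross-domain navigation problem, since each ccTLD keeps its own copy of the shared pages.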

  • My Website title changes by itself in Google

    - by Kane
    I am doing SEO on my friend's site, www.svipl.in. Each page has its own meta tags, and the titles did change to the new ones, but somehow Google is now using the company name as the title. I googled this topic, and in the last few days other people have reported the same problem. I had a similar issue in the past on my own site, and it resolved soon after changing the metas again. Any SEO experts with the same problem, please help. There is also no H1 heading or alt tag containing the company name.


  • How to show the right country domain in Google Places?

    - by Baumr
    Background: A site has multiple ccTLDs: example.com for the US, example.co.uk for UK users, example.de for Germans, etc. Googling for certain city keywords returns rich snippets with a list of Google Places results.

    Problem: When searching on Google Germany, the domain for US users (example.com) appears instead of the corresponding ccTLD (example.de). This is not a good user experience, as users would most likely prefer to book on a site localized for them (e.g. in their language and currency).

    Question: What solutions are there? Is it possible to return different ccTLDs in rich snippets for Google searches in Germany/UK?

    Ideas: Would implementing the hreflang annotation resolve this? What about entering multiple corresponding URLs in the structured data markup?


  • JDeveloper does not recognize existing subversion working directory

    - by Bob Webster
    Just a quick note about an issue where JDeveloper no longer recognized an existing Subversion working directory.

    Symptom: The JDeveloper Versioning menu offers to version an application that is already versioned in svn.

    Cause: The repository URL contained in the hidden .svn folders of the working directory is no longer valid.

    Solution: Determine the correct URL for the Subversion repository, then fix the URL recorded in the .svn folders of the working directory using the svn switch command.

    Example: In a shell, change directory to the application folder and run svn info to confirm the current settings:

        $ svn info
        Path: .
        URL: http://192.168.1.128/repos/jdeveloperrepo/AsyncExamples/BPELCallAsync/trunk
        Repository Root: http://192.168.1.128/repos/jdeveloperrepo
        Repository UUID: 3dc5eb88-3001-0010-8d6e-fd6f73825647
        Revision: 145
        Node Kind: directory
        Schedule: normal
        Last Changed Rev: 145
        Last Changed Date: 2012-06-07 07:15:56 -0700 (Thu, 07 Jun 2012)

    In this case, the IP address in the repository URL is incorrect; the svn server is actually located at 192.168.56.1. (The address currently set is displayed after the project name in the Application Navigator.) Run svn switch with the --relocate option, providing as much of the URLs as necessary to correctly rewrite them from current to new. For example, to change the repository server address from 192.168.1.128 to 192.168.56.1:

        $ svn switch --relocate http://192.168.1.128 http://192.168.56.1 .

    (Note the trailing period in the above command.) When the URL is correct, JDeveloper should recognize the Subversion working directory.


  • Is it better to have multiple domains for cities or one single TLD?

    - by Brett
    I make websites for small businesses, and for some reason business owners love to have several domains carrying the same website, with each domain name containing a city name. For example: 1. smallbizname.com 2. clevelandsmallbizname.com 3. columbussmallbizname.com 4. cincinnatismallbizname.com ... and so on. I've seen questions about localization per country, but this is a much smaller scale, so I don't think the same rules apply. The problem I have is that the companies never want to write separate content per domain; they just want the same website hosted at each one. I feel this probably hurts SEO for two reasons: 1. Traffic gets scattered across domains that could all be boosting a single domain. 2. There is a duplicate content penalty because the content is identical. My question boils down to this: should I redirect all the city domains to the main business-name domain, or does having these separate sites help them rank better per city? And if they are redirected, how does Google rank the redirects? Thanks for any input on this issue!

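    If the decision is to consolidate, the redirect side is a one-time server change. A minimal sketch for Apache name-based hosting, using the poster's example domains:

        <VirtualHost *:80>
            ServerName clevelandsmallbizname.com
            ServerAlias columbussmallbizname.com cincinnatismallbizname.com
            # mod_alias: permanent (301) redirect, preserving the request path
            Redirect permanent / http://smallbizname.com/
        </VirtualHost>

    A 301 is the variant search engines treat as a permanent move, so the city domains' inbound links are consolidated onto the main domain rather than scattered.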

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article, "In Subfolder & File Names, Use Dashes, Not Underscores": Good: http://www.domain.com/sub-folder/file-name.htm. Bad: http://www.domain.com/sub_folder/file_name.htm. In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the province/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/. I thought of placing the province/state after another "/", but in some cases the province/state name may not exist, leaving ~/Notre_Dame_de_Grace/, so the first term after the domain name has to contain the whole geo location {city, city_name-state}. I am now revisiting this and wondering if this rule set should change, and if so, what is the recommended way of implementing it? UPDATE: After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/?

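    Whatever the SEO verdict, a double-dash separator is easy to route. A hedged mod_rewrite sketch; the script name and parameters are invented for illustration, and the character class would need widening for accented names:

        RewriteEngine On
        # /notre-dame-de-grace--qc/pizza -> /search.php?city=notre-dame-de-grace&state=qc&q=pizza
        RewriteRule ^/?([a-z-]+)--([a-z]{2})/(.*)$ /search.php?city=$1&state=$2&q=$3 [L,QSA]
        # city-only form, for locations with no province/state
        RewriteRule ^/?([a-z-]+)/(.*)$ /search.php?city=$1&q=$2 [L,QSA]

    Order matters: the two-capture rule must come first so that the city-only catch-all never swallows a --state suffix.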

  • Do I need to physically host my website in separate countries for SEO?

    - by noelmcg
    I run an ecommerce store that is hosted in Ireland and ranks OK with google.ie. The market for this company is the Republic of Ireland and the UK. Would it be beneficial for me to have a UK-hosted version of my site (.co.uk) to rank higher in google.co.uk (and other localised search engines, of course)? If so, how would I prevent the sites from being punished for duplicate content? Thanks in advance for any assistance on the above.


  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems strange, since our target audience is UK based and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector that doesn't experience this issue, so it seems kind of strange. Later in the day we get traffic from the UK as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including: sitemap.xml; proper caching policies across the board; Google Merchant; Dublin Core microdata; HTML5; pretty URLs; meta and content reviewed as an ongoing concern; decent sitelinks for direct queries through Google on the site name; a decent number of inbound links; FB, Twitter, Google +1; a verified Google Maps listing. The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the midday dip in our figures. Any ideas at all would be useful. Cheers all!


  • Why is Web SQL database deprecated?

    - by user221287
    I am making a hybrid Android app. At first I decided to use localStorage; after spending 2 days on it, I realized that it was very strange and dropped it. Then I picked up IndexedDB; after spending today's whole day on it, and actually getting output in Google Chrome, it still does not run inside the WebView of the Android app. And I never used the Web SQL database at all, because it was deprecated. Anyhow, it has come to my notice that PhoneGap still uses Web SQL and Android's browsers support it. Why was Web SQL deprecated in the first place? And would it be a good idea for me to go with Web SQL now?

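    For reference, this is the (frozen) API in question; a hedged sketch of Web SQL as Android's WebView exposes it. The W3C shelved the spec in 2010 because every implementation wrapped the same SQLite engine, leaving no independent implementations to standardize against, but the API still works where it already shipped:

        // openDatabase(name, version, displayName, estimatedSizeInBytes)
        var db = openDatabase('appdb', '1.0', 'demo', 2 * 1024 * 1024);
        db.transaction(function (tx) {
            tx.executeSql('CREATE TABLE IF NOT EXISTS notes (id UNIQUE, body)');
            tx.executeSql('INSERT INTO notes (id, body) VALUES (?, ?)', [1, 'hello']);
            tx.executeSql('SELECT * FROM notes', [], function (tx, results) {
                console.log(results.rows.item(0).body);  // -> 'hello'
            });
        });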

  • localhost .htaccess not working?

    - by diff
    I installed WordPress and then BuddyPress. After installing BuddyPress I ran the site: the home page works, but clicking any other page link fails with this error:

        Not Found
        The requested URL /Repo/website/groups/ was not found on this server.
        Apache/2.2.17 (Ubuntu) Server at localhost Port 80

    I then checked my local Apache config, and everything there looks OK to me, so why is the problem showing up? Here is my local config:

        <Directory />
            Options FollowSymLinks
            AllowOverride all
        </Directory>
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>

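    The quoted file is the Apache site configuration rather than a .htaccess file, and it already has AllowOverride all. The usual missing piece for pretty permalinks on Ubuntu is mod_rewrite itself, plus the .htaccess WordPress writes into the site folder. A hedged check sequence, assuming the site lives at /var/www/Repo/website as the 404 URL suggests:

        $ sudo a2enmod rewrite
        $ sudo /etc/init.d/apache2 restart
        # confirm WordPress actually generated its rewrite file:
        $ cat /var/www/Repo/website/.htaccess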

  • How to change the Ruby path from /usr/bin/ruby to /usr/local/bin/ruby

    - by HelloWorld
    Reading around the various Ruby install tutorials, it's required to change the path from /usr/bin/ruby to /usr/local/bin/ruby, but I can't seem to be able to do it. Ultimately I want to install Ruby 1.9.2; should I uninstall 1.8.7, or what? I tried to install Ruby 1.9.2 with MacPorts, and the installation seemed to go OK, but I can't find the new version; I seem to be stuck with 1.8.7:

        iMac:~ rebel$ which ruby
        /usr/bin/ruby
        rebel$ ruby -v
        ruby 1.8.7 (2009-06-12 patchlevel 174) [universal-darwin10.0]

    My .profile contains:

        export PATH=/opt/local/bin:/opt/local/sbin:$PATH
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:$PATH"

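    There is usually nothing to "change" beyond lookup order: the shell runs whichever ruby appears first in $PATH, and the quoted .profile already puts the local directories in front of the system ones. A hedged sketch of the likely fixes (an unsourced .profile, a stale command cache, or MacPorts naming the binary differently):

        $ source ~/.profile        # or just open a fresh Terminal window
        $ hash -r                  # drop bash's cached location for 'ruby'
        $ which ruby               # should now point under /opt/local/bin if MacPorts installed it
        $ ls /opt/local/bin | grep ruby   # some MacPorts builds install it as 'ruby1.9'
        $ ruby -v

    There is no need to uninstall the system 1.8.7; leaving /usr/bin/ruby untouched and shadowing it via $PATH is the safer route.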

  • KRL and Yahoo Local Search

    - by Randall Bohn
    I'm trying to use Yahoo Local Search in a Kynetx application:

        ruleset avogadro {
          meta {
            name "yahoo-local-ruleset"
            description "use results from Yahoo local search"
            author "randall bohn"
            key yahoo_local "get-your-own-key"
          }
          dispatch { domain "example.com" }
          global {
            datasource local:XML <- "http://local.yahooapis.com/LocalSearchService/V3/localsearch";
          }
          rule add_list {
            select when pageview ".*" setting ()
            pre {
              ds = datasource:local("?appid=#{keys:yahoo_local()}&query=pizza&zip=#{zip}&results=5");
              rs = ds.pick("$..Result");
            }
            append("body","<ul id='my_list'></ul>");
            always {
              set ent:pizza rs;
            }
          }
          rule add_results {
            select when pageview ".*" setting ()
            foreach ent:pizza setting pizza
            pre {
              title = pizza.pick("$..Title");
            }
            append("#my_list", "<li>#{title}</li>");
          }
        }

    The list I wind up with is ". [object Object]", and 'title' holds {'$t' => 'Pizza Shop 1'}. I can't figure out how to get just the title.


  • How many hours a day (of the standard 8) do you actually work? [closed]

    - by someone
    Possible Duplicate: How much do you [really] work a day

    When I started working (not so long ago), I was very conscientious about really working: if I didn't work for 10 minutes at a time, I felt like I was cheating. But as I started to look around me, I realized that I was the only one, and that most of my coworkers were spending a big percentage of their time browsing the internet or playing solitaire. I started to slack off a little more than usual, while still basically getting all my work done. But while I do all that's required of me, and usually quickly, I no longer beg for work to fill up my spare time; I'm content to do what I'm told and play around when no one makes sure I'm busy enough. Which means that I'm often bored and underutilized (as I was even when I begged for work: people are pretty laid back about the workload and don't seem to realize how much I can get done if pushed to the fullest). But I was just talking to a friend who graduated with me and also recently started working, and she came to me with the same concerns about slacking. She's working remotely, which means there are often gaps in communication when she can't really get anything done, and she's feeling guilty about it. Which made me rethink the whole thing. So, as workers, how many hours out of the standard 8 are you actually working (honestly)? And, as bosses, how many hours do you expect your workers to work? And from an ethical standpoint, how much free time, or spaced-out time, can workers have during the day without being considered to be "cheating" their office of labor and money?


  • Establishing WebLogic Server HTTPS Trust of IIS Using a Microsoft Local Certificate Authority

    - by user647124
    Everyone agrees that self-signed and demo certificates for SSL and HTTPS should never be used in production, and are preferably not used elsewhere either. Most self-signed and demo certificates are provided by vendors with the intention that they be used only to integrate within the same environment. In a vendor's perfect world all application servers in a given enterprise come from that same vendor, which makes this lack of interoperability in a non-production environment an advantage. For those of us working in the real world, where we neither use a single vendor everywhere nor have anything better than self-signed certificates for all but production, testing HTTPS between an IIS ASP.NET service provider and a WebLogic J2EE consumer application can be very frustrating to set up. It was for me, especially having found many blogs and discussion threads where various solutions were described that did not quite work, all mostly similar but each just a little bit different. To save both you and my future self (who always seems to forget the hardest-won lessons) all of the pain and suffering, I am recording the steps that finally worked here, for reference and sanity.

    How You Know You Need This

    The first cold clutch of dread that tells you it is going to be a long day is when you attempt to read a WSDL published by IIS into WebLogic over HTTPS and you see the following:

        <Jul 30, 2012 2:51:31 PM EDT> <Warning> <Security> <BEA-090477> <Certificate chain received from myserver.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.>
        weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLKeyException: [Security:090477]Certificate chain received from myserver02.mydomain.com - 10.555.55.123 was not trusted causing SSL handshake failure.

    The above is what started a three-day sojourn of searching for a solution. Even people who had solved it before would tell me how they did it, and then shrug when I demonstrated that their steps did not end in the success they claimed I would experience. Rather than torture you with the details of everything I did that did not work, here is what finally did.

    Export the Certificates from IE

    First, take the offending WSDL URL and paste it into IE (if you have an internal Microsoft CA, you have IE, even if you don't use it in favor of some other browser). To state the semi-obvious: if you received the error above, a certificate is configured for the IIS host of the service and the SSL port has been configured properly; otherwise you would see a different error, usually about the site not being found or the connection failing. Once the WSDL loads, there will be a lock icon to the right of the address bar. Click the lock, then click View Certificates in the resulting dialog. (If you have no lock icon but do see a Certificate Error message, see http://support.microsoft.com/kb/931850 for steps to install the certificate, then continue from the point of finding the lock icon.)

    Figure 1: View Certificates in IE

    Next, select the Details tab in the resulting dialog.

    Figure 2: Use Certificate Details to Export Certificate

    Click Copy to File, then Next, then select the Base-64 encoded option for the format.

    Figure 3: Select the Base-64 encoded option for the format

    For the sake of simplicity, I chose to save this to the root of the WebLogic domain. It will work from anywhere, but if you save it elsewhere you will later need to type in the full path rather than just the certificate name.

    Figure 4: Browse to Save Location

    Figure 5: Save the Certificate to the Domain Root for Convenience

    This is the point where I ran into some confusion. Some articles mention exporting the entire chain of certificates. That supposedly works for some types of certificates, or if you have a few other tools and the time to learn them. The SSL experts out there already have those tools, know how to use them well, and should not be wasting their time reading an article meant for folks who just want to get things wired up and back to unit testing and development. For the rest of us, the easiest way to make sure things will work is to export all the links in the chain individually and let WebLogic Server worry about re-assembling them into a chain (which it does quite nicely). While perhaps not the most elegant solution, the multi-step process is easy to repeat and uses only tools that are immediately available and require no learning curve. So...

    Next, go to Tools, then Internet Options, then the Content tab, and click Certificates. Go to the Trusted Root Certification Authorities tab and find the certificate root for your Microsoft CA cert (look for the Issuer of the certificate you exported earlier).

    Figure 6: Trusted Root Certification Authorities Tab

    Export this one the same way as before, with a different name.

    Figure 7: Use a Unique Name for Each Certificate

    Repeat this once more on the Intermediate Certification Authorities tab.

    Import the Certificates to the WebLogic Domain

    Now, open a command prompt, navigate to [WEBLOGIC_DOMAIN_ROOT]\bin, and execute setDomainEnv. You should then be in the root of the domain; if not, cd to the domain root. Assuming you saved the certificate in the domain root, execute the following:

        keytool -importcert -alias [ALIAS-1] -trustcacerts -file [FULL PATH TO .CER 1] -keystore truststore.jks -storepass [PASSWORD]

    An example with the variables filled in is:

        keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password

    After several lines of output you will be prompted with:

        Trust this certificate? [no]:

    The correct answer is 'yes' (minus the quotes, of course). You'll know you were successful if the response is:

        Certificate was added to keystore

    If not, check your typing, as that is generally the source of an error at this point. Repeat this for all three of the certificates you exported, changing the [ALIAS-1] and [FULL PATH TO .CER 1] values each time. For example:

        keytool -importcert -alias IIS-1 -trustcacerts -file microsftcert.cer -keystore truststore.jks -storepass password
        keytool -importcert -alias IIS-2 -trustcacerts -file microsftcertRoot.cer -keystore truststore.jks -storepass password
        keytool -importcert -alias IIS-3 -trustcacerts -file microsftcertIntermediate.cer -keystore truststore.jks -storepass password

    In the above we created a new JKS keystore. You can re-use an existing one by changing the name of the JKS file to one you already have, and changing the password to the one that matches that JKS file. For the DemoTrust.jks that is included with WebLogic, the password is DemoTrustKeyStorePassPhrase. An example here would be:

        keytool -importcert -alias IIS-1 -trustcacerts -file microsoft.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
        keytool -importcert -alias IIS-2 -trustcacerts -file microsoftRoot.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase
        keytool -importcert -alias IIS-3 -trustcacerts -file microsoftInter.cer -keystore DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

    (Note that each import needs its own alias; the original draft repeated IIS-2, which keytool would reject as an existing alias.) Whichever keystore you use, you can check your work with:

        keytool -list -keystore truststore.jks -storepass password

    where "truststore.jks" and "password" are replaced appropriately if necessary. The output will look something like this:

    Figure 8: Output from keytool -list -keystore

    Update the WebLogic Keystore Configuration

    If you used an existing keystore rather than creating a new one, you can restart your WebLogic Server and skip the rest of this section. For those of us who created a new one, because that is what the instructions we found online said to do...

    Next, we need to tell WebLogic to use the JKS file (truststore.jks) we just created. Log in to the WebLogic Server Administration Console and navigate to Servers > AdminServer > Configuration > Keystores. Scroll down to "Custom Trust Keystore:" and change the value to "truststore.jks", set "Custom Trust Keystore Passphrase:" and "Confirm Custom Trust Keystore Passphrase:" to the password you used earlier, then save your changes. You will get a nice message similar to the following:

    Figure 9: To Be Safe, Restart Anyway

    The "No restarts are necessary" is somewhat of an exaggeration: to actually use the new keystore you may need to restart the server(s). To save myself aggravation, I always do. Your mileage may vary.

    Conclusion

    That should get you there. If some steps above are erroneous for your situation in particular, I offer a semi-apology; the process described does not take long at all, and even if one step could be dropped from it, it is still much faster than trying to figure this out from other sources.


  • Use a preferred username but authenticate against Kerberos principal

    - by Jason R. Coombs
    What I want to do should be pretty simple. I have an Ubuntu 10.04 box, currently configured to authenticate users against a Kerberos realm (EXAMPLE.ORG). There is only one realm in the krb5.conf file and it is the default realm:

        [libdefaults]
            default_realm = EXAMPLE.ORG

    PAM is configured to use the pam_krb5 module, so if a user account is created on the local machine and that username matches the [email protected] credential, that user may log in by supplying his Kerberos password. What I would like to do instead is create a local user account with a different username, but have it always authenticate against the canonical name on the Kerberos server. For example, the Kerberos principal is [email protected]. I would like to create the local account preferred.name and somehow configure Kerberos so that when someone attempts to log in as preferred.name, it uses the principal [email protected]. I have tried using auth_to_local_names in krb5.conf, but this doesn't seem to do the trick:

        [realms]
            EXAMPLE.ORG = {
                auth_to_local_names = {
                    full.name = preferred.name
                }
            }

    I have tried adding [email protected] to ~preferred.name/.k5login. In all cases, when I attempt to log in as preferred.name@host and enter the password for full.name, I get "Access denied". I even tried using auth_to_local in krb5.conf, but I couldn't get the syntax right. Is it possible to have a (distinct) local username that for all purposes behaves exactly like a matching username does? If so, how is this done?

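    For the auth_to_local syntax the poster couldn't get right, here is a hedged sketch of the MIT krb5 RULE form, which maps one principal onto a differently named local account (the regex specifics are assumptions to adapt):

        [realms]
            EXAMPLE.ORG = {
                # [1:$1@$0] reassembles 'full.name@EXAMPLE.ORG'; the sed-style
                # s/pattern/replacement/ then rewrites any match to 'preferred.name'
                auth_to_local = RULE:[1:$1@$0](^full\.name@EXAMPLE\.ORG$)s/^.*$/preferred.name/
                auth_to_local = DEFAULT
            }

    One caution: auth_to_local governs the principal-to-local-account mapping (what krb5_kuserok consults); whether pam_krb5 applies it during the initial password prompt depends on the module's options, so this addresses the syntax question but may not be the whole login story.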

  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up iptables rules on our firewall in a way that is confusing me, and he didn't document any of it. I was hoping someone could help me make some sense of it. The iptables service is obviously starting at startup, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:

        # We have multiple ISP connections on our network.
        # The following is about 50+ rules to route incoming and outgoing
        # information. For example, certain internal hosts are specified here
        # to use ISP A connection while everyone else on the network uses
        # ISP B connection when accessing the internet.
        ip rule add from 99.99.99.99 table Whatever_0
        ip rule add from 99.99.99.98 table Whatever_0
        ip rule add from 99.99.99.97 table Whatever_0
        ip rule add from 99.99.99.96 table Whatever_0
        ip rule add from 99.99.99.95 table Whatever_0
        ip rule add from 192.168.1.103 table ISB_A
        ip rule add from 192.168.1.105 table ISB_A
        ip route add 192.168.0.0/24 dev eth0 table ISB_B
        # etc...

    And then near the end of the file, AFTER all the ip rules he just declared, he has this:

        /root/fw/firewall-rules.fw

    That is, he executes the firewall rules file that was auto-generated by fwbuilder. Some questions: 1. Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Is there any advantage or necessity to this, or is it just a poorly organized way to implement firewall rules? 2. Why is he declaring ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If that were the case, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense? 3. Why is he executing all this at startup in rc.local instead of just using iptables-save to keep the firewall settings in /etc/sysconfig/iptables, which gets applied at runtime?

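    On question 3, a hedged sketch of the Fedora-native persistence path. One caveat about this setup: iptables-save only captures iptables rules; the ip rule / ip route policy-routing lines in rc.local belong to a different subsystem and would still need a script (or distro network hooks) to survive a reboot, which may be part of why they live in rc.local at all:

        # write the currently loaded rules to the file the iptables
        # service restores at boot:
        $ iptables-save > /etc/sysconfig/iptables
        # or, equivalently on Fedora:
        $ service iptables save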

  • Apache: Virtual Host and .htaccess for URL Rewriting not working

    - by parth
    I have configured a virtual host on my local machine and everything is working fine. Now I want to use SEO-friendly URLs, and to achieve this I used a .htaccess file. My virtual host configuration was:

        <VirtualHost *:80>
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        </VirtualHost>

    and my .htaccess file had:

        AllowOverride All
        RewriteEngine On
        RewriteBase /ypp/
        RewriteRule ^/browse$ /browse.php
        RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
        RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2

    That .htaccess setup was not working. I then modified my virtual host settings, and it works. The new virtual host setting is:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteRule ^/browse$ /browse.php
            RewriteRule ^/browse/([a-z]+)$ /browse.php?cat=$1
            RewriteRule ^/browse/([a-z]+)/([a-z]+)$ /browse.php?cat=$1&subcat=$2
            ServerAdmin [email protected]
            DocumentRoot "C:/xampp/htdocs/ypp"
            ServerName ypp.com
            ServerAlias www.ypp.com
            ##ErrorLog "logs/dummy-host2.localhost-error.log"
            ##CustomLog "logs/dummy-host2.localhost-access.log" combined
            <Directory "C:/xampp/htdocs/ypp">
                AllowOverride All
            </Directory>
        </VirtualHost>

    Please let me know where I am going wrong in the .htaccess file. I do not want to keep the rules in the virtual host, because for every change there I have to restart Apache.

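    For reference, a hedged sketch of the split that avoids restarting Apache for every change. Two details stand out: AllowOverride is only valid in the server or vhost configuration (inside a <Directory> section), never inside .htaccess itself, and in .htaccess context the matched path has no leading slash, which is likely why the original rules never fired:

        # In the vhost (set once, one restart):
        <Directory "C:/xampp/htdocs/ypp">
            AllowOverride All
        </Directory>

        # In C:/xampp/htdocs/ypp/.htaccess (editable with no restart):
        RewriteEngine On
        RewriteRule ^browse$ browse.php [L]
        RewriteRule ^browse/([a-z]+)$ browse.php?cat=$1 [L]
        RewriteRule ^browse/([a-z]+)/([a-z]+)$ browse.php?cat=$1&subcat=$2 [L]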

  • Internet is not working in base machine

    - by surendar
    I have an Ubuntu desktop on which I run a virtual Windows machine using VirtualBox. A few days ago the Internet stopped working in Ubuntu, but it still works in the virtual machine. Even the Samba shares are still accessible. I don't know why the Internet is not working in the base machine. I have tried to ping google.com, but it returns:

        Ubuntu@desktop:~$ ping google.com
        ping: unknown host google.com

    The ifconfig output:

        Ubuntu@desktop:~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:27:0e:1b:86:2a
                  inet addr:192.168.1.7  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::227:eff:fe1b:862a/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:38221 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:28161 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:39144616 (39.1 MB)  TX bytes:6143919 (6.1 MB)
                  Interrupt:27 Base address:0x2000

        eth0:0    Link encap:Ethernet  HWaddr 00:27:0e:1b:86:2a
                  inet addr:192.168.2.7  Bcast:192.168.2.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:27 Base address:0x2000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:14944 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:14944 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1735451 (1.7 MB)  TX bytes:1735451 (1.7 MB)

        vmnet1    Link encap:Ethernet  HWaddr 00:50:56:c0:00:01
                  inet addr:192.168.243.1  Bcast:192.168.243.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fec0:1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:77 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        vmnet8    Link encap:Ethernet  HWaddr 00:50:56:c0:00:08
                  inet addr:172.16.162.1  Bcast:172.16.162.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fec0:8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:78 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

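    "unknown host" while the LAN (the Samba shares) still works points at name resolution rather than routing. A hedged triage sequence before touching anything else:

        $ ping -c 2 8.8.8.8      # raw IP reachability: if this works, routing is fine
        $ cat /etc/resolv.conf   # is any nameserver configured at all?
        $ route -n               # is there a default gateway (destination 0.0.0.0)?
        $ nslookup google.com    # query the configured resolver directly

    If pinging the IP succeeds but DNS fails, the fix is usually restoring the nameserver entries (often the router, 192.168.1.1 here, as an assumption) rather than anything interface-level.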


  • Are there opportunities working as a full-time paid programmer for non-profit organizations?

    - by Rick
    Some recent events in my life have made me want to contribute more to causes I believe in, rather than just working for a profit-driven company. I have been thinking that if I could find a non-profit organization that I like and believe in, I might feel more fulfilled working for them. I have a decent amount of web development experience and currently work as a Java/Spring web developer. I realize the compensation wouldn't have the same ceiling as for-profit work, but I am wondering if it's possible to get at least something close to a market rate, as I am planning to start a family soon and still need a legitimate income. If anyone has any knowledge or experience about this sort of thing, I would be happy to hear from you. EDIT: Without getting into too much personal detail, I have a relative who recently passed away and who suffered from a mental illness. So while it doesn't have to be an organization specifically dedicated to that cause, I am hoping to work for something along these lines, or at least somewhere with more of a social mission, rather than just working on an open-source project whose only aim is the advancement of technology.

