Search Results

Search found 32130 results on 1286 pages for 'local search'.

  • Simulating network latency for localhost connection on Windows 7

    - by nitro2k01
    I need to simulate network latency for a program running on the local computer that connects to a local service. So far I have tried dummynet (a Windows build of ipfw), which I got working after some trial and error. While it generally works, I can't seem to get it to filter localhost traffic: even a rule from any to any, which does affect external traffic, makes no difference for local connections. I would appreciate it if anyone knows how to simulate local latency using dummynet or a different tool. The tool should simulate latency generically at the IP level (TCP and UDP) and not be protocol-specific.
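
    For reference, a sketch of the kind of rule I mean (the pipe number and delay are placeholder values; in my case a rule of this shape affects external traffic but not localhost):

        ipfw add 100 pipe 1 ip from any to any
        ipfw pipe 1 config delay 100ms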

  • Add directory to $PATH if it's not already there

    - by Doug Harris
    Has anybody written a bash function to add a directory to $PATH only if it's not already there? I typically add to PATH using something like: export PATH=/usr/local/mysql/bin:$PATH If I construct my PATH in .bash_profile, then it's not read unless the session I'm in is a login session -- which isn't always true. If I construct my PATH in .bashrc, then it runs with each subshell. So if I launch a Terminal window and then run screen and then run a shell script, I get: $ echo $PATH /usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:.... I'm going to try building a bash function called add_to_path() which only adds the directory if it's not there. But, if anybody has already written (or found) such a thing, I won't spend the time on it.
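
    A minimal sketch of such a guard (assuming a POSIX-compatible shell; add_to_path is just the name proposed above):

        add_to_path() {
            case ":$PATH:" in
                *":$1:"*) ;;                    # already present, do nothing
                *) export PATH="$1:$PATH" ;;    # prepend the new directory
            esac
        }

        # usage, e.g. in .bashrc:
        add_to_path /usr/local/mysql/bin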

  • FreeNAS - can't start ftp service

    - by Ze'ev
    I am trying to activate the FTP service on our FreeNAS box, but I keep getting:

        Dec 5 15:44:20 Wheelhouse NAS notifier: - Fatal: error processing configuration file '/usr/local/etc/proftpd.conf'
        Dec 5 15:44:20 Wheelhouse NAS root: /usr/local/etc/rc.d/proftpd: WARNING: failed to start proftpd
        Dec 5 15:44:20 Wheelhouse NAS notifier: /usr/local/etc/rc.d/proftpd: WARNING: failed to start proftpd

    I haven't touched proftpd.conf -- can I delete it? Or is there a default version I can replace it with?
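
    One thing worth checking (a suggestion only, assuming the FreeNAS build of proftpd supports the standard command-line switches) is to let proftpd parse the file itself and report the offending line:

        proftpd -t -c /usr/local/etc/proftpd.conf    # --configtest: parse the configuration and report errors without starting the daemon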

  • APPPATH and LOCALAPPPATH environment variables are not set on a profile in Windows 7 Pro 32bit

    - by Timur Fanshteyn
    I am having a problem with a user account on a Windows 7 machine (local install, admin user account): the APPPATH and LOCALAPPPATH environment variables are not set. Another user on the same machine (also a local account, but without admin rights) has the variables set. This started happening recently, but I can't figure out whether something installed on the machine caused it. It is creating issues with applications that try to expand these variables to store local files. Thank you for the help.

  • sudoers security

    - by jetboy
    I've set up a script to do Subversion updates across two servers - the localhost and a remote server - called by a post-commit hook run by the www-data user.

    /srv/svn/mysite/hooks/post-commit contains:

        sudo -u cli /usr/local/bin/svn_deploy

    /usr/local/bin/svn_deploy is owned by the cli user and contains:

        #!/bin/sh
        svn update /srv/www/mysite
        ssh cli@remotehost 'svn update /srv/www/mysite'

    To get this to work I've had to add the following to the sudoers file:

        www-data ALL = (cli) NOPASSWD: /usr/local/bin/svn_deploy
        cli ALL = NOEXEC:NOPASSWD: /usr/local/bin/svn_deploy

    Entries for both www-data and cli were necessary to avoid the error:

        post commit hook failed: no tty present and no askpass program specified

    I'm wary of giving any kind of elevated rights to www-data. Is there anything else I should be doing to reduce or eliminate any security risk?

  • Building a List of All SharePoint Timer Jobs Programmatically in C#

    - by Damon Armstrong
    One of the most frustrating things about SharePoint is that the difficulty of figuring something out is inversely proportional to the simplicity of what you are trying to accomplish. Case in point: yesterday I wanted to get a list of all the timer jobs in SharePoint. Having never done this, nor having any idea of exactly how to do it off the top of my head, I turned to Google. I like to think my Google-fu is fair to good, so I normally find exactly what I'm looking for in the first hit. But on the topic of listing all SharePoint timer jobs, all it came up with was a PowerShell cmdlet (Get-SPTimerJob) and nothing more. Refined search after refined search continued to turn up nothing. So apparently I am the only person on the planet who needs to get a list of the timer jobs in C#. In case you are the second person on the planet who needs to do this, the code to do so follows:

        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            var timerJobs = new List<SPJobDefinition>();
            foreach (var job in SPAdministrationWebApplication.Local.JobDefinitions)
            {
                timerJobs.Add(job);
            }
            foreach (SPService curService in SPFarm.Local.Services)
            {
                foreach (var job in curService.JobDefinitions)
                {
                    timerJobs.Add(job);
                }
            }
        });

    For reference, you need the two foreach loops because the Central Admin web application doesn't end up in the SPFarm.Local.Services group, so you have to get its jobs manually from the SPAdministrationWebApplication.Local reference.

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a Postgres database containing about 200MB of data. Currently I have a cron job set up on my home computer which:

        - SSHes into my server
        - runs a remote script which makes a backup of the database
        - scp's that dump over to my local hard drive for storage (each dump is 20MiB)
        - does this every six hours (one month of backups is roughly 2GiB)

    The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't run the cron job from the server, because I can't scp from my server to my local machine (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up in Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service 'watch' that folder and sync accordingly. I could maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so whatever solution I end up using has to be 100% software based.
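
    For reference, a minimal sketch of the server-side dump step described above (database name, paths and format are made-up placeholders):

        #!/bin/sh
        # dump the database into a timestamped file; pg_dump -Fc writes a compressed custom-format dump
        STAMP=$(date +%Y%m%d-%H%M)
        pg_dump -Fc mydb > /var/backups/mydb-$STAMP.dump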

  • Make Safari 5's location bar more like Omnibox or AwesomeBar

    - by Lri
    When searching for history or favorites, the search phrase has to be an exact substring of the URL or title. For example, "super awesome" wouldn't match this page. Can the criteria be made more liberal?

    When an item that was matched by its title is selected from the suggestion list, the title is filled in in place of the URL. The filled-in part sometimes starts from the middle of a URL or a title. Can either of these behaviors be changed?

    Can you redirect unresolved addresses to the default search engine or a custom URL? In Firefox you can go to about:config and set keyword.URL to http://www.google.com/search?btnI&q=.

    Can you remove or hide the web search field? In Camino, Cruz, and Fluid it can be resized to zero width.

  • COM+ applications deployment behaves different on different systems

    - by sharptooth
    In order to give my COM+ application enough credentials, I want its components to be instantiated under the "Local Service" account. When I create a server application with the wizard on Win2k3, it offers a choice of which account to instantiate components under, and "Local Service" is one of the choices. But on WinXP, "Local Service" is not offered at all in the wizard. When I open the "Identity" tab of the COM+ application under Win2k3 there's a handful of choices, "Local Service" included, and I can select any of them. But on WinXP the same "Identity" tab only offers "Interactive user". What does this difference depend on? Does it depend on the operating system or on something else?

  • Can I tell Chrome not to redirect mistyped URLs?

    - by Nathan Long
    If I mistype a URL, Chrome sometimes redirects me to a search. For instance, typing "example_url_i_sometimes_mistype.conm" into the location bar gets me: Your search - example_url_i_sometimes_mistype.conm - did not match any documents. Not only is this annoying, but if the mistyped URL was one on private DNS, I've now just told Google that the domain exists. (A small concern, but bad in principle.) Can I configure Chrome to just show an error and not blab to Google Search about it?

  • Web Development - How to access custom host, defined in my hosts file, from another device in the same network

    - by Neara
    OK, I hope I'll be able to explain the issue I'm experiencing. I'm working on a project that has two parts: one handles all requests to the usual localhost, the other handles requests to myhost.local. Accessing both addresses from my computer works fine. But now I need to test myhost.local on mobile devices connected to the same network. Usually I would just run the dev server on my computer's IP on the network:

        python manage.py runserver 10.0.0.8:8000

    Then, from any device, going to 10.0.0.8:8000 would show the project I'm working on. However, now accessing that IP address routes me straight to localhost. So my question is: how do I access myhost.local from another device on the same network? I don't want to change router settings if that can be avoided, because sometimes I work from places where I can't access the router admin. Are there any network settings on my computer that I can change to fix the routing to myhost.local without losing access to localhost as well?

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that crashes our NGINX server running Magento down to the following point.

    Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too
        while sending request to upstream, client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call never returns (in Varien_Image_Adapter_Abstract::getMimeType()):

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The file name requested is a URL pointing back to the same server that is running the script -- a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    Once the above line executes, the NGINX server stops responding to any subsequent request. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL, but that does not crash; it simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
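
    One way to narrow this down from outside PHP (a diagnostic suggestion only) is to see how the server answers that same static URL directly, for example with curl:

        # status code and total time for the missing .gif
        curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' \
            http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif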

  • Is there any way to synchronize AD users with Office 365 but still be able to edit them online?

    - by Massimo
    I'm performing a migration to Office 365 from a third-party mail server (MDaemon); the local Active Directory doesn't include any Exchange server, and never had any. We will need directory synchronization in order to enable users to log on to Office 365 using their domain credentials; but it seems that as soon as you enable directory synchronization, you can't perform any action anymore on Office 365 users: all changes need to be made on the local Active Directory, and then replicated by the synchronization process. For ordinary users with a single e-mail address and standard features, this is not a big problem; but what about users which need an additional address? What if I need to configure some nonstandard setting, like "hide from address list" or a custom mailbox quota? From what I've gathered, the only supported way to do this, as you can't directly edit Office 365 objects anymore after synchronization is enabled, is to extend the local AD schema with Exchange attributes, and then manually edit them (!). Or, you can install at least one local Exchange server, and then use the Exchange administrative tools to configure the required settings. Is this correct or am I missing something? Is there any way to synchronize user accounts and password, but still be able to edit user settings directly in Office 365? If not (everything really needs to be set locally and then synchronized), is there any simpler way to do this than manually editing LDAP attributes or installing a local Exchange server?

  • Library conflict in Mac OS X

    - by Juan Medín
    I was trying to install the ImageMagick library on Mac OS X Snow Leopard; first I tried port and, after it failed, homebrew. It updated some dependencies and installed ImageMagick without problems. So far so good. The problem came when I ran Apache. I got the following error in the system log:

        07/04/11 12:55:15 org.apache.httpd[41841] httpd: Syntax error on line 115 of /private/etc/apache2/httpd.conf:
        Cannot load /opt/local/apache2/modules/libphp5.so into server: dlopen(/opt/local/apache2/modules/libphp5.so, 10): Library not loaded: /opt/local/lib/libpng12.0.dylib
          Referenced from: /opt/local/apache2/modules/libphp5.so
          Reason: image not found

    I checked /opt/local/lib and, surprise, I don't have libpng12.0 but libpng14.0. So, as far as I can tell, something went wrong installing the ImageMagick library. Now I can't find a way to roll back to the previous libraries, other than copying them from the backup. Do you know if there is a way to recover the previous state or reinstall Apache? Or is this just a corrupt state and I must reinstall OS X?
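
    A quick way to see which libpng the module actually links against, and what is currently installed (a diagnostic sketch only):

        otool -L /opt/local/apache2/modules/libphp5.so | grep libpng    # what libphp5.so was linked against
        ls /opt/local/lib/libpng*                                       # what /opt/local/lib currently provides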

  • Problem manipulating text using grep

    - by moata_u
    I want to search for a line that contains log4j and take 7 lines before and 3 lines after the match:

        grep -B7 -A3 "log4j" web.xml

    After that I want to add comment tags before this paragraph and after it:

        <!-- paragraph that I found by grep -->

    I wrote the script below:

        search=`find . -name 'web.xml'`
        text=`grep -B7 -A3 "log4j" $search`
        sed -i "/$text/c $newparagraph" $search

    It's not working. Is there any way to just add the comment symbols rather than replacing the paragraph? What I want the script to do:

        - search for the paragraph
        - prepend <!--
        - append --> at the end

    Edit: This is the paragraph I am trying to manipulate:

        <context-param>
            <param-name>log4jConfigLocation</param-name>
            <param-value>/WEB-INF/classes/log4j.properties</param-value>
        </context-param>
        <listener>
            <listener-class>
                org.springframework.web.util.Log4jConfigListener
            </listener-class>
        </listener>

    This paragraph is one of many in the file. I want to make it look like this:

        <!--
        <context-param>
            <param-name>log4jConfigLocation</param-name>
            <param-value>/WEB-INF/classes/log4j.properties</param-value>
        </context-param>
        <listener>
            <listener-class>
                org.springframework.web.util.Log4jConfigListener
            </listener-class>
        </listener>
        -->
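
    A possible approach (a sketch only; it assumes the block always spans exactly 7 lines before and 3 lines after the first log4j match, as in the grep above):

        file=$(find . -name 'web.xml' | head -n 1)
        line=$(grep -n 'log4j' "$file" | head -n 1 | cut -d: -f1)    # line number of the first match
        start=$((line - 7))
        end=$((line + 3))
        # print "<!--" just before the block and "-->" just after it
        awk -v s="$start" -v e="$end" 'NR == s { print "<!--" } { print } NR == e { print "-->" }' \
            "$file" > "$file.commented" && mv "$file.commented" "$file"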

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit with the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I have installed expat, expat-devel and neon-devel using yum. When installing Apache Subversion 1.6.19 this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Apache Subversion 1.7.7 using the same configure command, I get this error after running "make":

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    I found out I can get past that by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    so that it looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might be needlessly linked against additional unused libraries. It can be avoided by specifying exact location of Expat in argument of --with-expat option.

    I want to solve that. I have experimented a lot, but have not been able to figure out how to specify the exact location of Expat in the configure command, or how to find out what that location should be. After a lot of searching I found http://subversion.tigris.org/issues/show_bug.cgi?id=3997 -- that is a FreeBSD user saying this:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat
        As that is the default location of expat on that platform, it would be nice if configure detected it automatically.

    However, I am not using FreeBSD; I am running CentOS 6.3 64-bit, and as mentioned I installed expat, expat-devel and neon-devel using yum. I tried the expat path posted by the FreeBSD user anyway, and it seems to work: no errors when running configure and no errors when running "make". This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this server is a production server, so I need your help: is this also correct on a CentOS server? Is the following path in the expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
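
    One way to find out where yum's expat-devel actually put things, and to build the matching --with-expat argument (a suggestion only; the /usr/include:/usr/lib64:expat value below is an assumption based on the usual 64-bit CentOS layout, so check it against the rpm output first):

        rpm -ql expat-devel | grep -E 'expat\.h|libexpat'    # where the header and the .so symlink live
        # typically /usr/include/expat.h and /usr/lib64/libexpat.so, which would give:
        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config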

  • Is there a convenient method to pull files from a server in an SSH session?

    - by tel
    I often SSH into a cluster node for work and, after processing, want to pull several results back to my local machine for analysis. Typically, to do this I use a local shell to scp from the server, but this requires a lot of path manipulation. I'd prefer to use a syntax like interactive FTP and just 'pull' files from the server to my local pwd. Another possible solution might be to have some way to automatically set up my client computer as an ssh alias so that something like

        scp results home:~/results

    would work as expected. Is there any obscure SSH trick that'll do this for me?

    Working from grawity's answer, a complete solution in config files is something like this. Local .ssh/config:

        Host ex
            HostName ssh.example.com
            RemoteForward 10101 localhost:22

    ssh.example.com's .ssh/config:

        Host home
            HostName localhost
            Port 10101

    which lets me do commands exactly like

        scp results home:

    transferring the file results to my home machine.

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me thinking that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };
        zone "." in {
            type hint;
            file "db.cache";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };
        //forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
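
    A quick way to compare what the two servers return for the same name (purely a diagnostic suggestion):

        dig @192.168.1.21 dc1.mydomain.local    # ask the domain controller directly
        dig @192.168.0.14 dc1.mydomain.local    # ask the caching server, which should forward the query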

  • How to remove an entry from Chrome's Remembered URLs from the url bar?

    - by cmcculloh
    I've got a URL in Chrome, "local.mysite.com", that autopopulates when I start typing "local.my" into the URL bar. Note that this URL DOES NOT EXIST in my browser history (at chrome://history/#e=1&p=0) because it isn't a real site and therefore could never have been successfully visited, and therefore never shows up in my history. The URL I want is "local.mysite.com/subdir/". That URL is about 3 down in the suggested results, because I keep accidentally hitting "enter" when it auto-suggests the unwanted first URL, thus reinforcing its assumption that that is the one I want. How do I get rid of the "local.mysite.com" entry in Chrome's memory?

  • Exchange Server 2007 Send and Receive Connectors

    - by Mistiry
    I have gotten awesome advice from users on here for getting Exchange on Windows SBS 2008 set up. I think this is the final piece and I'm ready for roll-out! I need to set up Exchange so that it RECEIVES mail from our existing mail server as a forward (aliases on the existing mail server forward mail from [email protected] to [email protected]; not using the POP3 Connector), and SENDS mail through that server as well (sends from [email protected] to [email protected] and then out to the world, showing in the headers as from [email protected], or at the absolute least having the reply-to set to this). Alternatively, as long as the .net email address doesn't show in the From and replies are directed to the .com account, email can go from Exchange to the outside world without routing through the existing mail server.

    External domain: domain.com
    Internal domain: domain.local
    Internet domain name set in SBS Console: domain.net

    When I go to http://remote.domain.net I get the Remote Web Workspace, and I can log in to both SharePoint and OWA. I can send an email from OWA to a Gmail account; I receive it from [email protected], which is an alias of [email protected]. I cannot, however, send an email from OWA to ANY domain.com email address. I am also not receiving any email to this Exchange account (except for NDRs). When I try sending an email to a domain.com account, here is the error (I had to replace all < and > with { and }):

        Delivery has failed to these recipients or distribution lists:
        [email protected]
        The recipient's e-mail address was not found in the recipient's e-mail system. Microsoft Exchange will not try to redeliver this message for you. Please check the e-mail address and try resending this message, or provide the following diagnostic text to your system administrator.

        Generating server: IFEXCHANGE.domain.local
        [email protected]
        #550 5.1.1 RESOLVER.ADR.RecipNotFound; not found ##

        Original message headers:
        Received: from IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a]) by
         IFEXCHANGE.domain.local ([fe80::4d34:abc5:f7fd:e51a%10]) with mapi;
         Tue, 17 Aug 2010 14:14:14 -0400
        Content-Type: application/ms-tnef; name="winmail.dat"
        Content-Transfer-Encoding: binary
        From: John Doe {[email protected]}
        To: "[email protected]" {[email protected]}
        Date: Tue, 17 Aug 2010 14:14:12 -0400
        Subject: asdf
        Thread-Topic: asdf
        Thread-Index: AQHLPjf+h6hA5MJ1JUu1WS4I4CiWeA==
        Message-ID: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        Accept-Language: en-US
        Content-Language: en-US
        X-MS-Has-Attach:
        X-MS-TNEF-Correlator: {E4E10393768D784D8760A51938BA456A029934BA30@IFEXCHANGE.domain.local}
        MIME-Version: 1.0

    I hope I explained the situation well enough for someone to be able to explain to me what I'm missing. If I could, I'd be putting a 10K bounty on this, but unfortunately I've only got 74 reputation (hey, I'm a newbie here!). I'm pretty sure the obvious "RecipNotFound" error is why it's not working; my question is how to resolve it. The email account exists and receives mail just fine, yet when I send to it from the Exchange server it fails.

    EDIT: In Organization Configuration > Hub Transport, the Email Address Policies tab has two entries. "Windows SBS Email Address Policy" is set up to include all recipient types, no conditions, and SMTP %[email protected]. "Default Policy" is set to include all recipient types, no conditions, and SMTP @domain.net. There are three authoritative accepted domains: domain.com, domain.local (default) and domain.net. The Remote Domains tab has two entries: "Default" with domain *, and "Windows SBS Company Web Domain" with domain companyweb.

  • One specific VirtualHost in MAMP getting all the requests

    - by julien_c
    I'm pulling my hair out over a seemingly trivial issue. I'm using MAMP 2.0 and want to configure a virtual host for local development. Here's my httpd-vhosts.conf:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs/mysite/public
            ServerName mysite.local
        </VirtualHost>

    As soon as I add the VirtualHost directive, every request to http://localhost gets redirected to the DocumentRoot specified by mysite.local. Why?
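
    For what it's worth, Apache answers name-based requests that match no ServerName from the first VirtualHost defined for that address and port, so one common arrangement (a sketch, assuming the stock MAMP htdocs folder as the localhost document root) is to declare an explicit localhost vhost before the project one:

        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /Applications/MAMP/htdocs
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mysite.local
            DocumentRoot /Applications/MAMP/htdocs/mysite/public
        </VirtualHost>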

  • CPanel has two entries for site, need to use SSL one

    - by beingalex
    I have a website that is meant to use SSL, but there are two entries in cPanel's httpd.conf which seem to be causing an issue. When I visit just www.website.com I need it to go to https://www.website.com, but currently I have to type the https:// directly for the site to work. The secure site also has a different IP. I am not meant to edit httpd.conf directly, and I am unsure how to change the following directives:

        <VirtualHost 1.1.1.1:80>
            ServerName website.com
            ServerAlias www.website.com
            DocumentRoot /home/websitec/public_html
            ServerAdmin [email protected]
            ## User websitec # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup websitec websitec
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                <IfModule !mod_ruid2.c>
                    SuexecUserGroup websitec websitec
                </IfModule>
            </IfModule>
            <IfModule mod_ruid2.c>
                RUidGid websitec websitec
            </IfModule>
            CustomLog /usr/local/apache/domlogs/website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            CustomLog /usr/local/apache/domlogs/website.com combined
            ScriptAlias /cgi-bin/ /home/websitec/public_html/cgi-bin/
        </VirtualHost>

        <VirtualHost 2.2.2.2:443>
            ServerName website.com
            ServerAlias www.website.com
            DocumentRoot /home/websitec/public_html
            ServerAdmin [email protected]
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/website.com combined
            CustomLog /usr/local/apache/domlogs/website.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
            ## User websitec # Needed for Cpanel::ApacheConf
            <IfModule mod_suphp.c>
                suPHP_UserGroup websitec websitec
            </IfModule>
            <IfModule !mod_disable_suexec.c>
                <IfModule !mod_ruid2.c>
                    SuexecUserGroup websitec websitec
                </IfModule>
            </IfModule>
            <IfModule mod_ruid2.c>
                RUidGid websitec websitec
            </IfModule>
            ScriptAlias /cgi-bin/ /home/websitec/public_html/cgi-bin/
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.website.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.website.com.key
            SSLCACertificateFile /etc/ssl/certs/www.website.com.cabundle
            CustomLog /usr/local/apache/domlogs/website.com-ssl_log combined
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
            <Directory "/home/websitec/public_html/cgi-bin">
                SSLOptions +StdEnvVars
            </Directory>
            # To customize this VirtualHost use an include file at the following location
            # Include "/usr/local/apache/conf/userdata/ssl/2/websitec/website.com/*.conf"
        </VirtualHost>

    As you can see, the non-secure directive comes before the secure one, which is probably the issue. However, if I try to change the IP for the site in WHM I get an error saying that the IP (2.2.2.2) is already in use -- which it is, I guess. Any help is appreciated.
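
    If the goal is just to send plain-HTTP visitors to the HTTPS site, one option (a sketch only; the std/2 include path mirrors the ssl/2 path cPanel references in the comment above and is an assumption to verify for your cPanel version) is to put a mod_rewrite redirect in the non-SSL vhost's userdata include instead of editing httpd.conf directly:

        # /usr/local/apache/conf/userdata/std/2/websitec/website.com/redirect_to_ssl.conf  (path is an assumption)
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://www.website.com%{REQUEST_URI} [R=301,L]

    After adding the file, the configuration would need to be rebuilt and Apache restarted (on most cPanel builds, /scripts/rebuildhttpdconf followed by a service httpd restart).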

  • Windows 7 migration: How to move application settings?

    - by FrustratedWithFormsDesigner
    Migrating from Windows XP Home to Windows 7 Pro. The last bit that I'm stuck on is migrating application settings, such as the settings for Opera, Firefox, MSN Messenger, and others. On the XP system, this all seems to be in "user/Local Settings" and "user/Application Data". On the Windows 7 system, there is a "user/AppData" folder as well as "user/Application Data" and "user/Local Settings". When I try to access "Application Data" and "Local Settings" on Windows 7, I get an "Access Denied" error (even though my user is an admin). So... if I can't copy my application settings files to "Application Data" and "Local Settings" on Windows 7, where do I copy them to?

  • How to Keep SEO Score from Dropping with Duplicate Content

    - by joeh0717
    I'm hoping that someone has a solution for what I'm trying to accomplish. I'm working on a travel agency web site, and there's an "Overview" section for each cruise line. These overviews are located on the index page for each cruise line. Here's my issue: the company is creating a search engine that includes details on each cruise line. Their write-ups on each cruise line are great, so I'd like to include the overview they created for each cruise line rather than having to create all new ones. However, I don't want duplicating their content to negatively affect the SEO scores of the pages they originally put this content on. It's going to duplicate, since each page that's dynamically generated by their search engine is going to include a section about the cruise line (where I'd want to place the overview). Question: is there any way I can include these overviews (ideally, copying the exact HTML they've already implemented) without the search engines indexing those particular code sections? I'd want the rest of the search result pages to be indexed... just not the section of each page that contains this duplicate code. I saw something about using a span class named robots-nocontent in Yahoo (not sure if this also applies to Bing) and googleon / googleoff tags in Google. Is this the best solution? I'm open to any suggestions, thanks!
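
    For reference, a sketch of how the two markers mentioned above are usually written (robots-nocontent is a Yahoo convention and the googleon/googleoff comments are recognized by the Google Search Appliance; neither is guaranteed to affect every public crawler):

        <!--googleoff: index-->
        <span class="robots-nocontent"> ...duplicated cruise line overview here... </span>
        <!--googleon: index-->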

  • Outlook 2010 PST not indexing

    - by kellyllek
    I've had this problem for months: Outlook is unable to search recent items in the inbox. I was running the Outlook 2010 Beta and have just moved to the trial version, hoping to solve the issue. I have tons of PST files, but one central one I'm mainly concerned with. As of now it seems none of it is being indexed. I've been through all the sites and made all the changes: rebuilt the index, changed the names of the PST files, run scanpst, stopped and started the search services, made sure the indexing option is checked under Windows Features in Programs and Features, etc. Status now says 'zero items left to index', and 150,000 items have been indexed. I think I have a lot more than that, and nothing is showing up in any search. I'm not sure what else to do. Side question: I'm going to be moving away from Outlook, but I have 10+ GB of PST files from over the years. I want to merge them and make them searchable in the easiest way possible. Any idea how to do that? Could I move over to Thunderbird right now and be able to index and search my PST files? Also, Google Desktop won't index Outlook 2010 email either...
