Search Results

Search found 3558 results on 143 pages for 'hosted'.

Page 74/143 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • Configure Akamai to ignore favicon errors [on hold]

    - by Aki
    We have hosted our services through Akamai and have configured an alert in Akamai to notify us of 404 errors. We don't want to serve a favicon from our services (they are REST web services not consumed by humans, so there is no point in serving favicons). But whenever these web services are accessed from a browser, the browser sends a request for the favicon, which ends up being logged as a 404, and Akamai sends us an alert for it. Is there a way to configure Akamai so that it understands favicon 404s should not contribute to the alert?
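
    A common workaround, independent of whatever alert tuning Akamai may offer, is to make the origin answer favicon requests with something other than a 404 so they never count towards the alert. A minimal sketch, assuming an nginx origin (an Apache origin would use an equivalent Redirect or rewrite rule):

        # answer favicon requests with an empty 204 instead of a 404,
        # so they stop showing up in the 404 counts Akamai alerts on
        location = /favicon.ico {
            access_log    off;
            log_not_found off;
            return 204;
        }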

    Read the article

  • HTTP Redirect from www.mydomain.com to my Amazon EC2 account (instance)?

    - by fabius
    Hello! I have a domain registered with one service provider, but my site (a WordPress blog) is hosted in a shared account with a friend at another hosting service. I want to separate from this friend because I'm tired of boring him with my blog's downtime. I signed up for Amazon EC2 and created an instance (a virtual machine) to host my WordPress blog, and now I'd like to point mydomain.com at this EC2 instance, but I don't know how to proceed. The instance is up and running (it's a 64-bit Linux machine), but I couldn't redirect mydomain.com to it from my host service's web panel. Could someone help me, please?
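
    For what it's worth, the usual approach is a DNS change rather than an HTTP redirect: allocate an Elastic IP in the EC2 console, associate it with the instance (so the address survives stop/start), and then point an A record for the domain at that address in the registrar's or host's DNS panel. A rough sketch of the zone entries, where 203.0.113.10 stands in for the Elastic IP:

        ; mydomain.com zone -- the IP below is a placeholder for the Elastic IP
        mydomain.com.        3600  IN  A      203.0.113.10
        www.mydomain.com.    3600  IN  CNAME  mydomain.com.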

    Read the article

  • Security considerations for my first eStore.

    - by Rohit
    I have a website through which I am going to sell a few products. It is hosted on simple shared hosting and does not have SSL. On the products page, each product has a Buy Now button created from my PayPal Merchant account. PayPal recommends using its Button Factory to create secure buttons and saving them inside PayPal itself. I have followed that advice, and the code of any button is secure and does not disclose any information about a product or its price. When the user clicks a Buy Now button, he/she is taken to the PayPal site, where a page is opened over SSL for the user to fill in the credit card and shipping details. After a successful transaction, control is passed back to my site. I want to know whether there is still any way security could be compromised.

    Read the article

  • What is the Best Internet Provider in the Salt Lake Valley for hosting your future online business from home?

    - by Justin
    This is for people familiar with the ISP scene in Salt Lake. Also, UTOPIA is not available in my neighborhood yet. I'm looking for comparisons between Comcast, Qwest, and especially other providers I'm not aware of. While I will have online backup (of course!), I want to host some things from my own home at the start of my business. Once money starts flowing in, I will move to a hosted provider, but in the meantime I would like a provider that offers fast upload speeds (at least 1 Mb/s; fast download is a given), a static IP, and especially a reasonable price.

    Read the article

  • Replicate portion of an LDAP directory to external server

    - by colemanm
    We're in the process of setting up a Jabber server on Amazon EC2 right now, and we'd like our internal users to authenticate via LDAP so we don't have to create and manage a separate set of user accounts from the master directory in the office. My question is: is there a way to copy, unidirectionally, a segment of our internal LDAP directory (the user accounts OU) to an external LDAP server and authenticate Jabber against that? We're trying to avoid having our externally hosted machines out in the cloud access our internal network directly... If we replicate only a subset of the user accounts, and only in one direction, then a compromise of the external server doesn't necessarily become a critical security breach into our internal network.
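
    If both directories happen to run OpenLDAP, one way to get exactly this one-way, subtree-only copy is a syncrepl consumer on the external server whose searchbase is limited to the user accounts OU. A minimal sketch (all hosts, DNs and credentials below are placeholders):

        # slapd.conf on the external (consumer) server -- pulls only the users OU, read-only
        syncrepl rid=001
          provider=ldap://ldap.internal.example.com
          type=refreshOnly
          interval=00:00:30:00
          searchbase="ou=Users,dc=example,dc=com"
          scope=sub
          bindmethod=simple
          binddn="cn=replicator,dc=example,dc=com"
          credentials=secret

    Jabber then binds against the external copy only, and nothing ever flows back towards the office directory.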

    Read the article

  • Can ping/nmap server, nothing else

    - by lowgain
    I was SSHed into our Ubuntu LAMP server and was just doing an svn update, which hung. I disconnected, and since then I have not been able to SSH in or view any of our websites (neither from my network nor from a remote machine). I would have just assumed the server went down, but I can ping the machine and get really quick responses. Using nmap on the box shows all the normal ports open, so I am confused. This server is hosted remotely in a datacenter; do I have any remaining options except contacting them for support? Thanks!
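
    A few quick checks from another machine can narrow down whether the box is up but wedged (out of memory, stuck on I/O) versus genuinely unreachable; the hostname below is a placeholder:

        ssh -vvv user@server.example.com                     # how far does the SSH handshake get before it hangs?
        nc -vz server.example.com 22                         # does the SSH port still accept a TCP connection?
        nc -vz server.example.com 80                         # does the web port?
        curl -v --max-time 10 http://server.example.com/     # does Apache ever actually answer?

    If the ports accept connections but nothing answers, the kernel is alive (hence the ping and nmap responses) but userland is likely hung, and asking the datacenter for a console look or a reboot is usually the only remaining option.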

    Read the article

  • PDAnet on Android: IP on PC is not the public IP. Where does the NAT take place, PDAnet or Verizon?

    - by lcbrevard
    When using PDAnet on a PC (Windows 7 Ultimate) to USB-tether a Motorola Droid on Verizon 3G, the IP address of the PC appears to be public - 64.245.171.115 (64-245-171-115.pools.spcsdns.net) - but connections show as coming from another public IP - 97.14.69.212 (212-sub-97.14.69.myvzw.com). Someone is performing Network Address Translation - either PDAnet or the Verizon 3G network. Can someone tell me who is doing the NAT? Is it PDAnet or is it Verizon? And is there any possibility of setting up port forwarding, such that connections to the public IP 97.14.69.212 (212-sub-97.14.69.myvzw.com) are forwarded to the PC? We are testing a network protocol that requires either a true public IP or forwarding a range of ports from the public Internet to the system the software runs on (actually Linux hosted by VMware Player or Workstation on a PC running Windows).
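
    One way to narrow down where the translation happens is to compare the address the phone's own 3G interface holds with the two addresses above: if the handset's rmnet interface already carries the 64.245.x address while the outside world sees 97.14.x, the NAT is upstream in Verizon's network; if the handset itself holds the 97.14.x address, PDAnet is doing the translating. A sketch, assuming USB debugging and adb are available (netcfg is the interface-listing tool on Android builds of that era):

        adb shell netcfg
        rem then compare the rmnet0 address with what an external "what is my IP" service reports,
        rem e.g. from the tethered PC:
        powershell -Command "(New-Object Net.WebClient).DownloadString('http://icanhazip.com')"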

    Read the article

  • Recovering database files from a corrupted VHD

    - by Apocalypse9
    We have a SQL Server instance hosted on a virtual machine. Our hosting company updated/restarted the server, and for some reason the virtual machines became unbootable. We've spoken to Microsoft and used a few higher-level tools to attempt to recover the virtual machines, but were unsuccessful. When browsing the file system, the database folder doesn't even appear. I'm wondering if there are any lower-level tools that might be able to find and copy the database files. As far as I know the physical hard drive is OK, so I'm hoping there may be some way to recover the files themselves even if the rest of the virtual machine's file system is a loss. Obviously we're in a bit of a bind, and any help/suggestions are very much appreciated.
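
    If the hosting company can hand over the .vhd file itself, it is sometimes possible to pull individual files out of it even though the guest won't boot, for example by attaching it read-only on a Linux box with the libguestfs tools. A rough sketch (paths are placeholders, and this assumes the NTFS metadata inside the image is still mostly intact):

        guestfish --ro -a corrupted.vhd -i            # browse the guest filesystem interactively
        # or mount it read-only and copy the .mdf/.ldf files out:
        guestmount --ro -a corrupted.vhd -i /mnt/vhd
        cp "/mnt/vhd/Program Files/Microsoft SQL Server/MSSQL.1/MSSQL/DATA/"*.mdf /recovery/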

    Read the article

  • How to properly shrink the disk size of a server that is being backed up off-site?

    - by JKM
    We have a virtual machine (let's call this one source) that is hosted locally with a 1 TB virtual disk, and it has been replicated remotely via Veeam to an off-site server (let's call that copy clone). However, some server configuration changes have made source not require as much disk space. I am contemplating shrinking the disk of source, or using the standalone converter to create a new image with a much smaller disk (about 300 GB). The reason is to shorten the "Discovering replica VM" step during the replication process. My question is: what happens to clone when the replication job is next run? Do I need to redo the replication / set up a new backup to create an initial seed for source? Will the job automatically pick up that the disk has shrunk and adjust the disk size of clone appropriately? What is the best method for accomplishing this?

    Read the article

  • HUGE MAC FILTER and scripting

    - by user195917
    I set up a DHCP server on CentOS and apply a MAC filter for my clients. With a small number of clients (10 at most) this is not that hard, but what will I do with 2,000 clients? My idea was to create a list (e.g. "macfilter.lst") and have this list updated from a database. I have two questions. First: how do I create a filter in iptables that takes its info from a file (a file hosted on the server)? Second: any idea how to write a script that updates such a file from a database? Thanks so much for your help.
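
    iptables can't read a file directly, but a short shell script can regenerate the list from MySQL and then turn the one-MAC-per-line file into rules with the mac match module, as sketched below (database, table and chain names are all placeholders):

        #!/bin/bash
        # 1) regenerate macfilter.lst from the database, one MAC address per line
        mysql -N -u dhcp -p'secret' dhcpdb \
            -e "SELECT mac FROM clients WHERE active = 1" > /etc/macfilter.lst

        # 2) rebuild a dedicated chain from the file
        iptables -N MACFILTER 2>/dev/null        # create the chain if it doesn't exist yet
        iptables -F MACFILTER                    # then empty it
        while read -r mac; do
            [ -n "$mac" ] && iptables -A MACFILTER -m mac --mac-source "$mac" -j RETURN
        done < /etc/macfilter.lst
        iptables -A MACFILTER -j DROP            # anything not in the list is dropped

        # 3) make sure incoming traffic passes through the chain (remove any stale jump first)
        iptables -D INPUT -j MACFILTER 2>/dev/null
        iptables -I INPUT -j MACFILTER

    Run from cron, or triggered after each database change, this keeps the live ruleset in step with the database; note that the mac match only works on chains that see incoming frames (INPUT, FORWARD, PREROUTING).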

    Read the article

  • How to secure svn+ssh checkout users?

    - by vvanscherpenseel
    All our SVN repositories are hosted on a dedicated machine on which all the developers have access. Every now and then we need to checkout a repository on a machine we don't own or operate ourselves. Currently we all use our own system (SSH) account for this, but instead I would like to use some generic 'checkoutsvn' user that can be used for this. This user is only used for checking out from a repository, but should not be allowed to log in to the system (no shell access). I tried to do this by setting the default shell of that account to /sbin/nologin but then SVN fails, as apparently svn+ssh requires shell access. How do you do this? Is there a good solution for this?
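
    svn+ssh does need to execute a command on the server (svnserve -t), but it does not need an interactive login; the usual trick is to leave the shared account with a normal shell and instead pin every developer's key to a forced svnserve command in that account's ~/.ssh/authorized_keys. A sketch, with repository root, key material and usernames as placeholders:

        # ~checkoutsvn/.ssh/authorized_keys -- one line per developer key
        command="svnserve -t -r /var/svn --tunnel-user=alice",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3... alice@example.com
        command="svnserve -t -r /var/svn --tunnel-user=bob",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa AAAAB3... bob@example.com

    With the forced command in place the keys can only ever run svnserve in tunnel mode, so even though the account technically has a shell, it cannot be used for an interactive session.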

    Read the article

  • Mail sent from local Postfix marked as "possible phishing" in Outlook

    - by leo grrr
    Hi folks, sorry for the newbie question -- this is not my area of expertise by a long shot. I work at a small development shop and we finally got around to doing code reviews. (Yay!) I set up an instance of Review Board -- an open-source code review tool -- on one of our local servers, but it doesn't seem to like talking to our hosted Exchange server to send notification emails. I decided to just install Postfix on that same box and send mail from localhost, which is working much more reliably, but Outlook disables all links in the email announcements and marks them as possible phishing. What is making these emails look suspicious, and what can I change? Would the best thing be to figure out how to relay to Exchange from Postfix? Thanks!
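
    If relaying through the hosted Exchange turns out to be the cleaner fix (mail sent directly from an unknown box on behalf of the company domain often trips SPF/sender checks, which is one plausible reason for the phishing flag), Postfix only needs a relayhost plus SASL/TLS settings. A minimal sketch, with hostname, port and credentials as placeholders, assuming the Exchange side permits authenticated SMTP relay:

        # /etc/postfix/main.cf
        relayhost = [smtp.hosted-exchange.example.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
        [smtp.hosted-exchange.example.com]:587  reviewboard@example.com:secret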

    Read the article

  • How secure is cloud computing?

    - by Rhubarb
    By secure, I don't mean the machines themselves and access to them from the network. I mean, and I suppose this could apply to any kind of hosting service, when you put all your intellectual property onto a hosted provider, what happens to the hard disks as the provider cycles through them? Say I've invested millions into my software, and the information and data that I have is valuable; how can I be sure it isn't read off old disks as they're recycled? Is there some kind of standard to look for that ensures a provider is going to use the strictest form of intellectual property protection? Is SAS 70 applicable here?

    Read the article

  • Setting up IIS7 to mimic a GoDaddy shared hosting plan

    - by NerdFury
    I host multiple domains on a GoDaddy shared hosting account. I would like to set up a website locally in IIS 7 that mimics the setup of my hosted account so that I can test and debug applications locally before deploying, since debugging after deploying, or discovering issues only after deploying, is frustrating. I have created a folder WebRoot and put my main application in that folder. I created a website in IIS 7 and pointed it at that folder. I set up bindings with a fake domain and created a matching entry in my hosts file to make the fake domain point at 127.0.0.1. I then created a folder www.otherdomain.com under WebRoot, created an application underneath my website, and pointed it at this folder. I can't find a way to add bindings to the web application so that it is reached via a different fake domain rather than as a subdirectory under my root domain. What would be the proper way to set up IIS to best simulate the environment on the GoDaddy servers?
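
    Bindings are a site-level concept in IIS, so an application under the existing site can't carry its own host name; the closer simulation is a second site pointed at the subfolder, bound to another fake host name that is also added to the hosts file. A sketch using appcmd, with paths and host names as placeholders:

        %windir%\system32\inetsrv\appcmd add site /name:"otherdomain" ^
            /bindings:http/*:80:www.otherdomain.local ^
            /physicalPath:"C:\inetpub\WebRoot\www.otherdomain.com"

        rem and in C:\Windows\System32\drivers\etc\hosts:
        rem 127.0.0.1   www.otherdomain.local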

    Read the article

  • Migrating storage to a different controller

    - by bellocarico
    Hello, I've just purchased a couple of Adaptec controllers (2405/5405) for my ESXi 4.0 U1 servers. Currently ESXi and a couple of VMs are hosted on a single SATA boot disk connected to an onboard NVIDIA non-RAID controller. I know that it's possible to migrate from a single disk to RAID 1 with Adaptec, and I'm pleased with that, but I'm not sure whether ESXi already has the right drivers installed/loaded for this controller. Is there any way I can check this? Is ESXi clever enough to recognize the new hardware and load the right module? Thanks
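
    As a quick check, the ESXi console (local Tech Support Mode or SSH) can show which storage drivers are present and loaded; the 2405/5405 family is normally handled by the aacraid module. A sketch of the sort of commands involved (exact output depends on the ESXi build):

        vmkload_mod -l | grep -i aac       # is the Adaptec driver module loaded?
        esxcfg-scsidevs -a                 # which adapters ESXi currently sees, and the driver behind each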

    Read the article

  • FeedValidator & FeedBurner get 404 when accessing WordPress RSS feeds with permalinks enabled.

    - by Wazbaur
    I'm helping a friend set up a self-hosted WordPress blog plus FeedBurner, and I'm seeing a problem with the feeds that I find somewhat mysterious. Using the default permalink structure (e.g., ?p=123) everything works as expected; I can follow the feed in Google Reader, navigate to it manually, and set it up in FeedBurner. However, once I switch away from the default permalink structure, FeedBurner and FeedValidator both report that accessing the feed returns HTTP 404, and Google Reader no longer shows new posts (I'm assuming for the same reason), yet I can still navigate to the feed in a browser. When I do that, nothing appears to be wrong; there is a feed there and it contains all the posts I expect it to have. I've restarted the FeedBurner and Reader setup from scratch after changing the link structure, so I don't think they're doing anything silly like looking at the feed at its old address. I've seen people with similar problems in various other places, but there doesn't seem to be a good answer anywhere.
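
    One thing worth ruling out is the rewrite configuration itself, since pretty permalinks only work when mod_rewrite routes /feed/ URLs to index.php. For reference, this is the stock block WordPress writes into .htaccess for a root install on Apache (assuming that is the setup here):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress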

    Read the article

  • Enable POST on IIS 7

    - by user26712
    Hello, I have a WCF service that requires the POST verb. This service is hosted in an ASP.NET application on IIS 7. I have successfully confirmed that GET works, but POST does not. I have the following two operations; GET works, POST does not:

        [OperationContract]
        [WebInvoke(UriTemplate = "/TestPost", BodyStyle = WebMessageBodyStyle.Bare, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
        public string TestPost() { return "great"; }

        [OperationContract]
        [WebGet(UriTemplate = "/TestGet", BodyStyle = WebMessageBodyStyle.Bare, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
        public string TestGet() { return "great"; }

    When I try to access TestPost, I receive a message that says: "Method not allowed". Can someone help me configure IIS 7 to allow POST requests? Thank you!
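
    One frequent cause of "Method not allowed" (HTTP 405) for POST on IIS 7 is the WebDAV module intercepting the verb before it ever reaches the WCF handler; whether that applies here would need checking, but removing WebDAV for just this application is a low-risk first test. A sketch for the application's web.config:

        <system.webServer>
          <modules>
            <remove name="WebDAVModule" />
          </modules>
          <handlers>
            <remove name="WebDAV" />
          </handlers>
        </system.webServer>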

    Read the article

  • Cloud services can't be reached from complex customer infrastructure

    - by Nock
    We have several services running in a cloud; they are all hosted on Windows Server 2012 R2, and each has a public IP address and a specific port. Some of our customers can't reach them because, for "some reason", the ports are blocked by a firewall between them and us (some customers are using a shared internet connection in a multi-tenant office and can't change the firewall configuration). Well, you get it: we can't get every firewall changed to allow the communication. Our customers all run Windows 7 at least. What is the best workaround in such a case, using Microsoft (Windows Server) technologies? The best would be some kind of tunneled communication or VPN, but the customer should still be able to access his/her enterprise resources. By the way, today we use IPsec via Windows Firewall to secure the communication; is IPsec tunneling a solution for us? Otherwise, is there a service in Windows to enable some kind of VPN between a client and a server, but only for a given set of servers?

    Read the article

  • Getting 404 in Android app while trying to get xml from localhost

    - by Patrick
    This must be something really stupid; I've been trying to solve this issue for a couple of days now and it's really not working. I searched everywhere and there probably is someone with the same problem, but I can't seem to find it. I'm working on an Android app, and this app pulls some XML from a website. Since this website is down a lot, I decided to save the XML and run it locally. Here is what I did: I downloaded the kWs app for hosting the downloaded XML file, put the file in the right directory and could access it through the mobile browser, but not with my app (same code as I used when pulling it from the other website, not hosted by me; the only difference was the URL, obviously). So I tried to host it on my PC and access it with my app from there. Again the same result: the mobile browsers had no problem finding it, but the app kept saying 404 Not Found: "The requested URL /test.xml&parama=Someone&paramb= was not found on this server." Note: don't mind the two parameters I am sending; I needed those to get the right data from the website that wasn't hosted by me. My code:

        public String getStuff(String name){
            String URL = "http://10.0.0.8/test.xml";
            ArrayList<NameValuePair> params = new ArrayList<NameValuePair>(2);
            params.add(new BasicNameValuePair("parama", name));
            params.add(new BasicNameValuePair("paramb", ""));
            APIRequest request = new APIRequest(URL, params);
            try {
                RequestXML rxml = new RequestXML();
                AsyncTask<APIRequest, Void, String> a = rxml.execute(request);
                ...
            } catch(Exception e) {
                e.printStackTrace();
            }
            return null;
        }

    That should be working correctly. Now the RequestXML class part:

        class RequestXML extends AsyncTask<APIRequest, Void, String>{
            @Override
            protected String doInBackground(APIRequest... uri) {
                HttpClient httpclient = new DefaultHttpClient();
                String completeUrl = uri[0].url;
                // ... Add parameters to URL ...
                HttpGet request = null;
                try {
                    request = new HttpGet(new URI(completeUrl));
                } catch (URISyntaxException e1) {
                    e1.printStackTrace();
                }
                HttpResponse response;
                String responseString = "";
                try {
                    response = httpclient.execute(request);
                    StatusLine statusLine = response.getStatusLine();
                    if(statusLine.getStatusCode() == HttpStatus.SC_OK){
                        // ..

    It crashes here, because statusLine.getStatusCode() returns a 404 instead of a 200. The XML is just plain XML, nothing special about it. I changed the contents of the .htaccess file to "ALLOW FROM ALL" (which works, because the browser on my mobile device can access it and shows the correct XML). I am running Android 4.0.4, and I am using the default browser AND Chrome on my mobile device. I am using MoWeS to host the website on my PC. Any help would be appreciated, and if you need to know anything before you can find an answer to this problem, I'll be more than happy to give you that info. Thank you for your time! Cheers.

    Read the article

  • Entourage to Outlook Migration questions

    - by George Bluff
    I am currently migrating a user's information from a POP email account to my Exchange server. I have already migrated them over to my hosted Exchange, and their email is flowing properly. Now, the user is moving from Entourage on a Mac (10.7) to Outlook 2010 on a PC (Windows 7). I was wondering what the easiest way is to migrate him, since there are no .pst files. I have been able to get his email over by dragging the inbox from Entourage to the desktop, then converting the files to .eml using IMAPSize, importing them into Outlook Express (which only works on Windows XP), then exporting to a .pst, then importing it into the new account. It takes a while with large mailboxes, but it works. The issue I am now having is with calendar items. I exported the calendar and got a folder with all the .ics files, but Outlook 2010 doesn't seem to have an easy way to import all of them. Any thoughts?
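
    One possible workaround for the calendar side is to script the import through Outlook's COM object model, which can open .ics files via OpenSharedItem. A rough PowerShell sketch, untested against this particular setup; the folder path is a placeholder, and single-event .ics files import more predictably than one large multi-event export:

        $outlook = New-Object -ComObject Outlook.Application
        $session = $outlook.GetNamespace("MAPI")
        Get-ChildItem "C:\export\calendar\*.ics" | ForEach-Object {
            $item = $session.OpenSharedItem($_.FullName)   # open the appointment from the .ics file
            $item.Save()                                   # save it into the default calendar
        }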

    Read the article

  • LAMP server has gone down a few times. Ideas for server optimization?

    - by MattB
    Hi all, Our production web server has gone down a few times over the course of the last half year. In the end, we've needed to contact the web host and have them restart as I'm unable to even SSH in. This appears to only affect the web server and not the MySQL database server which is separate. When it affects the web server, all hosted websites time out. I'd like to examine web server optimization/corrections to get to the root of this issue. Any recommendations on how to proceed with that? I'm sure log files would play a role. I'm able to find my way around a Linux-based server and make needed changes, but would be interested in any tips I may not have thought of yet. It may be best for us to speak with an outside consultant as another option. Thanks.
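
    Log files are indeed the first stop; when a LAMP box stops answering even SSH while still responding to ping, memory exhaustion from too many Apache children is a common culprit, so after the next incident (or against the current logs) something like the following is worth a look. Paths assume a Debian/Ubuntu layout and are placeholders for whatever the distro actually uses:

        tail -n 300 /var/log/apache2/error.log             # "MaxClients reached" or segfault messages?
        grep -iE "out of memory|oom-killer" /var/log/syslog /var/log/kern.log
        free -m                                            # memory headroom right now
        ps aux --sort=-%mem | head -n 15                   # what is actually eating RAM

    If the error log shows MaxClients being hit while memory runs out, capping MaxClients (and MaxRequestsPerChild) in the prefork MPM config so Apache can never outgrow physical RAM is the usual starting point.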

    Read the article

  • If I let Google handle my emails for my domain, my WordPress site won't send out emails anymore

    - by Fulvio
    Since I decided to let Google handle all email for my domain, while the domain itself is hosted on a third-party server, emails sent out by my WordPress installation no longer work. My supposition is that since all email is being routed to Google, the account on that server for that domain is unable to send out emails. I definitely wish to keep using Google services for handling my email, since that comes with all the advantages of a Google account, but I need my WordPress installation to send out administrative emails. I run my server with cPanel. How do I configure that account and/or WordPress so it can still send out emails? I don't need people to be able to answer the emails sent from the server (eventually I might set a reply-to address, perhaps). Thanks

    Read the article

  • Cisco 2900 series router - 3x 3g HWIC - Can you use the same subnet for each HWIC?

    - by Lance
    We host a site with a 2900 series router with 3x 3G HWIC cards installed. It is hosted with Telstra and plugs into our corporate WAN. Each card authenticates against RADIUS and advertises into the WAN a route for the subnet it serves. We have always used the same advertised subnet on each. Telstra have advised us that this could be the cause of some drop-out issues, whereby some services work for some people and not for others; they are effectively saying that their system will only use one of these at a time, even though we can see each interface is online and assigned a WAN IP address. Has anyone out there configured a multi-HWIC setup before, and if so, are you using different subnets for each or the same?

    Read the article

  • How to set default permissions for automounted FAT drives in Ubuntu 9.10?

    - by piman
    I've got many FAT32 drives that I'd like to mount in Ubuntu such that they have permission mode 700 for directories and 600 for all other files. By default, they get 755 for all files, which is not particularly useful since almost no non-directories should be executable, and it screws up version control repos hosted on the drives. "Back in the day" I would have listed the drives in /etc/fstab with the umask/dmask I want, and there was no such thing as a default. These days, drives automount under their volume names, which is great, except that now I have no idea how to set the default. I have tried changing the /system/storage/default_options/vfat/mount_options gconf key with no apparent effect. It was 077 initially but the mounted drive reflected a default of 022; changing it and re-inserting the drives resulted in the files still having permission bits of 755.
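
    As a point of comparison, the old-style /etc/fstab entry that yields exactly 600/700 uses fmask and dmask rather than a single umask, and pinning the drives in fstab again still overrides the automounter's defaults if the gconf route keeps failing. A sketch, with the UUID and mount point as placeholders:

        # /etc/fstab -- files become 600, directories 700, owned by uid/gid 1000
        UUID=1234-ABCD  /media/flash  vfat  user,noauto,uid=1000,gid=1000,fmask=0177,dmask=0077  0  0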

    Read the article

  • dig gets the right result from DNS server, but name still fails to resolve

    - by EMiller
    Under what conditions would the following occur? From a given OS X machine on an internal network:

        $~ cat /etc/resolv.conf
        nameserver 10.102.120.7
        nameserver 10.102.120.2

    From the same machine:

        $~ dig @10.102.120.7 in.local
        <snip> ...
        ;; QUESTION SECTION:
        ;in.local.              IN      A

        ;; ANSWER SECTION:
        in.local.       43200   IN      A       10.102.123.30
        <snip> ...

    And yet, this workstation cannot ping in.local, nor load pages hosted by Apache on that machine. 10.102.123.30 is definitely up (two OS X machines I know of fail to resolve in.local, but other machines on the network can). I have also checked their /etc/hosts to see if anything there might interfere... Not sure what else to check...
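
    Worth noting that dig talks straight to the name server, while ping and the browser go through the OS X system resolver, which treats names ending in .local specially (multicast DNS/Bonjour). Two commands show what the system resolver, as opposed to dig, actually does with the name; both exist on stock OS X:

        $~ scutil --dns                              # resolver configuration, including how *.local is handled
        $~ dscacheutil -q host -a name in.local      # resolves through the same path ping and Safari use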

    Read the article
