Search Results

Search found 13804 results on 553 pages for 'amazon elastic ip'.

  • API to lookup product information by UPC?

    - by officespace672
    Is there an API that allows lookup of product information by UPC? I know that Amazon has the Product Advertising API, but I don't think it can be used for any purpose other than sending traffic to amazon.com, as per their license agreement here. Specifically, my application would not have "the principal purpose of advertising and marketing the Amazon Site and driving sales of products and services on the Amazon Site". Does an API exist whose data I can do anything I want with?

    UPDATE: I want to use such an API from my application, not create one.

  • What kind of IP is this in my Google App Engine log file?

    - by Christian Harms
    I see many normal log lines in my Google App Engine application, but today I got this instead of the usual 4-part number: 2a01:e35:2f20:f770:6c54:3ee8:67fb:df8. What format is this? (IPv6 addresses have eight colon-separated groups; MAC addresses have six.)

    Normal log line:

        187.14.44.208 - - [19/Mar/2010:14:31:35 -0700] "GET /geo_data.js HTTP/1.1" 200 776 "http://www.xxx.com.br/spl19/index.php?refid=gv_av_ri" "Mozilla/5.0 (Windows; U; Windows NT 5.1; pt-BR; rv:1.9.2) Gecko/20100115 Firefox/3.6 (.NET CLR 3.5.30729),gzip(gfe)"

    The unusual log line:

        2a01:e35:2f20:f770:6c54:3ee8:67fb:df8 - - [18/Mar/2010:17:00:37 -0700] "GET /geo_data.js HTTP/1.1" 500 450 "http://www.xxx.com.br/spl19/index.php?refid=cm_av_ri" "Mozilla/5.0 (Windows; U; Windows NT 6.1; pt-PT; rv:1.9.2) Gecko/20100115 Firefox/3.6,gzip(gfe)"
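
    In case it helps to classify such entries programmatically, here is a minimal Java sketch (the literal below is the address from the log line above; parsing a literal address performs no DNS lookup):

        import java.net.Inet6Address;
        import java.net.InetAddress;

        public class LogAddressCheck {
            public static void main(String[] args) throws Exception {
                // A literal address is parsed directly; no DNS lookup happens.
                InetAddress addr = InetAddress.getByName("2a01:e35:2f20:f770:6c54:3ee8:67fb:df8");
                // Prints "IPv6" for the log line above.
                System.out.println(addr instanceof Inet6Address ? "IPv6" : "IPv4");
            }
        }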

  • What data structure should I use to communicate via TCP/IP in Java?

    - by rmaster
    Let's assume I want to send many messages between two Java programs over TCP sockets. I think the most convenient way is to send objects, something like:

        ObjectOutputStream oos = new ObjectOutputStream(s.getOutputStream());
        oos.writeObject(someSerializableObject);
        oos.flush();

    I want to send strings, numbers, HashMaps, and boolean values. How can I do this using, for example, one object that can store all of those properties? I thought about an ArrayList, which is serializable and can hold anything, but that is not an elegant way. I want to send different types of data because the user can choose from a variety of operations the server can perform. Any advice?
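
    One common pattern (a sketch; the class name and fields are illustrative, not from the question) is a single serializable envelope class that carries all the field types mentioned:

        import java.io.Serializable;
        import java.util.HashMap;

        public class Message implements Serializable {
            private static final long serialVersionUID = 1L;

            // One envelope object for everything the client may send.
            public String command;
            public int number;
            public boolean flag;
            public HashMap<String, String> properties = new HashMap<String, String>();
        }

    The sender then calls oos.writeObject(message), and the receiver casts the result back: Message m = (Message) ois.readObject();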

  • Configuring Wireless on Cisco 851W

    - by Aequitarum Custos
    A power surge or something similar wiped our router's configuration, and our last backup predates the wireless setup. We have not been able to reconfigure the wireless since then, so I am curious whether anyone here can determine what configuration is needed. We are using a Cisco 851W running 12.4(15)T9. We would like to use WPA encryption and have the wireless on the same network as the rest of the office. The config file is below:

        User Access Verification

        Building configuration...

        Current configuration : 3857 bytes
        !
        version 12.4
        no service pad
        service timestamps debug datetime msec
        service timestamps log datetime msec
        service password-encryption
        no service dhcp
        !
        hostname BOB
        !
        boot-start-marker
        boot-end-marker
        !
        enable secret 5 *********************
        !
        no aaa new-model
        !
        dot11 syslog
        no ip source-route
        !
        ip cef
        no ip bootp server
        ip domain name BOB.com
        ip name-server 61.11.1.1
        ip name-server 61.11.1.2
        !
        username BOBB privilege 15 password 7 *************************
        !
        archive
         log config
          hidekeys
        !
        ip tcp synwait-time 10
        !
        interface FastEthernet0
         no cdp enable
        !
        interface FastEthernet1
         no cdp enable
        !
        interface FastEthernet2
         no cdp enable
        !
        interface FastEthernet3
         no cdp enable
        !
        interface FastEthernet4
         description WAN Connection$ETH-WAN$
         ip address 61.11.1.14 255.255.254.0
         ip nat outside
         ip virtual-reassembly
         duplex auto
         speed auto
         no cdp enable
        !
        interface Dot11Radio0
         no ip address
         shutdown
         !
         encryption mode ciphers tkip
         speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0
         station-role root
         no cdp enable
        !
        interface Dot11Radio0.1
         encapsulation dot1Q 1 native
         no cdp enable
         bridge-group 1
         bridge-group 1 subscriber-loop-control
         bridge-group 1 spanning-disabled
         bridge-group 1 block-unknown-source
         no bridge-group 1 source-learning
         no bridge-group 1 unicast-flooding
        !
        interface Dot11Radio0.20
         ip access-group Guest-ACL in
         no cdp enable
        !
        interface Vlan1
         description Internal Network
         ip address 192.168.2.60 255.255.255.0
         ip nat inside
         ip nat enable
         ip virtual-reassembly
        !
        ip forward-protocol nd
        ip route 0.0.0.0 0.0.0.0 61.11.2.14
        !
        ip http server
        no ip http secure-server
        ip nat inside source list 1 interface FastEthernet4 overload
        !
        ip access-list extended Guest-ACL
         deny ip any 192.0.0.0 0.0.0.255
         permit ip any any
        !
        access-list 1 permit 192.0.0.0 0.0.0.255
        access-list 100 remark SDM_ACL Category=2
        access-list 100 permit ip 192.0.0.0 0.0.0.255 any
        no cdp run
        !
        control-plane
        !
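
    For reference, a minimal sketch of the missing wireless piece for WPA-PSK (the SSID name and key are placeholders, not from the original config, and this assumes the existing bridge-group 1 setup on Dot11Radio0.1 is what places wireless clients on the internal network):

        dot11 ssid OFFICE
           authentication open
           authentication key-management wpa
           guest-mode
           wpa-psk ascii 0 YourPreSharedKey
        !
        interface Dot11Radio0
         encryption mode ciphers tkip
         ssid OFFICE
         no shutdown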

  • Communicating with all network computers regardless of IP address

    - by Stephen Jennings
    I'm interested in finding a way to enumerate all accessible devices on the local network, regardless of their IP address. For example, in a 192.168.1.X network, if there is a computer with a 10.0.0.X IP address plugged into the network, I want to be able to detect that rogue computer and preferably communicate with it as well. Both computers will be running this custom software. I realize that's a vague description, and a full solution to the problem would be lengthy, so I'm really looking for help finding the right direction to go in ("Look into using class XYZ and ABC in this manner") rather than a full implementation.

    The reason I want this is that our company ships imaged computers to thousands of customers, each of which has different network settings (most use the same IP scheme, but a large percentage do not, and most do not have DHCP enabled on their networks). Once the hardware arrives, we have a hard time getting it up on the network, especially if the IP scheme doesn't match, since there is no one technically oriented on-site. Ideally, I want to design some kind of console, used from their main workstation, which looks out on the network, finds all computers running our software, displays their current IP addresses, and allows you to change them. I know this is possible because we sell a couple of pieces of custom hardware with exactly this capability (plug the hardware in anywhere and view it from another computer regardless of IP). I'm hoping it can be done in .NET 2.0, but I'm open to using .NET 3.5 or P/Invoke if I have to.
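
    One direction worth investigating (a sketch, shown in Java only for brevity; the equivalent .NET 2.0 pieces are UdpClient/Socket with broadcast enabled, and the port number here is arbitrary): a beacon sent to the limited-broadcast address 255.255.255.255 is delivered on the local segment regardless of the receiver's configured subnet, so each machine can announce itself and listen for peers.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class DiscoveryBeacon {
            public static void main(String[] args) throws Exception {
                DatagramSocket socket = new DatagramSocket();
                socket.setBroadcast(true);
                // Limited broadcast never crosses a router, but reaches every
                // host on the segment no matter what subnet it is configured for.
                byte[] payload = "DISCOVER".getBytes("US-ASCII");
                socket.send(new DatagramPacket(payload, payload.length,
                        InetAddress.getByName("255.255.255.255"), 9999));
                socket.close();
            }
        }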

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded (rsynced) every hour or so to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (into the hundreds). The servers are geographically dispersed. They are also built automatically, so they are generic, with standard tools, and not bespoke per location. The data is many hundreds of files per day.

    I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive: we simply restrict inbound rsync-over-ssh to a known public key and a directory per known MAC address. There are plenty of holes to pick in this, I know.

    Let's say I write or use a script like s3cmd/s3sync to push up the files. Would I need to manage hundreds of access keys and have each server customized to include them (doable, but key management becomes nightmarish)? Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client that runs the script? (I could deal with a flood of data if someone got into a system.) Having a bucket per remote machine does not seem feasible because of bucket limits. And I don't think I want to use a single common key: if one machine were breached, a malicious hacker could get hold of the filestore key and start deleting data for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world, I just want one of our techs to install a new remote server into a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipe dream or feasible? TIA, Aitch
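
    On the key-management question, one sketch of a direction (an assumption, not from the question: one IAM user per remote server, with the bucket name and per-site prefix as placeholders) is to give each machine credentials that allow only uploads into its own prefix, so a compromised box can write but never read or delete anyone else's data:

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::sensor-uploads/site-0042/*"
          }]
        }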

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working in my Amazon EC2 instance, located in the EU West region. There seems to be something wrong with the path of the repo metadata; is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much.

    cat /etc/redhat-release:

        Red Hat Enterprise Linux Server release 6.2 (Santiago)

    yum repolist:

        Loaded plugins: amazon-id, rhui-lb, security
        https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        repo id                                       repo name                                                         status
        rhui-eu-west-1-client-config-server-6         Red Hat Update Infrastructure 2.0 Client Configuration Server 6       0
        rhui-eu-west-1-rhel-server-releases           Red Hat Enterprise Linux Server 6 (RPMs)                               0
        rhui-eu-west-1-rhel-server-releases-optional  Red Hat Enterprise Linux Server 6 Optional (RPMs)                      0
        repolist: 0

    yum update (I needed to remove the base URLs below because of Server Fault's restrictions for new users):

        Loaded plugins: amazon-id, rhui-lb, security
        [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401"
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again

  • Can I subnet a subnet?

    - by Portman
    Apologies in advance for the botched terminology. I have read the Server Fault subnet wiki, but this is more of an ISP question.

    I currently have a /27 block of public IPs. I give my router the first address in this pool and then use 1-to-1 NAT for all the servers behind the firewall, so that they each get their own public IP. The router/firewall currently uses (actual addresses removed to protect the guilty):

        IP Address:  XXX.XXX.XXX.164
        Subnet mask: 255.255.255.224
        Gateway:     XXX.XXX.XXX.161

    What I would like to do is break my block into two separate /28 subnets, in a way that is transparent to the ISP (i.e., they see me as continuing to operate a single /27). Currently, my topology looks like:

        ISP
         |
        [Router/Firewall]
         |
        [Managed Ethernet Switch]
           /         |         \
        [Server1] [Server2] [Server3] (etc)

    Instead, I would like it to look like:

        ISP
         |
        [Switch]
         /      \
        [Router1]  [Router2]
         |    |     |    |
        [S1] [S2]  [S3] [S4] (etc)

    As you can see, this would partition me into two separate networks. I'm struggling with what the correct IP settings would be on Router1 and Router2. Here's what I have right now:

                     Router1           Router2
        IP Address:  XXX.XXX.XXX.164   XXX.XXX.XXX.180
        Subnet mask: 255.255.255.240   255.255.255.240
        Gateway:     XXX.XXX.XXX.161   XXX.XXX.XXX.161

    Note that normally you would expect Router2 to have a gateway of .177, but I'm trying to get them both to use the gateway originally given to me by the ISP. Is subnetting like this in fact possible, or am I completely botching the most basic concepts?
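
    For reference, the arithmetic implied by the numbers above (assuming the /27 is the .160 block, which the .161 gateway and .164 address suggest):

        /27 block:   .160 - .191  (mask 255.255.255.224, 32 addresses)
        First /28:   .160 - .175  (network .160, usable .161-.174, broadcast .175)
        Second /28:  .176 - .191  (network .176, usable .177-.190, broadcast .191)

    Under this split, the ISP's gateway at .161 falls inside the first /28 only, which is why Router2 would normally need a gateway of its own.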

  • Multiple public IPs through DD-WRT without 1-to-1 NAT

    - by Stephen Touset
    I've done a search here and wasn't able to find anything relevant to my situation. I apologize in advance if I've missed an existing post on the topic. Our ISP has provided us with 6 static IP addresses. We are currently using two of them (plus one for the Comcast-provided router). One of the static addresses routes to our internal network, and the other goes to our VOIP phone system. Unfortunately, the Comcast machine doesn't support QoS, so our VOIP calls have been choppy. We plan to put the Comcast-provided router into bridge mode and replace it with an ASUS RT-N16 running DD-WRT. However, I'm unsure how to set up DD-WRT to function similarly to our existing Comcast router. The Comcast router's WAN IP is the first of our static IP addresses. We did not need to provide an internal LAN IP address — simply connecting machines that use our other public addresses to the LAN ports on the Comcast router is enough for it to route between the connected machines and our internet connection. Is there a way to do a similar setup through the DD-WRT? Thanks in advance.

  • How to redirect all Internet traffic to OpenVPN Server

    - by JuliaS
    I have seen working solutions for forcing Internet traffic to go through the OpenVPN server, but they are all done on Linux. All I want to know is how to add an entry to the route table in Windows to make this happen.

    Connectivity between the client and server is fine; my Windows 7 client can establish a connection to the Windows 2008 server, but once the connection is established, Internet traffic still goes out directly from the local Windows 7 machine. Here are the details:

    Server: Windows 2008 Server with one NIC

        OpenVPN IP address: 192.168.0.1
        Local NIC IP address (connects the server to the Internet): 10.242.69.107

    Client: Windows 7 with one NIC

        OpenVPN IP address: 192.168.0.2
        ISP-allocated IP address: 10.0.8.2 (gateway 10.0.8.1)

    Server OpenVPN config:

        dev tun
        ifconfig 192.168.0.1 192.168.0.2
        secret static.key
        push "redirect-gateway def1"

    Client OpenVPN config:

        remote xxx.xxx.com
        dev tun
        ifconfig 192.168.0.2 192.168.0.1
        secret static.key

    I'm not an expert with adding routes, etc. I would be grateful if someone could let me know how to add this entry to my server/client route table.

    EDIT: Output from the client's netstat -rnv:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0         10.0.8.1       10.0.8.2      20
                 10.0.8.0  255.255.255.252          On-link       10.0.8.2     276
                 10.0.8.2  255.255.255.255          On-link       10.0.8.2     276
                 10.0.8.3  255.255.255.255          On-link       10.0.8.2     276
                127.0.0.0        255.0.0.0          On-link      127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link      127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link      127.0.0.1     306
              192.168.0.0  255.255.255.252          On-link    192.168.0.2     286
              192.168.0.2  255.255.255.255          On-link    192.168.0.2     286
              192.168.0.3  255.255.255.255          On-link    192.168.0.2     286
                224.0.0.0        240.0.0.0          On-link      127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link       10.0.8.2     276
                224.0.0.0        240.0.0.0          On-link    192.168.0.2     286
          255.255.255.255  255.255.255.255          On-link      127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link       10.0.8.2     276
          255.255.255.255  255.255.255.255          On-link    192.168.0.2     286
        ===========================================================================
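
    For what it's worth, the routes that redirect-gateway def1 is meant to create can be added by hand on the client (a sketch; run from an elevated prompt, and <vpn-server-ip> is a placeholder for the public address behind xxx.xxx.com). The two /1 routes override the default route without deleting it, and the host route keeps the tunnel itself reachable via the original gateway:

        route add <vpn-server-ip> mask 255.255.255.255 10.0.8.1
        route add 0.0.0.0 mask 128.0.0.0 192.168.0.1
        route add 128.0.0.0 mask 128.0.0.0 192.168.0.1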

  • SMC8014 to FVS338

    - by Jack
    I have an SMC8014 router/modem that Comcast provided with their business-class service. It was not filtering malicious traffic as aggressively as I had hoped, so I purchased a NetGear ProSafe FVS338, put it behind the SMC8014, and put all my machines behind that. After some brief configuration, all machines can see out to the Internet. I also have a single web server, and I have not been able to configure things so that incoming requests reach it. This is where I need help!

    I would like to have the FVS338 do NAT, so that I can assign a 192.168 address to my web server. I've tried everything I know of and can't get things going. I set the SMC8014 to have a LAN-facing IP of 10.0.0.1, and I assigned the FVS338 a WAN-facing IP of 10.0.0.2. I would like to be able to tell the SMC8014 to just forward all traffic to 10.0.0.2, but I haven't had any success. In my (unfortunately limited) understanding, what I probably want here is a static route, but I don't know how to configure one, or whether it is really what I want. The SMC8014 wants a Destination IP, a Subnet Mask and a Gateway IP. Any help would be appreciated.
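
    For illustration (an assumption, since the question doesn't say: the FVS338's LAN uses 192.168.1.0/24), the static route fields on the SMC8014 would then be filled in as:

        Destination IP: 192.168.1.0
        Subnet Mask:    255.255.255.0
        Gateway IP:     10.0.0.2

    That tells the SMC8014 to hand anything destined for the FVS338's LAN to the FVS338's WAN address; a port forward on the SMC8014 toward 10.0.0.2 would still be needed for inbound web traffic.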

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated MySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over there as well. So that's annoying. I am faced with several options:

    1. Replicate from MySQL to SQL Azure/SQL Server, which seems like it would be lovely; is this possible? I'd consider using a third-party tool and paying $$ if I had to. We're not using anything complicated in the db feature set; it's just data in tables.
    2. Get MySQL working on Microsoft Azure, which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
    3. Go non-realtime and do syncs from MySQL to SQL Azure, which may be somewhat expensive and slower.
    4. Rip out all my MySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.

    Has anyone gotten MySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates, might meet our but-we-need-to-join-some-tables needs, and can easily be run on Amazon and Azure)?

  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize and checked that it's open. iptables is definitely not running. However, I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000):

        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000   (from within EC2: connects fine)
        nmap localhost                                   (output includes the line: 7000/tcp open afs3-fileserver)
        telnet ec2-public-dns.xx.xx.xx.amazon.com 7000   (this time from my local machine: I get "connection refused: Unable to connect to remote host")

    The strange thing is this: if I start Nginx on port 7000, it works and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2. (And it works fine on a different VPS account.) How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
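
    One classic cause that fits these symptoms (a guess, not confirmed by the question): the WebSocket server binds only to a loopback or single-interface address, so local connections succeed while external ones are refused. The listening address can be checked with, for example:

        sudo netstat -tlnp | grep :7000

    If the local-address column shows 127.0.0.1:7000 rather than 0.0.0.0:7000, the server needs to be told to bind to all interfaces.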

  • I'm using a shared server, and as such Gmail marks my email as spam (many From: domains send from the same IP)

    - by chipperyman573
    I have a shared server, meaning many people share the same IP. When I send an email, the domain after the @ differs from that of someone else sharing the same IP, and Gmail marks it as spam. For example:

        My website's IP is 1.2.3.4; my website is mywebsite.com.
        Person 2's website is hosted by the same host, so their IP is also 1.2.3.4; their website is person2.com.
        When they send an email, it gets sent from [email protected].
        When I send an email, it gets sent from [email protected].

    According to Gmail's bulk sender guidelines: "Use the same address in the 'From:' header on every bulk mail you send." Again, the only similarity between our websites is the IP, yet this causes Gmail to mark both our mail as spam. Is there a way to sort this out with Gmail?
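
    One thing that usually helps in this situation (a sketch; the record below reuses the placeholder domain and IP from the example) is publishing an SPF record so Gmail can tie the sending IP to the domain, ideally together with DKIM signing:

        mywebsite.com.  IN TXT  "v=spf1 a mx ip4:1.2.3.4 ~all"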

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM-encoded) and a public key certificate (PEM-encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool, I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate.
        The issuer of a locally looked up certificate could not be found.
        Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM-encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign, they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android versions: go to the VeriSign site, click on the "retail ssl" tab, then on "secure site" and "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates, the solution for Android devices is much the same; you just need to copy and paste their intermediate certs instead.
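
    For completeness, a typical way to assemble the chain file once you have the individual certificates (a sketch; the file names are placeholders) is to concatenate the intermediate first, then the root:

        cat intermediate.pem root.pem > chain.pem

    The contents of chain.pem are what goes into the load balancer's certificate chain box.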

  • Obtaining the correct client IP address when a physical load balancer and a web server with the proxy plug-in sit between the client and WebLogic

    - by adejuanc
    Some load balancers, like BIG-IP, have built-in interoperability with a WebLogic cluster: they set a header named WL-Proxy-Client-IP, which WebLogic understands as identifying the original client IP. The problem comes when you have a web server configured with the WebLogic proxy plug-in between the load balancer and the back-end WebLogic servers: WL-Proxy-Client-IP is not designed to pass through the plug-in. To prevent IP spoofing, the plug-in will not pass on a WL-Proxy-Client-IP header that came in from the previous hop (which in this case is the physical load balancer, but could be anything), so it discards what the load balancer set. Unfortunately, under this architecture the header is useless.

    To get the client IP from WebLogic, you need to configure the extended log format and create a custom field that reads the appropriate header containing the client's IP. On WLS versions prior to 10.3.3, use these instructions. You can create user-defined fields for inclusion in an HTTP access log file that uses the extended log format: identify the field in the ELF log file using the Fields directive, then create a matching Java class that generates the desired output. You can create a separate Java class for each field, or one class can output multiple fields. A sample of the Java source for such a class:

        import weblogic.servlet.logging.CustomELFLogger;
        import weblogic.servlet.logging.FormatStringBuffer;
        import weblogic.servlet.logging.HttpAccountingInfo;

        /* This example outputs the X-Forwarded-For header into a custom
           field called MyOriginalClientIPField */
        public class MyOriginalClientIPField implements CustomELFLogger {
          public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
            buff.appendValueOrDash(metrics.getHeader("X-Forwarded-For"));
          }
        }

    In this case we are using X-Forwarded-For, but it can be changed to whatever header contains the data you need. Compile the class, jar it, and prepend it to the classpath. To compile and package the class:

    1. Navigate to <WLS_HOME>/user_projects/domains/<SOME_DOMAIN>/bin
    2. Set up an environment by executing: $ . ./setDomainEnv.sh  (this includes weblogic.jar in the classpath, so that any of the libraries under the weblogic.* package can be used)
    3. Copy the code above into a file named MyOriginalClientIPField.java
    4. Run javac to compile the class: $ javac MyOriginalClientIPField.java
    5. Package the compiled class into a jar file by executing: $ jar cvf0 MyOriginalClientIPField.jar MyOriginalClientIPField.class  (expected output: "added manifest / adding: MyOriginalClientIPField.class(in = 711) (out= 711)(stored 0%)")
    6. This produces a file called MyOriginalClientIPField.jar

    This way you will be able to get the real client IP when the request passes through a load balancer and a web server before reaching WLS. Since 10.3.3 it is possible to configure a specific header that WLS will check when getRemoteAddr is called. That can be set on the WebServer MBean; in this case, set it to the X-Forwarded-For header coming from the load balancer.

  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me. When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP address? I'd want my application servers to exist in the range of 10.20.0.0/24, preferably being .1, .2, etc., so I can keep my sanity. I guess that the actual IP address is something set in Linux itself, and Xen can't touch that, but then what's the best practice for getting it done? If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC addresses? Do you just have to hardcode a large table of MAC addresses to IP addresses, and then always provision new guests with the correct MAC address on the virtual ethernet adapter? What we currently do is boot up a new instance from an "app server" image and then finalize it with a script that (among other things) modifies the /etc/network/interfaces file to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me?
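
    For the MAC-to-IP table approach, the bookkeeping is at least declarative: in ISC dhcpd it is one stanza per guest (a sketch; the host name, MAC and address are illustrative, with 00:16:3e being the Xen-reserved OUI), and fixing the VIF's MAC at provisioning time keeps the mapping from drifting:

        host appserver1 {
          hardware ethernet 00:16:3e:12:34:56;
          fixed-address 10.20.0.1;
        }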

  • Apache local versus external (domain)

    - by Jessy Houle
    I have an Apache server running on Ubuntu Server 10, using Passenger for Ruby on Rails. I have configured my site under Apache's sites-enabled directory, and I can hit the server with an internal IP address (192.168.X.X) and the site comes back as expected. However, whenever I try to hit the site externally, through either the domain name or the IP address tied to the domain name, the site does not come back. There is a router in the middle with a static IP address, with port forwarding turned on (forwarding 80/443) to the server, and I'm quite confident the issue isn't there. In fact, I even DMZed to the Ubuntu server just to make sure. Also, all router firewall options have been turned off. So here is the question: is there something else I have to do with Ubuntu Server to allow externally requested port 80 traffic? Or are there settings in Apache that need to be set to allow domain or external-IP port 80 traffic through? I'm pretty new to Apache, so please take it a bit easy on me :-) Thank you for your responses. -Jessy Houle
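
    Two quick checks worth adding here (a sketch; the hostname is a placeholder): confirm Apache's Listen directive is not bound to the internal address only, and test the public side from a machine outside the network, since some ISPs block inbound port 80:

        grep Listen /etc/apache2/ports.conf
        curl -I http://your-domain.example/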

  • Can a website see/know my MAC address even if I use a VPN?

    - by ilhan
    I have searched other results and read many of them, but I could not get enough information. My question: can a website see my MAC address, or can it tell that I'm the same person, under these conditions:

    1. I am using a VPN, so I use two IPs: the first is my normal one, the second is the VPN's IP.
    2. I use two browsers to hide from browser fingerprinting, both in incognito mode. I always use one for the normal IP and one for the VPN IP.
    3. I do not know whether the website uses cookies or not.

    Can they collect enough information to prove that these two identities belong to the same person? Is there any other way for them to see that I am the same person? I use different IPs and different browsers, both in incognito mode. I even changed one browser's language to English only, so even if they collect information from the browser, they will see two browsers using different languages.

    (Addition after edit:) So I have changed my IP and browser information, and the website can no longer reach that information to prove I am the same person using two accounts. Then let's come to the title: can they see my MAC address? I think that is the last way they could identify me, and it is my main question. I wrote the information above to show that I changed IPs and took some precautions against browser fingerprinting (by the way, my VPN provider already offers a service for blocking it). My question is whether they can see my MAC address, or anything else that could get me detected, despite all these precautions. And lastly, is there anything extra I can do to stay anonymous? For example, can my system clock or anything else give away information? Thanks in advance.

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" shows processes using a lot of swap space; in fact, much more than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

        Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
        Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

          PID USER   PR  NI  VIRT   RES   SHR S %CPU %MEM   TIME+   SWAP COMMAND
         4121 jboss  20   0 5913m  603m   14m S  0.7  8.7  3:59.90  5.2g java
        22730 root   20   0 2394m  4012  1976 S  2.0  0.1  4:20.57  2.3g PassengerHelper
        20564 rails  20   0 2539m  220m  9828 S  0.3  3.2  0:23.58  2.3g java
         1423 nscd   20   0  877m  1464   972 S  0.0  0.0  0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 GB of swap space, which is definitely impossible since there's only 1 GB allocated and none is in use (probably because there's still 1.8 GB of RAM free). And here are the results of uname -a:

        Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL 5 and RHEL 6) with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
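
    A detail worth noting (an observation about procps top, not from the original post): top's SWAP column is not measured but computed as VIRT minus RES, i.e. everything not resident, including file-backed mappings that were never swapped out. The numbers above are consistent with that:

        jboss:  5913m (VIRT) - 603m (RES)  = 5310m, which top rounds to 5.2g
        nscd:    877m (VIRT) - 1464k (RES) =  876m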

  • Static DHCP binding

    - by Alex
    Good time of day, SF people. I created a manual DHCP binding entry on a Cisco router so that a client always gets the same lease. The client wants the same address on both systems of his dual-boot Linux setup. He gets an IP address leased successfully on one of the dual-boot operating systems; when he reboots into the other one, he gets a lease for a completely different address. I don't get it. The MAC addresses are the same (we checked with ifconfig), so what could be happening here? Why is the router confused? Or is it something else? Also, how can I check which DHCP server I got my lease from (on Linux)?

    Configuration on the Cisco:

        ip dhcp pool MANUAL_BINDING0001
         host 192.168.0.64 255.255.255.0
         hardware-address dead.beef.1337
         dns-server 192.168.8.11
         default-router 192.168.0.254
         domain-name verynicedomainigothere.cn

    PS: Is it mandatory to use the client-name configuration line?
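
    One possible explanation worth checking (an assumption, not confirmed here): the two installs may send different DHCP client-identifiers, and IOS matches a manual binding on the client-id before the hardware address, so identical MACs can still receive different leases. As for the last question, on a dhclient-based distribution the issuing server is recorded with each lease (a sketch; the lease file path varies by distribution, e.g. under /var/lib/dhclient/ on Red Hat derivatives):

        grep dhcp-server-identifier /var/lib/dhcp/dhclient.leases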

  • Is there a limit to how many sites can be hosted on a single IP address when using HTTP Host Headers on Windows 2008?

    - by Kev
    For reasons that are lost in the mists of time, our older Windows (2000, 2003) servers have been configured with an "Administrative" IP address and three further "Hosting" IP addresses. There are also additional IPs for sites with SSL certificates. The "Administrative" IP address is where all our internal provisioning, monitoring and other such apps are bound. We lock this down and don't permit access to it from the outside world (other than over our VPN). The three "Hosting" IP addresses are used for IIS website hosting (in conjunction with host headers). Historically, new site IP address allocations have been rotated through these three IP addresses. I'm not really sure why. I'm building a new batch of servers and I'm considering having just a single hosting IP address. Our servers can host up to 1200 sites on a single machine. Is there a technical limit to the number of IIS sites that can bind to a single IP address? Our Linux platform seems to do just fine with a single shared IP plus host headers. I initially thought this might be an SEO thing, but given that IPv4 address space conservation is paramount, I hardly think Google or other search engines could reasonably penalise site rankings just because hundreds of sites hang off the same IP.

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: an Amazon EC2 m1.medium instance running Ubuntu 12.04 and Apache 2.2.22, serving a Drupal site backed by a MySQL database server.

    RAM info:

        ~$ free -gt
                     total       used       free     shared    buffers     cached
        Mem:             3          1          2          0          0          0
        -/+ buffers/cache:          0          2
        Swap:            0          0          0
        Total:           3          1          2

    Hard drive info:

        Filesystem   Size  Used  Avail  Use%  Mounted on
        /dev/xvda1   7.9G  4.7G   2.9G   62%  /
        udev         1.9G  8.0K   1.9G    1%  /dev
        tmpfs        751M  180K   750M    1%  /run
        none         5.0M     0   5.0M    0%  /run/lock
        none         1.9G     0   1.9G    0%  /run/shm
        /dev/xvdb    394G  199M   374G    1%  /mnt

    The problem: about two days ago the site started failing because the MySQL server was killed by the kernel's OOM killer, with the following message:

        kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld
        kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child
        kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB
        kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal
        kernel: [2963686.169294] init: mysql main process ended, respawning

    That says the process was occupying 0.9 GB of virtual memory, but my RAM had 2 GB free. I understand that in Linux, applications can allocate more memory than is physically available; I don't know if this is the problem, and it's the first time it has happened. Obviously, MySQL tries to restart, but apparently there's no memory for it, and it won't come back up. Here is its error log:

        Plugin 'FEDERATED' is disabled.
        The InnoDB memory heap is disabled
        Mutexes and rw_locks use GCC atomic builtins
        Compressed tables use zlib 1.2.3.4
        Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        Completed initialization of buffer pool
        Fatal error: cannot allocate memory for the buffer pool
        Plugin 'InnoDB' init function returned error.
        Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        Unknown/unsupported storage engine: InnoDB
        [ERROR] Aborting
        [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. So I thought of the MaxClients parameter in apache.conf and went to check it. It was set at 150; I decided to drop it to 60:

        <IfModule mpm_prefork_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_worker_module>
            ...
            MaxClients 60
        </IfModule>
        <IfModule mpm_event_module>
            ...
            MaxClients 60
        </IfModule>

    Once I did that and restarted the apache2 service, all went smoothly for about three quarters of a day. But that night the MySQL service shut down once again; this time it was not killed at Apache's expense but invoked the OOM killer itself:

        kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
        kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0
        kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal thing to do is change the kernel's overcommit behaviour by adding the following to /etc/sysctl.conf:

        vm.overcommit_memory = 2
        vm.overcommit_ratio = 80

    so that no overcommit will take place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator and have only basic knowledge. Thanks a bunch in advance.
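
    Since free reports no swap at all and the instance-store volume at /mnt is sitting nearly empty, one mitigation worth considering (a sketch; the size is arbitrary, and this eases the symptom rather than fixing the underlying memory pressure) is adding a swap file there:

        sudo dd if=/dev/zero of=/mnt/swapfile bs=1M count=2048
        sudo chmod 600 /mnt/swapfile
        sudo mkswap /mnt/swapfile
        sudo swapon /mnt/swapfile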
