
  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

        IP               Pages   Hits   Bandwidth
        85.xx.xx.xxx     236     236    735.00 KB
        195.xx.xxx.xx    164     164    533.74 KB
        95.xxx.xxx.xxx   90      90     293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way you could visit my site and use less than 1 MB of bandwidth. You might say they could be browsing the site with a browser or plug-in that doesn't download images, JS/CSS files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/JS/etc. files? The only thing I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs. But I'm curious: is it possible that this person is a legitimate user, or at the very least, not intending to be harmful?
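
    If banning turns out to be the right call, a minimal sketch in Apache 2.2 .htaccess syntax (the addresses are placeholders standing in for the masked IPs in the log above, and AllowOverride must permit Limit directives):

        # Block the suspected scanner IPs at the web-server level
        Order Allow,Deny
        Allow from all
        Deny from 85.0.0.0/24
        Deny from 195.0.0.0/24
        Deny from 95.0.0.0/24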

  • Troubleshooting wireless network connectivity

    - by taserian
    I'm currently running Ubuntu 10.10, and I'm having trouble keeping my wireless connection alive. After rebooting, I get about 5-10 minutes of a good-speed connection; afterwards, the connection just zeroes out. I've put together a shell script that buys me another 10 minutes or so of connectivity. Script contents below:

        #!/bin/bash
        sudo ifconfig wlan0 up
        echo "Enabling wireless device..."
        sleep 5
        sudo iwconfig wlan0 essid MyNetworkName
        echo "Connecting to network..."
        sleep 10
        sudo dhclient wlan0
        echo "Getting IP address..."
        sleep 5
        echo "Done. Closing window..."
        sleep 5

    Shortly after the script runs "sudo iwconfig wlan0 essid MyNetworkName", I notice the speed pick up. Other computers in my home running Windows XP are not affected by this problem, so all indications point to my Ubuntu machine. Does anyone have any pointers on how to resolve this?
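
    A common culprit for links that degrade a few minutes after boot is the driver's power-saving mode; a quick hedged test, using wlan0 from the script above:

        # If the link stays healthy with power management off, make the
        # setting permanent via a pm-utils hook or the driver's options
        sudo iwconfig wlan0 power off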

  • Dell Vostro 1000 Broadcom wireless connection

    - by lorrenuy
    I have a problem with the Broadcom WiFi hardware. I press the hotkey Fn+F2 to activate the hardware and it does not work. I've looked at the drivers and they appear to be installed. How can I solve this problem? Ubuntu is all new to me, so if possible, give me a clear explanation. For now I connect with the LAN cable. I use Ubuntu 11.10.

        lawrence@lawrence-Vostro-1000:~$ sudo lshw -class network
        [sudo] password for lawrence:
        PCI (sysfs)
          *-network
               description: Network controller
               product: BCM4311 802.11b/g WLAN
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:05:00.0
               version: 01
               width: 32 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list
               configuration: driver=b43-pci-bridge latency=0
               resources: irq:18 memory:c0200000-c0203fff
          *-network
               description: Ethernet interface
               product: BCM4401-B0 100Base-TX
               vendor: Broadcom Corporation
               physical id: 0
               bus info: pci@0000:08:00.0
               logical name: eth1
               version: 02
               serial: 00:1c:23:a2:b9:a9
               size: 100Mbit/s
               capacity: 100Mbit/s
               width: 32 bits
               clock: 33MHz
               capabilities: pm bus_master cap_list ethernet physical mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=b44 driverversion=2.0 duplex=full ip=192.168.1.18 latency=64 link=yes multicast=yes port=twisted pair speed=100Mbit/s
               resources: irq:21 memory:c0300000-c0301fff

        lawrence@lawrence-Vostro-1000:~$ rfkill list all
        0: dell-wifi: Wireless LAN
                Soft blocked: yes
                Hard blocked: yes
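
    The rfkill output shows the radio is both soft- and hard-blocked, so no driver gets a chance to use it. A hedged sequence commonly suggested for BCM4311 cards on 11.10 (package name from the Ubuntu archive; the hard block itself can only be cleared by the Fn+F2 toggle or the BIOS):

        # Install the firmware the in-kernel b43 driver needs
        sudo apt-get install firmware-b43-installer
        # Clear the software block, then try Fn+F2 again for the hard block
        sudo rfkill unblock all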

  • Reasonable technology choices for building a CRM in .NET (or eventually Java)

    - by user1825608
    My background (if it's too long, just skip it): I am a Java programmer (because of demand), mostly a teacher for other students, and I have worked on a few theses for others. During my journey I discovered that .NET and Microsoft's tools are at least two levels above Java and its tools, so I want to learn more about them. I have programmed a little on Windows Phone (NFC tags, TCP clients, a guitar tuner using the internal microphone, simple RSS), used WPF, integrated WPF with Windows Forms, and used Apple Bonjour (.NET). I have experience with IP cameras and with unusual problems. I am learning Android, but I don't like it at all.
    Problem: I was asked by a friend to create a CRM for a small new company. There will be at most 20 workers in the company, working at computers in a few cities in the country (Poland). They just want to store contracts with clients and client data. I am not sure what exactly they do, but they probably sell apartments, so there will be at most a few thousand contracts to store in the far future. Now, I am totally new to CRM, but I want to learn. I have a few questions: Should the data be stored on a server in the company's building running 24/7, or in the cloud? If the cloud, which one? Should I use ASP.NET or WPF? I read one topic about it, but as far as I know ASP.NET sites can be viewed from every device with an internet browser -- tablets, phones (Android, WP, iOS) and computers -- at the same time, so the job is done once and for all (am I right?). I know nothing about ASP.NET. Can WPF also be used in a manner that does not require porting it to other platforms?

  • Centrino Wireless-N 1000 takes forever to connect and keeps asking for password

    - by waclock
    A few days ago I started having this problem. When I try to connect to any WiFi network, it stays "connecting" forever, and after a minute or so it asks me for the password again. The strange thing is that this happened out of nowhere; I did not install any new drivers or anything like that. After this happened I decided to uninstall Ubuntu and install it again ("inside Windows"), but the problem is still there. Any suggestions would be greatly appreciated.

        $ rfkill list all
        0: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: hp-bluetooth: Bluetooth
                Soft blocked: yes
                Hard blocked: no
        2: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

        $ sudo lshw -C network
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:07:00.0
               logical name: eth0
               version: 06
               serial: 2c:27:d7:aa:e4:7d
               size: 10Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl8168e-3_0.0.4 03/27/12 latency=0 link=no multicast=yes port=MII speed=10Mbit/s
               resources: irq:50 ioport:4000(size=256) memory:c0404000-c0404fff memory:c0400000-c0403fff
          *-network
               description: Wireless interface
               product: Centrino Wireless-N 1000
               vendor: Intel Corporation
               physical id: 0
               bus info: pci@0000:0d:00.0
               logical name: wlan0
               version: 00
               serial: 00:1e:64:09:9c:58
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-23-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
               resources: irq:52 memory:c4500000-c4501fff
          *-network
               description: Ethernet interface
               physical id: 1
               bus info: usb@2:1.2
               logical name: eth1
               serial: ee:85:2f:7d:80:96
               capabilities: ethernet physical
               configuration: broadcast=yes driver=ipheth ip=172.20.10.2 link=yes multicast=yes
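
    With iwlwifi cards that associate and then stall like this, a frequently suggested workaround is disabling 802.11n in the driver; a hedged sketch (the .conf filename is arbitrary):

        # Make iwlwifi fall back to 802.11b/g
        echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi-11n.conf
        # Reload the driver (or reboot) for the option to take effect
        sudo modprobe -r iwlwifi && sudo modprobe iwlwifi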

  • lxc containers fail to autoboot in 14.04 trusty using 'lxc.start.auto = 1'

    - by user273046
    In trusty 14.04, containers fail to autoboot despite all settings being set as 14.04 requires. They all show as STOPPED. I have correctly configured 2 LXC containers: calypso and encelado. They run perfectly if I run sudo lxc-autostart; then sudo lxc-ls --fancy reports:

        ubuntu@saturn:/etc/init$ sudo lxc-ls --fancy
        NAME      STATE    IPV4           IPV6  AUTOSTART
        ---------------------------------------------------
        calypso   RUNNING  192.168.1.161  -     YES
        encelado  RUNNING  192.168.1.162  -     YES

    The problem is getting them to start at boot. I have in /var/lib/lxc/calypso/config:

        # Template used to create this container: /usr/share/lxc/templates/lxc-download
        # Parameters passed to the template:
        # For additional config options, please look at lxc.conf(5)

        # Distribution configuration
        lxc.include = /usr/share/lxc/config/ubuntu.common.conf
        lxc.arch = x86_64

        # Container specific configuration
        lxc.rootfs = /var/lib/lxc/calypso/rootfs
        lxc.utsname = calypso

        # Network configuration
        lxc.network.type = veth
        lxc.network.flags = up
        #lxc.network.link = lxcbr0
        lxc.network.link = br0
        lxc.network.hwaddr = 00:16:3e:64:0b:6e

        # IP address assignment
        lxc.network.ipv4 = 192.168.1.161/24
        lxc.network.ipv4.gateway = 192.168.1.1

        # Autostart
        lxc.start.auto = 1
        lxc.start.delay = 5
        lxc.start.order = 100

    and I have LXC_AUTO="false" as required inside /etc/default/lxc:

        LXC_AUTO="false"
        USE_LXC_BRIDGE="false"  # overridden in lxc-net
        [ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
        LXC_SHUTDOWN_TIMEOUT=120

    Any idea why the containers don't start at boot? After a reboot they are always in the STOPPED state:

        ubuntu@saturn:~$ sudo lxc-ls --fancy
        NAME      STATE    IPV4  IPV6  AUTOSTART
        -----------------------------------------
        calypso   STOPPED  -     -     YES
        encelado  STOPPED  -     -     YES

    and then they can again be started manually, using sudo lxc-autostart.
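
    A hedged pair of checks before digging deeper: confirm that the boot-time machinery actually sees these containers, since on 14.04 it is the lxc upstart job (not /etc/default/lxc) that runs lxc-autostart:

        # Containers the autostart logic would act on
        sudo lxc-autostart --list
        # The upstart job that should invoke it at boot
        cat /etc/init/lxc.conf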

  • Multiple GoDaddy domains to home router, then reverse proxy to multiple internal servers

    - by Dan
    I need someone to steer me correctly, advising on all required components. I have multiple domains with GoDaddy, say site1.com, site2.com, site3.net. I have multiple home LAMP servers: lampsrv1 = 192.168.0.2:8080 (Windows) and lampsrv2 = 192.168.0.3:9080 (Linux). I would like to have server1.site1.com point to lampsrv1 and server2.site1.com point to lampsrv2. I may also want server1.site2.com to point to lampsrv1 as an option. My thinking is: I have a dedicated Linux server behind the router running a reverse proxy, i.e. Apache or NGINX or equivalent, directing traffic to the appropriate LAMP server. It's the GoDaddy subdomains, CNAMEs or redirections, etc. that I'm having a challenge with for starters. I have tested Apache with virtual hosts but can't get proxying to work based on the Host header info -- everything seems to go to one address, making me think it's actually the Apache reverse proxying that's not quite working. Finally, to add to this, my router has a dynamic IP, though the lease lasts quite a while; that would be my final piece. I'm sure this is a popular question, but I can't seem to piece it together. I need someone who has actually configured this scenario to advise, but I will take other suggestions. Please indicate if you have successfully configured this.
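
    For the Apache route, name-based virtual hosts that switch on the Host header look roughly like this (a sketch; mod_proxy and mod_proxy_http must be enabled, and on Apache 2.2 a matching NameVirtualHost directive is required -- omitting it produces exactly the "everything lands on the first vhost" symptom described above):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName server1.site1.com
            # Optional extra name on another domain
            ServerAlias server1.site2.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.0.2:8080/
            ProxyPassReverse / http://192.168.0.2:8080/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName server2.site1.com
            ProxyPreserveHost On
            ProxyPass        / http://192.168.0.3:9080/
            ProxyPassReverse / http://192.168.0.3:9080/
        </VirtualHost>

    On the GoDaddy side, A records (or a wildcard) for the subdomains pointing at the router's public IP, kept current with a dynamic-DNS client, would cover the changing-address concern.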

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional "public" data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management.
    Let's assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in social media, you are identifying relevant posts and following up with direct engagement where warranted (1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems, as well as insights for analysis in your business intelligence application. What is the next step?
    Augmenting Social Data with other Public Data for More Advanced Analytics
    When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPIs), to achieve and optimize business value -- and in some cases, to predict future performance so you can make appropriate course corrections and change the outcome to your advantage while you still can. The data to acquire, process and analyze for this is very nuanced:
      - It can vary across structured, semi-structured, and unstructured data
      - It can span content, profile, and communities-of-profiles data
      - It is increasingly public, curated and user generated
    The key is not just getting the data, but making it value-added data and using it to help discover the insights that connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to be able to ingest and use "quality" curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system, and providing the contextual integration of the underlying data, enriched with insights, for export into the enterprise's business applications. The following diagram shows the requirements for this next-generation data and insights service (DaaS). Some quick points on these requirements:
      - Public Data, which in this context is about Common Business Entities, such as:
          - Customers, Suppliers, Partners, Competitors (all organizations)
          - Contacts, Consumers, Employees (all people)
          - Products, Brands
      - This data can be broadly categorized, incrementally, as:
          - Base Utility data (address, industry classification)
          - Public Master Reference data (trade style, hierarchy)
          - Social/Web data (news, feeds, graph)
          - Transactional data generated by enterprise processes, workflows, etc.
      - This data has traits of high volume, variety and velocity, and the technology needed to efficiently integrate it for your needs includes:
          - Change management of Public Reference Data across all categories
          - Applied Big Data to extract statistics as well as real-time insights
          - Knowledge Diagnostics and Data Mining
    As you consider how to deploy this solution, many of our customers will be using an online "cloud" service that provides quality data and insights uniformly to all their necessary applications. In addition, they are requesting a service that is:
      - Agile and easy to use: applications integrated with the service can obtain data on demand, quickly and simply
      - Cost-effective: pre-integrated into applications, so customers don't have to do that work
      - High in data quality: a single point of access to reference data, with linkages to transactional, curated and social data
      - Supportive of data governance: more manageable and cost-effective, since control of data privacy and compliance can be enforced in a centralized place
    Data-as-a-Service (DaaS)
    Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT -- from infrastructure to platform to software (IaaS, PaaS, and SaaS) -- the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer cloud-based data services and gain initial traction. On one side of the DaaS continuum, we see an "appliance" type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum, we see more of an online market "exchange" approach, where ISVs and data publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to ease data integration. Why the difference? It depends on the provider's philosophy about how fast certain data types will be commoditized.
    How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harnessing the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that the enterprise carefully federates common utility and master reference data endpoints, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do address-verification treatments for accounts, contacts, etc. If you design and revise this service so that it is also easily available to social analytics, you could extend it to launch geo-location-based social use cases (marketing, sales, etc.).
    Our fundamental belief is that real value is achieved through value-added data: data enriched with specialized algorithms, and business "know-how" applied to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combination capabilities to extract and integrate the right data from the right sources with the right factoring at the right time, for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.
    Let's look at a quick example of creating a master reference relationship that could be used as an input for a variety of your existing business applications. In phase 1, a simple master relationship is established between a company (e.g. General Motors) and social insights for a variety of its car brands. The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social metadata) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data to work with.
    DaaS is just now emerging in the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard of it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.
    Photo: Erik Araujo, stock.xchng

  • django & postgres linux hosting (with SSH access) recommendations

    - by Justin Grant
    We're looking for a good place to host our custom Django app (a fork of OSQA) and its PostgreSQL backend. Requirements include:
      - Linux
      - Python 2.6 or (ideally) Python 2.7
      - Django 1.2
      - Postgres 8.4 or later
      - DB backup/restore handled by the hoster, not us
      - OS & dev-platform-stack patching/maintenance handled by the hoster, not us
      - SSH access (so we can pull source code from GitHub, install Python eggs, etc.)
      - ability to set up cron jobs (e.g. to send out daily email updates)
      - ability to send up to 10K emails/day
      - good performance (not ganged up with a zillion other sites on one CPU, not starved for RAM)
      - FTP or SCP access to web logs
      - dedicated public IP
      - SSL support
      - costs under $1000/month for a relatively small site (<5M pageviews/month)
      - good customer service
    We already have a prototype site running on EC2 on top of a Bitnami DjangoStack. The problem is that we have to patch the OS, patch Postgres, etc. We'd really prefer a platform-as-a-service (PaaS) offering, like Heroku offers for Rails apps, where all we need to worry about is deploying our code instead of worrying about system software patching and maintenance. Google App Engine is closest to what we're looking for, but they don't offer relational DB access (not yet, at least). Anyone have a recommendation?

  • What's so bad about pointers in C++?

    - by Martin Beckett
    To continue the discussion in "Why are pointers not recommended when coding with C++": suppose you have a class that encapsulates objects which need some initialisation to be valid -- like a network socket.

        // Blah manages some data and transmits it over a socket
        class TcpSocket;   // forward declaration, so nice weak linkage

        class blah {
            // ... stuff
            TcpSocket *socket;

        public:
            ~blah() {
                // TcpSocket dtor handles disconnect
                delete socket;   // or better, wrap it in a smart pointer
            }
        };

    The ctor ensures that socket is marked NULL; then, later in the code, when I have the information to initialise the object:

        // initialising blah
        if (!socket) {
            // I know socket hasn't been created/connected;
            // create it in a known initialised state and handle any errors.
            // RAII is a good thing!
            socket = new TcpSocket(ip, port);
        }

        // and when I actually need to use it
        if (socket) {
            // if socket exists then it must be connected and valid
        }

    This seems better than having the socket on the stack, created in some 'pending' state at program start, and then having to continually check some isOK() or isConnected() function before every use. Additionally, if the TcpSocket ctor throws an exception, it's a lot easier to handle at the point a TCP connection is made rather than at program start. Obviously the socket is just an example, but I'm having a hard time thinking of when an encapsulated object with any sort of internal state shouldn't be created and initialised with new.

  • Ubuntu installer fails to process preseed configuration file

    - by user76171
    I'm trying to install Ubuntu 12.04 over the network, unattended. I installed a DHCP server (Dnsmasq), a TFTP server (tftpd-hpa), I got the netboot.tar.gz archive with the pxelinux.0 file, the pxelinux.cfg directory, the Linux kernel and the initrd.gz image, and I put a preseed file on my web server. Dnsmasq, tftpd-hpa, pxelinux and Apache are all on the same machine. The PC's motherboard doesn't support PXE, so I use iPXE and boot it from a CD. The PC gets an IP from DHCP, then iPXE loads pxelinux.cfg/default, which I edited like this:

        timeout 5
        prompt 0
        default install
        label install
            kernel ubuntu-installer/i386/linux
            append vga=normal locale=en_GB setup/layoutcode=sl_SI console-setup/layoutcode=sl_SI netcfg/choose_interface=auto initrd=ubuntu-installer/i386/initrd.gz netcfg/get_hostname=ubuntux preseed/url=http://192.168.10.10/ins/preseed.cfg

    Then it loads the Linux kernel and the initrd.gz image. Then I get a question: "Detect keyboard layout?" I decided to deal with this later, so I answered No, and then English twice just to get through, and then I get to the error:

        The installer failed to process the preconfiguration file from
        http://192.168.10.10/ins/preseeed.cfg. The file may be corrupt.

    I created the file myself and copied the d-i commands into it. I also tried to fetch preseed.cfg over a web browser and it works fine. So why is the installer failing?
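
    Two hedged checks that narrow this down: fetch the file exactly as the installer would, and syntax-check it with debconf's own tool. (Note that the quoted error names preseeed.cfg, with three e's -- worth comparing character for character against the filename in the append line.)

        # Does the URL in the append line actually resolve to the file?
        wget -O- http://192.168.10.10/ins/preseed.cfg
        # Validate the preseed syntax (-c = check only)
        debconf-set-selections -c preseed.cfg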

  • Cannot ping any computers on LAN

    - by Timothy
    I haven't been able to find a straightforward answer on this yet, and I'm hoping people here are able to help! Keep in mind that I'm a complete beginner at this -- this is the first installation I've done of any Linux system ever, so please keep that in mind when answering. We are a complete Windows shop, using nothing but Microsoft products, but we're looking into the value of OpenStack. However, we have been having problems getting Ubuntu Server installed and speaking to the network. The machine is getting an IP address, which tells me that some sort of DHCP activity is working, but I'm not able to ping any computer on our network, and not able to connect to the internet. Every time I try to ping, I get:

        Destination Host Unreachable

    I've tried modifying the resolv.conf file with our static details to match my Windows 7 machine, still with no luck. I've even tried disabling the firewall on Ubuntu Server 11, and no luck. Any ideas? Please let me know if there is any information you need from the server and I'll post it up.
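
    Since resolv.conf only affects DNS lookups, not addressing or routing, a hedged first pass is to compare the interface and routing table against a working Windows box (addresses here are examples):

        # What address, netmask and gateway did the server end up with?
        ifconfig eth0
        route -n
        # Can it reach its own gateway at all?
        ping -c 3 192.168.1.1

    If a static setup is wanted, the Ubuntu Server place for it is /etc/network/interfaces rather than resolv.conf -- a sketch:

        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1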

  • What am I doing wrong in my config for MySQL?

    - by Knight Hawk3
    When I load MySQL with the my.cnf below, it fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check -- I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password       = your_password
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer = 16K
        max_allowed_packet = 1M
        table_cache = 4
        sort_buffer_size = 64K
        read_buffer_size = 256K
        read_rnd_buffer_size = 256K
        net_buffer_length = 2K
        thread_stack = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking
        server-id = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?
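
    A hedged way to make the silent failure talk: run the daemon in the foreground so errors land on the terminal. This config carries several 5.0-era options (skip-locking, skip-bdb) that MySQL 5.5 no longer accepts, and an unknown option aborts startup, so the foreground run should name the offender:

        # Unknown-option complaints print straight to stderr
        sudo mysqld --user=mysql
        # Look for "unknown option"/"unknown variable" lines in the output,
        # then delete or update the offending directive (e.g. skip-locking
        # became skip-external-locking; skip-bdb is gone entirely)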

  • Bad archive mirror using PXE boot method

    - by user11566
    I'm trying to automatically install Ubuntu on a client PC by using the PXE boot method. My objectives are below:
      - I am following the steps given in this link: installation using PXE BOOT.
      - The server will have a KICKSTART config file, which contains the parameters for the OS installation, plus the files required for the OS installation.
      - The client has to detect this configuration along with the setup files and complete the installation without any input from the user.
    On my server I have installed dhcp3-server, Apache2 and TFTP to help with the installation. I have nearly achieved my first objective: I am able to boot my client using the files stored on the server, but during the installation stage it asks me to CHOOSE A MIRROR OF THE UBUNTU ARCHIVE. I gave the server's IP address and the path on the server where the files are located, but then it gives me this error: BAD ARCHIVE MIRROR. So is it possible that, instead of downloading all the files from the internet and storing them on my disk, I can use the files which come with the Ubuntu CD? And how should I store these files, in what format (should I zip them), on the disk? Secondly, I am also generating the ks.cfg which I want to give to the client for automatic installation of the OS. How should the configuration file be given to the installation process?
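
    To answer the CD question with a hedged sketch: the server install CD is laid out like a partial archive (it has dists/ and pool/), so it can be exposed over the existing Apache instance without zipping anything -- though it only carries a subset of packages, so some selections may still reach for the internet:

        # Copy the CD contents into the web root (ISO name is an example)
        sudo mkdir -p /var/www/ubuntu
        sudo mount -o loop ubuntu-12.04-server-i386.iso /mnt
        sudo cp -a /mnt/. /var/www/ubuntu/
        sudo umount /mnt

    At the mirror prompt, the hostname would then be the server's IP and the directory /ubuntu. The ks.cfg is handed to the installer the same way as a preseed file, via a kernel append parameter such as ks=http://<server-ip>/ks.cfg.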

  • An adequate message authentication code for REST

    - by Andras Zoltan
    My REST service currently uses SCRAM authentication to issue tokens for callers and users. We have the ability to revoke caller privileges and ban IPs, as well as to impose quotas on any type of request. One thing that I haven't implemented, however, is a MAC for requests. As I've thought about it more, I think this is needed for some requests, because otherwise tokens can be stolen, and before we identify this and deactivate the associated caller account, some damage could be done to our user accounts. In many systems the MAC is generated from the body or query string of the request; however, this is difficult to implement, as I'm using the ASP.NET Web API and don't want to read the body twice. Equally importantly, I want to keep the service simple for callers to access. So what I'm thinking is to have a MAC calculated over:
      - the URL, possibly minus the query string
      - the verb
      - the request IP (potentially a barrier on some mobile devices, though)
      - the UTC date and time when the client issues the request
    For the last one I would have the client send that string in a request header, of course -- and I can use it to decide whether the request is 'fresh' enough. My thinking is that whilst this doesn't prevent message-body tampering, it does prevent a malicious third party using a captured request as a template for different requests later on. I believe only the most aggressive man-in-the-middle attack would be able to subvert this, and I don't think our services offer any information or ability valuable enough to warrant that. The services will use SSL as well, for sensitive stuff. And if I do this, then I'll be using HMAC-SHA-256 and issuing private keys for the HMAC appropriately. Does this sound like enough? Have I missed anything? I don't think I'm a beginner when it comes to security, but when working on it I am always shrouded in doubt, so I appreciate having this community to call upon!
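
    To make the proposal concrete, a hedged sketch of the signing step over the four fields listed above (the canonical-string layout, example values and header handling are illustrative, not part of the question):

        # Canonical string: verb, URL path, client IP, UTC timestamp, one per line
        REQUEST_TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
        STRING_TO_SIGN=$(printf 'GET\n/api/accounts/42\n203.0.113.7\n%s' "$REQUEST_TS")
        # HMAC-SHA-256 over the canonical string with the caller's private key
        printf '%s' "$STRING_TO_SIGN" | openssl dgst -sha256 -hmac "$CALLER_SECRET"

    The client would send the hex digest and the timestamp as headers; the server recomputes the digest from the same four fields and rejects requests whose digest mismatches or whose timestamp falls outside the freshness window.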

  • Need help with cybersquatting complaint: can a domain name forward AND resolve at same time?

    - by Alan
    Probably a silly question for you pros... but for this novice here, I just want to make sure my understanding is correct. Context: I am trying to prove that a domain name owner has been cybersquatting and has never used the domain name in question. There are 4 shots from the WayBackMachine over a three-year period that show the domain name resolving to a basic server index page with either no files or a single cgi-bin folder. The domain name owner claims, however, that the domain name was forwarded over that entire time to another website, and that these captures probably coincided with occasional "outages." It is my understanding that:
      a) Domain name forwarding is binary: if a domain name is forwarded to a valid site, it cannot simultaneously resolve to a valid IP address. Is this correct?
      b) Domain name forwarding is not subject to "outages": servers can have outages, and websites can be down, but the forwarding itself cannot be down, as it is simply a pointer. (Or the entire registrar where the DNS settings are hosted would have to malfunction.) Is this correct?
    FINALLY, a bonus question for pro webmasters: what is the likelihood that the WayBackMachine would capture the domain name on just those occasions when the webmaster disabled forwarding to supposedly work on the new site? Mucho thanks in advance!
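
    One hedged observation on premise (a): registrar "URL forwarding" is typically implemented as an A record pointing at the registrar's redirect server, which answers each request with an HTTP 301/302 -- so a forwarded name still resolves to a valid IP the whole time. A quick way to see which is in effect for any domain today (example.com is a placeholder):

        # Does the name resolve, and to what address?
        dig +short example.com A
        # Does that address redirect (a Location: header) or serve a page itself?
        curl -sI http://example.com | head -n 5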

  • using Moniker.com's nameservers

    - by user7519
    I have a VPS with A2Hosting for which I need to upgrade the OS. However, they've changed their VPS packages and forced me to order a new one. I went with an "unmanaged" package and have only just realised that they do not provide any DNS service at all, not even nameservers. Support tells me that "since your domain is not hosted with us, but with Moniker, you would not be able to use these nameservers. Your domain registrar should have a set of default nameservers that you can use, then create an A record to point to" my IP address. Moniker does provide for using their nameservers, but I'm confused about which "pre-defined zone configuration" to use. The options are:
      - Domain Parking
      - Domain Parking with Email Forwarding
      - URL and Email Forwarding
      - URL Forwarding
      - URL Forwarding & CoolHandle Email
    I just want to use their nameservers and then create A & MX records pointing to the VPS. What do they mean by forwarding? I get the feeling it's a service that I don't want. Or is it that I need a pre-defined zone only temporarily, and THEN set the A & MX records? Which of these should I choose?
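
    For reference, once a zone exists, the records would look something like this in standard zone-file form (hedged example values; 203.0.113.10 stands in for the VPS address):

        ; point the domain and www at the VPS
        example.com.       IN  A   203.0.113.10
        www.example.com.   IN  A   203.0.113.10
        ; deliver mail to the same machine
        example.com.       IN  MX  10 mail.example.com.
        mail.example.com.  IN  A   203.0.113.10

    "Forwarding" in the pre-defined zones is an HTTP redirect service hosted by the registrar, which is indeed unnecessary once real A and MX records point at your own server.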

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in to view. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, when they go to print preview, they get the broken-image box with a red cross in the corner instead of the actual file, and this is what gets printed, too. All other browsers can print the images without issue. I have some images elsewhere on the site that are also served via PHP but don't require a login; these print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's just login-required images. The user hitting print preview does not seem to result in an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later that comes from the same IP (which may or may not be related); this request includes no Host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-in-future Expires header to the image response itself, to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images, and what else can I do to investigate or fix the problem?
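
    A hedged avenue to investigate: IE has historically refused to reuse authenticated content for printing when the response forbids disk caching, so the exact cache headers on the image response matter. Response headers along these lines are the usual suggestion (values illustrative):

        Cache-Control: private, max-age=3600
        Content-Type: image/jpeg
        Content-Length: <actual length>

    In particular, a Cache-Control: no-store/no-cache or Pragma: no-cache emitted by PHP's session machinery (the session_cache_limiter default adds these) would override the far-future Expires header and could explain the behaviour.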

  • Android app unable to connect to the HSQLDB server

    - by Chinta
    I am trying to connect my Android app to an HSQLDB server. The server runs on computer-1. I can connect to the DB server from the local machine through Java as well as DbVisualizer. I can connect to the DB server from another computer (computer-2) using DbVisualizer with computer-1's IP address. Now, trying to connect from my app on a Nexus 7 the same way I was connecting from computer-2, I am getting a "No suitable driver" error. Below is the log:

        11-02 12:01:41.235: W/System.err(9803): connection string <jdbc:hsqldb:hsql://192.168.2.6:9001/qBank>
        11-02 12:01:41.235: W/System.err(9803): user id string <SA>
        11-02 12:01:41.235: W/System.err(9803): password string <>
        11-02 12:01:41.235: W/System.err(9803): ERROR: failed to get connection.
        11-02 12:01:41.235: W/System.err(9803): java.sql.SQLException: No suitable driver
        11-02 12:01:41.235: W/System.err(9803):     at java.sql.DriverManager.getConnection(DriverManager.java:186)
        11-02 12:01:41.235: W/System.err(9803):     at java.sql.DriverManager.getConnection(DriverManager.java:213)
        11-02 12:01:41.235: W/System.err(9803):     at com.scan.util.GatherData.getConnection(GatherData.java:135)

  • Problem connecting to ISP server using xl2tpd as client (Ubuntu Server 13.04)

    - by Deon Pretorius
    I have followed guides found on Google and on the Ubuntu support pages, and I can get an xl2tpd connection up, but only under the following conditions: (1) the ADSL modem is configured and connected to the ISP, or (2) with the ADSL modem in bridge mode, I have an existing PPPoE connection established. If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel fails to connect to the ISP's L2TP server. Am I doing something wrong?

    /etc/ppp/options.l2tpd.axxess:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        refuse-chap
        require-pap
        noccp
        noauth
        idle 1800
        mtu 1200
        mru 1200
        defaultroute
        usepeerdns
        debug
        lock
        connect-delay 5000
        name (name used for ppp connection)

    /etc/ppp/pap-secrets:

        # *  password
        (name used for ppp connection as above)  *  (ppp password supplied by isp)

    /etc/xl2tpd/xl2tpd.conf:

        [global]
        ; Global parameters:
        auth file = /etc/xl2tpd/l2tp-secrets   ; * Where our challenge secrets are
        access control = yes                   ; * Refuse connections without IP match
        debug tunnel = yes

        [lac axxess]
        lns = 196.30.121.50                    ; * Who is our LNS?
        redial = yes                           ; * Redial if disconnected?
        redial timeout = 5                     ; * Wait n seconds between redials
        max redials = 5                        ; * Give up after n consecutive failures
        hidden bit = yes                       ; * Use hidden AVPs?
        length bit = yes                       ; * Use length bit in payload?
        require pap = yes                      ; * Require PAP auth. by peer
        require chap = no                      ; * Require CHAP auth. by peer
        refuse chap = yes                      ; * Refuse CHAP authentication
        require authentication = yes           ; * Require peer to authenticate
        name = BLA85003@axxess                 ; * Report this as our hostname
        ppp debug = yes                        ; * Turn on PPP debugging
        pppoptfile = /etc/ppp/options.l2tpd.axxess  ; * ppp options file for this lac

    /etc/xl2tpd/l2tp-secrets:

        # Secrets for authenticating l2tp tunnels
        # us       them        secret
        # *        marko       blah2
        # zeus     marko       blah
        # *        *           interop
        *  vzb_l2tp  (*** secret supplied by isp)
           ^ isp server host name

    Any help will be greatly appreciated.
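
    For reference, a hedged way to ask a running xl2tpd to dial the tunnel by hand, which helps separate "xl2tpd never tries" from "pppd fails once it does" (the lac name matches the [lac axxess] section above):

        # xl2tpd listens on a control file; "c <lac-name>" means connect
        sudo sh -c 'echo "c axxess" > /var/run/xl2tpd/l2tp-control'
        # Then watch the negotiation
        tail -f /var/log/syslog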

  • Can an internally developed fast evolving, agile, short sprint web application lend itself to offshoring?

    - by Gavin Howden
    I have recently been set a target: within 12 months, achieve readiness to successfully manage and deliver results through offshore teams on our mainline development project. Our mainline is a multi-thousand-user, highly available web application, plus various related SaaS components delivered through it. We work agile on the mainline with a rapid 1-week sprint and continuous integration. Our delivery platform is a bespoke PHP framework, although we have some .NET services and components in the mix. My view is that an offshore team could work if we either shipped out an entire isolated project for offshore development, or specified a component of our system in huge detail up front. But we don't currently work like that; it would conflict with the in-house method, and unless the offshore team works within our team, with our development/deployment chain, it could be an integration nightmare. So my question is: given that we have a closed-source bespoke framework (private IP) which we train our developers to use, and we work agile -- minimising documentation, maximising communication and responding to rapidly changing requirements, with much of the quality control done via team skills-building and peer review -- how can I make offshoring work on our mainline development?

  • Can't Start ISC DHCP IPv6 Server

    - by MrDaniel
    Trying to enable the ISC DHCP server for IPv6 only, on Ubuntu 12.04 LTS. I have downloaded and installed the DHCP server via the following command:

        $ sudo apt-get install isc-dhcp-server

    Then I followed the instructions in the following resources: Ubuntu Wiki DHCPv6, SixXS - Configuring ISC DHCPv6 Server, and Linux IPv6 HOWTO - Configuration of the ISC DHCP server for IPv6. From reviewing all those resources, it seems I need to:
      1. Set a static IPv6 address, within the IPv6 network subnet but outside the DHCP range, on the interface I want to run the DHCPv6 server from.
      2. Edit /etc/dhcp/dhcpd6.conf to configure the DHCPv6 range, etc.
      3. Create /var/lib/dhcp/dhcpd6.leases.
      4. Manually start the DHCPv6 server.

    Setting the static IP for eth0:

        $ sudo ifconfig eth0 inet6 add 2001:db8:0:1::128/64

    My dhcpd6.conf:

        default-lease-time 600;
        max-lease-time 7200;
        log-facility local7;

        subnet6 2001:db8:0:1::/64 {
            # Range for clients
            range6 2001:db8:0:1::129 2001:db8:0:1::254;
        }

    Creating the dhcpd6.leases file, as indicated in the dhcpd.leases man page:

        $ touch /var/lib/dhcp/dhcpd6.leases    # tried with sudo as well

    Manually starting the DHCPv6 server. I attempted to start the server using the following command:

        $ sudo dhcpd -6 -f -cf /etc/dhcp/dhcpd6.conf eth0

    The problem: the DHCP server will not start, with an append error for the dhcpd6.leases file, as indicated below, when running the manual start command above:

        Can't open /var/lib/dhcp/dhcpd6.leases for append.

    Any ideas what I might be missing?
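
    The append error usually comes down to permissions rather than configuration; a hedged fix (the dhcpd user and the AppArmor confinement are standard for Ubuntu's isc-dhcp-server package, but worth verifying locally):

        # Make sure the lease file exists and is writable by the daemon
        sudo touch /var/lib/dhcp/dhcpd6.leases
        sudo chown dhcpd:dhcpd /var/lib/dhcp/dhcpd6.leases
        # If AppArmor is denying access, DENIED lines will show up here
        sudo grep -i dhcpd /var/log/syslog | grep -i denied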

  • Missing driver ASUS PCE-N53 11n N600 PCI-E Adapter

    - by oyse
    I have problems getting an Asus PCE-N53 11n N600 PCI-E adapter card to work on my desktop computer. As far as I can tell, no drivers are installed for the card. I know I can manually download the drivers directly from Asus, but I would rather not go that route. If anyone knows of any packages, or anything else I can do to make this work, it would be much appreciated. Some system details:

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID:  Ubuntu
        Description:     Ubuntu 12.04.1 LTS
        Release:         12.04
        Codename:        precise

        $ sudo lshw -C network
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:03:00.0
               logical name: eth0
               version: 06
               serial: d4:3d:7e:03:b9:1d
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=192.168.0.173 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
               resources: irq:43 ioport:d000(size=256) memory:f2104000-f2104fff memory:f2100000-f2103fff
          *-network UNCLAIMED
               description: Network controller
               product: Ralink corp.
               vendor: Ralink corp.
               physical id: 0
               bus info: pci@0000:04:00.0
               version: 00
               width: 32 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list
               configuration: latency=0
               resources: memory:f7100000-f710ffff

        $ lsmod
        Module                 Size  Used by
        nvidia             12319264  51
        vesafb                13844  1
        snd_hda_codec_hdmi    32474  1
        joydev                17693  0
        bnep                  18281  2
        rfcomm                47604  0
        bluetooth            180104  10 bnep,rfcomm
        snd_hda_codec_realtek  224173  1
        snd_seq_midi          13324  0
        ppdev                 17113  0
        snd_rawmidi           30748  1 snd_seq_midi
        usbhid                47199  0
        hid                   99559  1 usbhid
        nouveau              774641  0
        parport_pc            32866  1
        snd_hda_intel         33773  5
        ttm                   76949  1 nouveau
        snd_hda_codec        127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        drm_kms_helper        46978  1 nouveau
        drm                  242038  3 nouveau,ttm,drm_kms_helper
        snd_seq_midi_event    14899  1 snd_seq_midi
        snd_hwdep             13668  1 snd_hda_codec
        snd_seq               61896  2 snd_seq_midi,snd_seq_midi_event
        i2c_algo_bit          13423  1 nouveau
        mxm_wmi               12979  1 nouveau
        wmi                   19256  1 mxm_wmi
        mac_hid               13253  0
        snd_pcm               97188  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        psmouse               97362  0
        video                 19596  1 nouveau
        snd_timer             29990  2 snd_seq,snd_pcm
        snd_seq_device        14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                   78855  20 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_rawmidi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_seq,snd_pcm,snd_timer,snd_seq_device
        serio_raw             13211  0
        soundcore             15091  1 snd
        snd_page_alloc        18529  2 snd_hda_intel,snd_pcm
        mei                   41616  0
        lp                    17799  0
        parport               46562  3 ppdev,parport_pc,lp
        r8169                 62099  0
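
    The lshw output shows the Ralink controller as UNCLAIMED, i.e. no driver has bound to it. A hedged first step is to pin down the exact chip so the right driver can be matched (the PCE-N53 is commonly reported as a Ralink RT5592, which had no in-kernel driver in the 12.04 era, hence the vendor-driver suggestions):

        # Show the PCI vendor:device ID of the unclaimed controller
        lspci -nn | grep -i ralink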

  • Reversing WiFi to broadcast a connection coming from a USB device

    - by Daniel Clem
    I am using the app called ClockworkMod Tether. It connects using a script on the computer (command line: "gksu ./run.sh"). All my programs connect to the internet perfectly: Minitube, Midori, Transmission torrents. But the network manager does not show any connection, wired or wireless, so this may cause issues. What I want to do is take this connection and share it some way, any way, over wireless. This Acer Timeline "Aspire 5810TZ" does have an Ethernet port, so wiring out to a router might be an option, but I would prefer to simply reverse the wireless card to broadcast out to about 2 or 3 devices. Is this possible? Yes, I have taken a look at the other questions already posted here, but the answers are a year old or older, and not clear at all. I am moderately comfortable (4.5 out of 10) on the command line, but pretty much need line-by-line directions for what commands are needed, in what order, etc.
    Edit: I have already tried the method of "left-click network manager, Create New Wireless Network". The network is created fine, and I am able to connect to it with a tablet, but I am unable to get an outside connection through it. I'm using the "Shared to other computers" option, because DHCP doesn't seem to work, with WEP passphrase security. I get an IP address on the connected device, but as I said, it won't bring up any outside webpages or the like. So perhaps this is the wrong approach?
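
    A hedged sketch of the sharing side, assuming the tether appears as its own network interface (usb0 here, with wlan0 as the ad-hoc interface NetworkManager created -- both names need checking with ifconfig -a, since ClockworkMod Tether may expose the link differently):

        # Let the kernel route between the two interfaces
        sudo sysctl -w net.ipv4.ip_forward=1
        # NAT everything leaving via the tethered interface
        sudo iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE
        sudo iptables -A FORWARD -i wlan0 -o usb0 -j ACCEPT
        sudo iptables -A FORWARD -i usb0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT

    If the tether instead presents itself as a local proxy rather than a routable interface, NAT alone will not help, which would explain why "Shared to other computers" hands out addresses but passes no traffic.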

  • "Oracle Coherence 3.5" Book - My Humble Review

    - by [email protected]
    After reviewing the book in more detail, I say again that it is a great guide for sure. Lots of important concepts that can sometimes be confusing are reviewed in depth, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections. Functionalities that are very desirable or heavily used are reviewed with examples and implementation best practices, including:
      - Data affinity
      - Querying
      - Pagination
      - Indexes
      - Aggregations
      - Event processing, listening and triggering
      - Data persistence
      - Security
    Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that offers a way to programmatically supply well-known addresses for connecting to the cluster, are mentioned in the book, because they provide new functionality to satisfy special configuration requirements, for example:
      - Providing a way to switch extend nodes in cases of failure
      - Implementing custom load-balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors
      - Dynamically assigning TCP address and port settings when binding to a server socket
    Another very interesting and useful section is the "Coherent Bank Sample Application", which is a great tutorial, useful for understanding how Coherence interacts with third-party products, establishing a clear integration with them, including the use of non-Oracle products like MS Visual Studio.
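
    For context, wiring a custom address provider into an Extend client's cache configuration looks roughly like this (a sketch only -- the element placement and the class name are illustrative, so check them against the Coherence 3.5 configuration reference):

        <remote-cache-scheme>
          <scheme-name>extend-direct</scheme-name>
          <service-name>ExtendTcpCacheService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <address-provider>
                  <class-name>com.example.FailoverAddressProvider</class-name>
                </address-provider>
              </remote-addresses>
            </tcp-initiator>
          </initiator-config>
        </remote-cache-scheme>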
