Search Results

Search found 23782 results on 952 pages for 'claims based authorizatio'.

  • Is this a solution for having multiple SSL certificates on the same IP

    - by Saif Bechan
    I am running CentOS on a VPS. I read some guides on having multiple SSL certificates on the same system, but I cannot get the basics to work. The guide that makes the most sense to me suggests the following. In CentOS I can make virtual NICs, so I made 2 virtual NICs to start with: 192.168.10.1 and 192.168.10.2. Now, I work in ISPmanager Pro, and it is listening on my primary IP, 1.1.1.1. For each website I have them listening on 192.168.10.1:80 and 192.168.10.1:443. In the hosts file I made the following 2 entries:

        192.168.10.1 1st.com
        192.168.10.2 2nd.com

    Now the strange thing is that when I browse to 1st.com I do not get the website located at 192.168.10.1; I get the website located at my primary IP, 1.1.1.1. Should I do something like forwarding or routing for this setup to work? And the basic question: will this setup even work? Are the SSL certificates based on the IP address, or are they based on the host names, 1st.com and 2nd.com?
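
    To the basic question: classic SSL binds one certificate per IP:port pair, because the certificate is selected before the browser sends the host name; the host name only has to match the certificate's CN/SAN. A minimal sketch of the classic IP-based approach in Apache, assuming the virtual IPs above and example certificate paths:

        <VirtualHost 192.168.10.1:443>
            ServerName 1st.com
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/1st.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/1st.com.key
        </VirtualHost>

        <VirtualHost 192.168.10.2:443>
            ServerName 2nd.com
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/2nd.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/2nd.com.key
        </VirtualHost>

    Note that 192.168.x.x addresses are private: outside visitors can only reach the single public IP, so from the internet this setup needs either additional public IPs or name-based SSL vhosts via SNI (supported in Apache 2.2.12+ when built against an SNI-capable OpenSSL).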

  • IP Blacklists and suspicious inbound and outbound traffic

    - by Pantelis Sopasakis
    I administer a web server and recently our IP was banned (!) by our host after they received a notification e-mail for abuse. In particular, our server is allegedly involved in spam attacks over HTTP. The content of the abuse report e-mail was not very informative - for example, the IP addresses our server is supposed to have attacked are not included - so I started a Wireshark session checking for suspicious traffic over TCP/HTTP while trying to locate possible security holes on the system. (Let me note that the machine runs a Debian OS.) Here is an example of such a request:

        Source: 89.74.188.233
        Destination: 12.34.56.78 // my ip
        Protocol: HTTP
        Info: GET 'http://www.media.apniworld.com/image.php?type=hv' HTTP/1.0

    I manually blacklisted this host (as well as some other ones), blocking them with iptables, but I can't keep doing this manually all day long. I'm looking for an automated way to block such IPs based on:

    - Statistical analysis, pattern recognition or other AI-based analysis (though I'm reluctant to trust such a solution, if one exists)
    - Public blacklists
    - DNSBLs

    Using a DNSBL I actually found out that 89.74.188.233 is blacklisted. However, other IPs which are strongly suspicious, like 93.199.112.126 (i.e. http://www.pornstarnetwork.com/account/signin), unfortunately were not blacklisted! What I would like to do is automatically connect my firewall to a DNSBL (or some other blacklist database) and block all traffic towards blacklisted IPs, or somehow have my local blacklist automatically updated.
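
    For the DNSBL route, a minimal cron-able sketch, assuming Debian's host tool; the log path and the Spamhaus zone are examples (check a list's usage policy before querying it in volume), and fail2ban is the more polished off-the-shelf alternative:

        #!/bin/sh
        # Hedged sketch: check recent client IPs against a DNSBL and drop listed ones.
        for ip in $(awk '{print $1}' /var/log/apache2/access.log | sort -u); do
            # DNSBLs are queried with the octets reversed: 1.2.3.4 -> 4.3.2.1.zone
            rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1}')
            if host "$rev.zen.spamhaus.org" >/dev/null 2>&1; then
                iptables -I INPUT -s "$ip" -j DROP
            fi
        done

    The same pattern works for outbound blocking by inserting into the OUTPUT chain with -d instead of -s.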

  • Apache stops responding to http requests -- https continues to work

    - by Apropos
    Okay. Very strange problem that I'm having here. I just recently updated to Apache 2.4.2 from 2.2.17, mostly to try to get name-based SSL VirtualHosts working (although they should have been working on 2.2.17). The server is Win2008 R2 (so x64 by definition) running PHP 5.4.3 and MySQL 5.1.40 (outdated, I know).

    When I launch the server, it initially works fine: it responds to all requests, VirtualHosts all in order. However, after an uncertain amount of time (it appears to take only a few minutes for the most part, but sometimes takes hours), it stops responding to regular HTTP requests (on any VirtualHost). HTTPS continues to work. No errors in the log, and nothing in the access logs when I attempt to connect. I'm having a hard time finding the source of this error given its intermittent nature. When I removed all SSL-based VirtualHosts it seemingly increased stability (still responding to HTTP requests twelve hours later) - this could be mere coincidence, though.

    The entirety of the SSL VirtualHost is as follows, should there happen to be a problem with it:

        <VirtualHost *:443>
            DocumentRoot "C:\Server\www\virtualhosts\mysite.net"
            ErrorLog logs/ssl.mysite.net-error_log
            CustomLog logs/ssl.mysite.net-access_log common env=!dontlog
            SSLEngine on
            SSLProtocol all -SSLv2
            SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM
            SSLCertificateFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.crt
            SSLCertificateKeyFile C:/Server/bin/apache/apache2.4.2/conf/ssl/server.key
            SSLCertificateChainFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/sub.class1.server.ca.pem
            SSLCACertificateFile C:/Server/bin/apache/Apache2.4.2/conf/ssl/ca.pem
        </VirtualHost>

    Any ideas what I'm missing?
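
    One commonly suggested workaround for intermittent hangs on Windows builds of Apache is to disable the AcceptEx/sendfile optimizations, which have a history of silently wedging a listener on some network stacks. A hedged sketch for httpd.conf - these are real 2.4 directives, but whether they cure this particular hang is an assumption:

        # Disable Windows-specific socket optimizations that can wedge a listener
        AcceptFilter http none
        AcceptFilter https none
        EnableSendfile off
        EnableMMAP off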

  • Fixed Resource Monitor Graph Scale in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of varying them based on the data? It seems to me that the utility of a graph is the quick overview you get at a glance from the values it shows. So if I look at the CPU graph and the line is up near the top, I can know immediately that something is using all my CPU and go investigate what; I don't really care if the CPU is jumping between 0.01% and 2%. Likewise, if the network usage monitor is up near the top, I will know that all my bandwidth is being used up, and go figure out what is using it. But the way things are now, the graphs are meaningless because the scales constantly shift: if you look at the network usage graph, one second it might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums?

  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and a work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago, however, this stopped working and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this?

    More info: I just tried direct Remote Desktop between the work PC and the home PC and it works fine - so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely: with Remote Desktop the person at the other end gets booted off and can't see, while with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working.

    Experimenting with this some more: the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that 3389 is open on the firewall on the office router, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if the client and server are both behind non-UPnP/NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled, but my home one does. I've also scoured the event logs on both ends - nothing noteworthy (unless I'm looking in the wrong spot).

    Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. The question is only about Remote Assistance, no need to propose solutions based on other programs. [/edit by Gnoupi]
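
    A couple of hedged checks with Windows built-ins (the address is a placeholder) can at least separate a transport problem from a Messenger/invitation problem, since RA rides the same port as RDP:

        :: On the PC being helped: is anything listening on the RA/RDP port?
        netstat -an | find "3389"

        :: From the helper PC: can that port actually be reached?
        telnet 192.168.0.10 3389

    If the transport checks out, the suspect is the invitation itself: the RA ticket embeds the addresses the helper should connect to, and without UPnP on the office side it can embed an unreachable private address - which would match RDP working while RA fails.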

  • Getting a Dell E6320 with i7 to work with 3 monitors at 1920x1080 each

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) with Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home gives me an additional boost. The docking station will allow me to connect only 2 monitors, so I'm looking at the following other options:

    - A Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition. But reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected; it even gets worse, since it seems the monitors would have to be able to work at 50Hz. Also, I'm not sure, but it seems that Matrox doesn't present the monitors as 3 separate monitors but simply as one big space (which is a bit the opposite of what I need).
    - Buy 2 or maybe just 1 USB-based monitor, but that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend. Also, I found only a couple of models, and most of them require USB 3.0 and no other cables to plug in (nice but costly - I couldn't find a decent monitor that uses USB only for the signal and has power connected normally). And the docking station has only one USB 3.0 port - can I use a hub and still get it to work?
    - Find some converters from digital to USB (I think DisplayLink makes some?).
    - Buy a different laptop - but what kind? I need it to be an i7, small (13"), fast and lightweight, and at the same time it needs a docking station that I can use at home to connect 3 external monitors.
    - Some other suggested solution...

    Edit: I need 3 monitors for work, in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe some movies once in a while.

  • Ubuntu 12.04 suddenly cannot connect to WPA2/WPA Personal protected connection. Windows 7 can

    - by d4ryl3
    I have a laptop with Windows 7 and Ubuntu 12.04, and a Cisco E1200 router. When I set the router up, it created 2 SSIDs; let's name them MyConnection (WPA/WPA2 Personal) and MyConnection-Guest (no authentication, guest password entered via web browser). I had no problem connecting to MyConnection before, in both Windows 7 and Ubuntu. But now I can't access MyConnection on Ubuntu: it just says "connecting..." then disconnects after a while. I am able to access the internet (on Ubuntu) when I connect to MyConnection-Guest. MAC filtering is off (and even when it's on, the laptop's MAC address is in the white list). Any idea why I'm unable to connect to MyConnection in Ubuntu? Thanks.

    Update: My Ubuntu installation can connect to ANY WiFi connection (WPA/WEP/no auth), except for MyConnection.

    Update 2: This is what "the not so easy way" (running wpa_supplicant by hand) returned:

        Initializing interface 'eth1' conf '/etc/wpa_supplicant.conf' driver 'default' ctrl_interface 'N/A' bridge 'N/A'
        Configuration file '/etc/wpa_supplicant.conf' -> '/etc/wpa_supplicant.conf'
        Reading configuration file '/etc/wpa_supplicant.conf'
        Priority group 0
          id=0 ssid='MyConnection'
          id=1 ssid='MyConnection'
          id=2 ssid='MyConnection'
          id=3 ssid='MyConnection'
        WEXT: cfg80211-based driver detected
        SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf
          capabilities: key_mgmt 0xf enc 0xf flags 0x0
        netlink: Operstate: linkmode=1, operstate=5
        Own MAC address: xx:xx:xx:xx:xx:xx
        wpa_driver_wext_set_key: alg=0 key_idx=0 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=1 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=2 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=3 set_tx=0 seq_len=0 key_len=0
        wpa_driver_wext_set_key: alg=0 key_idx=4 set_tx=0 seq_len=0 key_len=0
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        Driver did not support SIOCSIWENCODEEXT
        wpa_driver_wext_set_key: alg=0 key_idx=5 set_tx=0 seq_len=0 key_len=0
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        Driver did not support SIOCSIWENCODEEXT
        wpa_driver_wext_set_countermeasures
        RSN: flushing PMKID list in the driver
        Setting scan request: 0 sec 100000 usec
        WPS: UUID based on MAC address - hexdump(len=16): 16 3b d8 47 9e 24 50 89 96 16 6d 66 35 f3 58 37
        EAPOL: SUPP_PAE entering state DISCONNECTED
        EAPOL: Supplicant port status: Unauthorized
        EAPOL: KEY_RX entering state NO_KEY_RECEIVE
        EAPOL: SUPP_BE entering state INITIALIZE
        EAP: EAP entering state DISABLED
        EAPOL: Supplicant port status: Unauthorized
        EAPOL: Supplicant port status: Unauthorized
        Added interface eth1
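
    Two hedged observations from that output: the four duplicate network blocks suggest stale entries in the config, and the SIOCSIWENCODEEXT errors suggest the legacy wext driver backend is being used. A minimal test - a single WPA2-PSK block, run against the newer nl80211 backend (interface name taken from the log; the passphrase is a placeholder):

        # /etc/wpa_supplicant.conf -- single network block, WPA/WPA2-PSK
        network={
            ssid="MyConnection"
            psk="your-passphrase-here"
            key_mgmt=WPA-PSK
        }

        # run in the foreground with debug output, using the nl80211 backend
        # (fall back to -Dwext if the driver refuses it)
        sudo wpa_supplicant -Dnl80211 -ieth1 -c/etc/wpa_supplicant.conf -d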

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer, which then goes to the competitor.

    The administration of such a matrix is a nightmare because (1) the matrix is based on department and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!).

    Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because it does not directly imply a wrong motive.

    Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW, we use MediaWiki with Lockdown to exclude some administrative staff.
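
    Since Lockdown is already in place, one middle ground is to restrict a small set of namespaces rather than build a page-level matrix: confidential material goes into a locked namespace, everything else stays open. A hedged LocalSettings.php sketch - the namespace number, title and group name are examples, and note Lockdown works per namespace, not per page:

        # LocalSettings.php -- restrict one custom namespace to a trusted group
        require_once "$IP/extensions/Lockdown/Lockdown.php";
        $wgExtraNamespaces[100] = 'Confidential';
        $wgNamespacePermissionLockdown[100]['read'] = array( 'trusted' );
        $wgNamespacePermissionLockdown[100]['edit'] = array( 'trusted' );

    This keeps the low-threshold USP for the bulk of the wiki while drawing one clear, auditable line around the sensitive material.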

  • SSH attack CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny entries for ssh). So I created new instances, set up IP-based ssh access that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to some other port. But two days back I found I was not able to log in to the server; when I tried on port 22 the ssh got connected, and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file. I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and history and found nothing informative. I have PHP, Ruby on Rails, Java and Wordpress apps running on these servers. This time I did a chkrootkit scan and found nothing, renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances on CentOS (5.2, 5.4); I have instances on Debian as well and those are working fine. Is this a CentOS/RightScale issue? Guys, what security measures should I take to prevent this? Please help, this is very critical. Thanks
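
    A file that even root cannot modify is the classic signature of an intruder setting the ext3 immutable attribute to protect their changes. A few hedged commands to confirm (standard e2fsprogs/rpm tools, paths as on CentOS 5):

        # look for the 'i' (immutable) flag an intruder may have set
        lsattr /etc/ssh/sshd_config /usr/sbin/sshd

        # clear it so root can edit the file again
        chattr -i /etc/ssh/sshd_config

        # verify installed package files (size/checksum/permissions) against the RPM database
        rpm -Va openssh-server

    That said, once root has been compromised, the only safe course is to rebuild the instance from a clean AMI and restore data from known-good backups; a cleaned-in-place system cannot be trusted.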

  • I cannot access Windows Update at all

    - by Cardinal fang
    I have been unable to access the Windows Update site for a couple of weeks now. I just get a message saying "Internet Explorer cannot display the webpage" and saying I have connection problems. The same thing happens with any other Microsoft site I try to access, yet I can access every other website I've surfed to. Automatic Updates also do not work.

    I've tried Googling the problem, and based on what other sites have suggested I have cleared my cache and temp files, scanned my hard drive with my antivirus in case I have a virus (nada), and tried turning off my firewall and antivirus (I run ZoneAlarm). I've downloaded Spybot and scanned my drive with that in case something was missed by ZoneAlarm (again nada). Based on suggestions from the smart cookies on the Bad Science forum, I've used nslookup to check my name resolution isn't wonky (I got all the info they said I should get). I've also tried navigating there directly using the IP address I was given (nope). I normally access the internet through a 3 mobile broadband connection, but have also tried connecting using a mate's wi-fi connection in case it was something on my mobile modem interfering.

    I run Windows XP SP3 with Internet Explorer 7 and ZoneAlarm Internet Security Suite as my antivirus/firewall. Any suggestions?
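
    Malware that survives a basic scan often blocks Microsoft domains specifically, usually via the hosts file or a corrupted Winsock LSP chain - both worth checking, since everything else loads fine. Hedged commands for XP, using only built-ins:

        :: any *.microsoft.com or *.windowsupdate.com entries here are a red flag
        notepad %SystemRoot%\system32\drivers\etc\hosts

        :: reset the Winsock catalog in case a leftover LSP is filtering traffic (reboot afterwards)
        netsh winsock reset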

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my Wordpress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put straight in there. After a while it was changed to upload the files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (entries starting with + are folders), like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on its date, each file moves to \wp-content\uploads\YEAR\MONTH). Now, my questions are:

    - Where do I write and execute a script to do the moving (I don't have full access to the server, just to a cPanel and to the Wordpress admin page)?
    - What must be updated so that the links in posts that reference the unstructured files point to the new location of those files?
    - Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads?

    PS: Moving the files/updating the database manually is not an option :)
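
    cPanel hosts usually offer a cron job or a terminal for running scripts; failing that, the same logic can live in a temporary PHP file executed once. A hedged sketch in shell, assuming GNU date and that the files' modification times still reflect their upload dates (the account path is a placeholder):

        #!/bin/sh
        # Move loose files in uploads/ into YEAR/MONTH subfolders by modification date.
        cd /home/youraccount/public_html/wp-content/uploads || exit 1
        for f in *; do
            [ -f "$f" ] || continue            # skip the existing year folders
            d=$(date -r "$f" +%Y/%m)           # e.g. 2009/12
            mkdir -p "$d"
            mv "$f" "$d/"
        done

    For the second question, the references live in the wp_posts table (post_content and guid) and in wp_postmeta; they can be rewritten with SQL REPLACE() or a search-and-replace plugin - take a database backup first.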

  • Need advice for choosing software/hardware for virtualization.

    - by Anatoly
    Currently we have these servers:

    - Windows SBS 2003 Premium on an IBM X266, double Xeon F43, 2GB RAM: DC, Exchange (70 users), MSSQL.
    - Windows 2003 R2 32-bit on an IBM x3400 with double Xeon E5310 and 4GB RAM: terminal server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
    - Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system and a phpBB3 forum for internal use.

    Recently the number of concurrent users on the terminal server passed 40 in rush hours and it gets stuck frequently, therefore we need an upgrade. I am thinking about transferring all physical servers to virtual servers based on a cluster of 2 physical servers, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Windows XP/7 workstations (office, ERP etc.), and there is a small probability of adding Asterisk/HylaFAX for 100 users (if that is possible on the same VM). We also need NAS storage for 2-3TB.

    What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose - Acronis or something else? Thank you in advance.

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. Often a lot of this previous work can be reused for another future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look, but otherwise archaic and very, very inefficient.

    So I am looking for software that will allow me to do the following:

    - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya.
    - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above.
    - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone).
    - A nice interface that non-technically-savvy folks can warm up to ;)
    - A desktop app would be handy, so that people don't always have to click upload or something.

    A tree-based system is inefficient in this case, because content is usually linked across branches and also people might not quite agree on one format of a tree; tagging works around this very nicely.

    What I have considered so far:

    - Evernote (for its more professional look)
    - Springpad (for its versatility with content)
    - Mendeley (a research manager, and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

  • Configuring SASL support in libmemcached

    - by John Keyes
    I'm trying to build libmemcached with SASL support on OS X Mountain Lion. I have built memcached (1.4.15) with SASL support:

        $ memcached -S -vv
        Initialized SASL.
        slab class   1: chunk size        96 perslab   10922
        ...
        slab class  42: chunk size   1048576 perslab       1
        <17 server listening (binary)
        <18 server listening (binary)
        <19 send buffer was 9216, now 3728270
        <20 send buffer was 9216, now 3728270
        <19 server listening (udp)
        <20 server listening (udp)
        ...

    I am trying to build libmemcached with SASL support too. I have tried the following:

        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl=/usr/local/bin/memcached
        ...
        $ ./configure --prefix=/usr/local \
            --with-memcached-sasl="/usr/local/bin/memcached -S"
        ...

    But the resulting configuration summary is the same for both:

        Configuration summary for libmemcached version 1.0.11
          * Installation prefix:    /usr/local
          * System type:            apple-darwin12.2.0
          * Host CPU:               x86_64
          * C Compiler:             i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C Flags:                -O2 -Werror -Wall -Wextra -std=c99 -Wbad-function-cast -Wmissing-prototypes -Wnested-externs -Woverride-init
          * C++ Compiler:           i686-apple-darwin11-llvm-g++-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)
          * C++ Flags:              -O2 -Werror -Wall -Wextra -Wpragmas -D_FORTIFY_SOURCE=2 -Waddress -Wchar-subscripts -Wcomment -Wctor-dtor-privacy -Wfloat-equal -Wformat=2 -Wmissing-field-initializers -Wmissing-noreturn -Wnon-virtual-dtor -Wnormalized=id -Woverloaded-virtual -Wpointer-arith -Wredundant-decls -Wshadow -Wshorten-64-to-32 -Wsign-compare -Wstrict-overflow=1 -Wswitch-enum -Wundef -Wunused-variable -Wwrite-strings -fwrapv -ggdb
          * CPP Flags:              -I/usr/local/include
          * Assertions enabled:     no
          * Debug enabled:          no
          * Warnings as failure:    no
          * SASL support:

    Am I doing something incorrectly? Thanks.
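
    A hedged suggestion: in the libmemcached 1.0.x build system, --with-memcached-sasl only tells the test suite where a SASL-enabled memcached binary lives; the library's own SASL support is a separate switch. Assuming the Cyrus SASL headers are installed, something like:

        $ ./configure --prefix=/usr/local --enable-sasl \
            --with-memcached-sasl=/usr/local/bin/memcached

    If the summary still shows an empty "SASL support" line, config.log should say whether sasl/sasl.h and -lsasl2 were found.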

  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server and am starting to have a little more to bite off than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, with more flexible access than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table, based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume that I can pipe just those into the MySQL socket, versus ALL rsyslog input going into MySQL...

    Could someone please direct me to a resource that can walk me through the process of setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction; I've not really done a tremendous amount of work with syslog daemons previously in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this handier for developers.
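
    A hedged sketch of the usual pattern on Ubuntu: install the rsyslog-mysql package (which creates the Syslog database and SystemEvents table), then add a property-based filter so only messages carrying the app's identifier go to MySQL. The hex token and credentials below are placeholders:

        # /etc/rsyslog.d/60-heroku-mysql.conf
        $ModLoad ommysql

        # send only messages containing this app's token to MySQL...
        :msg, contains, "d.01234567-89ab-cdef-0123-456789abcdef" :ommysql:localhost,Syslog,rsyslog,password

        # ...and stop processing them so they don't also land in the flat files
        & ~

    For review tools, LogAnalyzer (from the rsyslog project) is a common web front-end for browsing the resulting table.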

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    - Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
    - Several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSD hard drives based on SLC flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB).

    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
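
    For anyone wanting to reproduce such numbers before buying, a hedged fio sketch for measuring random 4K write IOPS on a candidate drive (destructive to the target device, so point it at a scratch disk; the device path is an example):

        # random 4k writes, queue depth 32, 60 seconds -- reports IOPS at the end
        fio --name=randwrite --filename=/dev/sdb --direct=1 --rw=randwrite \
            --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --group_reporting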

  • What to do when a device has no driver for Windows 7 but has Vista/XP drivers

    - by Mehper C. Palavuzlar
    This has always been a bothersome matter for me. Some devices (printers, scanners, etc.) have drivers for older versions of Windows (Vista, XP, 2000, NT) but no driver for Windows 7. What are my chances of installing such devices on Windows 7?

    Example case: I have a Sharp printer & scanner (Sharp AR-122E N) which I used with my old Windows XP based PC. Now I want to install it on a Windows 7 x64 based PC, but Windows 7 cannot load its driver. I used the original driver CD, but when I run the setup.exe (which is included in AR122EN111.exe, 6713KB), it says: Cannot install driver on this operating system. Supported operating systems are: Windows 2000, XP, Vista. I tried to install the driver using compatibility settings - both Windows Vista and Windows XP SP3 - but to no avail; the setup gave the same error. I also googled for a Windows 7 driver for "Sharp AR-122E N", but it only listed the original driver that I tried. The official Sharp site does not even list a driver for this product. In the past, the compatibility-settings workaround did work for some devices, but this time it failed. What else can I do to overcome this problem?
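
    One hedged avenue when a vendor's setup.exe refuses to run: if the CD (or the extracted AR122EN111.exe) contains raw .inf driver files, they can sometimes be installed directly into the driver store, bypassing the installer's OS check entirely. The path below is an example, and note an XP x86 driver still won't load on x64 Windows - this only helps if a Vista/XP x64 .inf exists:

        :: stage and install every driver package found under the extracted folder
        pnputil -i -a C:\Temp\AR122EN111\*.inf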

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello and thanks for looking.

    Background: I am currently wrapping up a development contract, and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume that I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app, and the future ones they wish to install, are all MVC based, which intercepts and rewrites URLs. I assume that this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried: creating a new site on the server in its own app pool, then setting the binding for that site to each of the following:

    - MYAPP.SERVERNAME.COMPANYNAME.COM
    - MYAPP
    - MYAPP.SERVERNAME
    - MYAPP.SERVERNAME.COM
    - MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here?

    Thanks, Matt
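
    A hedged observation: a host-header binding only takes effect if the name also resolves in DNS - IIS never sees a request for MYAPP.SERVERNAME.COMPANYNAME.COM unless that record exists. The usual recipe is to add an A or CNAME record on the company's DNS server pointing at this server, then bind the site to that exact host name (the host name is the one from the question; the site name is an example):

        :: bind the new site to the fully qualified host name on port 80
        %windir%\system32\inetsrv\appcmd set site /site.name:"MyApp" /+bindings.[protocol='http',bindingInformation='*:80:myapp.servername.companyname.com']

    Once each site answers on its own host name, the MVC routing runs within that host and needs no special handling.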

  • Laptop authentication/logon via accelerometer tilt, flip, and twist

    - by wonsungi
    Looking for an application/technology again: a number of years ago, I read about a novel way to authenticate and log on to a laptop. The user simply had to hold the laptop in the air and execute a simple series of tilts and flips. By logging accelerometer data, this creates a unique signature for the user: even if an attacker watched and repeated the exact same motions, the attacker could not replicate the user's movements closely enough. I am looking for information about this technology again, but I can't find anything. It may have been an actual feature on a laptop, or it may have just been a research project; I think I read about it in a magazine like Wired. Does anyone have more information about authentication via unique accelerometer signatures?

    Here are the closest articles I have been able to find:

    - Knock-based commands for your Linux laptop
    - Shake Well Before Use: Authentication Based on Accelerometer Data [PDF]
    - Inferring Identity using Accelerometers in Television Remote Controls
    - User Evaluation of Lightweight User Authentication with a Single Tri-Axis Accelerometer
    - Identifying Users of Portable Devices from Gait Pattern with Accelerometers [PDF]
    - 3D Signature Biometrics Using Curvature Moments [PDF]
    - MoViSign: A novel authentication mechanism using mobile virtual signatures

  • AWS: Multi-region setup using single RDS instance

    - by Ion
    I'm trying to scale our web application (PHP, MySQL, memcache) in a multi-region scheme. Currently we are using a setup with two EC2 instances behind an ELB and an RDS instance, all of them in the US-EAST (Virginia) region. We would like to have a presence in the EU (Ireland) region as well. This means at least a new EC2 instance there (identical to the others, serving the same application). I have copied the desired AMI, set up the new instance, set up an identical ELB configuration (required for SSL termination) and configured latency-based routing in Route 53, and it works as intended.

    But clients from the EU have speed problems. This is due to the fact that the EU EC2 instances connect to the US-based RDS instance, and as far as I know Amazon has not yet enabled cross-region RDS replication. Do you have any suggestions on how to speed up the whole setup while still using the single RDS instance? Also, any ideas in general on how to scale things up? Ideally we would like to continue using the RDS technology for various reasons. Nevertheless, I am open to suggestions (I guess the next idea would be to host our own MySQL servers).
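
    If hosting your own replica is acceptable, one hedged option is a self-managed MySQL slave on an EU EC2 instance that serves the EU application's reads, with writes still going to the US RDS master. This assumes your RDS engine version exposes binary logs to external replicas, which is worth verifying first; host names and credentials below are placeholders:

        -- on the EU MySQL slave, point replication at the RDS master
        CHANGE MASTER TO
            MASTER_HOST='mydb.xxxxxxxx.us-east-1.rds.amazonaws.com',
            MASTER_USER='repl_user',
            MASTER_PASSWORD='repl_password',
            MASTER_LOG_FILE='mysql-bin-changelog.000001',
            MASTER_LOG_POS=4;
        START SLAVE;

    Short of that, aggressive memcache usage in the EU region is the main lever, since every uncached query pays the transatlantic round trip.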

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :)

    Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system - Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience/users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
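
    As a hedged starting point for the KVM route on an Ubuntu host (package names from Ubuntu releases of that era; the guest name, sizes and ISO path are examples):

        # does the CPU have VT-x/AMD-V? (0 means KVM's accelerated mode is out)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # install KVM/libvirt and create a CentOS 5.x guest
        sudo apt-get install qemu-kvm libvirt-bin virtinst
        sudo virt-install --name centos5 --ram 2048 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/centos5.img,size=20 \
            --cdrom ~/isos/CentOS-5.4-x86_64-bin-DVD.iso --os-variant rhel5

    The resulting disk image plus the domain definition (virsh dumpxml centos5) is what a custom internal distribution would bundle, so each developer's machine only needs the KVM packages and a copy of the image.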

  • Route return traffic to correct gateway depending on service

    - by Marnix van Valen
    On my office network I have two internet connections and one CentOS server running a website (HTTPS on port 443). The website should be publicly accessible through the public IP of the first internet connection (ISP-1). The other internet connection, ISP-2, is the default gateway on the network. Both internet connections have routers (the household kind) with NAT, SPI firewalls, etc. The router on ISP-2 is a Netgear WNDR3700 (aka N600) with the original firmware.

    The problem is that the website is unreachable. It looks like incoming traffic on ISP-1 reaches the server, but the returning traffic is routed through ISP-2, effectively making the site unreachable. As far as I can tell I can't do port-based routing on the WNDR3700. What are my options to make this work? I've been looking at implementing an iptables/routing-based solution on the server itself, but haven't been able to make that work.

    Update: Note that the server has one network interface connecting it to both routers.
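
    On the server itself this is usually solved with policy routing: mark connections that arrived via the ISP-1 router, restore the mark on the reply packets, and send marked traffic out through a routing table whose default gateway is the ISP-1 router. A hedged sketch, assuming 192.168.1.1 is the ISP-1 router's LAN address and 00:11:22:33:44:55 its MAC (both placeholders); the MAC match distinguishes the two routers, since both sit on the same interface:

        # one-time: a second routing table whose default route is the ISP-1 router
        echo "100 isp1" >> /etc/iproute2/rt_tables
        ip route add default via 192.168.1.1 table isp1
        ip rule add fwmark 1 table isp1
        ip route flush cache

        # mark HTTPS connections that came in through the ISP-1 router,
        # and restore that mark on the server's reply packets
        iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 443 \
            -m mac --mac-source 00:11:22:33:44:55 -j CONNMARK --set-mark 1
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark

    This fwmark-plus-CONNMARK pattern is the standard answer for reply routing on multi-homed hosts.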

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com - what I want it to do is rewrite http:// domain.com to force it to https:// domain.com, which it does well. I want to have name-based vhosts for the one IP with the following redirects (I'm breaking up the domain names with a space because otherwise serverfault recognises them as links):

    - http:// domain.com -- https:// domain.com
    - http:// staging.domain.com -- https:// staging.domain.com
    - http:// test.domain.com -- https:// test.domain.com
    - http:// beta.domain.com -- https:// beta.domain.com

    domain.com redirects to https:// domain.com, but staging.domain.com doesn't, although I can access https:// staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com; I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com, which points to the domain's IP.

    Is there a particular way I should have the virtualhosts configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts, but I can access both https:// domain.com and https:// staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
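
    The usual culprit when one host rewrites and another silently doesn't is that the non-working name is being answered by a different VirtualHost (often the first one defined, which catches unmatched names) whose DocumentRoot holds a different .htaccess, or which lacks AllowOverride. A hedged, host-agnostic .htaccess that forces HTTPS for whichever name was requested, so the identical file works in every vhost:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

    If staging still isn't rewritten, apachectl -S shows which vhost actually answers each ServerName.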

  • Can My Personal GMail Query A Remote LDAP Server?

    - by Maarx
    I have a personal GMail account, from which I frequently send e-mail to a great many users of a specific business. The corporation has been kind enough to provide me with the credentials to access their LDAP server, with which I would like my GMail web client to be able to auto-complete partial addresses or names for which that LDAP server has an entry. Is there any way I can get a personal GMail account (or its corresponding entire Google account) to incorporate an LDAP server into its Contacts?

    If I cannot get it to query dynamically and on demand, is there an idiot-proof way (assuming the client permits, which they may not) to query the LDAP server for its entire database, save it, and bulk-import it to GMail? Perhaps even something I could set to repeat periodically (weekly, perhaps), without human interaction? If I did the latter, I assume it would be trivial to import all of these contacts under a single category that could be easily manipulated from within the GMail web client.

    I have been a staunch user and supporter of the GMail web client since its inception, but this one is kind of a deal-breaker for me. If it's impossible, what do you suggest I do?
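
    For the periodic-export fallback, a hedged sketch that dumps names and addresses with ldapsearch and reshapes them into a two-column CSV of the kind GMail's contact importer accepts (server, bind DN, base DN and filter are all placeholders; run it from cron for the weekly refresh):

        #!/bin/sh
        # Export the corporate directory to a GMail-importable CSV.
        echo "Name,E-mail Address" > contacts.csv
        ldapsearch -x -H ldap://ldap.example.com -D "cn=me,dc=example,dc=com" -w 'secret' \
            -b "ou=people,dc=example,dc=com" "(mail=*)" cn mail |
        awk '/^cn: /{n=substr($0,5)} /^mail: /{print n "," substr($0,7)}' >> contacts.csv

    Importing the file under a dedicated label each week would give the single easily-manipulated category described above, though GMail has no built-in way to automate the import step itself.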

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly today I am unable to resolve common domains like serverfault.com and facebook.com, while other domains like google.com and cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS2011 Standard domain; the only DNS server is the SBS2011 server. The behaviour is identical on every client PC I have tried: the same domains work, and the same domains fail. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the ones that do work. When I add Google's public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for my client PC, but queries from the SBS server directly or from other client PCs remain broken (so I don't believe it's a firewall issue).

    My main question is: how can I see which servers the SBS2011 server queries when it doesn't know about a domain? There is nothing in our firewall logs to say any DNS-based packets were blocked, but I also wanted to query, by IP/FQDN, the servers the SBS server was likely to contact to find out about facebook.com, for example.

    Update 23/05/2012: It appears DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were not loading last night, as well as the websites that were working. I haven't changed anything overnight, so it appears that there was some kind of temporary glitch, but I can't understand what would have caused it on the network.
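
    To answer the "which servers does it ask" question, a few hedged checks on the SBS box itself with the standard Windows DNS tools - whether the server uses forwarders or walks the root hints determines exactly whom it queries:

        :: dump the DNS server configuration; look for the forwarders section
        dnscmd /info

        :: watch the full resolution of a failing name against the local server
        nslookup -debug facebook.com 127.0.0.1

        :: flush the server-side cache in case a bad negative answer was cached
        dnscmd /clearcache

    For deeper visibility, the DNS Manager console can enable debug packet logging, which records every upstream query the server sends.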
