Search Results

Search found 4618 results on 185 pages for 'websites'.

  • How does an ASP.NET programmer go from working on/developing existing sites, to creating one from scratch?

    - by SLC
    I've been an ASP.NET developer for some time, always working on existing ASP.NET pages: modifying functionality, adding features, tweaking things, and so on. I have never built a site up from scratch. I've read books on ASP.NET, and they generally talk you through the various features with a mock-up site, but it's always very basic and they jump straight in.

    The time has come, however, to write a site from scratch for a client, and I've never done this before. There are design considerations, but like a lot of ASP.NET sites the basic idea is: you have a site where users can log in and save some information such as their name, password and address. The site has some functionality beyond that, but I would wager this is the basic design of the majority of business-related ASP.NET websites.

    I know how to program in ASP.NET on an existing site, but I don't know how to design my own site properly to meet the criteria above. I guess the main worry is security. I don't know the best way to handle a simple log-in system that stores user information like names and passwords. I understand there are a few approaches to this, but the catch with this project is that it has to be absolutely bulletproof: maximum security, with all the good security practices in place. I'm not asking what they are, but I am asking where to begin.

    What should my first steps be after I do File > New Project? Where can I look for information about setting up a secure ASP.NET website? I'll figure out the content and page layout later; it's the framework that is the big thing. Any and all advice would be welcome. I really want to get my first from-scratch project right from the beginning. Just to confuse things, it's possible I will be using MVC; I am not sure if this has any impact.
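
    For what it's worth, ASP.NET ships with forms authentication and the Membership provider, which take care of password hashing and the login plumbing. A minimal sketch of the relevant web.config wiring is below; the login URL and connection-string name are placeholders for illustration, not values from the question:

        <system.web>
          <authentication mode="Forms">
            <!-- requireSSL keeps the auth cookie off plain HTTP -->
            <forms loginUrl="~/Login.aspx" requireSSL="true" timeout="30" />
          </authentication>
          <membership defaultProvider="SqlProvider">
            <providers>
              <add name="SqlProvider"
                   type="System.Web.Security.SqlMembershipProvider"
                   connectionStringName="MembershipDb"
                   passwordFormat="Hashed"
                   enablePasswordRetrieval="false" />
            </providers>
          </membership>
        </system.web>

    Starting from the built-in provider, rather than hand-rolling password storage, is usually the first step toward the "all good practices" goal described above.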

  • How does an application or website find your IP?

    - by johnkills
    I think there are only two ways an application or a server could get your IP. If it is an application (Java/Flash), it could check your network settings locally and send your IP back to the server; then the server would know. The other way is that the server could analyze the packet headers and find your IP information there.

    But suppose I wanted to stop this. If the application was reading my IP information locally, I could block that packet or change its contents so the website would be confused about my IP. If the server was analyzing packet headers, and I knew which packets it was analyzing (since it won't analyze every packet), I could stop sending those packets.

    Example: when a website checks your IP, how does it do it? If you are not downloading any application, the first scenario is excluded. Then the only possibility is that it is analyzing packet headers, but which packets? This is more than one question, but if anyone knows something about this, I would like to know too. :) Thanks
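
    In practice a web server does not have to dig through raw packet captures: the client's source address arrives with every TCP connection, and server-side APIs simply expose it. A sketch of this in Java's Servlet API (X-Forwarded-For is the conventional proxy header, not something specific to any one site):

        import java.io.IOException;
        import javax.servlet.http.*;

        public class WhatIsMyIpServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // Address of the TCP peer -- set by the network stack, not by anything the client sends
                String ip = req.getRemoteAddr();
                // Proxies may preserve the original client address in a header
                String forwarded = req.getHeader("X-Forwarded-For");
                resp.setContentType("text/plain");
                resp.getWriter().println(forwarded != null ? forwarded : ip);
            }
        }

    Because the address comes from the connection itself, "stopping those packets" would mean not completing the TCP handshake at all, which is why only a proxy or VPN actually hides it.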

  • Why is no encoding set in the response by Tomcat? How can I deal with it?

    - by Dishayloo
    I recently had a problem with the encoding of websites generated by a servlet; it occurred when the servlets were deployed under Tomcat, but not under Jetty. I did a little research and simplified the problem to the following servlet:

        public class TestServlet extends HttpServlet implements Servlet {
            @Override
            public void service(HttpServletRequest request, HttpServletResponse response)
                    throws IOException {
                response.setContentType("text/plain");
                Writer output = response.getWriter();
                output.write("öäüÖÄÜß");
                output.flush();
                output.close();
            }
        }

    If I deploy this under Jetty and point the browser at it, it returns the expected result. The data is returned as ISO-8859-1, and if I take a look at the headers, Jetty returns:

        Content-Type: text/plain; charset=iso-8859-1

    The browser detects the encoding from this header. If I deploy the same servlet in Tomcat, the browser shows strange characters. Tomcat also returns the data as ISO-8859-1; the difference is that no header says so. So the browser has to guess the encoding, and that goes wrong.

    My question is: is this behaviour of Tomcat correct, or is it a bug? And if it is correct, how can I avoid the problem? Sure, I can always add response.setCharacterEncoding("UTF-8"); to the servlet, but that means I set a fixed encoding, which the browser might or might not understand. The problem is more relevant if the servlet is accessed not by a browser but by another service. So how should I deal with the problem in the most flexible way?
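
    One common way to keep the charset choice out of individual servlets is a filter that stamps every response before the writer is obtained. This is only a sketch of the usual pattern, and the UTF-8 choice is an assumption, not something mandated by Tomcat:

        import java.io.IOException;
        import javax.servlet.*;

        public class EncodingFilter implements Filter {
            public void init(FilterConfig cfg) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                    throws IOException, ServletException {
                // Must run before response.getWriter() is called anywhere downstream
                resp.setCharacterEncoding("UTF-8");
                chain.doFilter(req, resp);
            }
        }

    Declared once in web.xml, this gives every servlet a consistent charset header without hard-coding it into each service() method.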

  • Help - use a PHP browser, a proxy, file_get_contents, an included page, or something else?

    - by userlite
    Hi, I am trying to develop a web application for which I need to capture a specific user-driven event (such as a mouse double-click) occurring on a different website's page loaded through my website. What I want to do is:

    1. A user visits my website, hosted by me.
    2. There, the user types in any website URL (e.g. http://www.example.com).
    3. That URL's page gets loaded as-is.
    4. When the user double-clicks over any link or image on that page, a popup/side panel is displayed with content related to that particular image or link.

    I can do this with a combination of PHP's file_get_contents() (or an include) and a JavaScript dblclick handler. However, when the user clicks on any link or submits a form, control goes to that other website, where I cannot show the side panel. I might be able to handle the links by proxifying them when the user clicks on one of them. How do I handle form submission and the other cases? I could use a full-featured proxy, but that would be too heavy just for the purpose of capturing the event.

    My question is: is there a way to write some kind of light PHP script that sits on my website, loads other websites' contents as-is, but lets me capture the mouse dblclick event to show related content in the side panel? I have already searched the internet but could not find anything. Any help is really appreciated. Thanks.

  • Where do you use Java code? Trouble sending .jar files

    - by leigero
    Apologies for the generic nature of this question. I have been learning Java for a few months now, and I've created a few simple projects with some functionality. I wanted to send the projects to a friend, and I'm running into countless errors and struggles. Passing a .jar file causes "Main class not found" errors when they try to open it. I tried using third-party software to wrap the .jar files into an .exe file, and the same errors persist.

    Beyond that, I'm convinced that passing around .jar files wrapped into .exe files via third-party software is NOT how Java was intended to be used. I've read two books on Java, and they all talk about the structure of the language, but I'm confused about WHERE I'm supposed to be using this code, because it has become painfully obvious that it is NOT intended to be passed around in file format. Is this a server programming language, used mainly on the back end of websites? I'm not sure where one would use code written in Java.
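
    For the record, double-clicking a .jar only works if the archive's manifest names an entry point; "Main class not found" usually means it doesn't. A sketch of building a runnable jar from compiled classes (the class and directory names here are made up for illustration):

        # compile into a classes/ directory
        javac -d classes src/com/example/Main.java

        # 'e' records com.example.Main as the Main-Class in the manifest
        jar cfe app.jar com.example.Main -C classes .

        # the recipient then runs it with
        java -jar app.jar

    The recipient also still needs a Java runtime installed, which is the other common reason a mailed .jar fails to open.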

  • Should WordPress be used to create a real estate listing site?

    - by John
    I have a real estate agent client who wants a website to list the properties he's selling. Although there are great third-party web apps out there that do this, he adamantly demands that I create a simple, custom website for him. I can do this quickly with a PHP framework like CodeIgniter, which comes with MVC, data access objects and data-binding controllers. The database would be straightforward:

    - t_page: generic content pages
    - t_property: one row for each property on the market, with fields like address, price, number of bedrooms, etc.

    However, the client has heard many great things about WordPress and strongly urges me to build his real estate site with it. I've only used WordPress to create blogs and relatively straightforward websites, so I don't know how effective it is as a real estate property content management system, or how well it lets users search for properties based on attributes such as number of bedrooms, square footage, or whether the basement is finished.

    So my question is: is it a good idea to build a real estate agent website with WordPress? Or should I try harder to convince him to build it with a web framework like CodeIgniter?

  • Ruby - RegEx problem, or maybe another solution altogether

    - by r3nrut
    OK, the problem I'm having is that I have a block of JavaScript I've successfully scraped out of a website's source, and now I have to sift through the JS to get the specific values I'm looking for. Below is the chunk I need to deal with. I need to find "flvFileName" and get all the file names listed; in this case that's trailer1, trailer2, trailer3.

    At first I tried using a regex to match the start and end tags and then match the file names and extract them to an array, but the problem is that there aren't always three videos in the list. There could be 0, 1, 2, 3, 4, etc., so a fixed match doesn't work. Any thoughts on a way to approach this that won't make me continue to abuse my laptop?

        ["", "\r\n", "\n", "\r\n function IgnoreEnter(e) {\r\n var code;\r\n if (!e) // IE\r\n {\r\n var e = window.event;\r\n }\r\n if (e.keyCode) {\r\n code = e.keyCode;\r\n }\r\n else if (e.which) // Firefox, Opera\r\n {\r\n code = e.which;\r\n }\r\n\r\n if (code == 13) {\r\n e.cancelBubble = true;\r\n e.returnValue = false;\r\n }\r\n }\r\n\r\n function ResetDefault() {\r\n __defaultFired = false;\r\n }\r\n", "", "\r\n// <![CDATA[\r\n$(doc).ready(function () { $('#VideoObject').flash({ swf: '/scinema/video.swf', height: 300, width: 480, hasVersion: 8, menu: false, wmode: 'transparent', bgcolor: '#000',flashvars: {flvFileName: 'trailer1,trailer2,trailer3', age: 'no', isForced: 'true'} }); });
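
    Rather than matching a variable number of names, one approach is to capture the whole quoted value after flvFileName and split it on commas, which naturally handles zero or more entries. A minimal Ruby sketch (scraped_source here stands for the scraped block above, held as one String):

        js = scraped_source  # the JS scraped from the page, as one String

        # capture everything between the quotes after flvFileName:
        value = js[/flvFileName:\s*'([^']*)'/, 1]

        names = value ? value.split(',') : []
        # => ["trailer1", "trailer2", "trailer3"]

    An empty or missing flvFileName then just yields an empty array instead of a failed match.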

  • Where is a small, simple CMS done in PHP that has no front end?

    - by user559469
    The keys are:

    - small and simple
    - PHP
    - MySQL
    - no front end

    By "no front end" I mean literally that I can control the look 100%. I just want a CMS on the back end to manage content (user login/security, uploading images, updating articles, etc.) that will not dictate in any way how the managed data is presented. Maybe it just keeps the info in a MySQL database (which I can query and extract from myself), or if it writes content, it does so in super-clean XHTML fragments or even just XML I can parse myself.

    I have looked at WordPress and don't like the code it generates, not to mention the sites look too "canned" (you can usually spot a WP site a mile away). Joomla and Drupal look more customizable, but in my opinion they are bloated now, and really I just want something lightweight and simple, for one-user mom-and-pop sites (no tiered publishing/approval systems and all that). I envision plugging this CMS into existing websites/web apps where most of the site is made and managed by me, but a few choice areas are managed by the site owner.

  • What's the difference between UI development and front-end development?

    - by Nick Lowman
    I'm a front-end developer and really enjoy jQuery and JavaScript. I've built a lot of websites, done some good jQuery work, and built a few JavaScript-based applications, and I would really like to get into UI development. Or so I thought. I guessed it would be pretty similar to what I already do, except maybe a little more JavaScript-heavy, but when I looked into it, all the job specs said I needed to know about Scrum or Agile development, testing frameworks, and JavaScript frameworks and custom events.

    So, from the specs I get the idea that a UI developer is actually a dedicated JavaScript developer. Is that the case? I understand (with much help from the users on Stack Overflow) JavaScript OO, inheritance, closures, custom events, debugging in Firefox or Aptana, etc., and the people I work with seem to think I'm pretty OK at what I do, but clearly my knowledge is not good enough to go for UI jobs. If anyone could tell me a little more about UI development, and whether there are any good resources for learning about it, I would be most grateful, as I couldn't find much on the internet.

  • Website hosting from home - IIS6 [closed]

    - by Paul
    I want to host a few websites from home, primarily because I'm using some beta Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:

    1. Registered the domain name at namecheap.com (let's call it mydomain.com).
    2. Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0).
    3. Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com.
    4. Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254).
    5. Created the website in IIS (I'm just testing with a single website so far, so I have not created any host header values).

    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically update the record if I have a dynamic address, but my IP has remained static since my connection was installed (in October), so I don't need this. Is there any way I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any third-party service. Thanks,
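
    Note that registering your own IP as NS1/NS2 tells the world your home machine is the authoritative nameserver, so resolvers expect a DNS server answering on port 53 there; IIS never even sees the query, which is why no IIS or Windows setting can fix it. A simpler layout keeps the registrar's nameservers and just maps the name to your IP with ordinary records, along these lines (illustrative zone-record syntax, using the placeholder addresses from the question):

        mydomain.com.        IN  A      0.0.0.0
        www.mydomain.com.    IN  CNAME  mydomain.com.

    Most registrars, Namecheap included, let you manage records like these from the control panel at no extra cost.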

  • C vs C++ function questions

    - by james
    I am learning C; after starting out with C++ as my first compiled language, I decided to "go back to basics" and learn C. I have two questions about the way each language deals with functions.

    Firstly, why does C "not care" about the scope that functions are defined in, whereas C++ does? For example:

        int main() {
            donothing();
            return 0;
        }

        void donothing() { }

    The above will not compile with a C++ compiler, whereas it will compile with a C compiler. Why is this? Isn't C++ mostly just an extension of C, and shouldn't it be mostly "backward compatible"?

    Secondly, the book that I found (link to PDF) does not seem to state a return type for the main function. I checked around and found other books and websites, and these also commonly do not specify a return type for main. If I try to compile a program that does not specify a return type for main, it compiles fine (although with some warnings) with a C compiler, but it doesn't compile with a C++ compiler. Again, why is that? Is it better style to always specify the return type as int rather than leaving it out?

    Thanks for any help, and just as a side note: if anyone can suggest a better book that I should buy, that would be great!
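
    The difference is that C (before C99) allowed implicit function declarations: a call to an undeclared function was assumed to return int, so the snippet above merely draws a warning. C++ removed that rule, which is why a forward declaration fixes the C++ build. A minimal sketch of the same program that compiles cleanly in both languages:

        void donothing(void);   /* prototype: declares the function before main() uses it */

        int main(void) {
            donothing();
            return 0;
        }

        void donothing(void) { }

    The implicit-int rule is also why old books get away with omitting main's return type; writing int main explicitly is the portable style.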

  • Need help on HttpWebrequest

    - by ASPUSER
    Hi guys, I have an issue I am looking to solve. Here is the detail. I have two websites, WebsiteA and WebsiteB (WebsiteB is not under my control; it's a black box to me). Both websites have separate login pages. I have a list of WebsiteB users and passwords, which I store in a database.

    I want a kind of common login page: if a user is logged in to WebsiteA and wants to go to WebsiteB, he shouldn't have to enter the login and password information again. I cannot touch the code of WebsiteB; it's already deployed and running. On WebsiteB's login form there is a UserID textbox, a Password textbox, and a login button. This button is not a submit button; it has a click event that calls a function to validate the user, so it's not a simple post. WebsiteB is one web page with different frames. After a successful login, the page doesn't go anywhere else; it remains on the same page but loads a different frame.

    As far as I know, I can use the HttpWebRequest class, but I'm facing the following problems:

    - I cannot "click" the button.
    - Response.Redirect does not work.
    - It seems that WebsiteB is not storing anything in cookies, as the cookies always come back as an empty string.

    I would really appreciate it if anyone could help me with this. How can I use Response.Redirect? When I redirect, it shows me the same login page.
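
    For what it's worth, automating a login like this usually means replaying, from code, the HTTP POST that the button's JavaScript ultimately sends (visible in a browser's network tools), while keeping the session cookies across requests. In .NET that is HttpWebRequest plus a shared CookieContainer; the same shape in Java, as a rough sketch (the form URL and field names are made up, and the real ones have to be read out of WebsiteB's login page):

        import java.net.*;
        import java.io.*;

        public class LoginReplay {
            public static void main(String[] args) throws IOException {
                // Remember Set-Cookie headers across requests (the .NET analogue is CookieContainer)
                CookieHandler.setDefault(new CookieManager());

                URL login = new URL("https://websiteb.example/login");   // hypothetical
                HttpURLConnection conn = (HttpURLConnection) login.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                String form = "userid=alice&password=secret";            // hypothetical field names
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(form.getBytes("UTF-8"));
                }
                System.out.println("Login response: " + conn.getResponseCode());
                // Later connections now carry the session cookie automatically
            }
        }

    Empty cookies after the request often just mean the login endpoint or the POST body was wrong, so the server never issued a session at all.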

  • Location of DB models in Zend Framework - want them centralized

    - by jeffkolez
    Maybe I've been staring at the problem too long and it's much simpler than I think, but I'm stuck right now. I have three websites that are going to share database models. I've structured my applications so that I have an application directory and a public directory for each site. The DB models live in a directory in the library, along with Zend Framework and my third-party libraries.

    I use the Autoloader class, and when I try to instantiate one of my DB classes, it fails. The library directory is in my include path, but for whatever reason it refuses to instantiate my classes. It will work if I keep my models in my application directory, but that's not the point; they're supposed to be shared classes in a library.

        $model = new Model_Login();
        $model->hello_world();

    This fails when the class is in the library. The class is just a test:

        class Model_Login {
            public function hello_world() {
                echo "hello world";
            }
        }

    Everything works until I try to instantiate one of my models. I've even tried renaming the class to something else (Db_Login), but that doesn't work either. Any ideas? Thanks in advance.
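
    One thing worth checking: Zend Framework's autoloader maps class prefixes to paths (Model_Login is looked up as Model/Login.php relative to the include path) and, in ZF1, only loads prefixes it has been told about. A sketch of the usual wiring, assuming a standard ZF1 bootstrap:

        // library/Model/Login.php must hold class Model_Login
        $autoloader = Zend_Loader_Autoloader::getInstance();
        $autoloader->registerNamespace('Model_');

    If the file doesn't sit at library/Model/Login.php, or the 'Model_' prefix is never registered, instantiation fails exactly as described; the same reasoning explains why Db_Login didn't help.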

  • Recursive wget with hotlinked requisites

    - by dongle
    I often use wget to mirror very large websites. Sites that contain hotlinked content (be it images, video, CSS, or JS) pose a problem, as I seem unable to tell wget to grab page requisites that are on other hosts without having the crawl also follow hyperlinks to other hosts. For example, let's look at this page: https://dl.dropbox.com/u/11471672/wget-all-the-things.html. Let's pretend that it is a large site that I would like to completely mirror, including all page requisites, even those that are hotlinked.

        wget -e robots=off -r -l inf -pk

    This gets everything but the hotlinked image.

        wget -e robots=off -r -l inf -pk -H

    This gets everything, including the hotlinked image, but goes wildly out of control, proceeding to download the entire web.

        wget -e robots=off -r -l inf -pk -H --ignore-tags=a

    This gets the first page, including both the hotlinked and local images, and does not follow the hyperlink to the site outside of scope, but obviously also does not follow the hyperlink to the next page of the site.

    I know that there are various other tools and methods for accomplishing this (HTTrack and Heritrix allow the user to distinguish between hotlinked content on other hosts and hyperlinks to other hosts), but I'd like to see if this is possible with wget. Ideally it would not be done in post-processing, as I would like the external content, requests, and headers to be included in the WARC file I'm outputting.
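
    One workaround, short of patching wget, is two passes: a single-host recursive crawl first, then a non-recursive requisites pass with host-spanning enabled, since -p without -r fetches only each listed page and its requisites. A rough sketch, assuming the first crawl's page URLs can be collected into a list (example.org and pages.txt are placeholders):

        # pass 1: mirror the site itself, no host spanning
        wget -e robots=off -r -l inf -pk https://example.org/

        # pass 2: for every page found, fetch just that page's requisites,
        # this time spanning hosts; -p without -r follows no hyperlinks
        wget -e robots=off -p -H -k -i pages.txt

    As noted above, though, this is post-processing of a sort, and stitching both passes into a single WARC is left unsolved.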

  • Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Edit: rephrasing my question. Upon further troubleshooting, I can conclude that touch gestures (dragging, pinch to zoom, touch-and-hold right click) in Internet Explorer start to work when:

    - The system has been running for ~2 minutes. This coincides with the delayed start of services.
    - Explorer.exe is run, then killed. I assume explorer.exe starts some services?

    The services with delayed start are as follows:

    - Security Center
    - Software Protection
    - Windows Defender, Search and Update
    - Windows Font Cache Service
    - Microsoft .NET Framework NGEN v4.0.30319_X64 and X86

    I see no connection between these services and touch gestures, but just in case, I manually tried starting these services, without luck. What else happens delayed after system boot that also happens when explorer is started?

    Old question: Internet Explorer 9 and Windows 7 Professional, running on an HP TouchSmart (touch-screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites).

    Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll a website by dragging it with my finger, I can pinch-zoom, and I can touch-and-hold to right-click. I then change the default shell in Windows to Internet Explorer (i.e. IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in; however, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: when I kill explorer.exe, the touch functions keep working, even if I close IE and start a new instance.

    Scenario 2: Exactly the same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). The same thing happens.

    What I've tried: autorun programs. When explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I manually started all the autorun programs so that the environment was identical to a normal login, minus explorer.exe. It still did not work (until I launched explorer.exe). Specifically, TabTip.exe, TabTip32.exe and wisptis.exe were all running. All services were also started.

    To sum it up: running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter whether explorer.exe is still running, as long as it has been run once. Does anyone know what causes this behavior, or how I can circumvent it neatly?

  • IRP_MJ_WRITE latency up to 15 seconds

    - by racitup
    We have written an application that performs small (22kB) writes to multiple files at once (one thread performing asynchronous queued writes to multiple locations on behalf of other threads) on the same local volume (RAID1). 99.9% of the writes are low-latency, but occasionally (maybe every minute or two) we get one or two huge-latency writes (I have seen 10 seconds and above) without any real explanation. Platform: Windows Server 2003 with NTFS. Monitoring: Sysinternals Process Monitor (see link below) and our own application logging.

    We have tried multiple things to solve this, gleaned from a few websites, e.g.:

    - Making the first part of file names unique, to aid 8.3 name generation
    - Writing files to multiple directories
    - Changing Intel Disk Write Caching
    - Windows File/Printer Sharing settings: Minimize memory used / Balance / Maximize data throughput for file sharing / Maximize data throughput for network applications (System > Advanced > Performance > Advanced)
    - NtfsDisableLastAccessUpdate, via "fsutil behavior set disablelastaccess 1"
    - Disabling 8.3 name generation, via "fsutil behavior set disable8dot3 1" + restart
    - Enabling a large file system cache
    - Disabling paging of the kernel code
    - IO Page Lock Limit
    - Turning the Indexing Service off (or on)

    But nothing seems to make much difference. There's a whole host of things we haven't tried yet, but we wondered if anyone had come across the same problem, a reason, and a solution (programmatic or not)?

    We can reproduce the problem using IOMeter and a simple setup:

    1. Start IOMeter and remove all but the first worker thread in 'Topology' using the disconnect button.
    2. Select the worker thread, put a cross in the box next to the disk you want to use in the Disk Targets tab, and put '2000000' in Maximum Disk Size (NOTE: you must have at least 1GB free space; sector size is 512 bytes).
    3. Create a new access specification and add it to the worker thread: Transfer Request Size = 22kB, 100% Sequential, Percent of Access Spec = 100%, Percent Read/Write = 100% Write.
    4. Change Results Display Update Frequency to 5 seconds, Test Setup Run Time to 20 seconds, and both 'Number of Workers to Spawn Automatically' settings to zero.
    5. Select the worker thread in the Topology panel and hit the Duplicate Worker button 59 times to create 60 threads with identical settings.
    6. Hit the 'Go' button (green flag) and monitor the Results tab. The 'Maximum I/O Response Time (ms)' always hits at least 3500 on our machine.

    Our machine isn't exactly slow (a Xeon 8-core rack server with 4GB RAM and onboard RAID), and I'd be interested to see what other people get. We have a feeling it might be something to do with the NTFS filesystem (ours is currently 75% full of fragmented files), and we are going to try a few things around this principle. But it is also related to disk performance, since we don't see it on a RAM disk and it's not as severe on a RAID10 array. Any help is much appreciated. Richard

    Right-click and select 'Open Link in New Tab': ProcMon Result

  • Home and small-to-medium enterprise network manufacturer choice: Netgear, Linksys or D-Link?

    - by Kedare
    (Please don't close this post; it's a serious question and I need an answer.) Hello, I am looking for an alternative to Cisco (too expensive for me!) for semi-professional use (at home, but with advanced features; I'm studying IT) and in small/medium enterprises. I think I will choose between Linksys (including Cisco Small Business), Netgear and D-Link, but I've never really used these products. What I need is a manufacturer that makes "almost" every type of networking equipment (like Cisco, but cheaper). Here are my needs:

    - I need almost all my products to be rackable
    - I need a good warranty (Netgear's lifetime warranty rules!)
    - I need a "unified" network environment

    After hours of searching the internet, I made a little comparison of the characteristics that interest me (based on results found on many websites; prices are from the French shop ldlc-pro.com):

    - Hotline/support quality: Netgear: not so bad. Linksys: not so bad. D-Link: poor!
    - Most common warranty: Netgear: unlimited lifetime warranty! Linksys: limited 3-year warranty. D-Link: limited 5-year warranty (unlimited in the US, but I'm in France).
    - VPN protocols supported by routers in endpoint mode: Netgear: only IPSEC. Linksys: IPSEC, PPTP, L2TP. D-Link: IPSEC, PPTP, L2TP.
    - Cheapest 8-port Gb switch: Netgear: 30€. Linksys: 47€. D-Link: 30€.
    - Cheapest 48-port administrable switch with Gb uplink(s): Netgear: 263€. Linksys: 630€. D-Link: 600€.
    - Cheapest VPN router: Netgear: 100€. Linksys: 80€. D-Link: 60€.
    - Cheapest rackable switch: Netgear: 50€. Linksys: 87€. D-Link: 50€.
    - Cheapest rackable and administrable switch: Netgear: 120€. Linksys: 370€. D-Link: 171€.

    Netgear and D-Link are in the same price range, while Linksys is more expensive. I've looked at some other criteria (the full comparison, in French with shop/source links, is at http://forums.jeuxonline.info/showthread.php?t=1072280) and made a final score for each manufacturer:

    - Score including the IP camera sub-score: Netgear 6.2/10, Linksys 7.3/10, D-Link 7.0/10
    - Score excluding the IP camera sub-score: Netgear 6.9/10, Linksys 7.0/10, D-Link 6.7/10

    In both cases, Linksys wins. So that's my little comparison, but because I've never really used this equipment, I need your help to make a decision on which manufacturer to choose for both personal and corporate use. So here are the questions:

    1. Which manufacturer do you recommend (not Cisco, except Small Business)? Why?
    2. Have you called the customer support of one of these manufacturers? How was it?
    3. Have you had problems or bad experiences with this equipment?
    4. Any other advice? ;)

    Thank you!

  • NTPD seems to delete all network interfaces

    - by Aurelin
    We have a couple of virtual interfaces configured on eth0 on a CentOS box, and every now and then they go down, seemingly out of the blue. After going through the log files, I found out that apparently ntpd deletes all the eth0 interfaces, and dhclient automatically brings eth0 back up. The virtual interfaces, however, stay down, which makes several of our websites inaccessible.

    Can someone explain to me why ntpd deletes interfaces? Can or should that be turned off, or can/should I configure dhclient to bring the virtual interfaces back up automatically, too?

    EDIT // The log files that I should've posted:

        Nov 12 13:10:28 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6a825e97)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPOFFER from 96.126.108.78
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPACK from 96.126.108.78 (xid=0x24554092)
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #31 eth0, 50.116.50.97#123, interface stats: received=3255, sent=3256, dropped=0, active_time=1559394 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #32 eth0:0, 50.116.53.56#123, interface stats: received=3, sent=0, dropped=0, active_time=1559391 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #33 eth0:1, 66.175.211.192#123, interface stats: received=2, sent=0, dropped=0, active_time=1559389 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #34 eth0:2, 50.116.53.95#123, interface stats: received=3, sent=0, dropped=0, active_time=1559387 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #35 eth0:3, 97.107.132.32#123, interface stats: received=2, sent=0, dropped=0, active_time=1559385 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #36 eth0:4, 50.116.56.201#123, interface stats: received=2, sent=0, dropped=0, active_time=1559383 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #37 eth0:5, 66.175.212.121#123, interface stats: received=2, sent=0, dropped=0, active_time=1559381 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #38 eth0:6, 66.175.215.137#123, interface stats: received=2, sent=0, dropped=0, active_time=1559379 secs
        Nov 12 13:10:44 raptor NET[1573]: /sbin/dhclient-script : updated /etc/resolv.conf
        Nov 12 13:10:44 raptor dhclient[20048]: bound to 50.116.50.97 -- renewal in 32692 seconds.
        Nov 12 13:10:45 raptor ntpd[2109]: Listening on interface #39 eth0, 50.116.50.97#123 Enabled

    The eth0 config:

        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="dhcp"
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1

    And the virtual interfaces (I posted the first one only; they look the same for the most part):

        # Configuration for eth0:0
        DEVICE=eth0:0
        BOOTPROTO=none
        # This line ensures that the interface will be brought up during boot.
        ONBOOT=yes
        # eth0:0
        IPADDR=50.116.53.56
        NETMASK=255.255.255.0
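
    Worth knowing: ntpd's "Deleting interface" lines describe its own listening sockets, not the operating-system interfaces. ntpd rescans local addresses periodically and drops bindings for addresses that have gone away, so these messages are a symptom of the DHCP renewal tearing down the aliases, not the cause. If the rescan noise is unwanted, newer ntpd versions can be pinned to specific addresses in ntp.conf; a sketch using the interface name from the logs above:

        # ntp.conf: bind only to eth0 and ignore everything else
        interface listen eth0
        interface ignore wildcard

    The actual fix for the outage is on the dhclient side, making the renewal re-add the eth0:* aliases (for example from a dhclient exit hook).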

  • How to manipulate the default document with the rewrite module on IIS7?

    - by eugeneK
    Until a few months ago I was using IIS 6, where I could give a different default document to each of several websites that share the same physical directory. Since IIS 7 stores the default document value in web.config, I couldn't use that technique anymore: changing it changed it for the whole directory. So I found a simple solution using the rewrite module to change the default document per domain:

        <defaultDocument enabled="false" />
        <rewrite>
          <rewriteMaps>
            <rewriteMap name="ResolveDefaultDocForHost">
              <add key="site1.com" value="Default1.aspx" />
              <add key="site2.com" value="Default2.aspx" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="DefaultDoc Redirect If No Trailing Slash" stopProcessing="true">
              <match url=".*[^/]$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
              </conditions>
              <action type="Redirect" url="{R:0}/" />
            </rule>
            <rule name="PerHostDefaultDocSlash" stopProcessing="true">
              <match url="$|.*/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
                <add input="{ResolveDefaultDocForHost:{HTTP_HOST}}" pattern="(.+)" />
              </conditions>
              <action type="Rewrite" url="{R:0}{C:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>

    Now I have two other issues. First, I can't use a canonical-URL rewrite: if I add one, both site1.com and site2.com get redirected to www.site1.com instead of each receiving its own www. prefix. Second, there is a directory called "members" within site1's and site2's shared physical directory, in which Default.aspx should always be the default document no matter which domain name was used; that doesn't work either. Please help me with this issue, because I never thought I would have such a problem with the supposedly better IIS 7...
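
    On the first issue, a canonical-host rule does not have to hard-code a site name; it can capture whatever host the request came in on and prepend www. to it. A sketch of that shape, untested, in the same rewrite-rule idiom as above:

        <rule name="Canonical Host Per Site" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <!-- only hosts that don't already start with www. -->
            <add input="{HTTP_HOST}" pattern="^(?!www\.)(.+)$" />
          </conditions>
          <action type="Redirect" url="http://www.{C:1}/{R:0}" />
        </rule>

    Because the host is captured per request, site1.com and site2.com each redirect to their own www. variant instead of collapsing onto one site.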

  • Ubuntu web server 11.10 FTP/server issue

    - by Nate
    I was wondering if I could get some help with FTP; at least I'm pretty sure it has to do with FTP, although it could be something else, I'm not 100% sure. Fair warning: I'm no Ubuntu dominator, I'm pretty new at this. I've built a web server to test PHP and whatnot for a site I'm building. Everything works: the PHP, the SQL, etc. By the way, I built this in VMware, so it's a virtual machine on the network and I can access it from anywhere. I'm in college right now, so that matters.

    The one problem I have is this. I go into the terminal and run ifconfig to find my IP. I take that IP to a browser on a different machine and type it in, and I get the "Index of /" page, where I can browse the website I'm making. I can click through folders, and files open when I click on them.

    But say I'm working on my desktop, open an FTP client, drag and drop something onto the server, then go to the IP in the browser again and try to open it. I get either:

        Server error
        The website encountered an error while retrieving http://my_server_ip/phpinfo.php. It may be down for maintenance or configured incorrectly.
        Here are some suggestions: Reload this webpage later.

    or:

        Forbidden
        You don't have permission to access /html.html on this server.

    But if I create the same file on the server itself and try again, bam, magic, it works. I'm sure I set the permissions to let everyone open and view the files, but maybe I didn't? I'm not sure, and this is where I'm hoping I can get some help.

    By the way, I followed a tutorial on changing the Apache web root from /var/www to /home/"user"/www. I can't recall how I did it, but it's there, and my FTP goes to the /home/"user"/www folder. Any and all help is appreciated. Like I said, I'm really new to this, but I do enjoy building these servers and learning how they work, so it's not like making this web server is a project for a class; it's just helping me test things for another class and possibly other websites later on down the road. Anyway, thanks so much to anyone who decides to help, I'd really appreciate it. Nate.

    P.S. I'm using Ubuntu 11.10 desktop edition with a LAMP server.
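
    These symptoms (files created locally work, files uploaded over FTP don't) usually point at ownership or mode bits on the uploaded files: the FTP daemon writes them as the FTP user with a restrictive umask, and Apache then can't read them. A quick way to check and, if that's the cause, fix it; "user" stands in for the actual account name, as in the question:

        # see who owns the uploaded file and what its mode is
        ls -l /home/user/www

        # make the tree readable, and directories traversable, by Apache
        chmod -R a+rX /home/user/www

    The capital X in a+rX adds execute only to directories (and files already executable), which is what lets Apache descend into folders without marking every file executable.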

  • DNS problems with Google on Windows 7

    - by awishformore
    Hello dear Superuser community. I had no idea where to post this, because it is a problem that completely baffles me. I have a lot of experience with network configuration, but I am completely out of ideas on how to fix this.

    I have a Fritz!Box branded router from my ISP, 1&1 in Germany. My computer is connected to it with a normal Ethernet cable. I always set my IP manually on the computer and use the Google DNS servers for name resolution. I also tried OpenDNS, and the result is the same. With that configuration, the following happens:

    - Google search responds with a big delay
    - Gmail, Google Calendar & Google Drive requests time out the majority of the time

    In order to troubleshoot, I set the network connection to DHCP for both IP & DNS. At that point, what happens is the following:

    - Google search times out most of the time
    - Gmail, Google Calendar & Google Drive work most of the time

    Sometimes the sites that time out will come up, but weirdly enough, the pictures on the buttons will be missing: for instance, the magnifying glass on Google or the circle arrow on Gmail will be gone (all buttons, in fact). All other websites load just fine, and very quickly, and all other network functionality is completely unimpacted. The difference between fixed IP & Google DNS versus automatic IP & DNS is easily reproducible. I am going crazy trying to fix this, as I have no idea what the hell is going on at this point. Here is a list of the things I have tried so far:

    - Flushed the DNS
    - Tried different browsers (it works fine on my laptop, by the way)
    - Tried disabling Teredo & the IPv6 stack
    - Emptied all caches
    - Checked the HOSTS file
    - Rebooted the router
    - Reset the router
    - Reinstalled the network adapter
    - Ran tracert, which displays a normal route until timing out at one point
    - Ran ping, which usually doesn't work for the unreachable sites either
    - Ran both a complete Norton 360 & a Kaspersky 2012 scan
    - Ran the Kaspersky Virus Removal Tool in safe mode
    - Tried the connection in safe mode with networking enabled

    If you have any ideas, please let me know. I'm getting desperate...

  • Best Practice: DNS and VPN (with private network IPs)

    - by ribx
    I am trying to find the best solution for my DNS problem. We are running several services in our company that you can reach only over VPN. Other services, which are reachable through the internet, get the domain ... At the moment all services inside the VPN network go by .local... These have a VPN IP in the private network 192.168.252.0/24. Clients range from Linux over OS X to Windows.

    I can think of four possibilities for implementing a DNS infrastructure:

    1. Most common: an internal DNS server that is pushed by the VPN. This has several drawbacks: your DNS responses are limited by the speed of the VPN connection and of your own DNS server. With very complex websites, this can increase page load times quite a lot. Also: we have several VPNs that are not connected to each other, and each of them has its own DNS server.

    2. Several DNS servers locally. These have to be configured by hand, and you have to use a third-party tool like dnsmasq. When you make a DNS request, you ask your locally running DNS server, which decides which upstream server to ask for which domain name. A colleague of mine uses such a solution on OS X (I am sorry, I don't remember the name of the application).

    3. Use your domain hoster. Most of them have APIs available to manipulate your DNS entries, so you could push your private-network information to your domain hoster. I am not sure whether they all accept private network IPs, but I guess there will be problems similar to those in number 4.

    4. The one we currently use, because for us it's the most logical choice: we forward the subdomain *.local.. to our own public DNS server. This works quite well with some public DNS servers like Google's, but most ISPs' resolvers do not forward the answers, or don't do so reliably. For example, my ISP gives me a positive result for an nslookup of a *.local.. domain only about every tenth time. (Can someone explain this?)

    Here the real question: is there another solution we haven't thought of? Or: which of these methods do you use?
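
    For option 2, the dnsmasq side is a one-line-per-zone affair: a server=/domain/address rule sends queries for the VPN's zone to the DNS server behind the VPN and lets everything else go to the normal upstream. A sketch with placeholder values (the .local suffix and the 192.168.252.0/24 range come from the question; the server IP is made up):

        # /etc/dnsmasq.conf
        # queries for *.local go to the DNS server reachable over the VPN
        server=/local/192.168.252.1
        # everything else follows the /etc/resolv.conf upstreams as usual

    This split-horizon setup also sidesteps the drawback in option 1, since only internal names pay the VPN round-trip.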

  • Nginx with Passenger setup problems

    - by Kreeki
    I'm trying to set up the nginx web server with Passenger support for a Ruby on Rails application on Ubuntu 10.04 (on a sub-URI). All went fine until I tried to access the server/application from a browser. My installation of nginx is in /opt/nginx.

        # my nginx.conf
        server {
            listen 80;
            server_name www.mydomain.com;
            root /websites/site/public;
            passenger_enabled on;
            passenger_base_uri /site;
            location / {
                # added by default, I don't know if it's supposed to be here or not
                root html;
                index index.html index.htm;
            }
        }

    Then I started the server. But when I put www.mydomain.com/site in the browser, I get a 404 Not Found error. error.log shows this:

        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:07 [error] 21387#0: *2 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71", referrer: "http://80.79.23.71/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/favicon.ico" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:11 [error] 21387#0: *4 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /favicon.ico HTTP/1.1", host: "80.79.23.71:80", referrer: "http://80.79.23.71:80/"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:13 [error] 21387#0: *5 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:15 [error] 21387#0: *6 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/site" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"
        2011/03/04 10:07:19 [error] 21387#0: *7 open() "/opt/nginx/html/404.html" failed (2: No such file or directory), client: 90.182.7.150, server: www.mydomain.com, request: "GET /site HTTP/1.1", host: "80.79.23.71:80"

    Why does nginx look for "site" in /opt/nginx/html/site, as the log shows, when a different path is set in nginx.conf? Any idea what's wrong with my setup?
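
    For comparison, Passenger's documented recipe for serving an app under a sub-URI is roughly the following: the server root stays on the generic document root, and the sub-URI is a symlink into the app's public directory (the paths below reuse the ones from the question). The location / block that the default install adds would be removed, since its root html overrides the server-level root, which matches the /opt/nginx/html paths in the error log above:

        # shell: expose the app under /site
        ln -s /websites/site/public /opt/nginx/html/site

        # nginx.conf sketch
        server {
            listen 80;
            server_name www.mydomain.com;
            root /opt/nginx/html;          # contains the 'site' symlink
            passenger_enabled on;
            passenger_base_uri /site;
        }

    Treat this as a sketch to check against the Passenger docs for the installed version, not a drop-in config.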

  • Intermittent internet access on a flat network - Router is connected

    - by Naveed
    I'm looking for some help with network settings. I've just started a new job (non-IT!) and we have problems with our office network. I'm the most IT-literate person in the organisation (15 permanent employees) and so have been dealing with IT issues. Our main bit of software is web-based, so we need constant web access, but it sometimes goes down for between 20 minutes and 3 hours despite everything seemingly working fine. It's a flat network with wireless APs, a BT Business Broadband 8Mbit connection, and that's about it. We have no servers and no standard settings, and staff are encouraged to bring in their own laptops and connect! The network basically exists to provide internet access, and that's it. We also have students accessing the wireless (and I know there's a whole list of access and content issues etc., but right now we just need internet access stabilised). This is what we have:

    Building 1

    - Cisco SLM-224P 24-port PoE 10/100 switch with 2 gigabit ports
    - 3 x ZyXEL NWA-3160 wireless APs
    - Samsung OfficeServ 7100 phone server, which borrows the building's wiring

    Building 2

    - Netgear GS605-UK 5-port 10/100/1000 switch
    - 1 x ZyXEL NWA-3160 wireless AP
    - 1 x BT Business Hub (2Wire BT2700HGV), which is the DHCP server

    We have two link cables between the buildings. One connects the two switches on a gigabit port. The second (oddly) connects the switch in Building 2 to the OfficeServ server in Building 1. When the internet goes down, I can still access the router through a wireless connection. I can also ping websites and get a response; Firefox just says "Cannot connect" etc. The system then heals itself when it feels like it.

    (Sorry if this is asking too much, but) these are my immediate questions:

    1. Why would browser-based internet access go down? I don't know enough about protocols etc., but I can try to standardise settings.
    2. The WAPs have a DNS server setting, and I don't know whether it should be "None" or "From DHCP".
    3. What should be the DHCP server: the router or the Cisco switch? Or something else?
    4. Would there be any problem in connecting the second link from switch to switch? Is that good practice?
    5. Is it worth swapping the Netgear GS605 for either a Cisco SG200-08 or a Netgear GS108T-200?
    6. Is it worth upgrading the router to, for instance, a Cisco RV042G Dual Gigabit router, which would also act as a switch? Or is it better to have a separate router and switch in Building 2?

  • DNS and DHCP die after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to CentOS should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing two days in, and a host of other performance issues (the box was a little underpowered), I changed back to the original server. But two days in, DHCP and DNS are failing again, and I'm out of ideas as to why. In both cases, to my knowledge, no network or server changes occurred after installation.

    Right after installing (and for at least a day), DNS and DHCP were working just fine. However, later (day 2) I got a call saying "the internet is down" (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking whether dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers with static DHCP leases, but one lease must have expired, as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server, as I'd lose my connection to it.

    Is there anything anyone can think of as to why DNS and DHCP would fail two days into running perfectly?

    More info: running dnsmasq in debug mode, this is all that's displayed, even when running nslookup quackwall. (I'm not sure, though, whether nslookup queries should show up in this log.)

        [root@quackwall ~]# /usr/sbin/dnsmasq -dq
        dnsmasq: started, version 2.49 cachesize 150
        dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP
        dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h
        dnsmasq: reading /etc/resolv.conf
        dnsmasq: using nameserver 74.128.17.114#53
        dnsmasq: using nameserver 74.128.19.102#53
        dnsmasq: read /etc/hosts - 5 addresses
        dnsmasq-dhcp: read /etc/ethers - 2 addresses

    On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall):

        lordquackstar@quackgame:~$ netstat -rn
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        10.0.0.0        0.0.0.0         255.255.240.0   U         0 0          0 eth0
        0.0.0.0         10.0.0.2        0.0.0.0         UG        0 0          0 eth0

        lordquackstar@quackgame:~$ cat /etc/resolv.conf
        nameserver 10.0.0.2
        domain highwow.lan
        search highwow.lan
