Search Results

Search found 46419 results on 1857 pages for 'web traffic'.


  • zero-config CGI enabled web server

    - by halp
    To serve the static content of a directory over HTTP, one can simply navigate to that directory and type: python -m SimpleHTTPServer 11111 which will start an HTTP server on port 11111. This hack is nice because it requires zero configuration: no stand-alone web server, no config files at all. Is it possible to extend this example, or is there an alternate way to achieve this goal, that also has CGI support? The final goal is to have a quick and lazy way of serving a web site from a certain directory. The site has static content (HTML pages, images) but also a CGI script. The CGI script must work properly when accessed via a browser. Of course I could set up a virtual host in Apache, allow CGI inside it, etc., but that's not a zero-config approach.
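
    A minimal sketch of one possible zero-config answer, assuming the CGI script sits in a cgi-bin/ subdirectory (the default location the standard-library server looks in):

        python -m CGIHTTPServer 11111        # Python 2: like SimpleHTTPServer, but executes scripts under /cgi-bin
        python3 -m http.server --cgi 11111   # Python 3 equivalent of the same built-in server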

    Read the article

  • Server 2008 Web Edition IIS6 SMTP conflict

    - by user219313
    I'm using IIS6 Manager to set up the SMTP service on Windows Server 2008 Web Edition. There seems to be a conflict (port 25?) which means that I cannot start and stop the Default SMTP Server within IIS6 Manager. I can start and stop it with the services.msc snap-in, and this is reflected in the state of the SMTP server in IIS6 Manager. I'm worried that none of the settings I want to get at within IIS6 (logging, authentication, etc.) are having any effect. None of these settings are available within IIS7 on Web Edition.
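
    One quick way to confirm whether something else is already bound to port 25, from an elevated command prompt (the PID below is only a placeholder):

        netstat -ano | findstr :25        # list listeners/connections on port 25 together with the owning PID
        tasklist /FI "PID eq 1234"        # look up the process name for that PID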

    Read the article

  • Filesystem access through web interface

    - by Jorge Suárez de Lis
    I have an SSH+Samba server so people can access its files from anywhere on the network. I thought it would also be interesting to provide access through a web interface, so they can reach the files even when they don't have access to the VPN or a Samba/SSH client - something like the Ubuntu One or Dropbox web interface. The HTTP server could be on the same machine as the SSH+Samba server, so it would just need to provide access to local files and some way to log in with a username/password. Does anyone know of software like this?

    Read the article

  • No sound from Java web applet

    - by Tom Savage
    Using the Google Chrome and Mozilla Firefox browsers on Ubuntu 9.10, I am unable to get any sound out of Java (version 6 update 15) on RuneScape or WebSDR. I'm only interested in getting WebSDR working; RuneScape was just the only other web applet I knew would have sound. Sound does work in a test applet I downloaded when it is run from the command line, so it seems to be a web-specific issue. Has anyone else encountered or solved this or a similar issue? Are there any better applets out there that I can use to test my sound?

    Read the article

  • Web browsing over SSH

    - by Alex Marshall
    Hello, I have something of a difficult situation: our company has a web server in a remote data center that is, at the moment, only accessible by SSH, and the firewall is not easily modifiable because the techs at the data center have been unreliable and unreachable lately (not my choice of data center, and switching is not an option at the moment). Are there any browsers or plugins out there that will let me browse over an SSH connection? I can browse with links and lynx on the SSH command line, but that doesn't give me the functionality I need, and it's too hard to find things in the web application (running on a Tomcat server on the box) that I need access to. Does anybody have any suggestions? We're already working on getting direct access to the web application by having the firewall opened up, but I need something better in the meantime.
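
    One common workaround, sketched here on the assumption that the existing SSH session is all you have (the host name and port are placeholders), is SSH dynamic port forwarding, which turns the SSH connection into a SOCKS proxy:

        ssh -N -D 1080 user@webserver.example.com   # -D opens a local SOCKS proxy on port 1080, -N skips the remote shell

    A desktop browser can then be pointed at the SOCKS v5 proxy localhost:1080 (in Firefox: Preferences > Network Settings), so its requests exit from the remote server and can reach the web application behind the firewall.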

    Read the article

  • How to set up a user account for a web application

    - by ximus
    Hi, what are the main guidelines for setting up a user account on a Linux machine for a web app? In my case it is a Rails application that does file management. The first thing I can think of is to limit access rights to only the directories it needs. But how exactly should I go about this? Should I set up rights through a user group, or through the user's ownership of those directories? I have very little experience in user rights management. What else do I need to consider? I've heard of ACLs and SELinux; do I need to look into either of these to guarantee decent security for my simple web app? Any advice about this, and anything not mentioned, is welcome. Thanks, Max. I will be using Ubuntu.
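
    A minimal sketch of the ownership-based approach, assuming the app is deployed under /var/www/myapp (both the path and the user name here are hypothetical):

        sudo adduser --system --group --home /var/www/myapp myapp   # dedicated unprivileged user and group
        sudo chown -R myapp:myapp /var/www/myapp                    # the app user owns only its own tree
        sudo chmod -R o-rwx /var/www/myapp                          # no access for unrelated users

    The application server (and nothing else) then runs as that user, so a compromise is confined to the directories it already owns.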

    Read the article

  • Web clipping / note taking software on Linux

    - by bguiz
    Hi, I use a great web-clipping and note-taking app called Evernote on my Windows machine. However, there's no Linux version of Evernote (and it doesn't work properly in Wine). I would like some suggestions for something with similar capabilities that runs on Linux/Ubuntu. Specifically:
    - I need to be able to select parts of a web page in Firefox and press some key combination to save that clip to disk, in some sort of searchable database.
    - The clip needs to keep pictures and basic text formatting; anything extra is unnecessary.
    - I also need to be able to create an empty note or edit an existing one.
    - Storing the notes on the local machine only is fine - I don't need the sync features of Evernote.

    Read the article

  • Save a single web page (with background images) with Wget

    - by mikael
    I want to use Wget to save single web pages (not recursively, not whole sites) for reference, much like Firefox's "Web Page, complete". My first problem is that I can't get Wget to save background images specified in the CSS. Even if it did save the background image files, I don't think --convert-links would rewrite the background-image URLs in the CSS file to point to the locally saved copies. Firefox has the same problem. My second problem is that if there are images on the page that are hosted on another server (like ads), these won't be included. --span-hosts doesn't seem to solve that problem with the line below. I'm using: wget --no-parent --timestamping --convert-links --page-requisites --no-directories --no-host-directories -erobots=off http://domain.tld/webpage.html
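
    For comparison, the wget manual's suggested invocation for saving a single page with all of its requisites, including ones hosted elsewhere, is roughly the following (note that --span-hosts only matters in combination with --page-requisites or recursion, and that CSS url() references are only followed by wget 1.12 or newer):

        wget -E -H -k -K -p --no-directories http://domain.tld/webpage.html

    Here -E adjusts extensions to .html, -H spans hosts, -k converts links, -K keeps backups of the unconverted files, and -p fetches the page requisites.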

    Read the article

  • Software to cache a web application for use offline

    - by littlecharva
    My boss quite regularly has to demo our web application to clients in situations with no wifi available and sketchy 3G access, and quite often the 3G lets him down. I have considered setting up a copy of our server in a virtual machine on his laptop so he could demo it offline, but I fear this will just introduce more headaches when he forgets how to boot the VM. What I'd ideally like is an app that records you logging into a web app, saves copies of all the pages, and ties the links and buttons you click to offline copies of the pages it saves. So you could run through the demonstration you're going to give and have it cache the pages; when you then click the same buttons and links in offline mode, it presents the relevant offline pages. Does such a thing exist? Can anyone recommend any alternative solutions to this problem? Thanks, Anthony

    Read the article

  • Web application design with distributed servers

    - by Bonn
    I want to build a web application/server with this structure:

    - main-server
    - sub-servers:
      - transaction-server (create, update, delete)
      - view-server (view, search)
      - authentication-server
      - documents-server
      - reporting-server
      - library-server
      - e-learning-server

    The main-server acts as the host server for the sub-servers. I can add many sub-servers and connect them to the main-server (via a plug-and-play interface, maybe); each can then begin querying data from the other sub-servers that have been connected to the main-server. The sub-servers can be anywhere, as long as they are connected to the internet. The main-server can manage all sub-servers connected to it (querying data, setting permissions between sub-servers, etc.). The purpose is simple: the web application will be huge as the company grows, so I want to distribute it into small, connected, pluggable servers. My question is, does the structure above already follow a standardized method, or are there different views on it? What technologies are needed? I need to do a lot of research before the execution plan begins. Thanks a lot.

    Read the article

  • low speed web application, Server problem or Application

    - by Ashian
    Hi, I have a web application written in ASP.NET (C#) with SQL Server 2005. We host it on two dedicated servers (IIS and SQL Server). For some months now, on some days of the week, we get many reports about speed issues. We have some other applications on this server using the same database. When we have the speed problem, all applications on these servers have it, but applications on other servers in the same data center work correctly. RAM and CPU usage are OK. How can I check whether the problem is related to the internet connection or to my application design? Which parameters must be checked? Some other information: in the applications, users can upload several files to the server, each up to 3 MB. We also use a SQL web admin application on the same server, which has the same problem; this is a standard application that works perfectly on other servers. Thanks
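
    One data point worth collecting while a slowdown is actually happening, as a rough sketch of where to start (this only tells you whether sessions are blocking each other inside SQL Server 2005; it says nothing about the network path):

        -- sessions currently waiting on another session
        SELECT session_id, blocking_session_id, wait_type, wait_time, command
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;

    If this stays empty during a slow period while remote users still see delays, the bottleneck is more likely the connection or the web tier than the database.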

    Read the article

  • Cisco NAT + IPSec + Web Server Configuration Question

    - by zagman76
    Hello - I currently have a Cisco 881W, and it is configured with one of our static IPs to do basic NAT for the network. We also have a web server that needs its own IP. I configured the NAT for the second IP; however, traffic through our IPSec VPN now doesn't route to the web server properly (it routes to the internet rather than through the tunnel). I followed the instructions here: http://www.cisco.com/en/US/tech/tk583/tk372/technologies_configuration_example09186a0080094634.shtml But now the outbound NAT doesn't seem to be working properly - it keeps using the router's NAT address, not the designated IP address. If anyone can assist, I would appreciate it greatly. Let me know what you need and I'll get it to you! Thanks!
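
    The usual pattern on IOS, sketched here with entirely made-up addressing (192.168.1.0/24 as the LAN, 10.10.10.0/24 as the remote VPN subnet; the interface name will differ on an 881W), is to exempt VPN-bound traffic from NAT with a route-map:

        access-list 150 deny   ip 192.168.1.0 0.0.0.255 10.10.10.0 0.0.0.255
        access-list 150 permit ip 192.168.1.0 0.0.0.255 any
        route-map NONAT permit 10
         match ip address 150
        ip nat inside source route-map NONAT interface FastEthernet4 overload

    The same idea applies to the NAT rule for the web server's second IP: traffic destined for the VPN subnet has to bypass translation, otherwise it keeps following the default route to the internet instead of the tunnel.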

    Read the article

  • Pause Nagios reloading in web interface

    - by 2rec
    Is there any option to turn off the automatic reloading of web pages in the Nagios web interface? Many times I am checking many services and need the web page to stay static and not reload. One solution that comes to mind is to turn off reloading entirely for a while. The problem is that other people are using it too, and they may want it at a time when I don't. If anybody knows of any kind of workaround or solution, please don't hesitate to write an answer. ;-) EDIT (+ reaction to the first answer): Maybe there is a better way to do this than modifying the Nagios core. Interestingly, I tried disabling JavaScript and it still refreshed. I tried disabling HTTP refreshing and it refreshed anyway. Does anybody know how and where the refresh is implemented?
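
    For what it's worth, in the classic Nagios CGIs the interval comes from the refresh_rate directive in cgi.cfg; a minimal sketch of effectively pausing it by making the interval very large (the path is the usual default and may differ on your install):

        # /usr/local/nagios/etc/cgi.cfg
        refresh_rate=86400    # status pages refresh once a day instead of the default 90 seconds

    This is global, though, so it changes the behaviour for everyone using that web interface.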

    Read the article

  • What is the world wide web? [closed]

    - by think123
    I don't know where to post this question, so please move it if necessary. OK, so I've heard that professional hosting companies can create 'links' to the world wide web to register an unregistered domain, and that's where my question comes from. Is the world wide web a server to which other servers link? Is it created by abstract linkage? I'm not sure. Also, what does it mean for the DNS to be updated throughout the whole world?

    Read the article

  • Web Content Filtering for Windows Clients

    - by djoyce
    I'm working with a small business to solve a bunch of problems. One is that their Windows 7 POS registers need to have web access restricted to only three remote support sites, while the back-office machine needs an unfiltered connection. I'd like something I can install and configure on the few registers to block all but those few sites. In a perfect world this would restrict the normal register user, while the admin user would not be filtered. Free is best, if it works, but a small fee would be alright too. Microsoft's Family Safety filter is close, but it requires a Windows Live account, which isn't ideal but may be alright. Has anyone used this in a small business environment? I'd prefer something easily managed on the local machines. K9 Web Protection is interesting and I'm going to look into it more. Are there other options? It seems like someone would have made something simple like this as an open source project, but maybe not.

    Read the article

  • Block all third party domains from web pages

    - by wizlb
    When I'm browsing the web, I'd like not to be tracked by any third-party services like Facebook or Google. For instance, if I visit somepage.com I don't want my browser requesting things from facebook.com unless I allow it; however, if I visit facebook.com, Facebook should still work. Does anyone know of a Chrome or Firefox extension that will allow me to do this? AdBlock in Chrome doesn't seem to work because it just hides the web page elements; it doesn't stop the browser from downloading them. I imagine that some kind of proxy/browser-extension hybrid would be best. Any suggestions? Thank you.

    Read the article

  • Deployed Web Application Requests for User Name and Password

    - by user43175
    I recently deployed a .NET web application to the server. The authentication mode is set to Windows (since the application is accessible only to intranet users). On some machines the application loads up properly, but on other machines a logon dialog appears asking for a user name and password. These dialogs are the same ones you normally see when trying to log into a Windows domain. Any idea why this happens seemingly at random? Thanks.

    Read the article

  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an ID property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side ID value into a client-side id attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side ID property and the generated client-side id, but in reality things aren't so simple. By default, the rendered client-side id is formed by taking the Web control's ID property and prefixing it with the ID properties of its naming containers. In short, a Web control with an ID of txtName can get rendered into an HTML element with a client-side id like ctl00_MainContent_txtName. This default translation from the server-side ID property value to the rendered client-side id attribute can introduce challenges when trying to access an HTML element via JavaScript, which is typically done by id, as the page developer building the web page and writing the JavaScript does not know what the id value of the rendered Web control will be at design time. (The client-side id value can be determined at runtime via the Web control's ClientID property.) ASP.NET 4.0 affords page developers much greater flexibility in how Web controls render their ID property into a client-side id. This article starts with an explanation of why and how ASP.NET translates the server-side ID value into the client-side id value and then shows how to take control of this process using ASP.NET 4.0. Read on to learn more!
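
    The mechanism in question is ASP.NET 4.0's new ClientIDMode property; a minimal sketch of the idea (markup only, with the control name chosen to match the example above):

        <%-- renders as <input type="text" id="txtName" ... /> regardless of naming containers --%>
        <asp:TextBox ID="txtName" runat="server" ClientIDMode="Static" />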

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that it is necessary to provide a test bed where the client is able to get an image, or better said, a feeling for the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and the data at any time, and the client is able to view it whenever it suits her/him best.

    Limited or lack of online connectivity

    But what is going to happen when your client is not able to be online - no matter for what reason? Here are some of the more obvious ones:

    - No internet connection (permanently or temporarily)
    - Expensive connection, e.g. a mobile data package, staying at a hotel, etc.
    - Presentation devices at an exhibition, e.g. tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, as in some Third World countries
    - Providing a catalogue on CD or USB pen drive

    Anyway, it doesn't matter really; we should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure we can do that. Eventually, you might think, why don't they simply use 3G-enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request but of how to deliver a solution for it.

    Possible solutions... or not?

    We have already done offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device on which the offline site should be available. Here, it is clearly expressed that we have to address this on an Apple iPad - well actually, I think they'd like to use multiple devices during their exhibitions. The following is an overview of possible solutions depending on the technology or device in use, and how it can be done:

    - Replication of source files and database: The above-mentioned web site runs on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.
    - Saving pages for offline usage: This is actually a quite tedious job, but still practicable for small web sites.
    - Tool-based approach to 'harvest' the web site: There are quite a few tools in the wild that could handle this job, namely wget, HTTrack, web copier, etc.
    - Screenshots bundled as a PDF document: Not really... ;-)
    - Creating a screencast or video: Simply navigate through your website and record your desktop session.
    Actually, we are using this kind of approach to track down difficult problems, in order to see and understand exactly what the user was doing to cause an error. Of course, this list isn't complete, and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original website is dynamic and data-driven, built with ASP.NET. As we have to put the result onto iPads, we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site, you might have to check which of the applications produces the best results for you. My usual choice is wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence, we opted for HTTrack. HTTrack comes in different flavours: as a console application, but also as either a GUI (WinHTTrack on Windows) or a web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack: "HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online." There is also extensive documentation for all options and switches online; the general recommendation is to go through the HTTrack Users Guide by Fred Cohen, which covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available. A very handy feature of HTTrack is the error/warning log it writes after completing the download. It contains detailed information about errors that appeared on the pages and in the links within the pages that have been processed:

    Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
    Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
    Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily it was only images that were missing. Being directly on the web server, we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks.
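
    For reference, a minimal HTTrack invocation of the kind described above might look like the following; the output directory and filter are illustrative only, and the real runs used additional include/exclude rules and fine-tuning:

        httrack "http://www.resortwork.co.uk/" -O /srv/offline/resortwork "+*.resortwork.co.uk/*" -v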
    From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interaction like search and pagination, and partly for navigation; this is the main area where the offline experience needs improving. Of course, just as for standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good in Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and fetch the IE-specific files explicitly. And while having a look at the source code, we also found out that HTTrack actually modifies the generated HTML output: on several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason, even nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During our intensive root-cause analysis of a couple of HTML/CSS problems that needed some extra attention, it is very helpful to be familiar with an editor that allows search and replace over multiple files, e.g. sed - the stream editor for filtering and transforming text on Linux - or my personal favourite, Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that pointed to ASP.NET files instead of their generated HTML counterparts, like so:

    grep -lr -e '\.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target, and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too: just use the 'Replace in files' feature and you are settled, especially in combination with regular expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, searching and replacing strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred the modified set of files to a Windows 8 machine for review in Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output in Safari and again in Chrome. Apart from IE, for the reasons already mentioned above, the results were identical. And last but not least, it was time to check the web site on tablets. Please continue reading in the following articles:

    - Taking web sites offline for demonstration on Galaxy Tablet
    - Taking web sites offline for demonstration on iPad

    Read the article
