Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • IBM x3620 Server takes a long time to boot past UEFI to OS

    - by Joel Coel
    I have a pair of IBM System x3620 servers. These servers do fine once they finally reach the point where the operating system takes over, but it takes them forever to get past the new-fangled UEFI boot system... a good five minutes or so, maybe longer. I haven't timed it, but it's the kind of thing where you go get a cup of coffee while you wait and it's still going when you come back. Normally the only time I shut these down is for a monthly maintenance cycle (usually just Windows updates), so it's not a big deal. But in the case where I might have an outage, I'd sure like to get that 5 minutes back. Is there anything I can do to tell them to just go ahead and boot already?

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but that had no effect either. Could someone suggest a good way to solve this problem?
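
    One approach, sketched below under assumptions (the pad-ID generation and markup are hypothetical, not taken from the poster's app): serve the note editor at / with a 200 and real HTML, and move the random URL into the address bar with history.pushState on the client, so crawlers fetching / never see a redirect at all.

        <?php
        // index.php: answer with a 200 and real content instead of a Location redirect.
        // The $padId generation is a placeholder for however the app names new pads.
        $padId = substr(str_shuffle('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'), 0, 10);
        header('Content-Type: text/html; charset=utf-8');
        ?>
        <!DOCTYPE html>
        <html>
        <head><title>New pad</title></head>
        <body>
          <textarea id="pad" data-pad-id="<?php echo htmlspecialchars($padId); ?>"></textarea>
          <script>
            // Show the per-pad URL without an HTTP redirect, so "fetch as Google"
            // on "/" receives indexable HTML with a 200 status.
            if (window.history && history.pushState) {
              history.pushState(null, '', '/<?php echo $padId; ?>');
            }
          </script>
        </body>
        </html>

    The pad pages themselves should then also answer 200 with content at the /roTr94h4Gd-style URLs, since that is where the current Location header points crawlers.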

    Read the article

  • X11 for apache user

    - by fuenfundachtzig
    We are using Inkscape to convert SVG images uploaded to our server via a web form. For this, Inkscape offers a batch mode via the -z option, but this batch mode has a flaw: when Inkscape is run by the apache user, it breaks, saying

        $ inkscape -z -W drawing.svg
        X11 connection rejected because of wrong authentication.
        The application 'inkscape' lost its connection to the display localhost:11.0;
        most likely the X server was shut down or you killed/destroyed the application.

    If you do the same as a normal user you also get errors:

        Xlib: connection to "localhost:11.0" refused by server
        Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match
        (inkscape:24050): Gdk-CRITICAL **: gdk_display_list_devices: assertion `GDK_IS_DISPLAY (display)' failed
        301.27942

    But at least Inkscape gives the correct answer (here, the number stating the width of the image). Does somebody know how to make this also work for the apache user? Does it make sense to authorize apache to use X (and if so, how)? In any case, it doesn't feel like the right solution...
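
    A hedged workaround sketch, assuming the xvfb package (and its xvfb-run wrapper) is available: run Inkscape against a throwaway virtual framebuffer, so it never touches the real X display or its auth cookie.

        # -a picks a free display number automatically; the apache user
        # needs no X authority for this private server.
        xvfb-run -a inkscape -z -W drawing.svg

    Newer Inkscape releases can answer queries like -W without any X server at all, so upgrading may make the plain -z call work for the apache user too.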

    Read the article

  • OpenSSH SFTP: chrooted user with access to other chrooted users' files

    - by HannesFostie
    Decided to re-phrase the question entirely in order not to have to make a new one. I currently have an SFTP server set up using OpenSSH's SFTP functionality. All my users are chrooted, and everything works. What I need most right now is for one user, which is not root (because this user can't have any real SSH powers!), to have access to all other users' chrooted dirs. This user's job is to fetch all uploaded documents every once in a while. The directory structure as of now is:

        /home
        |_ /home/user1
        |_ /home/user2
        |_ /home/user3

    with ChrootDirectory set to /home/%u. User "adminuser" should have access to user1's, user2's and user3's directories, ideally without having access to /home itself, or at the very least to nothing outside /home. Bonus points for whoever can tell me how to let users write inside /home/%u without having to make a new directory inside that dir which they own themselves, since /home/%u itself must be owned by root, not the user (an OpenSSH chroot prerequisite).
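
    A hedged sshd_config sketch (the group name and paths are assumptions): chroot adminuser one level higher than everyone else, so the per-user directories are all visible inside its jail.

        # /etc/ssh/sshd_config
        Subsystem sftp internal-sftp

        Match User adminuser
            ChrootDirectory /home
            ForceCommand internal-sftp

        Match Group sftpusers
            ChrootDirectory /home/%u
            ForceCommand internal-sftp

    adminuser still needs filesystem read permission on the upload directories, e.g. via a shared group. For the bonus question, the usual pattern is to keep /home/%u root-owned (as the chroot rules demand) and create one writable, user-owned subdirectory such as /home/user1/upload at account-creation time; a chroot root that the user can write to directly is not possible under OpenSSH's ownership requirements.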

    Read the article

  • Inverted LACK Table Serves as a Perfect Gear Rack [DIY]

    - by Jason Fitzpatrick
    We’ve seen IKEA gear hacked to hold audio and computer gear before, but this mod adds a simple and effective twist. LACK end tables are, conveniently, the same width as a standard server rack. This makes it super simple for DIYers to mount their gear right into the legs of the table with no modification necessary. In this case, however, Winston Smith included a clever update to the mod. Rather than leave it like a standard table, he flipped the table upside down for increased stability and a stronger connection between the legs of his improvised audio rack and the table-top-turned-floor-plate. He then finished it with a matching LACK shelf piece to serve as a turntable stand. His gear is stored cleanly, off the floor, and in a sturdy container, all for about $25, a definite bargain when it comes to storage racks. Hit up the link below for more information and pictures. LACK Rack & EXPEDIT Desktop [IKEA Hackers]

    Read the article

  • Extract MP3 URL from a SWF file

    - by Charles
    I have a SWF file that basically shows a streamer, and it plays a song (I'm guessing it's MP3) that it links to, somewhere. I know the audio isn't embedded in the file since the SWF's file size is ~370KB. With most flash FLV and MP3 players, Internet Download Manager catches the URL as soon as the video/audio starts to load, and I can easily download it using IDM. In this case, IDM doesn't seem to sense anything - so I thought maybe I could extract the MP3 location myself. I tried decompiling the SWF file, as I'd heard before that with some files, decompiling can help in breaking down a file and getting just the info you need - but I honestly don't know what to look for in this particular file. Suggestions?
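
    A hedged starting point, assuming a machine with the strings utility and optionally the swftools package: URLs referenced by ActionScript are normally stored as plain text inside the SWF, so dumping printable strings often reveals the MP3 location directly.

        # Look for literal URLs or .mp3 references embedded in the file.
        strings player.swf | grep -Ei 'http|\.mp3'

        # swftools' swfdump can disassemble the ActionScript, where a
        # loadSound()-style call may name the stream.
        swfdump -a player.swf | grep -i mp3

    If the player fetches a playlist (XML/JSON) first, the grep will surface that URL instead; fetch it and look for the MP3 reference inside it.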

    Read the article

  • Using default parameters for a 404 error (PHP with MVC design)?

    - by user1175327
    I have a custom-made framework (written in PHP). It all works very well, but I have some doubts about a certain thing. Right now, when a user calls this URL, for example:

        http://host.com/user/edit/12

    it resolves to:

        user = userController
        edit = editAction() in userController
        12   = treated as a param

    But suppose the controller 'userController' doesn't exist. Then I could throw a 404. On the other hand, the URL could also be treated as params for the indexController (which is the default controller). In that case:

        controller = indexController
        user       = could be an action in indexController, otherwise treated as a param
        edit       = treated as a param
        12         = treated as a param

    That is actually how it works right now in my framework, so basically I never throw a 404. I could of course say that params can only be given if the controller name is explicitly named in the URL. So if I want the above URL to be invoked by the indexController's indexAction, I would specifically have to name the controller and action in the URL, which would become:

        http://host.com/index/index/user/edit/12

        index           = indexController
        index (2nd one) = the action method
        user            = treated as a param
        edit            = treated as a param
        12              = treated as a param

    That way, when a controller doesn't exist, I don't reroute everything as params to the index controller; I simply throw a 404 error. Now my question is: which one is preferred? Should I make both options configurable in a config file, or should I always use one of them, simply because it's the only and best way to do it?
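
    A minimal sketch of the explicit-404 variant (function and class names are hypothetical, not from the poster's framework): resolve the first segment to a controller class and 404 only when it genuinely doesn't exist, instead of silently rerouting.

        <?php
        function dispatch($path)
        {
            $segments   = array_values(array_filter(explode('/', $path)));
            $controller = ucfirst(isset($segments[0]) ? $segments[0] : 'index') . 'Controller';
            $action     = (isset($segments[1]) ? $segments[1] : 'index') . 'Action';
            $params     = array_slice($segments, 2);

            // Route only to controllers/actions that really exist;
            // anything else is an explicit 404, never a param fallback.
            if (!class_exists($controller) || !method_exists($controller, $action)) {
                header('HTTP/1.1 404 Not Found');
                echo 'Not Found';
                return;
            }

            call_user_func_array(array(new $controller, $action), $params);
        }

        dispatch(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));

    With this shape, http://host.com/user/edit/12 can only reach userController, and index-controller params have to be spelled out, which makes dead URLs visible to crawlers and users as real 404s.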

    Read the article

  • Diagnosing random power-down crashes on a hackintosh [closed]

    - by Iain
    I have a hackintosh running on a Gigabyte P55M UD2. It has run without a single issue as a hackintosh for about 9 months, and the machine is about 2.75 years old. It never had issues when it was just a Linux box. The power supply is 750 watts. It recently started spontaneously rebooting, seemingly unconnected to any particular use case or RAM/CPU load. There is no hanging or anything; it just suddenly acts like the power went out, and the frequency seems to be increasing. I've tried using just one of the two RAM sticks at a time, and it happens with either one in. I'm not sure what to try next. Is there some way of determining whether it's a mobo, CPU, or power supply issue short of replacing them all? Thanks! Iain
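
    A hedged way to narrow it down without buying parts, assuming the machine can boot a Linux live USB: memtest failures point at RAM or the memory controller, while power-offs that appear only under load point at the PSU, VRMs or cooling.

        # RAM: run MemTest86+ from the live USB boot menu for several full passes.

        # CPU + PSU under sustained load:
        sudo apt-get install stress lm-sensors
        stress --cpu 8 --vm 4 --vm-bytes 1G --timeout 1800s

        # ...and in a second terminal, watch temperatures while it runs:
        watch -n 2 sensors

    If it power-cycles under stress but passes memtest, suspect the power supply; if it also dies while idle, a failing PSU or motherboard becomes more likely.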

    Read the article

  • Recover a deleted webpage

    - by rc
    Suppose a blog or a nice article was hosted on a website, and it got deleted, or worse, the website was brought down. How do you view that web page? I tried searching for the cached version in Google, but it looks like the content was deleted long ago and is not listed in the search results directly. There are annotations to the link from many other sites, but the actual content is still not fully available. Now, can anybody help me see this page? I am actually looking for http://effectize.com/become-coolest-programmer :) Moreover, in addition to bookmarking a favorite link, is it possible to cache the content of the link as well for later reference, in case it gets deleted? EDIT: Looks like a URL can be cached for future reference. Try: http://backupurl.com/
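
    Besides Google's cache, the Internet Archive's Wayback Machine keeps independent crawled snapshots and exposes a small JSON API to check for one; a quick sketch:

        # Ask the Wayback Machine whether it holds a snapshot of the page.
        curl 'https://archive.org/wayback/available?url=effectize.com/become-coolest-programmer'

        # Or browse every capture it has for the URL:
        # https://web.archive.org/web/*/http://effectize.com/become-coolest-programmer

    The API returns the closest archived snapshot's URL and timestamp when one exists.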

    Read the article

  • Upgrading Fedora 16 to 17 with crypted LVM

    - by nijansen
    As the title suggests, I want to upgrade Fedora 16 to the Fedora 17 Alpha build, but I am struggling to do so because of my encrypted HDD. To avoid the hassle of a CD-ROM or USB install, I thought preupgrade would be a good idea. It downloads the necessary files, stores an image somewhere, and creates an entry in my boot manager. When I choose to upgrade from the boot manager, it crashes halfway through because it cannot access any of the prepared files (because they are encrypted) and hands me a debug console. Unfortunately, this case apparently is not covered by the Fedora troubleshooting advice; at least I was not able to find anything there. I would guess I have to mount my HDD manually, but 1) how? And 2) how do I resume the upgrade afterwards? I would really appreciate a push in the right direction.
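
    A hedged sketch of the manual unlock from the debug console (device and volume-group names are assumptions; check yours with blkid and lvs):

        # Open the LUKS container, activate the LVM volumes inside it,
        # and mount the filesystem holding the preupgrade files.
        cryptsetup luksOpen /dev/sda2 luks-root
        vgchange -ay
        mount /dev/mapper/vg_fedora-lv_root /mnt

    Whether the installer can be resumed from that console varies; if it cannot, keeping the preupgrade files on an unencrypted /boot (when it is large enough to hold the install image) avoids the problem, since that partition is readable before any LUKS volume is opened.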

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene, I am trying to work out the best way to pass the scene to the render system to be rendered. If the scene is managed by an arbitrary structure (an octree, BSP tree, quad-tree, kd-tree, etc.), what is the best way to pass it to the render system? The obvious problem is that if the render system is simply given the root node of the structure, it would require intrinsic knowledge of the structure in order to traverse it.

    My solution is to clip all objects outside the frustum in the scene manager, build a list of the objects that are left, and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (a structure the render system requires anyway as a means of knowing which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them.

    I have been thinking a lot about this, and I have searched various terms on here and followed any additional information/links, but I have not really found a definitive answer. It may be that there is no definitive answer, but I would appreciate some advice and tips. Is this a reasonable solution to the problem? Are there any improvements that I could make? Are there any caveats I should know about?

    Side question: am I right in assuming that octrees, BSP trees, etc. are all forms of BVH?

    Read the article

  • How to make a VM scale when demand for resources increases

    - by Cray XT3
    I have a server with 16 virtual cores and 24 GB of RAM, using Xen virtualization with Ubuntu as dom0. I created 4 VMs (in para mode), each with different applications. CPU load varies on each VM; sometimes the first VM reaches nearly 100% CPU while the others are under 25% or even less. So is there a way for a VM to get CPU from the other VMs when they are not actually using it, or when their utilization is under 25%? The same question applies to RAM. I am not sure whether I am really talking about "cloud" here. Initially I would like to give every VM a single VCPU, but let it scale up to 8 or more by taking CPUs from other VMs if they are not using them. Is there any kind of tool that makes a VM scale its resources when demand increases? Are CloudStack and OpenStack designed for this kind of purpose, or are they just GUIs to manage VMs?
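
    A hedged note: with Xen's default credit scheduler, you can get much of this behaviour without extra tooling by over-assigning VCPUs and letting weights arbitrate contention. A sketch (domain names and values are assumptions; older toolstacks use xm in place of xl):

        # Give a domU several VCPUs (up to the maxvcpus in its config);
        # idle VCPUs cost almost nothing, and a busy VM can use the
        # physical cores the other VMs leave idle.
        xl vcpu-set vm1 8

        # Weight sets the relative share under contention (default 256);
        # cap limits a domain to a percentage of one CPU (0 = uncapped).
        xl sched-credit -d vm1 -w 512 -c 0

    RAM is harder: ballooning (xl mem-set) can shift memory between running domains within their configured maximums, but nothing does it automatically. That kind of policy-driven scaling is what cloud stacks layer on top; CloudStack and OpenStack themselves are orchestration layers rather than schedulers.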

    Read the article

  • Static IP Address on Ubuntu 12.04 Virtual Machine

    - by chrisnankervis
    I've set up a VM running Ubuntu 12.04 specifically for local web development and am having some problems ensuring it has a static IP address. A static IP address is important because I use it in my hosts file to assign a .local suffix to addresses used both in the browser and to connect to the correct database on the VM. Currently, every time I connect to a new network or my VM is assigned a new IP address, I need to reconfigure my whole environment, which is becoming quite a pain. It probably doesn't help that the default-lease-time on the Ubuntu VM is set to 1800 by default. At the moment I'm using VMware Fusion, and the network adapter is enabled and set to "Autodetect" under Bridged Networking. I've tried to set a static IP address within dhcpd.conf using the code below:

        host ubuntu {
            hardware ethernet 00:50:56:35:0f:f1;
            fixed-address: 192.168.100.100;
        }

    The fixed-address that I've used is also outside the range specified in the subnet block (which in this case is 192.168.100.128 to 192.168.100.254). I've tried adding and removing the network adapter and restarting my Mac after each attempt, to no avail. Below is an ifconfig of the VM that might be of some help:

        eth0      Link encap:Ethernet  HWaddr 00:50:56:35:0f:f1
                  inet addr:192.168.0.25  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::250:56ff:fe35:ff1/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1624 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:416 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:147348 (147.3 KB)  TX bytes:41756 (41.7 KB)

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Are there any specific issues with 12.04 that I'm missing? Otherwise, has anyone else got any ideas? Thanks in advance.
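
    A hedged observation plus a sketch: under Bridged Networking, the address comes from the physical network's DHCP server, not from Fusion's dhcpd.conf (which serves the NAT and host-only networks), which would explain why the host entry is ignored and eth0 lands on 192.168.0.x. Note also that ISC dhcpd syntax is fixed-address 192.168.100.100; with no colon after the keyword. Setting the address statically inside the guest sidesteps the lease problem entirely; the values below are examples matching the ifconfig output above, not known-good settings for your network.

        # /etc/network/interfaces on the Ubuntu 12.04 guest
        auto eth0
        iface eth0 inet static
            address 192.168.0.25
            netmask 255.255.255.0
            gateway 192.168.0.1
            dns-nameservers 8.8.8.8

        # apply with: sudo ifdown eth0 && sudo ifup eth0

    A static bridged IP only stays valid on a matching subnet, so for a VM that travels between networks, switching Fusion to NAT and pinning the address on the NAT subnet is the more portable setup.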

    Read the article

  • Ubuntu won't suspend anymore, but it did upon install.

    - by Bruce Connor
    I fresh-installed Ubuntu 10.10 back when it came out, and my laptop was suspending fine. All of a sudden, I can't get my laptop to suspend anymore. It's an HP Pavilion dv2-1110, but I don't think it's a hardware issue, and here's why: it suspended fine upon first install, and I haven't installed any new kernels since then. I have, however, installed tons of packages, so it's probably a package. The symptoms:

        - The suspend and hibernate options disappeared from the shutdown menu.
        - If I press my keyboard's suspend button (or if I close the lid) I get an
          error message [screenshot not reproduced here].
        - If I try the command "pmi action suspend", I get the error message:

              Error org.freedesktop.DBus.Error.ServiceUnknown: The name
              org.freedesktop.Hal was not provided by any .service files

        - If I try the command "echo -n mem > sudo /sys/power/state" I get
          absolutely no output and no visible effect.

    What might be causing this behavior? I thought a list of installed packages might be useful, but it's huge and I don't know how to post it here in a collapsed/expandable form. EDIT: Just in case someone asks, none of the installed packages are kdm or anything like that (which would justify the lack of options in GNOME's shutdown menu).
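
    A hedged diagnostic sketch: the DBus error says HAL is missing, and on 10.10 the pmi tool still depends on it, so test the pm-utils path directly (package and log names are standard Ubuntu ones, but worth verifying):

        # Does the kernel/pm-utils side work at all?
        sudo pm-suspend

        # What happened to HAL, and when?
        dpkg -l hal
        grep -i hal /var/log/apt/history.log

        # The UPower path the desktop session uses:
        dbus-send --system --print-reply \
          --dest=org.freedesktop.UPower /org/freedesktop/UPower \
          org.freedesktop.UPower.Suspend

    If pm-suspend works but the menu options are still missing, the breakage is in the session/policy layer (HAL/UPower) rather than the kernel, which fits the disappearing menu entries.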

    Read the article

  • What can be the cause of new bugs appearing somewhere else when a known bug is solved?

    - by MainMa
    During a discussion, a colleague told me that he has some difficulties with his current project while trying to solve bugs. "When I solve one bug, something else stops working elsewhere," he said. I started to think about how this could happen, but can't figure it out.

    I sometimes have similar problems when I am too tired or sleepy to do the work correctly and to keep an overall view of the part of the code I am working on. Here, the problem has persisted for a few days or weeks, and is not related to my colleague's focus. I can also imagine this problem arising on a very large, badly managed project, where teammates have no idea who does what, nor what effect a change they are making can have on others' work. This is not the case here either: it's a rather small project with only one developer. It could also be an issue with an old, badly maintained and never-documented codebase, where the only developers who could really imagine the consequences of a change left the company years ago. Here, the project has just started, and the developer doesn't use anyone else's codebase.

    So what can be the cause of such an issue in a fresh, small codebase written by a single developer who stays focused on his work? What may help? Unit tests (there are none)? Proper architecture (I'm pretty sure the codebase has no architecture at all and was written with no preliminary thinking), requiring a full refactoring? Pair programming? Something else?

    Read the article

  • Is it safer to use the same IV every time data is encrypted, or a dynamic IV that is sent along with the encrypted text? [closed]

    - by kiamlaluno
    When encrypting data that is then sent to a server, is it better to always use the same IV, which is already known to the receiving server, or to use a dynamic IV that is sent along to the receiving server? I am referring to the case where the remote server receives data from another server, or from a client application, and executes operations on a database table, in the table row identified by the received data. Which of the following PHP snippets is preferable?

        $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND);
        $ks = mcrypt_enc_get_key_size($td);
        $key = substr(md5('very secret key'), 0, $ks);
        mcrypt_generic_init($td, $key, $iv);
        $encrypted = mcrypt_generic($td, 'This is very important data');
        send_encripted_data(combine_iv_encrypted_text($iv, $encrypted));

    or

        $ks = mcrypt_enc_get_key_size($td);
        $key = substr(md5('very secret key'), 0, $ks);
        mcrypt_generic_init($td, $key, $iv);
        send_encripted_data(mcrypt_generic($td, 'This is very important data'));

    In which way is one of the snippets more vulnerable than the other one?
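
    For context, a hedged sketch of the receiving side for the first (random-IV) variant, assuming combine_iv_encrypted_text simply prepends the IV to the ciphertext; the IV is not secret, it only has to be unpredictable and unique per message:

        <?php
        // $received is the blob from the wire: IV followed by ciphertext.
        $td     = mcrypt_module_open(MCRYPT_RIJNDAEL_128, '', MCRYPT_MODE_CBC, '');
        $ivSize = mcrypt_enc_get_iv_size($td);

        $iv         = substr($received, 0, $ivSize);
        $ciphertext = substr($received, $ivSize);

        $key = substr(md5('very secret key'), 0, mcrypt_enc_get_key_size($td));
        mcrypt_generic_init($td, $key, $iv);
        $plaintext = rtrim(mdecrypt_generic($td, $ciphertext), "\0");  // strip zero padding

    Reusing one fixed IV with the same key makes identical plaintexts encrypt to identical ciphertexts (and, in CBC mode, leaks common prefixes), which is exactly what a fresh random IV per message prevents; that makes the first snippet the safer pattern.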

    Read the article

  • SQL Server Transaction Log RAID

    - by Eric Maibach
    We have three SQL Server machines, and each has about five or six databases on it. We are in the process of moving these servers to a new SAN, and I am working out the best RAID configuration. Currently the log files for all of the databases share one RAID array; there is nothing else on that array except the log files, but all of the databases use the same array for their logs. I have read that it is best to have log files on separate disks. In our case, though, I am not sure whether it would be best to have one big array of about 8 drives holding all the log files, or to create four two-disk arrays and give some of the larger databases dedicated disks for their log files.

    Read the article

  • Webserver on a rotating server with NAT IP or changing IPs

    - by hpsoftware
    I will have to elaborate my question, so please have patience. Explaining the logic: if you are familiar with LogMeIn, it installs a client on your computer and then keeps track of where that computer is, as long as it's connected to the internet. You can always access your computer, no matter where it is or what its IP is; you just go to logmein.com and access it from there. Now what I am asking: let's assume I have a website hosted on my laptop; call it a webserver. As I move around, I get a new IP, sometimes even on a hotel network. Is it possible to do something like what LogMeIn does, so I can keep moving my webserver to new IPs, with some local client or something that keeps updating my IP? I am sure I would need a gateway server somewhere that is connected to my domain name via DNS, so somebody accessing www.mywebsite.com reaches my main server and then gets routed to my laptop, which could be anywhere, but which my gateway server is able to communicate with. I will keep updating the case description based on comments to make more sense. Please have patience with me. Regards
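
    A hedged sketch of the gateway idea using a plain SSH reverse tunnel (hostnames and ports are assumptions): the laptop dials out to the gateway, so NAT and changing IPs stop mattering, and the gateway publishes the site under the stable DNS name.

        # On the laptop, whenever it has connectivity (autossh re-dials on drops):
        autossh -M 0 -N -R 8080:localhost:80 tunnel@gateway.mywebsite.com

        # On the gateway, nginx forwards the public site into the tunnel:
        server {
            listen 80;
            server_name www.mywebsite.com;
            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }

    This is essentially what LogMeIn-style services automate. Dynamic DNS solves the simpler variant where the laptop is publicly reachable and only its IP changes; behind hotel NAT, the outbound tunnel is what makes it work.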

    Read the article

  • Strange resizing of partition after reinstalling Ubuntu 14.04 64bit

    - by Mike
    I started with Windows 7 on a 120GB SSD and Ubuntu 14.04 32-bit installed on a 60GB partition on a separate 1TB HDD. I just did a fresh reinstall of 14.04 64-bit on the 1TB HDD. In the installation setup process, I selected the second choice, "delete Ubuntu 14.04 and all its files, documents, photos etc. and reinstall", expecting it to reinstall the 64-bit OS on the already existing 60GB partition. Instead, it reinstalled Ubuntu on a 43.5GB partition and created a separate 15.8GB partition. So now my disk space for Ubuntu (in Settings > Details) reads as 43.5GB instead of the 60GB my old 32-bit install had. The upside is I can now access my 1TB HDD from my toolbar (and all the files located on it); before, I could only access it through Windows. (I can also access the SSD, but that was always the case.) Both drives are mounted now. My initial reaction was to go into Windows 7 disk management, delete the strange new 15GB partition, and extend the 43.5GB partition into the unallocated space, but I'm not sure if this is necessary or would even work. My question is: why did the installer create a 15GB partition, shrinking my Ubuntu disk space, and is it useful? I don't want wasted space, so before I go through all my Ubuntu setup, should I change this? At this time my HDD reads as a 43.5GB partition, a 15.8GB partition, and 874GB exfat32 (939GB total).

    Read the article

  • Why does 301 redirect work for http but not for https?

    - by Tom G
    Through my domain registrar I have set up a domain, essayme.co.uk, to automatically forward to https://google.com. If I go to http://essayme.co.uk it works as expected and redirects me to https://google.com:

        $ curl -i http://essayme.co.uk
        HTTP/1.1 301 Moved Permanently
        Cache-Control: max-age=900
        Content-Type: text/html
        Location: https://google.com
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Sat, 07 Jun 2014 11:14:16 GMT
        Content-Length: 0
        Age: 0
        Connection: keep-alive

    However, if I go to https://essayme.co.uk it just freezes and times out:

        $ curl -i https://essayme.co.uk
        curl: (7) Failed connect to essayme.co.uk:443; Operation timed out

    What is happening in the second case? (And, if possible, how can I get the redirect to work for https?)

    Problem background/clarification: I don't have an SSL certificate for the essayme.co.uk domain above, but I do for my live domain (let's call it mywebsite.com), and I was seeing the exact same problem on that domain (hence why I'm trying to debug it). Unfortunately I can't experiment with the live domain (as it's live), and I would like to avoid having to buy a second certificate for essayme.co.uk just for debugging (unless absolutely necessary).

    The problem I was seeing: my live domain, mywebsite.com (not its real name), has a valid SSL certificate. Visiting https://www.mywebsite.com displayed the webpage as expected. I had set up forwarding (as above) from the naked domain (mywebsite.com) to https://www.mywebsite.com. Visiting http://mywebsite.com redirected to https://www.mywebsite.com as expected; however, visiting https://mywebsite.com would freeze and time out (as in the question above). I also tried forwarding it to http://www.otherwebsite.com as an experiment (i.e. forwarding to another site that does not use SSL), but the result was the same: http://mywebsite.com redirected as expected, while https://mywebsite.com froze and timed out again. So I set up essayme.co.uk as an experiment to try to understand why it doesn't work.
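
    A hedged explanation and sketch: a redirect can only be sent after the TLS handshake completes, so an https:// request needs something listening on port 443 with a certificate covering the requested name before any 301 can be emitted. Registrar forwarding services typically run the redirect only on port 80, hence the connect timeout on 443. Redirecting the https:// form therefore needs an endpoint you control with a certificate valid for the naked domain, e.g. (certificate paths are placeholders):

        # nginx, with a certificate whose SANs include mywebsite.com
        server {
            listen 443 ssl;
            server_name mywebsite.com;
            ssl_certificate     /etc/ssl/mywebsite.com.crt;
            ssl_certificate_key /etc/ssl/mywebsite.com.key;
            return 301 https://www.mywebsite.com$request_uri;
        }

    In short, https://mywebsite.com cannot be redirected without a certificate for mywebsite.com, which is why the naked domain is usually added as a SAN on the www certificate rather than bought separately.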

    Read the article

  • Execute local script requiring arguments on Linux via plink

    - by c_maker
    Is it possible to execute (from Windows) a local script with arguments on a remote Linux system? Here's what I've got:

        plink 1.2.3.4 -l root -pw mypassword -m hello.sh

    Is there a way to do this same thing, but be able to give input parameters to hello.sh? I've tried many things, including:

        plink 1.2.3.4 -l root -pw mypassword -m hello.sh input1 input2

    In this case it seems that plink thinks that input1 and input2 are its own arguments, which makes sense. What are my options?
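
    A hedged workaround: -m only names a local file whose contents become the remote command, so the usual trick is to feed the script to a remote shell on stdin and hand the arguments to that shell, where they become the script's positional parameters:

        plink 1.2.3.4 -l root -pw mypassword "bash -s input1 input2" < hello.sh

    bash -s reads the script from standard input and assigns the remaining words to $1 and $2, which is exactly what hello.sh would see if run locally with those arguments.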

    Read the article

  • Is there a way to do a Windows 7 repair install when you are unable to start/boot Windows 7?

    - by irrational John
    My understanding is that the only way to perform a "repair install" in Windows 7 is to run the install setup.exe within the Windows 7 installation you want to repair. This seems a little brain dead to me since usually the reason I wanted to perform the repair install was because the existing installation was so broken that I could no longer boot and use it. It seems Microsoft is saying my only option in that case is to do a clean install and then reinstall all my apps. So I'm wondering if anyone knows of a way to perform a Windows 7 repair install ... one that preserves your existing OS settings and application installs ... on a Windows 7 partition that cannot be booted.

    Read the article

  • Asus z53 laptop overheating problem

    - by Tiberiu Hajas
    Hi all, I'm wondering if anyone has encountered overheating on an Asus laptop, especially the z53 model? Usually the right side of the laptop and the vent in the upper corner blow hot air even under minimal load; the CPU temperature can easily reach 65-70C, and the GPU is even above 80C. I'm using NHC (Notebook Hardware Control) to set a more conservative power-consumption profile, but that only helps a bit. Has anyone opened up the case? I'm wondering whether it needs a dust clean, etc. I still have some warranty on it. Thanks.

    Read the article

  • New MySQL Enterprise Edition Demo

    - by Bertrand Matthelié
    In case you haven’t seen it yet, we released last week a new MySQL Enterprise Edition Flash Demo. This demo helps you understand in only 3 minutes how Oracle’s MySQL Enterprise Edition reduces the risk, cost and time required in developing, deploying and managing business-critical MySQL applications. You can watch it here. After watching the demo, you can easily go ahead and try MySQL Enterprise Edition, and/or get more detailed information in our whitepaper. Enjoy the demo!

    Read the article

  • Cannot Enter Repro Admin Web Interface at Port 5080

    - by aqua
    I have followed the instructions on www.rtcquickstart.org to set up my firewall, DNS settings and TLS, and have installed the TURN server and the repro proxy as instructed, and restarted repro. However, I am not able to access the web interface of repro on port 5080, either at localhost:5080 / 127.0.0.1:5080 or at the server's IP address, IPADDRESS:5080 (I have set the server's IP for binding in repro.config). I get the browser error message 'Unable to connect to server' whenever I try to reach the web interface on port 5080. I initially had Apache2 installed, which loaded pages correctly at port 80, and when checked it 'listened' on port 5080 after being configured in /etc/apache2/ports.conf; however, the repro web interface still did not work at port 5080. I have tried uninstalling Apache2 in case it was conflicting with repro's web server, but the problem persists, and testing port 5080 now shows that nothing is listening on it. I have tried reinstalling/purging repro, but that has not helped. My router is correctly set to allow all ports; port 5080 is open and forwarding correctly. I can connect to the internet and ping all websites through the server, and everything else is working correctly. I would be grateful if anyone could offer advice on how to solve this problem.
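
    A hedged checklist (the service, path and key names below are assumptions; verify them against the comments in your installed repro.config): confirm the daemon is up and actually bound to 5080 before suspecting the network.

        # Is repro running, and is anything bound to 5080?
        sudo service repro status
        sudo netstat -tlnp | grep 5080

        # Watch the log for bind errors while restarting:
        sudo service repro restart
        sudo tail -f /var/log/repro/repro.log

        # In /etc/repro/repro.config the web admin is typically controlled
        # by lines like these:
        #   HttpBindAddress = 0.0.0.0
        #   HttpPort = 5080

    Binding the HTTP interface to one specific IP while browsing via 127.0.0.1 (or the reverse) produces exactly this 'Unable to connect' symptom, so temporarily binding to 0.0.0.0 is a cheap first test.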

    Read the article
