Search Results

Search found 22471 results on 899 pages for 'firefox os'.

Page 400 of 899

  • Why does TCP sometimes not acknowledge the packets it receives?

    - by misteryes
    I use Firefox to visit a web server running on a computer on my LAN, and I notice that sometimes TCP doesn't acknowledge the packets it receives. For example, in the following captured packets: https://docs.google.com/file/d/0B-LaBUj9KtQhS0RYNXF1RjZTa2M/edit?usp=sharing the 7th, 9th, and 11th packets are duplicate ACKs: the browser receives TCP packets 6, 8, and 10, but its TCP stack doesn't acknowledge them. Why?
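
    For context, a duplicate ACK is usually the receiver re-acknowledging the last in-order byte because a later segment arrived out of order, not a refusal to acknowledge new data. One way to confirm that from the trace, assuming it was saved as capture.pcap and a reasonably recent tshark (older builds use -R instead of -Y):

        # list the segments Wireshark's analysis engine flags as duplicate ACKs
        tshark -r capture.pcap -Y "tcp.analysis.duplicate_ack"

        # and any retransmissions or out-of-order segments that explain them
        tshark -r capture.pcap -Y "tcp.analysis.retransmission || tcp.analysis.out_of_order"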

    Read the article

  • What's the logic behind the Ctrl-Tab order in Internet Explorer 7?

    - by torbengb
    Using Ctrl+Tab in Internet Explorer 7 (on Windows XP) seems to follow no sequence that I can work out, and Shift+Ctrl+Tab jumps around with no clear logic as to what tab order is being followed. It doesn't even reverse the forward order, as it does in other programs such as Firefox. Please explain how this key combo works in IE7, because I can't figure it out.

    Read the article

  • Nginx SSL redirect for one specific page only

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for me. I was wondering if anyone has advice on how to improve this configuration:

        ...
        ssl_certificate example.net.crt;
        ssl_certificate_key example.key;

        server {
            listen 80 default;
            listen 443 ssl;
            server_name www.example.net example.net;
            access_log /srv/www/example.net/logs/access.log;
            error_log /srv/www/example.net/logs/error.log;
            root /srv/www/example.net/public_html;
            index index.php index.html;

            location / {
                if ( $scheme = https ) {
                    rewrite ^ http://example.net$request_uri? permanent;
                }
                try_files $uri $uri/ /index.php?$uri&$args;
                index index.php index.html;
            }

            location ^~ /admin.php {
                if ( $scheme = http ) {
                    rewrite ^ https://example.net$request_uri? permanent;
                }
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS on;
            }

            location ~ \.php$ {
                try_files $uri /index.php;
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param HTTPS off;
            }
        }
        ...

    The extra information in the location ^~ /admin.php block seems unnecessary; does anyone know an easy way to avoid duplicating it? Without it, requests skip the PHP block and nginx just returns the raw .php files. Currently it applies HTTPS correctly in Firefox when I navigate to admin.php; in Chrome, it downloads the admin.php page instead. When returning to the non-HTTPS part of the site in Firefox, it does not correctly return to http but stays on SSL. As I said, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of ways to reduce duplicate options in the configuration? Thanks in advance!

    EDIT: Clearing the cache/cookies seems to have fixed it. Is this the right way to do http/https redirection? I sort of made it up as I went along.
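
    The duplicated FastCGI lines are the part that invites drift; one common nginx idiom, sketched here with a hypothetical path and filename, is to factor them into a snippet that every PHP location includes:

        # write the shared PHP/FastCGI settings once (path and name are hypothetical)
        printf '%s\n' \
            'try_files $uri /index.php;' \
            'include fastcgi_params;' \
            'fastcgi_pass 127.0.0.1:9000;' \
            'fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;' \
            | sudo tee /etc/nginx/php_fastcgi.conf >/dev/null

    Each location block then shrinks to include /etc/nginx/php_fastcgi.conf; plus its own fastcgi_param HTTPS on; (or off;) line, and sudo nginx -t && sudo nginx -s reload picks up the change.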

    Read the article

  • Truecrypt and hidden volumes

    - by user51166
    I would like to know the opinion of users who do (or don't) use the hidden volume encryption feature of TrueCrypt. Personally, until now I have never used it: on Windows I encrypt the system drive as a standard volume; on GNU/Linux I encrypt using LUKS, which is TrueCrypt's equivalent of a standard volume. For data I use the standard volume approach as well. I read that this feature is nice and all, but it isn't really used by most people.

    Do you use it or not, and why? Do you only store VERY sensitive data inside it, or what else? Technically speaking, making a hidden volume that has (almost) the same size as the outer one doesn't make sense: the outer volume will be encrypted but hold no data, which will look very strange. So not only does one have to plan which data to store where, one even has to remember each time to mount the outer volume with hidden volume protection (otherwise there will be data loss when writing to it). It's a bit messy: hidden OS + outer OS + outer volume + hidden volume = 4 partitions :(

    Similar question about the hidden operating system (which I don't use [yet]).
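
    For reference, the "remember to mount with protection" step can at least be scripted; a sketch assuming TrueCrypt's text-mode CLI on Linux, whose --protect-hidden option guards the hidden volume while the outer one is mounted read-write (device and mount point are examples; check truecrypt --help on your build):

        # mount the outer volume while shielding the hidden volume from overwrites
        truecrypt --text --protect-hidden=yes /dev/sdb1 /mnt/outer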

    Read the article

  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh:

        export MYVAR=myval

    I also have a PassEnv MYVAR line in a <VirtualHost> section of an Apache conf dir. That lets me do things like:

        $ echo $MYVAR
        myval
        $ python
        >>> import os; os.getenv('MYVAR')
        'myval'
        $ sudo echo $MYVAR
        myval
        $ sudo -i
        root# echo $MYVAR
        myval

    But then, despite that being the case, I get:

        root# /sbin/service httpd restart
        Stopping httpd:                                            [  OK  ]
        Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined
                                                                   [  OK  ]

    And all of my attempts to access MYVAR from within my WSGI scripts just don't work. Thoughts? Am I doing something obviously wrong?

    EDIT for more detail: I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information, the most common being the "environment" (dev/stage/prod). The scheme we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options I see are: put things in the shell environment, or put things in other config files. Getting things into the shell environment would be best, because then we won't need to write yet more duplicated "what is my environment" code.
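
    The usual cause here is that a daemon launched by the init system never sources /etc/profile.d/*.sh; only login shells read those files, and the sudo tests above all ran in such shells. A sketch of one fix, assuming a RHEL-family layout where the httpd init script sources /etc/sysconfig/httpd:

        # give the variable to the init script's environment instead of to login shells
        echo 'export MYVAR=myval' | sudo tee -a /etc/sysconfig/httpd
        sudo /sbin/service httpd restart

    Alternatively, a SetEnv MYVAR myval line in the Apache config sidesteps the process environment entirely, at the cost of duplicating the value in a second place.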

    Read the article

  • Set image for file extension

    - by acidzombie24
    On Windows 7, instead of opening GIFs in IE8 (the default), I set them to open in Firefox. When viewing files in the 'details' setting, GIF images now have a Firefox logo instead of what they had before. How do I set it back, or select an .ico (such as the one on XP)?

    Read the article

  • Best format for hard drive for Windows and Mac?

    - by Neil
    I have a 500 GB USB external hard drive. I need four partitions on it, for the following purposes:

      • 160 GB for a bootable backup of my Mac.
      • 160 GB for a bootable backup of my Windows.
      • 11 GB for a bootable Snow Leopard install disk.
      • The rest for file storage.

    Now I need a partition table that will be recognised on both Windows and Mac, without needing extra software on Windows, which will let me keep bootable copies of both OSes but let me access the file storage from both. Currently I have a GUID Partition Table, with Mac OS Extended (Journaled) partitions for the two backups, Mac OS Extended for the install disk, and NTFS for the file storage. While this is recognised perfectly on my Mac, thanks to an NTFS for Mac driver from Paragon, when connected to Windows the drive is detected by the machine (listed in Safely Remove USB) but not recognised in Windows Explorer unless I install MacDrive, which is not feasible on the public Windows machines where I might want to access my storage area. Can someone recommend the best combination of formats and software/drivers to get this done seamlessly?
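
    For what it's worth, a layout along those lines can be cut in one Terminal command with diskutil. This is only a sketch with an assumed device identifier and arbitrary names, using FAT32 (MS-DOS) for the shared area since Windows reads it without extra drivers (beware its 4 GB per-file limit; the Windows backup partition may still need reformatting as NTFS from within Windows):

        # WARNING: erases the whole drive; /dev/disk2 is an assumed identifier
        diskutil partitionDisk /dev/disk2 GPT \
            JHFS+ "Mac Backup" 160G \
            MS-DOS WINBACKUP 160G \
            JHFS+ "Install Disk" 11G \
            MS-DOS STORAGE R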

    Read the article

  • SSH with public/private key to iMac fails.

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both have Mac OS X 10.6.4; the server is running a fresh, clean install of the OS. When just activating Remote Login in System Preferences, everything works fine. But when setting up SSH to only work with a public/private key, I get the following error messages in the server log, depending on whether I use an RSA passphrase:

    With passphrase (case 1):

        PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y

    Without passphrase (case 2):

        Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2

    This is my setup algorithm:

      1. Create a private and public key on the client with ssh-keygen -t rsa. In case 1, I also set a passphrase.
      2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/
      3. In this folder, execute cat id_rsa.pub > authorized_keys
      4. Making sure Remote Login isn't active, execute sudo /usr/sbin/sshd -d on the server.
      5. Back on the client, type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept an RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.

    Then, depending on the case:

    CASE 1: The client halts for a password, and the response is permission denied even though the correct password is given. On the server I see the case 1 error above: PAM: user account has expired...

    CASE 2: The client gets Connection closed by 192.168.X.Y. On the server I see the case 2 error above: Failed publickey...

    What could possibly cause this?
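
    In case it helps, two things commonly explain these exact symptoms. First, the fingerprint shown on first connect is the server's host key, not the user key just generated, so that mismatch is expected. Second, sshd silently ignores authorized_keys when the home directory, ~/.ssh, or the file itself is group- or world-writable; tightening permissions is worth trying before anything else (<myServerUserName> is a placeholder):

        # run on the server
        chmod go-w /Users/<myServerUserName>
        chmod 700 /Users/<myServerUserName>/.ssh
        chmod 600 /Users/<myServerUserName>/.ssh/authorized_keys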

    Read the article

  • Processing of Group Policy fails only on 2008 servers, with "Name Resolution failure on the current domain controller"

    - by Ken Wolfrom
    We spent the last 3 months doing an upgrade from a 2003 domain to a 2008 R2 domain. Our last DC was rebuilt (5 total) and brought back online. After it was put online, some of our 2008 and 2008 R2 servers (10 now) started getting these errors in the event logs:

        The processing of Group Policy failed. Windows could not resolve the user name. This could be caused by one or more of the following:
        a) Name Resolution failure on the current domain controller.
        b) Active Directory Replication Latency (an account created on another domain controller has not replicated to the current domain controller).

    We can duplicate this if we drop to a command prompt and run GPUPDATE manually. When our users attempt to access a shared drive on an affected server via a \\server\share path, they get: "There are currently no logon servers available to service the logon request."

    This only affects the 2008 OS, and it is a random set of about 10 servers out of some 30 with this OS. The services on the machines are running OK, and we are able to log in with domain\user at the consoles and via RDP. We can log onto an affected machine, get to \\domainname\sysvol, and see the GPOs. We have checked the replication topology of the domain and it states all servers can replicate with no errors. We went back to the last DC, demoted it, removed DNS, then removed it from the domain and waited 24 hours; the issue still persists. We picked one server, removed it from the domain, rebooted, and added it back to the domain with no problems, but it still shows this behavior.

    Bottom line: we have some servers on which the domain will not let any UDP client/server apps or GPOs process, but the TCP-related items seem to work fine; HTTP, TCP calls, SQL and Oracle DBs all connect and process. Any input on possible reasons for this issue, and fixes? It only affects the 2008 servers on a 2008 R2 domain.

    Read the article

  • Browser http port-forwarding

    - by Kakao
    When using a browser like Firefox, I need any URL of the domain example.com to have the port :8008 appended — not only when I type it in the address bar, but anywhere it is referenced within the served HTML page. All other domains should be left as-is. I know I can set up a proxy like Squid or use a PAC file hosted on a web site, but I want it simpler if possible.

    Read the article

  • Why does Xvid encoding lag/lock up Windows 7?

    - by acidzombie24
    It seems to encode just fine, so you can see the results: http://www.sendspace.com/file/msku4q If you look at the mouse cursor, you'll see Firefox locks up once I click it. Calculator seems fine, but when I try to move it, it locks up. The Resource Monitor and Task Manager are up so you can see whether the CPU is being used up. It isn't; as you can see, less than 30% was used.

    Read the article

  • Gzip not working in browser

    - by Cathal
    According to whatsmyip.org, none of my browsers (Firefox, Chrome, etc.) on Windows 7 are gzip-enabled; it says 'NO, your browser is not requesting compressed content', which agrees with Chrome developer tools: as I was testing a site, it complained that the page, CSS, etc. weren't compressed. I've searched for an answer but cannot find anything. I've tested from another PC connected to the same router and that works fine, so something on this PC is broken... Any help? TIA
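
    One way to separate the server from the browser, assuming curl is available on the machine (the URL is a placeholder), is to request compression by hand and inspect the response headers:

        # request gzip explicitly and dump only the response headers
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://www.example.com/

    If Content-Encoding: gzip comes back here while the browsers still send no Accept-Encoding header, an intercepting proxy or security suite on the PC is the usual culprit, since such software often strips that header to inspect traffic.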

    Read the article

  • Radeon X1300 acceleration

    - by user30966
    Hello, I've just bought a Samsung XL2370 with a native resolution of 1920x1080. Should a Radeon X1300 be capable of shifting windows around on a screen that size? Maximising and minimising windows, and scrolling in Firefox and VS2008, seem very slow and jerky. Does the Radeon X1300 have any hardware acceleration? My old display was only 1024x768 and I never noticed any problems. Maybe it is time to buy a new graphics card? Thanks, AJ

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory keeps going down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space, and memory. Everything else behaves as expected, but our memory goes down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going.

    The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that folks say a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps; the memory is not used up as quickly, but it still decreases over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly stable on those nodes, but they don't receive as much traffic. Again, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated.

    The output of free -m on my server is as follows:

                         total       used       free     shared    buffers     cached
        Mem:              1657       1380        277          0        158        773
        -/+ buffers/cache:            447       1209
        Swap:              895          0        895

    In your opinion, does this look OK? At what stage would the OS give memory back — would it wait until free memory reaches 0%, or is this OS-dependent?
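
    The -/+ buffers/cache line is the one that matters here: only 447 MB is actually held by processes, because Linux deliberately keeps otherwise-idle RAM as buffers and page cache (the 158 MB and 773 MB columns) and releases it on demand, so "free" trending toward zero is normal. A quick, general-purpose way to see which processes really hold memory:

        # top 10 processes by resident set size (first output line is the header)
        ps aux --sort=-rss | head -n 11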

    Read the article

  • What are your favorite open source tools? (that are not very famous)

    - by sucuri
    I believe every system administrator is used to open source by now. From Apache to Firefox to Linux, everyone uses it at least a little. However, most open source developers are not good at marketing, so I know there are hundreds of very good tools out there that very few people know about. To fill this gap, share your favorite open source tool that you use in your day-to-day work that is not very famous. I will post mine in the comments.

    Read the article

  • Multiboot USB (OSX only): How to customize partition name?

    - by wrk2bike
    Trying to deal with all the Mac OS X recovery disks I've got by moving them to bootable USB images. I've got a big USB drive with multiple partitions, one for each recovery disk, and it's easy to use Disk Utility to "restore" each recovery DVD to a partition. When I boot my target Mac while holding down the Alt key, I can see all my bootable images and they work great. The problem is, they've all got the same name: "Mac OS X Install DVD."

    I manage Macs of various vintages. If my target Mac needs 10.6.3, for example, my only option seems to be to try each one until I get past the "Mac OS X can't be installed on this computer" message. I originally named my partitions with the OS X revision number, but that name is replaced by the disk image name during the Disk Utility restore. Is there any way to customize the name during or after a Disk Utility restore? I tried making a new DVD image on disk first and renaming it, but when I restore it to my recovery partition it has the original name.

    EDIT: After booting to the wrong partition and getting the "..can't be installed" message, I can open the Startup Disk menu and see the other partitions, and as I select each one, the info at the bottom indicates which OS revision is on that partition. So I know the info is in there! I just want it at the boot screen if possible.
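
    Renaming after the restore may be all this needs, since the boot picker reads the volume label. A sketch assuming the freshly restored partition is mounted under its default name (on some OS X releases the diskutil verb is renameVolume rather than rename, and the new label is up to you):

        # relabel the restored volume so the Alt-key boot picker shows the revision
        diskutil rename "/Volumes/Mac OS X Install DVD" "Install 10.6.3"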

    Read the article

  • Install KVM-based Windows 2008 remotely over SSH on a headless, no-graphics Ubuntu 10.04 server?

    - by taazaa
    Hi, I have a Dell server at a remote data center with Ubuntu 10.04 as the host. It is a minimal install with the necessary virtualization packages. There is no X and the machine is headless. I have the Win2008 DVD in the machine and want to install it remotely. I tried:

        virt-install --connect qemu:///system -n vmwin2k8 -r 1024 \
            --disk path=server2k8.qcow2,size=50 --cdrom /dev/sr0 \
            --vnc --noautoconsole --os-type windows --os-variant win2k8

    The qcow2 image gets created. However, I don't understand how to connect to see the install via VNC. This is my first time doing it, so it may be trivial or may not be possible. Remotely I have a Windows 7 machine with PuTTY and RealVNC Viewer. Where is the graphic output of VNC going? Do I have to have a VNC server running on the host or some other machine and then connect to it from my VNC client? Please let me know or point me in the right direction. I have been searching the web for several days to figure out how this is supposed to work. Thanks!
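
    For what it's worth, the usual pattern here (sketched below; the display number and login are placeholders) is that QEMU itself acts as the VNC server for the guest, listening on the host's loopback interface, so no separate VNC server is needed — you reach it through an SSH tunnel:

        # on the host: find the guest's VNC display (":0" means TCP port 5900)
        virsh --connect qemu:///system vncdisplay vmwin2k8

        # from the remote side: forward a local port to the host's loopback
        # (in PuTTY: Connection > SSH > Tunnels, source 5900, destination localhost:5900)
        ssh -L 5900:localhost:5900 user@host

    Then point RealVNC Viewer on the Windows 7 machine at localhost:5900 to watch and drive the installer.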

    Read the article

  • Shrinking Windows and recovery partitions on the Samsung new Series 9

    - by bobbaluba
    I just bought a Samsung NP900X3C, and as I was going to install Linux, I noticed the Windows and recovery partitions occupy a major portion of the disk. The disk is a 128 GB SSD, and I want to keep the Windows partition in order to play some games once in a while, but the Windows partition is already 45 GB full (with no installed programs) and the recovery partition is 20 GB. That leaves under 60 GB for Linux, which is not optimal, since that is what I'm going to be using most of the time, and there would be no room for games on the Windows partition. There are also two small partitions I can't account for: one 100 MB at the start of the disk that I'm guessing is some kind of boot partition, and one 5 GB that is described as an OS/2 hidden C: drive.

    What I'm wondering is: can I delete the recovery partition? What about the mystery 5 GB partition? Here is what fdisk reports:

        ubuntu@ubuntu:~$ sudo fdisk -l

        Disk /dev/sda: 128.0 GB, 128035676160 bytes
        255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x83953ffc

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *        2048     206847     102400   7  HPFS/NTFS/exFAT
        /dev/sda2          206848  198273023   99033088   7  HPFS/NTFS/exFAT
        /dev/sda3       198273024  207276031    4501504  84  OS/2 hidden C: drive
        /dev/sda4       207276032  250068991   21396480  27  Hidden NTFS WinRE
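
    Before deleting anything, it may be worth checking what actually lives on the two mystery partitions; a non-destructive look from the same live session:

        # identify filesystem signatures without mounting anything
        sudo blkid /dev/sda3 /dev/sda4

        # peek at the recovery partition read-only, then unmount it
        sudo mount -o ro /dev/sda4 /mnt && ls /mnt && sudo umount /mnt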

    Read the article

  • How to decouple development server from Internet?

    - by intoxicated.roamer
    I am working in a small setup with 4 developers (which might grow to 6 or 8 in a couple of years). I want to set up an environment in which developers get internet access but cannot share any data from the company on the internet. I have thought of the following plan:

      1. Set up a centralized git server (Debian). The server will have internet access. A developer will only have a git account on that server, and won't have any other account on it (a git-shell sketch for this follows below).
      2. Do not give internet access to developers' individual machines (Windows XP/Windows 7).
      3. Run a virtual machine (any multi-user OS) on the centralized server (the same one on which git is hosted). Developers will have accounts on this virtual machine and can access the internet via it. Any data movement between this virtual machine and the underlying server, as well as any of the developers' machines, is prohibited.
      4. All developers require a USB port on their local machine so that they can burn their code onto a microcontroller. This port will be made available only to the associated software that dumps the code onto the microcontroller (MPLAB in the current case). All other software will be prohibited from accessing the port.

    As more developers get added, providing internet support for them will become difficult with this plan, as it will slow down the virtual machine running on the server. Can anyone suggest an alternative? Are there any obvious flaws in the above plan? Some key details of the server:

      • OS: Debian
      • RAM: 8 GB
      • CPU: Intel Xeon E3-1220v2 4C/4T
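
    On the git-only accounts from step 1 above, a minimal sketch of how that restriction can be enforced on Debian (the account and repository names are examples):

        # an account whose login shell is git-shell can push and pull over SSH,
        # but can never open an interactive session on the server
        sudo useradd -m -s /usr/bin/git-shell devgit
        sudo -u devgit mkdir -p /home/devgit/repos
        sudo -u devgit git init --bare /home/devgit/repos/project.git

    Developers would then clone with git clone devgit@server:repos/project.git.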

    Read the article

  • Routing connections through VPN based on hostname (not IP range)

    - by Michal M
    This bugs me immensely. I need to connect to a client's network through VPN, but I definitely do not want to send all my traffic through the client's network, so that option is out of the question. What I need, basically, is for the OS to know that all the client's subdomains (*.example.com) need to go through the VPN connection. I tried a couple of options:

      1. Changing the order of services and putting the VPN on top, but this works the same as "Send all traffic over VPN connection".
      2. Using the "VPN on Demand" option from the network advanced options, but this feature is quite rubbish, to be honest. It seems to work only in Safari (?!), and it doesn't route the connection; it basically just triggers the OS to connect to the selected VPN.

    The reason I need it to work based on hostnames rather than an IP range is simple: my client has a lot of servers inside the network and it's impossible for me to remember all the IPs. They are all within a range, but that doesn't help me remember them. Another option would be to put the VPN connection at the bottom of the network services, untick "Send all traffic...", and then put all known hostnames in the hosts file, but considering there could be hundreds of servers (and therefore hostnames and IPs), that's a ridiculous job — and if a new server appears on the network, I'd need to edit the hosts file again. Sisyphean labours.

    However, this works very simply on Windows: if a hostname is not available through the default network interface, it seems to try the VPN connection, and this works brilliantly. So how can I achieve that on a Mac? I know the client's internal DNS addresses, if that is of any help (like directing certain domains through a different DNS)?

    PS. Using the latest version, 10.6.6.
    PS2. I am using the VPN to access an intranet, version control servers (svn://), Samba shares, and for SSH access to servers.
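
    One way to get close to the Windows behaviour on Mac OS X, sketched with assumed values (10.1.0.2 as the client's internal DNS server, 10.1.0.0/16 as their server range, ppp0 as the VPN interface): per-domain DNS via /etc/resolver, plus a single route for the whole range, so no individual hostname or IP ever needs remembering:

        # send *.example.com lookups to the client's internal DNS
        sudo mkdir -p /etc/resolver
        echo 'nameserver 10.1.0.2' | sudo tee /etc/resolver/example.com

        # route the client's whole internal range over the VPN interface
        sudo route -n add -net 10.1.0.0/16 -interface ppp0

    The route disappears when the VPN drops; if the VPN is PPP-based, re-adding it can be automated with an /etc/ppp/ip-up script.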

    Read the article

  • Browser Install with Bookmarks

    - by Tawm
    Goal: install Firefox and Chrome with a default set of bookmarks on Windows 7 x86. The immediate need is for freshly imaged machines to have bookmarks, so I'm looking for an MSI flag or something similar to either import bookmarks from a file or import settings from IE. Another possibility seems to be pushing installers/import actions from a GPO on the domain; that might be best, since if things change in the future I can update as needed. I'm open to other options as well.

    Read the article
