Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • Macbook Pro won't boot from DVD with SSD

    - by Adam Carr
     Here's the timeline of events. I had a working MBP 17 Early 2011 (Thunderbolt) with an OWC Mercury Extreme Pro SSD 115GB drive. I installed Windows 7 via Boot Camp. I have done this multiple times before, and every time I need to format the Boot Camp partition before installing. This time I think I actually deleted the partition and then selected the free space to install to. This worked fine for the most part, but I wasn't able to boot the Boot Camp partition using VMware Fusion. I gave up and used Boot Camp Assistant to revert back to a single Mac partition. I was getting some odd behavior, so I rebooted the machine. It then came up with a message saying there was no bootable partition. This made me think (and still does) that installing Windows into the free space rather than the Boot Camp partition caused the Windows MBR boot loader to be installed incorrectly and mucked up the OS X installation. OK, fine, I can just reinstall. But I can't seem to boot from the original MBP installation DVD. I hold down C on boot, but I never get past the all-grey screen. I hear the DVD drive spin up, but it eventually stops. If I put the original HD back in, everything works fine, but when I put the SSD in, I can't boot from the DVD drive. I have already set up an RMA with OWC to send the drive back, but considering the order of events I feel as though it isn't a hardware issue; I just can't figure out how to fix it. I can always send it back, but I figured I would check and see if anyone could offer some guidance/assistance before doing so.


  • Getting 2560x1600 out of an ATI Radeon HD 4670 on Windows 7

    - by Alexey
     Greetings, I've got a Dell Studio XPS 1640 laptop (with an ATI Radeon HD 4670 graphics card) running Windows 7, and just bought a Dell 3007HPC 30-inch monitor for it. I'm trying to figure out how to get the full 2560x1600 experience out of this setup. Here's what I've done so far: Plugged in using an HDMI cable and an HDMI to D-DVI converter on the monitor side. Opened up Screen Resolution; the maximum supported setting is 1920x1080. Tried that (several times) - sometimes it doesn't work at all (blank screen); other times, it only shows the first 1280x800 pixels on the bigger screen. Tried using the Catalyst Control Center - played with various settings there, but couldn't get the screen to show anything interesting. Tried using PowerStrip to set a custom resolution - again, no luck. Spoke to a Dell Preferred Customer Support guy for about an hour before giving up. He remote-accessed my computer and told me that (1) the maximum supported resolution for the XPS 1640 is 1920x1080, and (2) 'it seems to be working from where he sees it, must be a connection issue'. None of this has helped. Does anybody have ideas? Should I be using a different cable setup? Am I using PowerStrip wrong?


  • ffmpeg - creating DNxHD MFX files with alphas

    - by Hugh
    Hi all, I'm struggling with something in FFMpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command-line is: ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf To which ffmpeg gives me the output: Input #0, image2, from '/tmp/temp.%04d.png': Duration: 00:00:01.60, start: 0.000000, bitrate: N/A Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc Output #0, mxf, to '/tmp/temp.mxf': Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc Stream mapping: Stream #0.0 -> #0.0 [mxf @ 0x1005800]unsupported video frame rate Could not write header for output file #0 (incorrect codec parameters ?) There are a few things in here that concern me: The output stream is insisting on being yuv422p, which doesn't support alpha. 24fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing. I then tried the same thing, but writing to a quicktime (still DNxHD, though) with: ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov This gives me the output: Input #0, image2, from '/tmp/1274263259.28098.%04d.png': Duration: 00:00:01.60, start: 0.000000, bitrate: N/A Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc Output #0, mov, to '/tmp/1274263259.28098.mov': Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc Stream mapping: Stream #0.0 -> #0.0 Press [q] to stop encoding frame= 39 fps= 9 q=1.0 Lsize= 7177kB time=1.62 bitrate=36180.8kbits/s video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636% Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
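
     Two things that may help narrow this down (a sketch, not a verified fix, and the -h encoder= form only exists in newer ffmpeg builds). First, ffmpeg can list exactly which pixel formats the DNxHD encoder accepts, which shows whether an alpha-capable format is available at all. Second, options placed before -i apply to the input, so forcing the PNG sequence to be read at 24 fps (rather than the image2 default of 25 with a 25-to-24 conversion on output) means moving -r in front of -i:

         # list the pixel formats and options the dnxhd encoder supports (newer builds only)
         ffmpeg -h encoder=dnxhd

         # read the PNG sequence at 24 fps instead of converting 25 -> 24 on output
         ffmpeg -y -r 24 -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -vcodec dnxhd -b 36Mb -f mxf /tmp/temp.mxf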


  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
     I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests will become non-responsive and the kvm process on the host which is running that guest will start consuming 100% CPU. It will continue to do so until it is killed. When restarted, it will be fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

         /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
             -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
             -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
             -monitor chardev:monitor \
             -localtime \
             -boot c \
             -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
             -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
             -net tap,fd=58,vlan=0,name=tap.0 \
             -chardev pty,id=serial0 \
             -serial chardev:serial0 \
             -parallel none \
             -usb \
             -usbdevice tablet \
             -vnc 127.0.0.1:1 \
             -k en-us \
             -vga cirrus \
             -soundhw es1370

     Why do the systems misbehave this way sometimes? And what configuration can I change in order to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?
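
     When one of the guests wedges like this, the monitor socket named in that command line can be used to peek inside the stuck QEMU process. A minimal sketch, assuming socat is installed and the socket path is exactly the one shown above:

         # attach to the QEMU monitor socket of the hung guest
         socat - UNIX-CONNECT:/var/lib/libvirt/qemu/bigdog21vmxp1.monitor

         # at the (qemu) prompt, standard monitor commands show what the guest is doing:
         #   info status      - running or paused
         #   info registers   - where the vCPU is spinning
         #   info blockstats  - whether disk I/O is still making progress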


  • Is there a quick way of undoing a folder change in Far Manager?

    - by Johannes Rössel
     I love Far Manager. However, it has a feature to quickly go to the root directory of a drive with Ctrl+\. I do sometimes need and use this feature, but more frequently I use Ctrl+? to quickly insert the file name under the cursor into the command line. As it so happens, the ? key is located dangerously close to \, which is why I sometimes erroneously go to the root directory (which is then doubly unfortunate, since I originally wanted to work with a file in the directory I was in). Now I could probably just redefine Ctrl+\ to do nothing, although I still sometimes need it (it can be replicated with a quick cd\, though). But Windows Explorer, in the wake of the WWW, provided us with a handy directory history and two separate ways of navigating backwards: backwards through the history and backwards through the hierarchy. Is there something quick and easy to get back to the folder I was in? This is less of an issue in C:\Users\Me (still annoying) but more so in deeper hierarchies.


  • Primary zone will not transfer to secondary zone

    - by Matt Beckman
     Using DNS on Windows Server 2008, I am in a constant struggle with adding primary and secondary zones. I will add a primary zone to NS1 for a new domain, edit it as needed, and when it's ready add the secondary zone to NS2. However, MOST of the time the secondary zone remains in an error state and never acquires the primary zone data. I have gone back to domains a few weeks after adding them to find out that Windows never propagated the change. Annoying. Anyway, I recently updated SP1 to SP2 thinking this would help, but it hasn't. I added two new domains today and spent an hour because the secondary zones just would not sync. During that time, the only error I saw in the logs was for one of them, where DNS complained about not being authoritative. To eventually resolve the issue, I ended up deleting the primary zone, creating a new primary zone, and hitting "Apply" after each and every field change. For example, after modifying the serial number from "1" to a date-appropriate "2010093001", I hit Apply, and then did the same for the Primary Server (Apply), Responsible Person (Apply), and finally Name Servers (Apply). After I did this, the secondary zone didn't waste any time getting the data. Ideas?
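
     For what it's worth, the transfer can also be exercised from the command line, which makes it easier to see whether the secondary is being refused or simply never asking. A rough sketch using dnscmd on Server 2008; the zone name here is a placeholder, not one from the question:

         rem on NS2: force the secondary copy of the zone to pull from its master
         dnscmd NS2 /zonerefresh example.com

         rem on NS1: show the zone's settings, including which secondaries are allowed transfers
         dnscmd NS1 /zoneinfo example.com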


  • Does Windows 7 Authenticate Cached Credentials on Startup

    - by Farray
     Problem: I have a Windows domain user account that gets automatically locked out semi-regularly.

     Troubleshooting thus far: The only rule on the domain that should automatically lock an account is too many failed login attempts. I do not think anyone nefarious is trying to access my account. The problem started occurring after changing my password, so I think it's a stored-credential problem. Further to that, in the Event Viewer's System log I found warnings from Security-Kerberos that say: "The password stored in Credential Manager is invalid. This might be caused by the user changing the password from this computer or a different computer. To resolve this error, open Credential Manager in Control Panel, and reenter the password for the credential mydomain\myuser." I checked the Credential Manager and all it has are a few TERMSRV/servername credentials stored by Remote Desktop. I know which stored credential was incorrect, but it was stored for Remote Desktop access to a specific machine and was not being used (at least not by me) at the time of the warnings. The Security-Kerberos warning appeared when the system was starting up (after a Windows Update reboot) and also appeared earlier this morning when nobody was logged into the machine.

     Clarification after SnOrfus's answer: There was one set of invalid credentials, stored for a terminal server. The rest of the credentials are known to be valid (used often and recently without issues). I logged on to the domain this morning without issue. I then ran Windows Update, which rebooted the computer. After the restart, I couldn't log in (due to the account being locked out). After unlocking the account and logging on to the domain, I checked Event Viewer, which showed a problem with credentials after restarting. Since the only stored credentials (according to Credential Manager) are for terminal servers, why would there be a credential problem on restart when Remote Desktop was not being used?

     Question: Does anyone know if Windows 7 "randomly" checks the authentication of cached credentials?
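
     As a side note, the same credential store can be inspected and pruned from a command prompt, which makes it easy to confirm exactly which entry is stale. A small sketch - the server name is a placeholder:

         rem list everything Credential Manager is holding for this account
         cmdkey /list

         rem remove the stale Remote Desktop credential
         cmdkey /delete:TERMSRV/servername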


  • Unable To Install SQL Server 2008 On Windows 7 & Windows XP

    - by mickburkejnr
     Hi guys, you are my final hope of salvation! For the last five hours I've been trying to install SQL Server 2008, first on Windows 7 and then on Windows XP. Both have had the same issue. Even before the installation starts, an error message appears saying: "Microsoft .NET Framework 3.5 SP1 installation has failed. SQL Server 2008 Setup requires .NET Framework 3.5 SP1 to be installed." I have installed everything in the hope of getting past this. The service pack it refers to has even been installed, and I can see it in the Control Panel! I have installed Framework 2.0, Framework 2.0 SP1, Framework 3.5, Framework 3.5 SP1, Windows Installer 3.5 and Windows Installer 4.0. I've even installed the Service Pack for SQL Server 2008... but nothing works! Does anyone have an idea what I'm doing wrong or what could be wrong? I'm seriously close to the end of my tether; I've even sent Steve Ballmer an email with my frustration. I would seriously appreciate some help with this. Many thanks!


  • Need to Remove Exchange 2003 Server That Crashed During Transition to 2010

    - by ThaKidd
     As the title states, we were running an Exchange 2003 server that we knew was going down soon, so we purchased a second server and installed Exchange 2010 into the AD. We managed to move all of the mailboxes off of 2003 and also managed to get the Offline Address Book set up on 2010. At this point the 2003 server bit the dust and will no longer boot. Therefore we were unable to properly uninstall Exchange and remove the last 2003 server, so it still exists in AD. As far as the clients are concerned, everything is working properly. However, when I run the Microsoft Exchange Profile Analyzer, I still see the old server and its Administrative Group. I am going to guess that since the old server is still showing up in AD, I will not be able to raise the Exchange or AD functional levels (as the 2003 server was also the only AD DC). I have forced the 2003 DC out of AD, so that is no longer an issue. Old setup: Windows 2003 Server Enterprise and Exchange 2003 Standard. New setup: Windows 2010 Server Enterprise and Exchange 2010 Standard. Two questions: (1) How do you go about manually forcing the 2003 server and its administrative group out of AD? (2) When that is finished, where do you raise the Exchange mode (can't find this for the life of me)?


  • In GNU Screen, Recalled bash history command displays one character position to the left of actual location

    - by vergueishon
     I am running Red Hat 5 32-bit (2.6.18-194.26.1.el5). The issue is that when I recall any previous command from bash's history, the first character of the command is displayed immediately after the shell prompt, without any intervening space, like so:

         [me@mymachine tmp]$man mysql

     If I enter a Ctrl-C and retype the command, it looks like so:

         [me@mymachine tmp]$ man mysql

     This makes recalling a command and editing it before re-entering it a real pain. Basically, if I try to edit a recalled command, my changes occur one character position to the left (I believe) of what I see on the screen. It's a bit tedious to describe, and it appears to only happen with commands with a large number of arguments.

     UPDATE: The contents of /etc/sysconfig/bash-prompt-screen:

         #!/bin/bash
         echo -n $'\033'"_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"$'\033\\\\'

     and the relevant part of /etc/bashrc:

         screen)
             if [ -e /etc/sysconfig/bash-prompt-screen ]; then
                 PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
             else
                 PROMPT_COMMAND='echo -ne "\033_${USER}@${HOSTNAME%%.*}:${PWD/#$HOME/~}"; echo -ne "\033\\"'
             fi
             ;;

     I've disabled bash-prompt-screen by renaming it - this fixed it. It's entirely possible that there is a fix to the bash-prompt-screen prompt line in the latest version of screen for RHEL 5. The error is seen under Screen version 4.00.03 (FAU) 23-Oct-06. (I noticed an update in the queue, which is installing as I write this.)
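
     A quick way to confirm the prompt hook is to blame, without renaming any files, is to clear it in the affected shell and then recall a long command from history. A minimal sketch:

         # disable the screen title hook for this session only
         unset PROMPT_COMMAND

         # (as a general bash rule, if such escape sequences are ever embedded in PS1 instead,
         #  they must be wrapped in \[ \] so readline does not count their width)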


  • Git push over http (using git-http-backend) and Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push for git working through the "smart-http" mode using git-http-backend. However after many hours of testing and troubleshooting, I am still left with error: Cannot access URL http://localhost/git/hello.git/, return code 22 fatal: git-http-push failed` I am using latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one http://www.parallelsymmetry.com/howto/git.jsp. My VHost file currently looks like this: <VirtualHost *:80> SetEnv GIT_PROJECT_ROOT /var/www/git SetEnv GIT_HTTP_EXPORT_ALL SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER DocumentRoot /var/www/git ScriptAliasMatch \ "(?x)^/(.*?)\.git/(HEAD | \ info/refs | \ objects/info/[^/]+ | \ git-(upload|receive)-pack)$" \ /usr/lib/git-core/git-http-backend/$1/$2 <Directory /var/www/git> Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have changed the ownership of the /var/www/git folder to root.www-data and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users but with the same outcome. The repositories were created using: sudo git init --bare --shared [repo-name] While looking at the apache2 access.log, it appears to me that WebDAV is trying to be used, and that git-http-backend is never fired: 127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5" 127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5" 127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5" What am I doing wrong? Is it an issue with the version of git and/or apache that I am using perhaps? BTW: I have read all the git http related questions on ServerFault and StackOverflow, and none of them provided me with a solution, so please don't mark this as duplicate.
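
     Given that the access log shows a PROPFIND, the client appears to be falling back to the old DAV-style push, which is what happens when the smart-HTTP advertisement never comes back from git-http-backend. A quick, hedged way to check what that URL actually serves (run against the server; the repo name is the one from the question):

         # a smart-HTTP response starts with "001f# service=git-receive-pack"
         # and has Content-Type: application/x-git-receive-pack-advertisement
         curl -i "http://localhost/git/hello.git/info/refs?service=git-receive-pack"

         # on the client side, this shows which protocol the push ends up using
         GIT_CURL_VERBOSE=1 git push origin master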


  • InstallShield or Windows installer corrupted

    - by Bobby S
    Just recently I've been unable to install any software on my Windows 7 machine. Anything that uses InstallShield or the Windows installer will just hang or give a weird error. I noticed there will be many duplicate isbew64.exe processes (like 25) that launch and then just sit there or else a lot of msiexec.exe *32 processes, depending on what I'm trying to install. One piece of software specifically is the Logitech Harmony software. It gives me an *is_string_not_defined* error, saying c:\program files (x86)\:\ the filename, directory name, or volume label syntax is incorrect. The other thing I was trying to install was Battlefield: Bad Company 2, and that just hangs as well, and then just leaves all the Windows installer processes running in the background after I quit the install process. Very odd. I've checked well and googled these issues, it doesn't appear to be any sort of malware issue. I feel like it's related to some kind of corrupted installer application. I've rebooted, deleted the InstallShield folder in program files/common files as some places online suggested but to no avail. I have no idea what to do, any ideas?


  • SharePoint Business Connectivity Services (BCS) Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by g18c
     I am running SharePoint 2010 with SQL 2012, and I am trying to get Business Connectivity Services (BCS) running, but I am facing a double-hop authentication issue. Every time I try to connect to the external BCS list created in SharePoint Designer, I get the error "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'". In the event viewer on the SQL server I see a login failure for an anonymous user from the SP server IP address. Background information below.

     I have enabled Kerberos under SharePoint Central admin. I have the following AD domain accounts:

         SP_Farm - main website pool
         SP_Services - for SharePoint services (including BCS)
         SQL_Engine - SQL database engine

     I then created the following with SetSPN:

         SetSPN -S http/intranet mydomain\SP_Farm
         SetSPN -S http/intranet.mydomain.local mydomain\SP_Farm
         SetSPN -S SPSvc/SPS mydomain\SP_Farm
         SetSPN -S MSSQLSvc/SQL1 mydomain\SQL_DatabaseEngine
         SetSPN -S MSSQLSvc/SQL1.mydomain.local mydomain\SQL_DatabaseEngine
         SetSPN -S MSSQLSvc/SQL1:1433 mydomain\SQL_DatabaseEngine
         SetSPN -S MSSQLSvc/SQL1.mydomain.local:1433 mydomain\SQL_DatabaseEngine

     I then delegated the AD accounts for any authentication protocol to the following:

         SP_Farm - SP_Farm (http service type, intranet)
         SP_Farm - SQL_DatabaseEngine (MSSQLSvc, sql1)
         SP_Service - SP_Service (SPSvc)
         SP_Service - SQL_DatabaseEngine (MSSQLSvc, sql1)

     I have also checked that the WFE is being logged on to with Kerberos: the WFE server event log shows event ID 4624 with Kerberos authentication, so this is OK. The SQL server is also showing connections authenticated as Kerberos from the WFE, per the following query:

         Select s.session_id, s.login_name, s.host_name, c.auth_scheme
         from sys.dm_exec_connections c
         inner join sys.dm_exec_sessions s on c.session_id = s.session_id

     Despite the above, credentials are not passed from the client through the SharePoint server to the SQL server; only the anonymous account is used. I get the following error on the WFE server for 'BusinessData', ID 8080:

         Could not open connection using 'data source=sql1.mydomain.local;initial catalog=MSCRM;integrated security=SSPI;pooling=true;persist security info=false' in App Domain '/LM/W3SVC/1848937658/ROOT-1-129922939694071446'. The full exception text is: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

     If I set a username and password with the Secure Store Service and set the external list to use the impersonated credentials, the list works. Any ideas what I have missed and what can be tried next?
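
     One sanity check worth adding, since a duplicate or missing SPN silently drops Kerberos back to NTLM (which cannot make the second hop): list what is actually registered against each service account and scan for duplicate registrations. A sketch using the account names above:

         rem list the SPNs registered on the service accounts
         setspn -L mydomain\SP_Farm
         setspn -L mydomain\SQL_DatabaseEngine

         rem search for duplicate SPN registrations (Server 2008 and later)
         setspn -X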


  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
     I am trying to troubleshoot my issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is navigated to. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior: no redirection. I created a page, test.html, on my site, and then created a second page, test2.html. So my exact-match pattern is "http://www.mydomain.com/test.html" and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the steps for the rule based on the instructions on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ and I don't see that I left out a step. After applying the rule I've even gone as far as doing an IISReset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't include the " " around them, but I had to add them since ServerFault thinks I am trying to spam the system with multiple URLs.)
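
     For reference, this is roughly what an exact-match permanent redirect looks like in web.config once the module saves it. One detail that may matter: the match url pattern is tested against the URL path relative to the site root (e.g. "test.html"), never against the full "http://www.mydomain.com/..." form, so a pattern written as an absolute URL will never fire. The rule below is a sketch using the placeholder names from the question:

         <!-- inside <system.webServer> in web.config -->
         <rewrite>
           <rules>
             <rule name="Old post redirect" stopProcessing="true">
               <match url="^test\.html$" />
               <action type="Redirect" url="http://www.mydomain.com/test2.html" redirectType="Permanent" />
             </rule>
           </rules>
         </rewrite>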


  • Limit the amount of data that can be stored in a folder on Ubuntu Server 12.04?

    - by dougoftheabaci
     I'm in the process of building my first server. It's up, it's running, and I'm transferring copious amounts of data away from my horrid little Drobo (DO NOT BUY ONE OF THESE, EVER). However, there's one thing I have yet to do: I'd like to set it up for Time Machine backups as well. I've seen all the guides and I have some idea of how to set the whole thing up, but the issue is that Time Machine will just fill up as much space as you let it. So if I let it loose in my 8 TB zpool, it'll slowly consume every last available sector. This, of course, is not acceptable. I have a folder at the root of my zpool called "ZFS Time Machine" and I would like to limit it to 1 TB (all I need for backup purposes). However, I have no idea how to do that. Is this possible? I can continue using a small external hard drive attached via FW800 if I have to, but I'd much prefer to put everything on my server.
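
     If that folder is (or can be recreated as) its own ZFS dataset rather than a plain directory, a dataset quota does exactly this. A minimal sketch - the pool and dataset names are placeholders, not taken from the question:

         # create a dedicated dataset for the backups
         sudo zfs create tank/timemachine

         # cap how much space it can ever consume
         sudo zfs set quota=1T tank/timemachine

         # confirm the setting
         zfs get quota tank/timemachine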


  • Why doesn't the Windows 7 volume mixer remember per-application levels for all applications?

    - by mdives
     If I have the device's master level set to 50 and then lower an application's level to 25, then once I close that application and reopen it, the volume levels should persist: the master level should remain at 50 and the application's at 25. This does happen for most applications. However, for one in particular, Napster, it does not. I subscribe to Napster's streaming service and use the Napster desktop application to connect to that service. Every time I open the Napster app, I have to adjust the application's volume level down in the volume mixer. When I open the app again after closing it, I have to do the same thing; the volume mixer is not remembering the set level. In fact, the level is reset back to 50, the same level as the device's master level. Has anyone else experienced this, with Napster or any other application? Is there a solution, or is this a known issue?


  • Computer locking up, looking for bootable hardware diagnostic tool.

    - by Carl Menke
    Well today I helped my friend build a computer. All went pretty well until we got to installing Win7. Thing is, we thought, it was crashing constantly. I adjusted pretty much every setting in the BIOS and removed as much hardware as possible to try and prevent a crash. No dice. So far I've tried running an Ubuntu live cd without the harddrive installed. Nope, crashed on boot. And then I just tried Microsoft's ram utility disk and it eventually locked up on that (the ram passed though). So it seems to me like it's either the CPU (AMD PhenomII x3) or the motherboard that could be bad, but I don't know how to test them individually for problems. I thought it could be a overheating issue, but the BIOS reports that the CPU temp is fine idling around 34C. Any advice or diagnostic disk that could help me out? TL;DR: Computer locks up frequently during use (cannot even boot/install an operating system), memory is fine, probably CPU or Mobo, BIOS says CPU temps are fine. What should I try?


  • How is Apache still working?

    - by PJ
     Recently, I decided to set up a local development environment for my work projects. I'm a PHP developer, with just enough knowledge of Linux and Apache to break things mightily. To get the local environment looking like my work environment, I had to upgrade PHP. When I did, Apache wouldn't restart. I decided I wanted to start fresh (this is where things went wrong) and that I'd reinstall Apache and PHP using MacPorts. So, I went through and tried to delete all the Apache files. Yup. I ran locate apache2 and deleted any folders that looked important. (I know, I know.) Then I ran /usr/libexec/locate.updatedb to make sure everything was up to date. I even restarted my machine, just to make sure. The issue is, http://localhost still works. As does an alias I set up, http://butler. Shouldn't they not work? Now that I'm this far in, are there any tips for how to completely remove Apache so I can start over? Worst case, I have a Time Machine backup, so I can always just restore that... Thanks in advance.
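
     One thing worth checking: on OS X the stock Apache that ships with the system is started by launchd from /usr/sbin/httpd, so deleting a MacPorts copy (or stray apache2 folders) wouldn't necessarily stop it. These commands, a hedged sketch, show which binary is actually answering and where its config lives:

         # which httpd processes are running, and from what path
         ps aux | grep '[h]ttpd'

         # ask that binary where it expects to find its config
         /usr/sbin/httpd -V | grep -E 'HTTPD_ROOT|SERVER_CONFIG_FILE'

         # check whether launchd is keeping it alive
         sudo launchctl list | grep -i apache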


  • Nginx + PHP5-FPM repeated cut outs 502

    - by James
     I've seen a number of questions here that highlight random 502s (Nginx + PHP-FPM = "Random" 502 Bad Gateway) and similar timeouts when using Nginx + PHP-FPM. Even with all the questions, I'm still unable to find a solution. Using Ubuntu 10.10 + Nginx + PHP5-FPM + APC, every 1 out of 4 requests ends in a timeout and failure. This isn't a load issue or large traffic; it happens even in a dev environment with one person. I am doing this across 3 1GB machines, each with the same configurations and same problems.

     fastcgi_params:

         fastcgi_param QUERY_STRING $query_string;
         fastcgi_param REQUEST_METHOD $request_method;
         fastcgi_param CONTENT_TYPE $content_type;
         fastcgi_param CONTENT_LENGTH $content_length;
         fastcgi_param SCRIPT_NAME $fastcgi_script_name;
         fastcgi_param REQUEST_URI $request_uri;
         fastcgi_param DOCUMENT_URI $document_uri;
         fastcgi_param DOCUMENT_ROOT $document_root;
         fastcgi_param SERVER_PROTOCOL $server_protocol;
         fastcgi_param GATEWAY_INTERFACE CGI/1.1;
         fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
         fastcgi_param REMOTE_ADDR $remote_addr;
         fastcgi_param REMOTE_PORT $remote_port;
         fastcgi_param SERVER_ADDR $server_addr;
         fastcgi_param SERVER_PORT $server_port;
         fastcgi_param SERVER_NAME $server_name;
         fastcgi_param REDIRECT_STATUS 200;

     /etc/php5/fpm/main.conf:

         ; FPM Configuration ;
         ;include=/etc/php5/fpm/*.conf

         ; Global Options ;
         pid = /var/run/php5-fpm.pid
         error_log = /var/log/php5-fpm.log
         ;log_level = notice
         ;emergency_restart_threshold = 0
         ;emergency_restart_interval = 0
         ;process_control_timeout = 0
         ;daemonize = yes

         ; Pool Definitions ;
         include=/etc/php5/fpm/pool.d/*.conf

     /etc/php5/fpm/pool.d/www.conf:

         [www]
         listen = 127.0.0.1:9000
         ;listen.backlog = -1
         ;listen.allowed_clients = 127.0.0.1
         ;listen.owner = www-data
         ;listen.group = www-data
         ;listen.mode = 0666
         user = www-data
         group = www-data
         ;pm.max_children = 50
         pm.max_children = 15
         ;pm.start_servers = 20
         pm.min_spare_servers = 5
         ;pm.max_spare_servers = 35
         pm.max_spare_servers = 10
         ;pm.max_requests = 500
         ;pm.status_path = /status
         ;ping.path = /ping
         ;ping.response = pong
         request_terminate_timeout = 30
         ;request_slowlog_timeout = 0
         ;slowlog = /var/log/php-fpm.log.slow
         ;rlimit_files = 1024
         ;rlimit_core = 0
         ;chroot =
         chdir = /var/www
         ;catch_workers_output = yes
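
     One configuration interaction that may be worth ruling out: with request_terminate_timeout = 30, PHP-FPM kills any request that runs longer than 30 seconds, and nginx then reports the dropped upstream as a 502. A hedged sketch of the nginx side, with an explicit upstream read timeout in the PHP location block (the directives are standard nginx, but the block itself is illustrative since the question doesn't show the site config):

         location ~ \.php$ {
             include        fastcgi_params;
             fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
             fastcgi_pass   127.0.0.1:9000;
             fastcgi_read_timeout 60;
         }

     It may also help to watch /var/log/php5-fpm.log while reproducing the failure, since FPM logs both execution timeouts and "max_children reached" warnings there.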


  • Printing to a remote printer through the internet

    - by Lock
     I have a remote network (A) that is connected to a head office (B) through a private network. Network A has only 1 PC that requires the connection, and this is into a terminal server at network B. We want to save money by getting rid of the private network, as only 1 PC now accesses it and it seems silly to pay ~$400 per month for something that is accessed by 1 PC. A VPN tunnel is out of the question as the provider wants to charge $600 a month for a VPN tunnel (more than a private network? I might get them to check these numbers). I was thinking of 2 options: 1) a VPN client on the PC - this wouldn't cost a thing, as we already have VPN users available; or 2) opening up a port on the firewall of network B, forwarding to the terminal server. Now the problem is this: on the terminal server, the program that is accessed is for printing labels to the printer that is at network A. The program is set up to send all print jobs to a printer that is set up locally on the terminal server, which has its port mapped to the IP address of the printer at network A. If we got rid of the private network and used VPN clients or an open firewall port, the terminal server would no longer be able to reach the printer at network A, and hence printing would not work. Any ideas to combat this issue? Can the printers at the remote network be set up as Internet printers? I've never had any experience with Internet printers. Can you open up ports and map to a public static IP address?


  • Webservice randomly dropping connections - possibly due to firewall nonevent data?

    - by adam
     I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a WatchGuard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5 second timeout):

            http_code  namelookup_time  connect_time  pretransfer_time  starttransfer_time  total_time
         1  0          0.000096         0.0342        0.0000            0.0000              0.0342
         2  200        0.000052         0.0332        0.1327            0.1751              0.1752

     As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection is successfully past the firewall, or could the firewall still cause an issue? Our firewall is showing a series of nondata event log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if they fit in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
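
     For the record, curl can produce those same columns directly, so the probe loop doesn't strictly need PHP. A minimal sketch - the URL is a placeholder for the real webservice endpoint:

         # 20 probes, 5-second timeout, one line of timings per attempt
         for i in $(seq 1 20); do
           curl -s -o /dev/null -m 5 \
                -w '%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n' \
                http://webservice.example.com/endpoint
         done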


  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
     We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions: (1) Use Linux-HA/Pacemaker and failover IPs (so the DNS VIPs are "always" available); alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing). (2) Run a local DNS server on each node, and have resolv.conf point to localhost; this would work, but it would give us a lot more services to monitor and manage. (3) Run a local cache on each node; folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers. Anycasting seems to work only at the IP routing level and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
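
     For completeness, the resolver knobs mentioned above live on an options line in /etc/resolv.conf; they shorten the stall on a dead server rather than eliminating it. A sketch with illustrative values and placeholder server addresses:

         # /etc/resolv.conf - give up on an unresponsive server after ~1s instead of the 5s default
         options timeout:1 attempts:2 rotate
         nameserver 10.0.0.11
         nameserver 10.0.0.12
         nameserver 10.0.0.13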


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
     OK, so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows: I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.
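
     If it helps to see the moving parts, this is roughly what adding a filegroup with multiple files and moving a busy index onto it looks like in T-SQL. It is a sketch only: the database name, file paths, sizes and index names are all placeholders, and it is not a recommendation on the one-file-per-core question.

         -- add a filegroup with two data files on separate volumes/LUNs
         ALTER DATABASE MyDb ADD FILEGROUP HotTables;

         ALTER DATABASE MyDb ADD FILE
             (NAME = MyDb_hot1, FILENAME = 'E:\SQLData\MyDb_hot1.ndf', SIZE = 10GB),
             (NAME = MyDb_hot2, FILENAME = 'F:\SQLData\MyDb_hot2.ndf', SIZE = 10GB)
         TO FILEGROUP HotTables;

         -- rebuild an existing index onto the new filegroup
         CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId)
             WITH (DROP_EXISTING = ON) ON HotTables;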


  • Setting SVN permissions with Dav SVN Authz

    - by Ken
     There seems to be a path inheritance issue over access restrictions which is boggling me. For instance, if I grant rw access to one group/user, and wish to restrict some /../../secret path to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

         [groups]
         grp_W = a, b, c, g
         grp_X = a, d, f, e
         grp_Y = a, e,

         [/]
         * =
         @grp_Y = rw

         [somerepo1:/projectPot]
         @grp_W = rw

         [somerepo2:/projectKettle]
         @grp_X = rw

     What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories. What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing. If I flip the access ordering, where I give everyone access and restrict it in each repository, it promptly ignores the invalidation rule (the stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it performs the same with user-specific provisions, even fully defined such as:

         [/]
         a = rw
         b =
         c =
         d =
         e =
         f =
         g = rw

         [somerepo1:/projectPot]
         a = rw
         b = rw
         c = rw
         d =
         e = rw
         f =
         g = rw

         [somerepo2:/projectKettle]
         a = rw
         b
         c
         d = rw
         e = rw
         f = rw
         g

     Which yields the exact same result. According to the documentation I'm following all protocols, so this is insane. Running on Apache2 with dav_svn.


  • Basic OpenVPN setup

    - by WalterJ89
     I am attempting to connect 2 Windows 7 (x64 + x32) computers (there will be 4 in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the client remotely regardless of its location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets even though I have the mask set to 255.255.255.0. The server ends up as 10.8.0.1 255.255.255.252 and the client as 10.8.0.6 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs. This is the server config (I removed all the extra lines because it was too ugly):

         port 1194
         proto udp
         dev tun
         ca keys/ca.crt
         cert keys/server.crt
         key keys/server.key  # This file should be kept secret
         dh keys/dh1024.pem
         server 10.8.0.0 255.255.255.0
         ifconfig-pool-persist ipp.txt
         client-to-client
         duplicate-cn
         keepalive 10 120
         comp-lzo
         persist-key
         persist-tun
         status openvpn-status.log
         verb 6

     This is the client config:

         client
         dev tun
         proto udp
         remote thisdomainis.random.com 1194
         resolv-retry infinite
         nobind
         persist-key
         persist-tun
         ca keys/ca.crt
         cert keys/client.crt
         key keys/client.key
         ns-cert-type server
         comp-lzo
         verb 6

     Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just the subnet or routing issue. Thank you.
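
     The 10.8.0.1/.6/.10 pattern with a 255.255.255.252 mask is what OpenVPN's default "net30" addressing hands out: each client gets its own /30, which is why the peers look like they are on different subnets. If every peer runs OpenVPN 2.1 or newer (an assumption), the flat-subnet layout can be requested instead; a sketch of the server-side lines, with the rest of the config unchanged:

         # server.conf
         topology subnet
         server 10.8.0.0 255.255.255.0
         push "topology subnet"

     Even with a flat subnet, note that pings between Windows 7 clients can still be blocked by the Windows Firewall, which drops inbound ICMP echo requests by default.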

