Search Results

Search found 20569 results on 823 pages for 'pc settings'.


  • I cannot connect to home server after a few hours

    - by Iago
    I have an old PC that I decided to revive as a LAMP server (for my own use) and a P2P server (torrent and eD2k). It is an AMD Athlon XP (1400 MHz) with 384 MB of RAM. First I installed Ubuntu Server 11.10, then SSH, FTP, Samba and LAMP. With this configuration the home server works well, with no problems. Then I moved on to the P2P server and tried rTorrent and then uTorrent Server Alpha, and here is my problem: after a few hours (maybe 10, maybe 30) with the torrent app running (rTorrent or uTorrent), I lose the connection to my home server. That is, I cannot access it via SSH, I cannot reach the Apache server, and so on, but I can still ping it. It seems that the server freezes and all I can do is reboot it physically. So I have two questions: what is the problem, and how can I solve it?
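
    Not part of the original question, but on a 384 MB box the usual suspect for this symptom is memory exhaustion once the torrent client accumulates peers. A minimal monitoring sketch in Python (the log path is hypothetical; run it from cron or a screen session) records memory and load once a minute so the last readings before a freeze survive the reboot:

      #!/usr/bin/env python
      # Append memory and load-average figures to a log once a minute.
      import time

      LOG = "/var/log/freeze-monitor.log"   # hypothetical path, any writable location works

      def snapshot():
          with open("/proc/meminfo") as f:
              meminfo = dict(line.split(":", 1) for line in f if ":" in line)
          with open("/proc/loadavg") as f:
              loadavg = f.read().strip()
          return "%s MemFree=%s SwapFree=%s load=%s" % (
              time.strftime("%Y-%m-%d %H:%M:%S"),
              meminfo["MemFree"].strip(), meminfo["SwapFree"].strip(), loadavg)

      while True:
          with open(LOG, "a") as log:
              log.write(snapshot() + "\n")
          time.sleep(60)

    If MemFree and SwapFree collapse shortly before each freeze, capping the torrent client's connection and cache settings (or adding swap) would be the next thing to try.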

    Read the article

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access so that an external user can upload the content of his web page to his folder on my server, which Apache serves to the web. This way he can update his web page via WebDAV. The problem is that the user requires a .htaccess file, and of course the .htaccess breaks WebDAV, probably because it overrides settings (new files can no longer be uploaded via WebDAV once the .htaccess specified below exists). I am running Apache 2.2.17 and this is my WebDAV config:

      Alias /folderDAV "d:/wamp/www/somewebsite/"
      <Location /folderDAV>
          Order Allow,Deny
          Allow from all
          Dav On
          AuthType Digest
          AuthName DAV-upload
          AuthUserFile "D:/wamp/passtore/user.passwd"
          AuthDigestProvider file
          require valid-user
      </Location>

    This config is part of my naive solution to the problem. The idea was to specify an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid inside <Location>. The .htaccess file looks like this:

      #opencart settings
      Options +FollowSymlinks
      Options -Indexes
      <FilesMatch "\.(tpl|ini)">
          Order deny,allow
          Deny from all
      </FilesMatch>
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
      ErrorDocument 403 /403.html
      deny from 1.1.1.1/19
      allow from 2.2.2.2

    What would be the solution here? I would like the web page to be accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible I would also like a solution that permits the .htaccess to stay in place, so that the user still has the power to set up access rules for his web page.

    Read the article

  • What type of laptop do I need to run an amd64 or i386 VM?

    - by Frank Schwieterman
    I was running an amd64 build of Ubuntu in a VM on a Windows host that was also amd64. Later I found I could not run the same amd64 ISO on my laptop, which is Intel without Hyper-V. I was confused; I thought the chipset mattered, but maybe it does not. When buying a PC or Apple, is there anything to check about the chipset to make sure it can run different types of VMs? In my case, I was trying to run Ubuntu on a ThinkPad T520. Per the answer below, I did need to enable some BIOS settings. I'm still having some issues, though. Running Ubuntu in VirtualBox, when I try to use ubuntu-12.10-server-amd64.iso as the CD/DVD device to start a new VM, VirtualBox complains "Failed to open the CD/DVD image. Could not get the storage format of the medium (VERR_NOT_SUPPORTED)." When I try to use ubuntu-12.10-server-i386.iso the ISO is accepted, but then the VM complains "FATAL: No bootable medium found! System halted." I had been using an amd64 ISO on my home PC, which is amd64, and it works fine, which is why I suspected a CPU mismatch was the problem at first. But it seems like I'm still having issues, and maybe this Super User thread can be used to verify that the CPU is irrelevant in this case.
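
    Not part of the original question, but the VERR_NOT_SUPPORTED error when attaching an ISO is often just a truncated or corrupted download, which a checksum comparison will catch. A minimal Python sketch (the filename in the comment is only an example); compare the output with the matching line in the SHA256SUMS file published alongside the image:

      import hashlib, sys

      def sha256sum(path, bufsize=1 << 20):
          # Stream the file so a multi-hundred-MB ISO does not need to fit in RAM.
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(bufsize), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      if __name__ == "__main__":
          # e.g. python checkiso.py ubuntu-12.10-server-amd64.iso
          print(sha256sum(sys.argv[1]))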

    Read the article

  • Why are my Windows 7 updates continuously failing?

    - by Chris C.
    I'm an advanced level user here with an odd issue. I have two Windows Updates that are failing to install, every single time. I'm getting a mysterious "Code 1" error on both updates, an error for which I'm having difficulty finding a solution. The updates in question are:

      Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
      System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]

    Because these updates are failing, the Shut Down button in my start menu always has the shield icon next to it, indicating that "new" updates will be installed on shut down. But, of course, they'll fail and when the PC is restarted, the shield icon is still there. When checking the update history and viewing the details of the failed updates, I get the following:

      Security Update for Microsoft Visual C++ 2008 Service Pack 1 Redistributable Package (KB2538243)
      Installation date: 6/29/2011 3:00 AM
      Installation status: Failed
      Error details: Code 1
      Update type: Important
      A security issue has been identified leading to MFC application vulnerability in DLL planting due to MFC not specifying the full path to system/localization DLLs. You can protect your computer by installing this update from Microsoft. After you install this item, you may have to restart your computer.
      More information: http://go.microsoft.com/fwlink/?LinkId=216803

    and:

      System Update Readiness Tool for Windows 7 for x64-based Systems (KB947821) [May 2011]
      Installation date: 6/28/2011 3:00 AM
      Installation status: Failed
      Error details: Code 1
      Update type: Important
      This tool is being offered because an inconsistency was found in the Windows servicing store which may prevent the successful installation of future updates, service packs, and software. This tool checks your computer for such inconsistencies and tries to resolve issues if found.
      More information: http://support.microsoft.com/kb/947821

    About My System: I'm running Windows 7 Home Premium 64-bit. This is a custom PC build and the OS was installed fresh, not an upgrade from a previous version. I've been running this system for about four months. Windows Updates aside, the system is usually quite stable.

    Read the article

  • ASP.Net Session Timing Out Rapidly

    - by Zac
    We have an ASP.NET 3.5 website running on Windows Server 2008 with IIS 7. The session timeout period for this site is configured to be 20 minutes; however, it is currently lasting for between 40 and 50 seconds. After researching the problem we investigated several configuration values which could be involved in the timeout period, but none of them is set to less than 20 minutes. The areas we looked at are as follows:

      web.config system.web/sessionState element (20 minutes).
      web.config system.web/authentication/forms element (not present, defaults to 30 minutes).
      Sites/{website}/ASP/Session Properties/Time-out (20 minutes).
      Application Pools/{appPool}/Advanced Settings/Process Model/Idle Time-out (20 minutes).

    We've also noted that the CPU is staying around 0% and that RAM usage is flat-lining around 1.07 GB (of 8 GB available), so there is no performance-based reason for IIS to be recycling the application pool as far as we can tell. Are there any settings we've overlooked which could cause the sessions to expire so quickly?

    EDIT: A couple of additional points: this is not occurring in development, only on the server, and the session is not sliding (i.e. if we refresh the page a few times it still times out approximately 40-50 seconds after the session was created).

    Read the article

  • TCP Keepalive and firewall killing idle sessions

    - by Carlos A. Ibarra
    In a customer site, the network team added a firewall between the client and the server. This is causing idle connections to get disconnected after about 40 minutes of idle time. The network people say that the firewall doesn't have any idle connection timeout, but the fact is that the idle connections get broken.

    In order to get around this, we first configured the server (a Linux machine) with TCP keepalives turned on with tcp_keepalive_time=300, tcp_keepalive_intvl=300, and tcp_keepalive_probes=30000. This works, and the connections stay viable for days or more.

    However, we would also like the server to detect dead clients and kill the connection, so we changed the settings to time=300, intvl=180, probes=10, thinking that if the client was indeed alive, the server would probe every 300s (5 minutes) and the client would respond with an ACK, and that would keep the firewall from seeing this as an idle connection and killing it. If the client was dead, after 10 probes the server would abort the connection. To our surprise, the idle but alive connections get killed after about 40 minutes as before. Wireshark running on the client side shows no keepalives at all between the server and client, even when keepalives are enabled on the server.

    What could be happening here? If the keepalive settings on the server are time=300, intvl=180, probes=10, I would expect that if the client is alive but idle, the server would send keepalive probes every 300 seconds and leave the connection alone, and if the client is dead, it would send one after 300 seconds, then 9 more probes every 180 seconds before killing the connection. Am I right?

    One possibility is that the firewall is somehow intercepting the keepalive probes from the server and failing to pass them on to the client, and the fact that it got a probe makes it think that the connection is active. Is this common behavior for a firewall? We don't know what kind of firewall is involved. The server is a Teradata node and the connection is from a Teradata client utility to the database server, port 1025 on the server side, but we have seen the same problem with an SSH connection so we think it affects all TCP connections.
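
    Not part of the original question, but one way to take the firewall's treatment of the server's probes out of the equation is to enable keepalives on the client socket as well, mirroring the server-side values (time=300, intvl=180, probes=10). A minimal Python sketch; the TCP_KEEP* options are Linux-specific, and the host name and port in the comment are illustrative only:

      import socket

      def open_with_keepalive(host, port):
          s = socket.create_connection((host, port))
          s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
          s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)   # idle seconds before first probe
          s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 180)  # seconds between probes
          s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 10)     # probes before giving up
          return s

      # sock = open_with_keepalive("teradata-node.example.com", 1025)  # hypothetical host

    A Wireshark capture on the client with this in place shows directly whether keepalive segments are crossing the firewall in each direction.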

    Read the article

  • Proper configuration for Windows SMTP Virtual Server to only send email from localhost, and tracking down source of spam emails

    - by ilasno
    We manage a server hosted on Amazon EC2 which has web applications that need to be able to send outgoing email. Recently we received a notice from Amazon about possible email abuse on that server, so I've been looking into it. It's Windows Server Datacenter (2003, I guess), and uses the SMTP Virtual Server (you know, the one that requires IIS 6 for admin). The settings on the Access tab are as follows:

      Authentication: Anonymous
      Connection: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)
      Relay: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)

    In the SMTP logs there are many entries like the following:

      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 FROM: 0 0 4 0 26364 SMTP - - - -
      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26536 SMTP - - - -
      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 TO: 0 0 4 0 26536 SMTP - - - -
      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26707 SMTP - - - -

    ([email protected] is sending quite a lot of emails :-/) Can anyone confirm whether the SMTP server settings seem correct? I'm also wondering if a web application on the machine could be exposing a contact form or something that would allow this sort of abuse, and I'm looking into that (and how to look into that) further.
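
    An editor's note, not from the original post: a quick way to triage the volume and pattern is to tally the log by the IP in the third column and the command in the fourth, using the layout shown above, and see which combinations dominate. A rough Python sketch (the log filename is only an example):

      import collections, sys

      counts = collections.Counter()
      with open(sys.argv[1]) as log:          # e.g. python tally.py ex120208.log
          for line in log:
              if line.startswith("#") or not line.strip():
                  continue
              fields = line.split()
              if len(fields) < 4:
                  continue
              ip, command = fields[2], fields[3]   # third column: IP, fourth: command
              counts[(ip, command)] += 1

      for (ip, command), n in counts.most_common(20):
          print("%8d  %-15s  %s" % (n, ip, command))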

    Read the article

  • Western Digital My Book not recognized by WD software

    - by Kari
    A few years ago I bought a WD My Book Pro 2. It worked fine for a while, then one of the drives failed and I sent it back to be replaced under warranty. I never got around to setting up the new one when I got it back. I finally ran out of room on my internal drive, so I tried to use the external - no go. Both drives spin up, but aren't recognized by either Disk Utility (Mac) or the WD Drive Manager. I tried on a PC as well with fresh software. Then I pulled the drives out of the enclosure (warranty is already expired) and plugged them straight into the PC. Both recognized and working 100% in RAID0. BIOS recognizes either disk as functional; Windows only sees them when both are connected due to the RAID which I can't change without the WD software. The drives that were returned to me are the "Green" drives which I've read are NOT recommended for RAID. Is it possible that this is interfering with them reading externally? Any other ideas? My main computer is a laptop so using them internally isn't an option :(

    Read the article

  • How is the Linux console displayed to the user, and how does the user go about changing the console settings?

    - by Chris
    I've been searching for the last two days trying to understand how the console displays itself to the user and how to change the console settings. I've had some luck along the way, but nothing I've found has given me a really clear explanation of how the console is displayed or how to change or control its display settings. Some examples of what I'm looking for are as follows:

      How is the console displayed on the screen? I know that with X11 the graphics card driver is used to display graphics on the screen, but how is the console's text mode handled? Could someone either explain this to me or point me to an in-depth overview of it?
      Is it possible to have multi-head support in console mode, with separate ttys on each screen? If so, how would I go about setting this up?
      How would you go about changing the size of the console display from the default 80x25 to a custom size?

    I'm testing anything I find on a Debian testing build, which is just the minimal base install in a VirtualBox VM. In time I will be using this information to set up my main system, which is multi-head with 3 monitors. I would like to be able to support all three displays in console mode if possible.

    Read the article

  • Multiple copies of the same printer on Windows 7 from PrintUIEntry

    - by Kev
    I currently have a number of bat files, which work perfectly fine on Windows XP, that install the same printer multiple times with a number of finisher options set. For example, after running the bat file below I would end up with four printers in the printer drop-down called:

      Sharp Kits Printer - A4 Single Sided
      Sharp Kits Printer - A4 Single Sided Stapled
      Sharp Kits Printer - A4 Duplex Stapled
      Sharp Kits Printer - A4 Duplex

    which all have their options configured in the relevant way. On Windows 7 I have amended the script to point to the correct INF file and printer name in the INF files, and a single printer installs fine. However, when I run the complete batch file only the first printer in it is installed; occasionally the later ones flash up in the GUI but then vanish when you press F5, and they are still missing after a reboot.

      SET QUEUENAME=http://192.168.7.123:631/printers/Sharp700
      SET PPD=J:\DRIVERS\Printers\MX700-Win7-64\SJ1JWENG.INF
      SET PPDENTRYNAME=SHARP MX-M700U PPD
      J:
      cd "\DRIVERS\Printers\MX700-Win7-64"

      SET NICENAME="Sharp Kits Printer - A4 Single Sided"
      SET PREFS="J:\SCRIPTS\Printers-Win7-64bit\Sharp_SINGLE_SETTINGS.dat"
      %SYSTEMDRIVE%\WINDOWS\system32\rundll32.exe %SYSTEMDRIVE%\WINDOWS\system32\printui.dll,PrintUIEntry /w /b %NICENAME% /x /n "part of the n switch" /f "%PPD%" /if /r "%QUEUENAME%" /m "%PPDENTRYNAME%"
      rem restore settings go here...

      SET NICENAME="Sharp Kits Printer - A4 Duplex"
      SET PREFS="J:\SCRIPTS\Printers\Sharp_DUPLEX_SETTINGS.dat"
      %SYSTEMDRIVE%\WINDOWS\system32\rundll32.exe %SYSTEMDRIVE%\WINDOWS\system32\printui.dll,PrintUIEntry /w /b %NICENAME% /x /n "part of the n switch" /f "%PPD%" /if /r "%QUEUENAME%" /m "%PPDENTRYNAME%"
      rem restore settings go here...

    I have tried adding the /u parameter to the end, and I have changed the /n parameter to be different for each printer (e.g. n1, n2, n3, etc.); both of these result in the same behaviour. I have also tried changing the port (/r) to have "_1" (etc.) on the end, as the GUI would, but this errors because the port doesn't exist. Is it possible to do this on Windows 7, and if so, how?

    Read the article

  • ssh hangs on "Last login" line

    - by Pavel H
    This happened for the first time three days ago: I ssh to the server, authenticate using a password, get the welcome message, but it remains hanging on the "Last login:..." line. The command line doesn't show and the server doesn't react to my input. Other services on the server keep working fine (Apache, Tomcat, database, ...). The box has out-of-band management, using which I was able to restart it. After the restart the ssh worked fine again and I didn't find anything suspicious in the logs. Three days later the same problem occurred on this box again, and newly on yet another server in the cluster, with 100% the same symptoms. Both servers have roughly 2-month-old installations of Debian Squeeze (6.0.2) and the problem never occurred before despite frequent ssh-ing, so it should not be a problem of settings. We haven't been installing anything new for quite some time now. I also made sure there is enough disk space on both servers. Since it started to happen all of a sudden on two servers at about the same time, I suspect some bug may have been introduced via Debian updates, yet I haven't been able to find anyone with the same problem. The most similar issues I have found are:

      ssh freezes at the "Last login" line - in our case everything worked fine until recently, so nothing related to settings should be our problem. Disk space is checked; I couldn't check the memory, but I would expect something in the logs if the system had been running out of it.
      Remote Fedora system unresponsive, odd but consistent behavior when trying to log in - a problem with high load on the server; unlike in this case, nothing changes even if I wait for 10+ minutes.

    Read the article

  • Acer Aspire Touchscreen Not Responding

    - by Jerry
    I have an Acer Aspire Z3101 all-in-one touch screen running Windows 7 Home Premium. I am unable to get the screen to respond to my touch. I have checked the system and all drivers are OK according to the computer. Using the settings to set up the pen and touch, and to calibrate, does not work; my touch brings no response at all. I reinstalled the applications Windows Touch Pack and Acer Touch Portal, which didn't help. Today I did a complete reinstall returning the computer to factory settings, and still there is no response. At first I thought that perhaps a driver or file was missing, but now I doubt that since the reinstall didn't help. Have you any idea what could be causing this problem? I am now at the stage of pulling my hair out and haven't had too much help from Acer. I have had the computer for just over a year now, so it is no longer under guarantee, and this has me very worried due to my being unemployed. Any help is much appreciated. Thank you.

    Read the article

  • Wireless Access Point stopped working

    - by Alex Pritchard
    I have a simple LAN set up at home using a Linksys WRT54GS v4 as my primary router and an Encore ENHWI-2AN3 as an access point. I connect the Encore to the Linksys by running a cable from one of the Linksys LAN ports into the Encore's WAN input. I originally configured this using the Encore setup wizard, setting the device up in AP Router Mode. It detected the input network and worked about as expected, creating a second network that used my primary network to connect to the internet. It worked fine for about 2 weeks, then abruptly cut out today. I checked to make sure the network was still live through the cable going into the Encore (it provides internet when connected directly to a laptop) and that devices are still able to connect to the network being broadcast by the Encore. When I try to rerun the connection wizard on the Encore, I receive the message "No Services found in WAN port." The WAN settings page no longer retrieves a dynamic IP from the line. I tried providing a static IP, assigning an IP address within the subnet range of my primary router that wasn't being used and pointing the default gateway at the Linksys IP, but this did not work either. When I plug the cable into the WAN port, an internet light comes on that is not lit when a live network is not connected. I've tried doing a hard reset on the Encore (held down the reset button until the lights flashed, then reconfigured from scratch), but the WAN settings are still not detected. I also tried powering the modem, the Linksys and the Encore off and on. Any suggestions would be appreciated!

    Read the article

  • Motorola Surfboard SB6121 modem connected to 2Wire i38HG wireless router but there's no internet access

    - by Jessica
    I have just switched to Comcast cable internet from AT&T U-verse, and I was hoping to use the 2Wire wireless router with the new Surfboard modem so I can have wireless access. I messed around with some settings and got it working for my laptop (I'm not terribly well versed in computer stuff; I think it was mostly luck) for about a week. The other day I tried to get online and there was no internet connection. I restarted the equipment with no success and then plugged the modem directly into the laptop. This worked, so I knew there was no outage. I then connected the ethernet cord to the router and a second cord to my laptop and that worked too. But when I try the wireless again, the laptop connects to the router but doesn't find an internet connection. I tried to go to http://gateway.2Wire.net to fiddle with the settings, but all I get is a Server Not Found page. I tried to check the IP address, but this is really kind of over my head; I get different results when plugged into only the modem vs. when I plug into the router. Can anyone help? The frustrating thing is that I had it working for a while, so I know it can do it!

    Read the article

  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on that computer. Is there a program that can connect to the VNC server port and, instead of displaying the screen, save it to an (e.g. AVI) video file?

    Background: One of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watches what he's doing during that procedure to find out if he's doing something wrong or if there's a bug in the software. Currently, this is done live via VNC. That has a few disadvantages:

      The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night.
      If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again.
      Only a single computer can be watched by one service technician at a time.

    I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU, HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but I can only record one remote computer that way. I often have to watch up to 8 screens in parallel. (And I don't think that screen-grabbing VNC would improve image quality, either.)
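
    One possible approach, added here as a sketch rather than part of the original question: act as a second VNC client from an office PC, grab periodic screenshots, and assemble them into a video afterwards (e.g. with ffmpeg). The example below uses the third-party vncdotool package (pip install vncdotool); the exact calls should be checked against its documentation, and the host, password and frame rate are assumptions:

      import time
      from vncdotool import api

      # Connect as an extra VNC client; hypothetical host and password.
      client = api.connect("customer-host:0", password="secret")
      try:
          for frame in range(600):                        # ~10 minutes at 1 frame per second
              client.captureScreen("frame_%05d.png" % frame)
              time.sleep(1)
      finally:
          client.disconnect()

      # Afterwards, something like: ffmpeg -framerate 1 -i frame_%05d.png session.avi

    Nothing is installed on the embedded system itself, and one capture script per customer machine can run side by side on the office PC.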

    Read the article

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling NGINX and the Rails application. The application itself uses about 185976 (presumably kB) of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are fewer hits to the Rails app itself. My question is: how can I calculate optimal NGINX config settings for my app? Below is the current config:

      worker_processes 1;

      # pid of nginx master process
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          access_log /var/log/nginx/access.log;
          error_log /var/log/nginx/error.log;

          passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
          passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

          include mime.types;
          default_type application/octet-stream;

          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;

          # gzip settings
          gzip on;
          gzip_http_version 1.0;
          gzip_comp_level 2;
          gzip_vary on;
          gzip_proxied any;
          gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

          # load extra modules from the vhosts directory
          include /opt/nginx/vhosts/*.conf;
      }

    Any advice would be appreciated! :)
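
    As a rough back-of-the-envelope answer (an editor's sketch, not from the original post): worker_processes is usually set to the number of CPU cores, the theoretical connection ceiling is worker_processes * worker_connections, and RAM divided by the per-instance RSS bounds the Passenger pool size. In Python, with the figures from the question and an assumed OS/nginx overhead:

      import multiprocessing

      worker_processes   = multiprocessing.cpu_count()   # usual starting point: one worker per core
      worker_connections = 1024
      ram_mb_for_app     = 2048 - 512     # assumption: leave ~512 MB for the OS, nginx and caches
      rails_instance_mb  = 186            # ~185976 kB RSS from the question, assuming kB units

      print("max concurrent connections:", worker_processes * worker_connections)
      print("passenger_max_pool_size   :", ram_mb_for_app // rails_instance_mb)

    With traffic under 1000 requests per day, the defaults above are nowhere near the bottleneck; the memory-derived passenger_max_pool_size is the number most worth pinning down.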

    Read the article

  • Win7 Hangs During App Install/Upgrade/Uninstall

    - by JadeMason
    I have a custom-built PC that intermittently hangs when installing, uninstalling, or upgrading applications.

    Technical Specs:

      ASUS P5E w/ WiFi motherboard
      Intel Core 2 Quad Q6600 processor
      4x 2GB G.Skill DDR2 800 SDRAM
      ASUS EAH2900XT / Radeon HD 2900XT 512MB video card

    Under normal operation the machine runs reliably, even under heavy load such as video transcoding. The temperature never gets anywhere near where I would worry about it. However, the machine regularly hangs (complete lockup, no response to keyboard or mouse, no activity on-screen) when either installing a new application, uninstalling an existing application, or applying patches to existing applications or the OS. This is extremely frustrating as this machine is primarily used as an HTPC. Several apps are configured for automatic updates, and these updates sometimes cause the machine to lock up while we are watching content on the PC. In previously investigating this issue, I found one likely problem could be my Logitech webcam. The Logitech software has a bug that leaves an entry in

      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SessionManager\PendingFileRenameOperations

    which references the Temp directory. My registry contained this error, so I uninstalled the webcam software and deleted this registry key value. Unfortunately, the machine still intermittently hangs. I've noticed that the hangs always happen when an install/upgrade/uninstall requires elevated privileges (presumably to modify the registry). I can typically get at least one install/upgrade/uninstall to complete after a reboot, but after that it is a game of Russian roulette to see whether the operation will succeed or hang the machine. The event log is not helpful, as log messages end at the time of the hang, with no record of a warning or error. My only recourse when the machine hangs in this way is to perform a hard reset/power cycle. Any tips on how to further debug this issue are greatly appreciated.

    Read the article

  • Windows 7 boot animation slows down startup by default?

    - by kngofwrld
    I just upgraded my HDD to an SSD drive. I am running a completely fresh install and enjoy the short boot time. I tweaked the startup to be as fast as I could by removing unneeded apps and such, and I'm not running a solid-color desktop background (which causes a 30-second startup delay). I have a 2.1 GHz 64-bit laptop with 4 GB of RAM, so it's not a liquid-cooled speed monster, but I checked some super-high-end PC boot videos on YouTube and noticed that they start up in almost the same time as my machine. I also noticed that the glowing Windows 7 animation plays all the way through no matter how fast the PC is. I turned off the animation, and the startup time is unchanged. I turned on verbose startup info and noticed that it runs until the very end, where it looks like it just sits there for no reason, waiting for something to happen for a few seconds. So now I think that the Windows 7 startup animation has a timer built into it that forces the computer to wait for no other reason than to play the full animation. Super-fast XP boot videos on YouTube seem to start much faster (and not just because they "have less to load"). Am I imagining things? My question is: how can I turn off not just the animation, but the timer for the animation? Here is a video that tipped me off; I have no relation to the poster. (Warning: the soundtrack might be loud.) http://www.youtube.com/watch?v=T5LkX3xejJ4

    Read the article

  • gitweb refusing to blame

    - by Slipp D. Thompson
    I'm attempting to get gitweb (git 1.8.4.2, via git instaweb) in a project dir on my Debian server to offer blame views. In my /etc/gitweb.conf:

      …
      # default logo, favicon, etc. settings
      $feature{'blame'}{'default'} = [1];
      $feature{'pickaxe'}{'default'} = [1];
      $feature{'snapshot'}{'default'} = ['tgz', 'txz', 'zip'];
      $feature{'highlight'}{'default'} = [1];
      $feature{'pathinfo'}{'default'} = [1];

    In my global config file:

      [gitweb]
          blame = true
          snapshot = tgz, txz, zip
          patches = 256
          avatar = gravatar
      [instaweb]
          local = false
          httpd = apache2 -f
          port = 4321

    In my project's .git/config file:

      [gitweb]
          blame = true

    And yet, when I try to load a git blame view (via hand-modifying the URL to http://myserversip:4321/?p=.git;a=blame;f=Tests/InchCoordProxyTests.m;h=b4b2…;hb=53b4, since blame action links don't show up): Doing a quick search for "Blame view not allowed" in the gitweb.cgi source reveals plainly that the gitweb_check_feature('blame') conditional is failing. What am I doing wrong? Or, is there a way to verbosely print out why gitweb is doing what it's doing (e.g. which config files were read, which settings were loaded from each file, etc.)?

    Read the article

  • Is there any way to synchronize AD users with Office 365 but still be able to edit them online?

    - by Massimo
    I'm performing a migration to Office 365 from a third-party mail server (MDaemon); the local Active Directory doesn't include any Exchange server, and never had any. We will need directory synchronization in order to enable users to log on to Office 365 using their domain credentials; but it seems that as soon as you enable directory synchronization, you can't perform any action anymore on Office 365 users: all changes need to be made on the local Active Directory, and then replicated by the synchronization process. For ordinary users with a single e-mail address and standard features, this is not a big problem; but what about users which need an additional address? What if I need to configure some nonstandard setting, like "hide from address list" or a custom mailbox quota? From what I've gathered, the only supported way to do this, as you can't directly edit Office 365 objects anymore after synchronization is enabled, is to extend the local AD schema with Exchange attributes, and then manually edit them (!). Or, you can install at least one local Exchange server, and then use the Exchange administrative tools to configure the required settings. Is this correct or am I missing something? Is there any way to synchronize user accounts and password, but still be able to edit user settings directly in Office 365? If not (everything really needs to be set locally and then synchronized), is there any simpler way to do this than manually editing LDAP attributes or installing a local Exchange server?

    Read the article

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup:

      6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on its own subnet.
      All connect back to the datacenter through a 3 Mbps WAN.
      ERP server running in the datacenter, accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients; it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an auto-update feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites? Example:

      Client 1 is at site A, client 2 is at site B. Each runs the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.
      Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A); DNS at site A returns 192.1.1.5; client 1 pulls the update from 192.1.1.5.
      Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B); DNS at site B returns 192.1.2.5; client 2 pulls the update from 192.1.2.5.

    Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
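
    An editor's sketch, not from the original post: the split-horizon idea above can be verified from any client by checking that ERPUPDATE resolves to an address on the client's own subnet. A small Python check (the addresses and the naive /24 comparison are illustrative only):

      import socket

      UPDATE_HOST = "ERPUPDATE"        # the alias the client config points at

      def local_ip():
          # A UDP "connect" to a routable test address reveals the outbound interface;
          # no packets are actually sent.
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.connect(("192.0.2.1", 53))
          ip = s.getsockname()[0]
          s.close()
          return ip

      resolved = socket.gethostbyname(UPDATE_HOST)
      mine = local_ip()
      same_site = resolved.rsplit(".", 1)[0] == mine.rsplit(".", 1)[0]   # naive /24 check
      print("client %s -> %s resolves to %s (%s)" %
            (mine, UPDATE_HOST, resolved, "local" if same_site else "NOT local"))

    Run from one client per site, this confirms each site's DNS zone is handing out its own DC's address before the auto-update setting is switched over.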

    Read the article

  • Win7 constant BSOD 0x7B on boot, no dump files produced; where to go from here?

    - by prayingpantis
    One of my Windows 7 PCs has been getting a BSOD on boot (roughly a second after the loading screen starts) after a power failure. The complete stop code is 0x0000007B (0x80786B58, 0xC0000034, 0x00000000, 0x00000000). I've searched for quite a while now on the net, and it seems like most people gave up after getting 0x7B and no dump files. What I've tried so far:

      Startup repair - reports that it cannot repair the computer automatically. BadPatch is reported somewhere in a problem signature contained in the problem details.
      Startup repair with a Windows 7 CD - also fails; I can't recall what the error was, but it was not the same as the error produced by the startup tool shipped with the version of Windows 7 installed on my machine (I think the text had something ACL-ish in it).
      A boot disk (Hiren's boot ISO) - I used it to enable the CrashDump registry key and then, after the BSOD, read the HDD's dump locations, but they were empty. Note: I'm quite sure the registry keys I edited are the correct ones, since the reboot-on-BSOD option was enabled by default, and after I changed the regkey controlling this functionality to 0 the BSOD stayed on screen after I booted again.
      Check disk - works and returns no problems; it also seems I'm able to access all my files on the HDD.
      Memory test - works and returns no errors.

    So I'm not sure what else I can do to figure out what the problem is. I read somewhere that you can use WinDbg to remote-debug another PC, but I'm not sure if this is possible since the OS isn't even loaded yet? Also, the last driver change I made on the system was installing a video driver, but I had no problems with it and was able to reboot several times until the power outage happened and the BSOD appeared. Any help or guidance on a way to debug this problem would really be appreciated (I'm not really keen to try a whole bunch of random fixes; I'd rather try to narrow down the problem first).

    Read the article

  • ATI Radeon 5670 Won't Show Resolutions over 1400x900

    - by Phil Sandler
    I just got my new Dell computer with Windows 7 and an ATI Radeon 5670. I attached it to my current monitor, which is a Samsung 24" (2443BWT). Windows 7 does not allow me to display at resolutions greater than 1400 x 900. The setup runs a VGA cable into the VGA port of the card. The card also has a DVI port, but I need to use the VGA port because of a KVM that supports VGA only. My old PC (Windows XP, GeForce 8600 video) can display at 1900 x 1200 on the same monitor (which is what I want) and even higher. It does this through a VGA cable also connected to the KVM (out of the DVI port but using an adapter). I have tried the same setup (DVI-to-VGA adapter) on the new PC and nothing changed. I have tried:

      Updating the drivers via Windows "Update Driver" (says they are current).
      Installing the updated version of the drivers from ATI (made no difference).
      Installing PowerStrip (all the options I would need for a custom resolution are greyed out).

    Installing the drivers/software from ATI caused the ATI Catalyst Control Center software to stop functioning, so I can no longer even start it. I have found some references to other people having this problem, with instructions on cleaning the software off and reinstalling it (as uninstalling normally doesn't solve it); I will try this tonight. In any case, I didn't see any options in CCC that would allow me to override the settings for maximum resolution, but I didn't tinker with it too much before I tried updating the drivers, so I may have missed a setting. I contacted Samsung via online chat and they say it's a problem with the video card/driver (of course, what else would they say?). Any thoughts on what else I could try?

    Read the article

  • Microphone doesn't work

    - by mandy
    I'm having trouble with my built-in microphone. Even if I use headphones with a mic, it doesn't really work. The weird thing is, if I clap, the green level bars on the speaker icon jump, but if I speak they don't. I have also tried some recordings, but I cannot hear myself, and adjusting the volume didn't help at all. I tried to restore the system; still no change. I updated the driver in Device Manager, but it said there that it's up to date and the devices are working properly. Then I decided to recover the whole system (by pressing zero and the power button); to my surprise the settings became different, and most programs were deleted, even my files. It's like it was formatted, and I'm so sad that the mic was not fixed. I really don't know what to do now. My laptop model is a Toshiba Satellite M840. I want it to return to the settings/setup it had just before I recovered the system, bringing back all the programs that were installed, and of course, most of all, to fix my microphone so I can use Skype and other video-calling applications again. I hope someone can help me. Thanks a lot!

    Read the article

  • Suspending/Screen Going Off When Still In Use (Ubuntu & Arch)

    - by luke
    I have a laptop (HP Pavilion G6) that was running Ubuntu and for a while now (at least 6 months) has been having problems, randomly suspending while still in use, with a full battery and while still being charged. Originally the problem was with Ubuntu, so I first attempted to disable suspend in every way I could find (GUI settings plus the dconf editor). This didn't work and it still kept suspending, so I ended up switching to Arch Linux. Unfortunately, not long after switching to Arch Linux I experienced the same problems. So yet again I modified the settings, this time in /etc/systemd/logind.conf, to prevent it from suspending, and this time it worked, kind of. Now I am experiencing the screen going off, and I have to change to a different tty (using Ctrl-Alt-Fx, which was something I also sometimes had to do when waking from suspend in Ubuntu) to get the screen to come back on. The strange thing is this only happens when running Linux distros, and only occasionally (it may happen once or twice a week at most), but when it does happen it can happen multiple times in a row. And it only seems to happen when I am using the machine. That may just mean I haven't caught it happening while idle, but generally if I leave it running something or playing a video it doesn't occur; it only occurs while I am actively using it, regardless of which program I am using (it has occurred while using Firefox, Vim, and even a VirtualBox VM). At first I thought it could be the CPU temperature, but after monitoring it I discovered it often occurred when the CPU was below 50 °C. I then checked /var/log/* but could not see anything related to the suspend, only a few standard entries from when it woke up. I am really out of ideas and hoping someone can help. Thanks in advance.

    Read the article
