Search Results

Search found 9916 results on 397 pages for 'counting sort'.


  • Installing Cygwin C and C++ compilers for NetBeans IDE 7.2

    - by user1294663
    I am very new to Cygwin, C, C++ and NetBeans IDE 7.2. My PC is running Microsoft Windows 7. I have read the documentation on how to install the Cygwin C/C++ compilers: http://netbeans.org/community/releases/72/cpp-setup-instructions.html#compilers I ran the Cygwin setup.exe, whose most recent version of the Cygwin DLL is 1.7.16-1, but I am not very sure which exact packages to install when the installer prompts for the selection of packages to download and install. I want to install the Cygwin C and C++ compilers so that I can create C and C++ projects using NetBeans 7.2, so I selected the packages whose names contain gcc, g++, gdb and make, then proceeded to install them. The installation took a long time, so I stopped it after about 45 minutes or so. I browsed the installation folder and saw that some of the packages I selected were installed; I noticed that some packages come in some sort of "zip" file with a tar.gz extension. I added the folder path to the PATH variable in the Windows 7 environment variables window. I think this command works:
        cygcheck -c cygwin
    but the rest don't, I think:
        gcc --version
        g++ --version
        make --version
        gdb --version
    I tried to create a C/C++ project using NetBeans IDE 7.2 and the IDE pops up a dialog saying that no C/C++ compilers were found. Have I made some mistake here, like installing the wrong packages or something else? Are there packages shown in the Cygwin setup.exe installer with exact names and versions that are compatible with NetBeans IDE 7.2? Of this I am not too sure, because I don't think I saw the required packages listed with exact names and versions. My question is: which exact packages do I install using the Cygwin setup.exe installer so that I can create C and C++ projects using NetBeans IDE 7.2, and what other steps do I have to take to ensure a complete, successful installation? Do I have to wait for all the selected required packages to be installed? I would like to know the exact names and versions of the required packages (as displayed in the Cygwin setup.exe installer when prompted) needed for C and C++ programming using NetBeans IDE 7.2.
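
    For reference, a hedged sketch of an unattended install of the toolchain packages NetBeans looks for. The package names here are the usual current ones (older Cygwin releases shipped them as gcc4-core/gcc4-g++), and the install folder, local package directory and mirror URL are assumptions to adjust:

        setup.exe -q -P gcc-core,gcc-g++,gdb,make -R C:\cygwin -l C:\cygwin\packages -s http://mirrors.kernel.org/sourceware/cygwin/
        REM then verify from a new command prompt with C:\cygwin\bin on PATH
        gcc --version
        g++ --version
        make --version
        gdb --version

    After that, NetBeans should pick up the Cygwin tool collection under Tools > Options > C/C++ (Build Tools), or it can be added there manually.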

    Read the article

  • Some apps very slow to load on Windows 7, copy & paste very slow.

    - by Mike
    Hello, I searched here and couldn't find a similar issue to mine, but apologies if I missed it. I've searched the web and no one else seems to be having the same issue either. I'm running Windows 7 Ultimate 64-bit on a pretty high-spec machine (well, apart from the graphics):
        Asus M4A79T Deluxe
        AMD Phenom II 965 Black Edition (quad core, 3.4GHz)
        8GB Crucial Ballistix DDR3 1333MHz RAM
        80GB Intel X25 SSD for OS
        500GB mechanical drive for data
        ATI Radeon HD 4600 series PCI-e
        Be Quiet! 850W PSU
    I think those are all the relevant stats; if you need anything else let me know. I've updated chipset, graphics and various other drivers, all to no avail; the problem remains. I have also unplugged and replugged every connection internally and cleaned the RAM edge connectors. The problems:
        VideoLAN (VLC) and CDBurnerXP both take ages to load - I'm talking 30 seconds and 1 minute respectively, which is really not right.
        Copy and paste from an OpenOffice spreadsheet into Firefox, for example, is really, really slow: I'll have pressed Ctrl+V 5 or 6 times before it actually happens, but if I copy and then wait 5 to 10 seconds or so it'll paste first time, so it's definitely some sort of time lag.
        Command and Conquer - Generals: Zero Hour: when playing, it'll run perfectly for about 10 or 20 seconds, then it'll just pause for 3 or 4, then run for another 10 or 20 seconds and pause again, and so on.
    I had the Task Manager open on my 2nd monitor whilst playing once and I noticed the game was using about 25% of the CPU, pretty stable, but when the pause came another task didn't shoot up to 100% like others on the web have been reporting (similar but not the same as my issue, often svchost.exe for them); instead it dropped to 2 or 3% usage, then back up to 25% when it started playing properly again. Very odd! But it gets even odder... I had a BSOD and reboot last week, and when it rebooted the problem had completely gone: I could play C&C to my heart's content, both the other apps loaded instantly, and copy and paste worked instantly too. I did an AVG update earlier this week which required a reboot, rebooted, and the problem's back. I don't think it's AVG related though; I think it was just coincidence that that's the app that required a reboot - I think any reboot would have brought the issue back. A number of reboots later and it hasn't gone away again. If anyone could make any suggestions as to the likely cause and solution to these issues I'd be most grateful; it's driving me nuts! Thanks, Mike....

    Read the article

  • Slow wifi from Windows Server 2003 virtualized in XenServer

    - by John Clayton
    I'm a brand spanking new user of OS X, coming from a lifetime of Windows use. I've been setting up my new MacBook Pro and have run into a very unusual problem. Over wifi, I am unable to copy files to or from my Windows Home Server. The problem seems to exist only over wifi, and only to WHS. Here are the details of my setup:
        2010 MacBook Pro (Core i7), OS X 10.6.3
        Windows Home Server PP3 (virtualized in XenServer 5.5)
        Windows 7 Ultimate x64 desktop
        Windows 7 Ultimate x64 in Boot Camp
        D-Link DIR-655 wireless N router
    Here is what I've done to narrow down the problem:
        Files copy fine from WHS to OS X when using gigabit ethernet
        Files copy fine from desktop to OS X when using gigabit ethernet
        Files fail to copy from WHS to OS X when using wifi (error -51)
        Files copy fine from desktop to OS X when using wifi
        Files copy fine from WHS to Boot Camp when using wifi
        Files copy fine from desktop to Boot Camp when using wifi
    From what I can tell, it seems to be some sort of issue between OS X and WHS, but I can't for the life of me see what would be different between shares on WHS and on my desktop. They are both connected using smb://ADDRESS (I've tried both by IP and by name). I can browse the shares on the WHS, but copying to OS X fails. I originally found the issue while installing VS2010 from an ISO on WHS, mounted to a Windows 7 VM using VMware Fusion. During the installation the VM was unusable - even its clock fell behind the host by about 8 minutes. Once I plugged in the ethernet and disabled the wifi, things picked up and finished quickly. The Fusion 3.1 RC is the only thing I can think of that I installed that may have messed with the wifi driver. I've also tried resetting the wifi router, and have changed it from G & N to N-only. Under Boot Camp I get similar speeds to my wife's N laptop. Any ideas? Thanks! Update: The issue has been further narrowed down to Windows Server 2003, which Windows Home Server is based on, running in XenServer with the XenServer tools installed.

    Read the article

  • How can I simulate blocking RTMP over port 80 on Windows?

    - by Christian Nunciato
    It seems like this should be so simple, but since this isn't my area of expertise, I'm having a hell of a time figuring out how to do it. Basically, I have a Flash app and I'm connecting to a Flash Media Server to stream some content. The URL I'm using to do this, for example, looks like this: rtmp://someserver.com/some/path/mp3:somefile Everything works -- but that's sort of the problem. What I'm trying to do is simulate my users attempting to play back my media under more restrictive conditions than the ones I have here (i.e., none) -- namely being stuck behind firewalls or proxy servers that block access to RTMP streams. Flash, according to Adobe, is equipped to handle proxy servers and firewalls automatically, like so (from the docs):
        When you do not specify a port number in an RTMP address, Flash will attempt to connect to port 1935. If it fails it will then try to connect to port 443; if that fails, it will try port 80. [And if that fails, it will attempt to connect via RTMPT (i.e., HTTP tunneling) on port 80.] So no coding is required to access ports 1935, 443, or port 80 if you do not specify a port in the RTMP address.
    The problem I'm having is setting up a reliable environment in which to test that this behavior actually happens. I'm on a Windows machine, for example, so with Windows Firewall I can block certain ports and protocols (1935, 443), but I don't want to block port 80, because the final fallback protocol (RTMPT) is supposed to run on port 80, and Windows Firewall only gives me enough granularity (as far as I know, anyway) to block "all outbound TCP traffic to remote port 80" -- that is, I can't, apparently, block "all outbound RTMP traffic to port 80" while leaving RTMPT traffic to port 80 unaffected. My understanding thus far is that I'll probably need to set up a proxy server to do this. Is this correct? Or is there a simpler way (on Win 7, at least) to filter out RTMP to 1935, RTMP to 443, RTMP to 80, but still allow RTMPT to 80 (where all four hostnames are identical)? And if I do have to set up a proxy server, what's the simplest way to go on Windows? I've set up WinProxy, which seems a bit janky but apparently works -- but then what I can't figure out is how to tell Windows to force all TCP traffic (including RTMP, RTMPT and HTTP) through this proxy server so I can turn around and reject the requests for RTMP. Any help would be hugely appreciated. This isn't my realm of expertise and I've already spent more time on it than I probably should. :)
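
    A sketch of simulating the first two fallback failures with the built-in firewall's command line (rule names are arbitrary; this blocks by port only, so it still cannot separate RTMP from RTMPT on port 80, which is exactly where a proxy comes in):

        netsh advfirewall firewall add rule name="Block outbound 1935" dir=out action=block protocol=TCP remoteport=1935
        netsh advfirewall firewall add rule name="Block outbound 443" dir=out action=block protocol=TCP remoteport=443
        REM remove them again when done testing
        netsh advfirewall firewall delete rule name="Block outbound 1935"
        netsh advfirewall firewall delete rule name="Block outbound 443"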

    Read the article

  • Run Flyff without elevating user to Admin or requiring Admin Password

    - by AnonJr
    Bottom Line: I need to set up one game on my little sister's laptop to run without requiring an admin password/account. It's the only game that seems to insist on it... so far. Detailed Version: I set up my 14-year-old sister as a regular user on her Windows 7 Home Premium laptop, and almost everything has been fine - until she found a new game (Flyff) that doesn't seem to want to run without an Admin password (or being logged in as an Admin). For what should be obvious reasons, I'm not going to make her an Admin or give her the Admin password (which she swears she'll only use to run this game... anyone else buying that? Bueller?). Also, the parents aren't admins on her laptop (they are on their own, but that's another discussion for another day) and I'm not going to set them up as one, as I know from past experience that the 3rd time my sister asks them to put in their password, they'll just tell her what it is - at which point I might as well have just set her up as an admin from the outset. This is a Win7 Home Premium (64-bit, but I doubt that makes a difference) laptop, so using GPEdit is out. I also tried an answer provided in a related (but less specific) question. The app has read/write permissions for its folder in Program Files (x86), yet that doesn't seem to make a difference. I have not yet dug through the registry as mentioned in another answer to the aforementioned question. Just to be thorough, I have checked the "Run as Admin" option on the shortcut's properties to no avail. Am I missing something? Addendum 2010-11-11: Re-checked permissions as per Joel's answer, and it didn't make a difference. Followed Jane T's suggestion (and Aeo's second) and created a "Games" folder outside Program Files, installing the game there - and making sure regular users had all the permissions they would need. No joy. After the latter of the above two changes, it occurred to me that it may be a UAC issue, so for kicks I turned off UAC - still the damn message. Last item noted: could it be a result of the publisher not being specified/verified? I've been taking a closer look at the error message and it occurred to me that the missing/unverified publisher info could have been the problem all along... Correct me if I'm wrong, but if that's the case, that means there's nothing I can do short of giving her some sort of Admin privileges (i.e. elevating her account, or giving her the password to a separate Admin account) or giving Mom an Admin account.
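
    One more thing that may be worth a try, sketched below with an assumed install path and executable name: the application-compatibility "RunAsInvoker" layer tells Windows not to honour the program's elevation request. If the launcher merely asks for admin via its manifest, this lets it start as a standard user; if its anti-cheat component genuinely needs admin rights, it will still fail, just with a different error.

        REM run once while logged in as the standard-user account; path and exe name are assumptions
        reg add "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers" /v "C:\Games\Flyff\Flyff.exe" /t REG_SZ /d "RUNASINVOKER" /f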

    Read the article

  • IIS 401.3 - Unauthorized on only 1 server out of 3 set up for network load balancing

    - by Tony
    Over the weekend our Server Admin set up two virtual Windows 2008 machines with IIS installed and set them up under NLB. I came in and changed the application pool the website was running under to our domain account that has proper access to the database and the file share hosting our .NET web application, Sitefinity, and changed it to .NET 4 Integrated. NLB and everything was running fine on both servers. He brought up the third server for our cluster on Tuesday and I performed the same actions. The only difference was that I was given admin rights for the third server so I could set it up remotely instead of going to his office. He has full control over the share and NTFS perms on \\hostname\Sitefinity and I believe I only had read access. I pointed the web site to the same \\hostname\Sitefinity\sitename share that the others were on, and the authentication/authorization test settings passed. I hit the site from http://localhost (like I did successfully from the other two before trying the cluster's IP address) and I received an HTTP Error 401.3 - Unauthorized. I've verified many times that the application pool is running under the same service account. I tried hitting just a simple test.htm; it works fine on both of the first two servers but I get the same 401.3 on the third. I copied my dev project to the local inetpub directory and re-pointed the website, and that ran perfectly. I turned on Failed Request Tracing and it acts like it's still using the local IUSR account (instead of my domain account)? Here is an excerpt of the File Cache Access Start and the error from the trace:
        FileName \\hostname\sitefinity\sitename\test.htm
        UserName IUSR
        DomainName NT AUTHORITY
        ----------
        Successful false
        FileFromCache false
        FileAddedToCache false
        FileDirmoned true
        LastModCheckErrorIgnored true
        ErrorCode 2147942405
        LastModifiedTime
        ErrorCode Access is denied. (0x80070005)
        ----------
        ModuleName IIS Web Core
        Notification 2
        HttpStatus 401
        HttpReason Unauthorized
        HttpSubStatus 3
        ErrorCode 2147942405
        ConfigExceptionInfo
        Notification AUTHENTICATE_REQUEST
        ErrorCode Access is denied. (0x80070005)
        ----------
    My personal AD account was then granted read/write perms to the share, so I created a new application pool and set the site under it in case there was an issue with the application pool, but no success. I created another under my own account and it still failed. It just seems like maybe it's not trying to access the files under the account my application pools are running under, although that's the only way I've done things before. I set the Physical Path Credentials in Advanced Settings on the site to the service account and it threw a 500 error of some sort, so I assume that's not the answer (and I don't have to do it on the other servers). It's like somehow I'm trying to force impersonation on the IUSR account or something?
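
    Given that the trace shows the file check happening as IUSR rather than the domain service account, two things worth double-checking are sketched below: that anonymous authentication for the site is set to use the application pool identity rather than the IUSR account, and that the service account actually has NTFS read rights on the UNC path (the account, host and site names here are placeholders):

        REM grant read/execute on the share's NTFS ACL to the app-pool service account
        icacls \\hostname\Sitefinity\sitename /grant "DOMAIN\svc_sitefinity":(OI)(CI)RX /T
        REM switch anonymous authentication to the application pool identity for the site (IIS 7.x)
        %windir%\system32\inetsrv\appcmd.exe set config "SiteName" -section:system.webServer/security/authentication/anonymousAuthentication /userName:"" /commit:apphost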

    Read the article

  • How do I troubleshoot root cause of a hung windows (2003) server?

    - by GregW
    I have a pair of Windows Server 2003 servers, both running MS SQL Server 2008 EE, that each hang every few months. This has been occurring intermittently :( for the past 15 months, pretty much since we started using the servers. The symptoms are as follows:
        I cannot remote desktop in to troubleshoot; when I attempt to, I get stuck on a blank black screen and am never offered a login prompt.
        I can still ping the servers.
        I can still open a SQL connection to the server, and, curiously/bizarrely, when I do a "select getdate()", the time it returns appears to be stuck on the exact fraction of a second when (I presume) the server hung. Repeated attempts to do "select getdate()" keep getting that same date, suggesting that the clock is frozen.
        File-sharing attempts to connect to the hung server fail with the error message: "\\ServerName is not accessible. You might not have permissions to use this network resource. Contact the administrator of this server to find out if you have access permissions. The server's clock is not synchronized with the primary domain controller's clock." This is consistent with a frozen clock.
        Post-reboot, if I investigate the Windows Event Viewer logs, I can see many security accesses (coming from me and others) that I recognize were login attempts during the "down" period, but all of them in the security log are associated with that same timestamp of when the server hung. This also suggests the clock is frozen.
        There is no clear cause in the Application or System event logs.
    I have a local Admin account on the server and am in the process of getting a domain-credentialed Admin account for better remote admin access. HP is supposed to be supporting these machines and has some low-level iLO2 access, but they seem incapable of finding the root cause. A reboot will "fix" the problem, but I would like to get to the root cause and solve the issue. Has anyone ever seen something like this odd clock behavior?! (If it were just one server I'd perhaps say a bad hardware clock, but two?) Can anyone advise me on what I should try in order to troubleshoot this sort of situation and find the root cause (or what I should tell HP to try)?

    Read the article

  • Solution to route/proxy SNMP Traps (or Netflow, generic UDP, etc) for network monitoring?

    - by Christopher Cashell
    I'm implementing a network monitoring solution for a very large network (approximately 5000 network devices). We'd like to have all devices on our network send SNMP traps to a single box (technically this will probably be an HA pair of boxes) and then have that box pass the SNMP traps on to the real processing boxes. This will allow us to have multiple back-end boxes handling traps, and to distribute load among those back-end boxes. One key feature that we need is the ability to forward the traps to a specific box depending on the source address of the trap. Any suggestions for the best way to handle this? Among the things we've considered are:
        Using snmptrapd to accept the traps, and having it pass them off to a custom-written Perl handler script to rewrite the trap and send it to the proper processing box
        Using some sort of load-balancing software running on a Linux box to handle this (having some difficulty finding many load-balancing programs that will handle UDP)
        Using a load-balancing appliance (F5, etc.)
        Using iptables on a Linux box to route the SNMP traps with NATing
    We've currently implemented and are testing the last solution, with a Linux box with iptables configured to receive the traps and then, depending on the source address of the trap, rewrite it with a destination NAT (DNAT) so the packet gets sent to the proper server. For example:
        # Range: 10.0.0.0/19 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.0.0/19 -j DNAT --to-destination 10.1.2.3
        # Range: 10.0.33.0/21 Site: abc01 Destination: foo01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.0.33.0/21 -j DNAT --to-destination 10.1.2.3
        # Range: 10.1.0.0/16 Site: xyz01 Destination: bar01
        iptables -t nat -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j DNAT --to-destination 10.3.2.1
    This should work with excellent efficiency for basic trap routing, but it leaves us completely limited to what we can match and filter on with iptables, so we're concerned about flexibility for the future. Another feature that we'd really like, but isn't quite a "must have", is the ability to duplicate or mirror the UDP packets. Being able to take one incoming trap and route it to multiple destinations would be very useful. Has anyone tried any of the possible solutions above for SNMP trap (or Netflow, general UDP, etc.) load balancing? Or can anyone think of any other alternatives to solve this?
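
    On the nice-to-have duplication feature, a sketch of one option that stays within iptables (requires the xt_TEE extension, iptables 1.4.8+ on a 2.6.35+ kernel; the mirror address below is hypothetical). The TEE target copies each matching packet to another directly reachable host while the original packet still traverses the DNAT rules above:

        # mirror traps from 10.1.0.0/16 to a second collector as well
        iptables -t mangle -A PREROUTING -p udp --dport 162 -s 10.1.0.0/16 -j TEE --gateway 10.3.2.2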

    Read the article

  • Are FC and SAS DAS devices standard enough?

    - by user222182
    Before I ask my questions, here is some background info that may or may not be useful. For the first time I find myself needing a DAS solution. My priority is data throughput in a single direction: I can write large blocks, and I don't need to read at the same time. The server (the data-producing device) is not really a typical server; it's a very powerful single-board computer. As such I have limited options when it comes to the add-in cards I can install, since it must use the fairly uncommon XMC interface. Currently I believe I am limited to PCIe x8 gen 1, which means that the likely bottleneck for me will be this 16gbps connection. XMC boards I have found so far offer the following connections:
        a) Dual 10GbE Ethernet controller, total throughput 20gbps
        b) Dual quad SAS 2.0 connectors (SFF-8XXX) HBA (no RAID), total throughput 48gbps
        c) Dual FC 8Gb HBA (no RAID), total throughput 16gbps
    My questions for you guys are:
        1) Are SAS and/or FC, and by extension their HBAs, standard enough that I could purchase a Dell or Aberdeen storage server with a RAID controller that has external SAS or FC ports and expect that I can connect it to my SAS or FC HBA and be presented with a single volume (if I so configured the storage server), all without having to check for HBA compatibility?
        2) On a device like a Dell PowerVault (either DAS or NAS), is there an OS on it to concern myself with, or is it meant to be remotely managed? Is there a local interface in case I can't remotely manage it (i.e. if my single-board computer uses an OS not supported by Dell OpenManage)? Would this be true of nearly any device which calls itself a DAS?
        3) If I purchase some sort of Supermicro storage chassis and installed a RAID controller with external connections, is there a nice lightweight OS I can run just for management of the controller? Would I even need an OS, since the RAID card would be configured pre-boot anyway?
        4) It is much easier to buy XMC-based 10-gigabit Ethernet cards (generally dual port). In what ways would I be getting into trouble by using iSCSI as a DAS via direct cabling with SFP+ cables?
    Thanks in advance

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
    I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to successfully get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a Content Management System. I have never dug into CMS before, so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before and I think I'm getting familiar with how it works. However, the current situation is this: Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system, so I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work, but I am currently not having luck. The setup: *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsrv\fcgiext.dll, like it should be. The fcgiext.ini config has the proper lines:
        [Types]
        php=PHP
        [PHP]
        ext=C:\program files\PHP\php-cgi.exe
    And the php.ini file also has the correct configs. All extensions are disabled and I changed the correct things for FastCGI. And everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the info.php page on another computer, I get the following error:
        FastCGI Error
        The FastCGI Handler was unable to process the request.
        Error Details:
        * Section [PHP] not found in config file.
        * Error Number: 1413 (0x80070585).
        * Error Description: Invalid index.
        HTTP Error 500 - Server Error. Internet Information Services (IIS)
    A quick Google search reveals that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
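
    A sketch that sometimes clears the "Section [PHP] not found" error: let the fcgiconfig.js script that ships with the IIS 6 FastCGI extension (re)write the mapping instead of editing fcgiext.ini by hand, since it updates both the ini and the IIS extension mapping together. The PHP path below is assumed from the question; on 64-bit Windows 2003 note there is also a second copy of fcgiext.ini under %WINDIR%\SysWOW64\inetsrv that 32-bit worker processes read:

        cscript %WINDIR%\system32\inetsrv\fcgiconfig.js -add -section:"PHP" -extension:php -path:"C:\Program Files\PHP\php-cgi.exe"
        iisreset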

    Read the article

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a router. We have a fairly simple routing situation - we have two IP blocks, one of which has a route out. We need things on the first block to go through the router, and out. I have an old Cisco 2801 router doing the job right now. For our example:
        IP Block 1: 50.203.110.232/29, router interface on this block is 50.203.110.237, route out is 50.203.110.233
        IP Block 2: 50.202.219.1/27, router interface on this block at 50.202.219.20
    I have a static route created for 0.0.0.0 0.0.0.0 50.203.110.233, and the router seems to understand this. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo! The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27, default gateway 50.202.219.20. I can ping myself. I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237). I cannot ping anything else in IP Block 1, nor can I ping 8.8.8.8. Here is my configuration file:
        <HP>display current-configuration
        #
         version 5.20.106, Release 2507, Standard
        #
        sysname HP
        #
        domain default enable system
        #
        dar p2p signature-file flash:/p2p_default.mtd
        #
        port-security enable
        #
        undo ip http enable
        #
        password-recovery enable
        #
        vlan 1
        #
        domain system
         access-limit disable
         state active
         idle-cut disable
         self-service-url disable
        #
        user-group system
         group-attribute allow-guest
        #
        local-user admin
         password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV
         authorization-attribute level 3
         service-type telnet
        #
        cwmp
         undo cwmp enable
        #
        interface Aux0
         async mode flow
         link-protocol ppp
        #
        interface Cellular0/0
         async mode protocol
         link-protocol ppp
        #
        interface Ethernet0/0
         port link-mode route
         ip address 50.203.110.237 255.255.255.248
        #
        interface Ethernet0/1
         port link-mode route
         ip address 50.202.219.20 255.255.255.224
        #
        interface NULL0
        #
         ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent
        #
        load xml-configuration
        #
        load tr069-configuration
        #
        user-interface tty 12
        user-interface aux 0
        user-interface vty 0 4
         authentication-mode scheme
        #
    My guess right now is that there is some sort of "permission" needed to use the default route. The manuals haven't turned up a lot in this area that doesn't make the situation much more complicated (but maybe it needs to be more complicated?). Background: we use HP switches, and I love the CLI. I bought HP thinking the command line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance! Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added:
        ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32
    Alas, still no dice.
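
    One thing that commonly bites in this topology: the upstream gateway at 50.203.110.233 also has to know how to route replies back to the 50.202.219.x block, otherwise pings from a Block 2 client die on the return path even though pings sourced from the router itself (which come from the directly connected interface) work. If adding a return route upstream isn't possible, a hedged sketch of the usual workaround is to NAT the second block out of Ethernet0/0; this is Comware 5 syntax and the ACL number is arbitrary:

        acl number 2001
         rule 0 permit source 50.202.219.0 0.0.0.31
        quit
        interface Ethernet0/0
         nat outbound 2001
        quit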

    Read the article

  • How can I force a merge of all WAL files in pg_xlog back into my base "data" directory?

    - by Zac B
    Question: Is there a way to tell Postgres (9.2) to "merge all WAL files in pg_xlog back into the non-WAL data files, and then delete all WAL files successfully merged?" I would like to be able to "force" this operation; i.e. checkpoint_segments or archiving settings should be ignored. The filesystem WAL buffer (pg_xlog) directory should be emptied, or nearly emptied. It's fine if some or all of the space consumed by the pg_xlog directory is then consumed by the data directory; our DBA has asked for WAL database backups without any backlogged WALs, but space consumption is not a concern. Having near-zero WAL activity during this operation is a fine constraint. I can ensure that the database server is either shut down or not connectible (zero user-generated transaction load) during this process. Essentially, I'd like Postgres to ignore archiving/checkpoint retention policies temporarily, and flush all WAL activity to the core database files, leaving pg_xlog in the same state as if the database were recently created--with very few WAL files. What I've Tried: I know that the pg_basebackup utility performs something like this (it generates an almost-all-WALs-merged copy of a Postgres instance's data directory), but we aren't ready to use it on all our systems yet, as we are still testing replication settings; I'm hoping for a more short-term solution. I've tried issuing CHECKPOINT commands, but they just recycle one WAL file and replace it with another (that is, if they do anything at all; if I issue them during database idle time, they do nothing). pg_switch_xlog() similarly just forces a switch to the next log segment; it doesn't flush all queued/buffered segments. I've also played with the pg_resetxlog utility. That utility sort of does what I want, but all of its usage docs seem to indicate that it destroys (rather than flushing out of the transaction log and into the main data files) some or all of the WAL data. Is that impression accurate? If not, can I use pg_resetxlog during a zero-WAL-activity period to force a flush of all queued WAL data to non-WAL data? If the answer to that is negative, how can I achieve this goal? Thanks!
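
    For reference, a hedged sketch of the closest short-term approximation: CHECKPOINT already flushes all WAL'd changes into the data files, so what remains is trimming segments older than the last checkpoint. With the cluster quiesced (no write activity, and no archiving or standbys that still need the old segments), something along these lines - the data directory path and the segment name are examples only, and the segment must be the one your own server reports:

        psql -U postgres -c "CHECKPOINT;"
        # which segment holds the current WAL insert location (9.2 function names)
        psql -U postgres -Atc "SELECT pg_xlogfile_name(pg_current_xlog_location());"
        # drop everything older than the reported segment (pg_archivecleanup ships in contrib)
        pg_archivecleanup /var/lib/pgsql/9.2/data/pg_xlog 000000010000000200000071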

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replicant data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same. We would normally ghost the original servers and install those images onto the new machines, however, we have a few problematic pieces of COTS that require we install them outside of an image due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most because they are difficult to install and configure in the first place. In light of the fact that we can’t simply ghost, I’m trying to find a reasonable manner to audit the new data center and check to see if it is setup like the original (some sort of system wide configuration audit or integrity check). I’m considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I’m hoping that it will eliminate the majority of the work. Here are some of the constraints I’m working under: Data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total) I absolutely cannot ghost or snap any other type of image of these machines … at least not in their final configuration I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc are installed and setup properly (as compared to the original data center) I would rather not install any additional tools on these machines … I’d much rather run it from a standalone machine or off a DVD Price of tools is important but not an impossible burden, however, getting a solution soon is important (I can’t take the time to roll my own tools to do this) For the COTS that stores the network information, I don’t know all of the places it stores the network information … so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system wide configuration audits?
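
    As a low-footprint starting point for the Linux boxes (run over ssh or from a live DVD, with nothing installed on the targets), a sketch of capturing a comparable fingerprint on each host and diffing source against replica; the output file names and the /opt path are assumptions, and the Windows side would need an equivalent such as an installed-software export plus checksums of the key config directories:

        # installed packages and config checksums on an RPM-based host
        rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' | sort > /tmp/$(hostname)-packages.txt
        find /etc /opt -type f -exec md5sum {} + 2>/dev/null | sort -k2 > /tmp/$(hostname)-configs.txt
        # later, on a workstation, compare a source box against its replica
        diff oldsite-host1-packages.txt newsite-host1-packages.txt
        diff oldsite-host1-configs.txt newsite-host1-configs.txt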

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expat and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places actually), so long as the content is served-up within the boundaries of the Great-Firewall. Anything that crosses the wall is horribly slow. The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible, worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued). So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China I would like the content to be served by our own server. I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use), and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside, and the other on the inside since not all expats use a VPN, and some Chinese speakers also use VPNs. Also some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I have worries about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that). CDNs I'm considering are: Google PageSpeed, CloudFlare, Amazon CloudFront. None of which have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not). They no longer allow outside registrars like they used to.
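
    For what it's worth, the usual DIY version of "CDN outside China, origin inside China" on a single hostname is split-horizon DNS: answer resolvers inside China with the in-country A record and everyone else with a CNAME to the CDN. A rough sketch in BIND named.conf syntax - the ACL prefixes are placeholders (a real China list runs to hundreds of routes), and hosted GeoDNS services can do the same job without running your own name servers:

        acl "cn-clients" { 1.80.0.0/13; 14.16.0.0/12; };            // placeholder prefixes only

        view "china" {
            match-clients { cn-clients; };
            zone "xxx.com" { type master; file "db.xxx.com.cn"; };      // www -> A record of the ICP-hosted server
        };

        view "world" {
            match-clients { any; };
            zone "xxx.com" { type master; file "db.xxx.com.world"; };   // www -> CNAME to the CDN edge hostname
        };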

    Read the article

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the ost file (which was nearly 2GB), turning Cached mode on and off, recreating the account inside the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until … After fiddling with it for a while, I got this absurd error message dialog at startup, and it exits after I click OK: Cannot start Microsoft Office Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance. (I'm not sure if I can even trust that message. It's so long, it just feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer. I ran Process Explorer against Outlook and it showed 5 or so ESTABLISHED connections to our Exchange server, plus listening on two UDP ports, and two CLOSE_WAIT connections to localhost. If I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server. It more than filled ProcExp's dialog box; I'm guessing at least 20, probably more. The only other odd thing is that our network admin at some point added a wildcard DNS record to the domain name that we use for email, and now Outlook will sometimes (always?) start by complaining about autodiscover.example.com's SSL certificate. There is a web server there, but it doesn't have any sort of email autodiscover anything on it. It doesn't make any difference if I click "OK" or "Cancel" (or whatever the buttons are). I also added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and went away after.) I don't think this is related, as the same problem should exist for everyone, but it might be. This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution, rather than assume it's a random glitch.
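
    On the autodiscover side issue, a sketch of a cleaner workaround than the hosts-file entry: Outlook 2007 has a registry value that tells it to skip the https://<root domain> autodiscover lookup, which is what the wildcard DNS record is dragging it into. Run it as the affected user; whether it bears on the main connect/disconnect problem is a separate question:

        reg add "HKCU\Software\Microsoft\Office\12.0\Outlook\AutoDiscover" /v ExcludeHttpsRootDomain /t REG_DWORD /d 1 /f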

    Read the article

  • Write STDOUT & STDERR to a logfile, also write STDERR to screen

    - by Stefan Lasiewski
    I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone). Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile:
        { command1 && command2 && command3 ; } > logfile.log 2>&1
    Here is what I want to do with the output of these commands:
        STDERR and STDOUT for all commands goes to a logfile, in case I need it later - I usually won't look in here unless there are problems.
        Print STDERR to the screen (or optionally, pipe it to /bin/mail), so that any error stands out and doesn't get ignored.
        It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
        { command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" [email protected]
    The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log. Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error, so I use the '--keep-going' flag:
        { ./configure && make --keep-going && make install ; } > build.log 2>&1
    Or, here's a simple (and perhaps sloppy) build-and-deploy script, which will keep going in the event of an error:
        { ./configure && make --keep-going && make install && rsync -av /foo devhost:/foo ; } > build-and-deploy.log 2>&1
    I think what I want involves some sort of Bash I/O redirection, but I can't figure this out.
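
    A sketch of the pattern that usually answers this (bash-specific, because it relies on process substitution): send STDOUT to the log and run STDERR through tee, so one copy lands in the same log and the other still reaches the terminal's STDERR. The exit status of the command group survives, so the '|| mailx' handling above still works; note that the tee runs asynchronously, so ordering in the log can interleave slightly:

        # stdout -> logfile; stderr -> logfile and the screen
        { command1 && command2 && command3 ; } > logfile.log 2> >(tee -a logfile.log >&2)
        echo "exit status was $?"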

    Read the article

  • How to know the source of certain TCP traffic on AIX

    - by A.Rashad
    We have two AIX boxes, one for the production system and another for testing. Both systems are running ATM machine switches, where the ATM device is connected via a TCP socket. We had an issue on the production system where the machine would power off or get disconnected, but netstat -na | grep <IP of machine> would still report that the socket is up. When we simulated that case in the UAT environment, the problem did not happen: the socket would terminate in 3 to 5 minutes. When we sniffed the traffic between the machine and the ATM, we found that no traffic takes place on production, while there is some sort of heartbeat on UAT - but it is not initiated by the application.
        $> tcpdump | grep -v "10.2.2.71" | grep -v "HSRP" | grep "10.3.1.30"
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on en6, link-type 1, capture size 96 bytes
        09:08:13.323421 IP server073.afs3-callback > 10.3.1.30.impera: . 278204201:278204202(1) ack 3307884029 win 164
        09:08:13.335334 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
        09:08:23.425771 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
        09:08:23.425789 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535
        09:09:13.628985 IP server073.afs3-callback > 10.3.1.30.impera: . 0:1(1) ack 1 win 164
        09:09:13.633900 IP 10.3.1.30.impera > server073.afs3-callback: . ack 1 win 64180
        09:09:23.373634 IP 10.3.1.30.impera > server073.afs3-callback: . 1:2(1) ack 1 win 64180
        09:09:23.373647 IP server073.afs3-callback > 10.3.1.30.impera: . ack 2 win 65535
    On production, that traffic is not there. We want to know where this traffic is initiated from so we can implement the same thing on production to sense disconnection. Our comms parameters are:
        tcp_keepcnt = 2
        tcp_keepidle = 100
        tcp_keepinit = 150
        tcp_keepintvl = 150
        tcp_finwait2 = 1200
    Can anyone help?
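
    For context on the difference, those 1-byte segments in the UAT trace look like TCP keepalive probes, and keepalives are only sent on connections whose owning application enables SO_KEEPALIVE on the socket - the no tunables merely set the timing (AIX expresses them in half-seconds, so the current tcp_keepidle of 100 is 50 seconds, which matches the roughly one-minute heartbeat above). A sketch of checking and adjusting them:

        # show the current keepalive-related tunables
        no -o tcp_keepidle -o tcp_keepintvl -o tcp_keepcnt
        # set new values for the running system (add -p on recent AIX levels to persist across reboots)
        no -o tcp_keepidle=120 -o tcp_keepintvl=20 -o tcp_keepcnt=4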

    Read the article

  • Subversion/Hudson/Sonar/Artifactory - too much for my little server to handle! Help!

    - by Ricket
    I have a little dedicated server. It's at a cheap price and has a simple AMD 1800+ (1.5ghz), 256mb DDR RAM, ...need I continue? And I think I'm overloading it already. I have installed the following, and it's running CentOS 5.4: Webmin Apache MySQL Subversion as an Apache module Hudson (standalone) Sonar (standalone, runs with a standalone Jetty install) Artifactory (standalone) That's pretty much it. But I'm having problems; pages are loading quite slowly. Network speed of the server is excellent, but I think I'm just running out of CPU and/or memory. A side-effect of the pages loading slowly is that sometimes Hudson times out, not being able to start Maven or contact Sonar in a certain amount of time. I think the next step to speed things up might be to move to an application server and use the WAR version of Hudson, Sonar and Artifactory together on that server. I don't know that it will help, but it just seems to make sense, especially with Sonar running on its own Jetty install and the other two probably running their own mini application servers as well. Am I correct in thinking this? Is this the right course of action? Any other tips on how to make the server run faster? I can post more data if you'd like, just let me know what else would help you answer my question. Oh, also just to cure any suspicions, I don't have any sort of virus or spyware. I protect my SSH access with DenyHosts (which has blocked 300+ brute forcers in the past few months), and I have confirmed that the top four processes in terms of memory and CPU usage are Sonar, Artifactory, Hudson, and MySQL. Edit: I just thought of another thing that I'd like you to comment on as well: Apache currently has 8 spawned slave processes, taking 42MB of ram apiece. This is not my web server. Is everything else able to function if I shut down Apache? Can you point me towards a tutorial or something on migrating Subversion from Apache into something that might work along with the other three applications, maybe even make Subversion a WAR file or something?

    Read the article

  • APC File Cache not working but user cache is fine

    - by danishgoel
    I have just got a VPS (with cPanel/WHM) to test what gains I could get in my application by using the APC file cache AND user cache. So firstly I got PHP 5.3 compiled in as a DSO (Apache module), then installed APC via PECL through SSH. (First I tried the WHM module installer; it also had the same problem, so I tried it via SSH.) All seemed fine and phpinfo showed APC loaded and enabled. Then I checked with apc.php; all seemed OK. But as I started testing my PHP application, the stats in APC for File Cache Information state:
        Cached Files 0 ( 0.0 Bytes)
        Hits 1
        Misses 0
        Request Rate (hits, misses) 0.00 cache requests/second
        Hit Rate 0.00 cache requests/second
        Miss Rate 0.00 cache requests/second
        Insert Rate 0.00 cache requests/second
        Cache full count 0
    which means no PHP files were being cached, even though I had browsed through over 10 PHP files having multiple includes, so there should have been some cached files. But the user cache is functioning fine:
        User Cache Information
        Cached Variables 0 ( 0.0 Bytes)
        Hits 1000
        Misses 1000
        Request Rate (hits, misses) 0.84 cache requests/second
        Hit Rate 0.42 cache requests/second
        Miss Rate 0.42 cache requests/second
        Insert Rate 0.84 cache requests/second
        Cache full count 0
    That's actually from an APC caching test script which tries to retrieve and store 1000 entries and gives me the times - a sort of simple benchmark. Can anyone help me here? Even though apc.cache_by_default = 1, no PHP files are being cached. This is my APC config:
        Runtime Settings
        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 1M
        apc.mmap_file_mask
        apc.num_files_hint 1000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.serializer default
        apc.shm_segments 1
        apc.shm_size 32M
        apc.slam_defense 1
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 0
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 0
        apc.write_lock 1
    Also, most PHP files are under 20KB, thus apc.max_file_size = 1M is not the cause. I have also tried using apc_compile_file to force some files into the opcode cache, with no luck. I have also re-installed APC with debugging enabled, but nothing shows in the error_log. I have also tried setting mmap_file_mask to /dev/zero and /tmp/apc.xxxxxx, and I have also set /tmp permissions to 777, to no avail. Any clue, anyone? Update: I have tried the following things and none cause the APC file cache to populate:
        1. Set apc.enable_cli = 1 AND run a script from the CLI
        2. Set apc.max_file_size = 5M (just in case)
        3. Switched the PHP handler from DSO to FastCGI in WHM (then switched it back to DSO as it did not solve the problem)
        4. Even tried restarting the container

    Read the article

  • kvm and qemu host: Is there a limit for max CPUs (Ubuntu 10.04)?

    - by Valentin
    Today we encountered a really strange behaviour on two identical KVM/QEMU hosts. The host systems each have 4 x 10 cores, which means that 40 physical cores are displayed as 80 within the operating system (Ubuntu Linux 10.04 64-bit). We started a Windows 2003 32-bit VM (1 CPU, 1 GB RAM; we changed those values multiple times) on one of the nodes and noticed that it took 15 minutes until the boot process began. During those 15 minutes, a black screen is shown and nothing happens. libvirt and the host system show that the qemu-kvm process for the guest is almost idling. stracing this process only shows some FUTEX entries, but nothing special. After those 15 minutes, the Windows VM suddenly starts booting and the Windows logo appears. After a few seconds, the VM is ready to be used. The VM itself is very performant, so this is no performance issue. We tried to pin the CPUs with the virsh and taskset tools, but this only made things worse. When we boot the Windows VM with a Linux live CD there is also a black screen for several minutes, but not as long as 15. When booting another VM on this host (Ubuntu 10.04) it also has the black-screen problem, but there the black screen is only shown for 2-3 minutes (instead of 15). So, summarizing this: each guest on each of those identical nodes idles for a few minutes after being started; after a few minutes, the boot process suddenly starts. We have observed that the idling happens right after the BIOS of the guest was initialized. One of our employees had the idea to limit the number of CPUs with maxcpus=40 (because 40 physical cores exist) as a kernel parameter in GRUB, and suddenly the "black-screen idling" behaviour disappeared. Searching the KVM and QEMU mailing lists, the internet, forums, Server Fault and various other sites for known bugs etc. showed no useful results. Even asking in the dev IRC channels brought no new ideas. The people there recommend we use CPU pinning, but as stated before it didn't help. My question is now: is there some sort of limit on CPUs for a QEMU or KVM host system? Browsing the source code of those two tools showed that KVM would send a warning if your host has more than 255 CPUs, but we are not even scratching that limit. Some stuff about the host system:
        3.0.0-20-server
        kvm 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4
        kvm-pxe 5.4.4-7ubuntu2
        qemu-kvm 0.14.0+noroms-0ubuntu4
        qemu-common 0.14.0+noroms-0ubuntu4
        libvirt 0.8.8-1ubuntu6
        4 x Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz, 10 Cores

    Read the article

  • Using IIS7 why are my PNGs being cached by the browser, but my JS and CSS files not?

    - by Craig Shearer
    I am trying to sort out caching in IIS for my site. Basically, I want nothing cached except for .png, .js, and .css files. At my site level, I opened the HTTP Response Headers feature and used "Set Common Headers..." to set content to expire immediately. I have no Output Caching profiles set at any level in IIS. I clear my browser cache, then try accessing my site. When my site requests a PNG file, I see responses like:
        Accept-Ranges bytes
        Age 0
        Connection Keep-Alive
        Content-Type image/png
        Date Thu, 12 Apr 2012 21:55:15 GMT
        Etag "83b7322de318cd1:0"
        Last-Modified Thu, 12 Apr 2012 19:33:45 GMT
        Server Microsoft-IIS/7.5
        X-Powered-By ASP.NET
    For JS and CSS files, I see responses like:
        Accept-Ranges bytes
        Cache-Control no-cache
        Connection Keep-Alive
        Content-Encoding gzip
        Content-Length 597
        Content-Type text/css
        Date Thu, 12 Apr 2012 21:55:15 GMT
        Etag "06e45ede15bca1:0"
        Last-Modified Mon, 02 Nov 2009 17:28:44 GMT
        Server Microsoft-IIS/7.5
        Vary Accept-Encoding
        X-Powered-By ASP.NET

        Accept-Ranges bytes
        Cache-Control no-cache
        Connection Keep-Alive
        Content-Encoding gzip
        Content-Length 42060
        Content-Type application/x-javascript
        Date Thu, 12 Apr 2012 21:55:14 GMT
        Etag "2356302de318cd1:0"
        Last-Modified Thu, 12 Apr 2012 19:33:45 GMT
        Server Microsoft-IIS/7.5
        Vary Accept-Encoding
        X-Powered-By ASP.NET
    So, why are my PNGs able to be cached, but JS and CSS files not? Then, I go into the Output Caching feature in IIS and set up profiles for .png, .css, and .js files. This updates the web.config file as follows:
        <caching>
          <profiles>
            <add extension=".png" policy="CacheUntilChange" kernelCachePolicy="DontCache" />
            <add extension=".css" policy="CacheUntilChange" kernelCachePolicy="DontCache" />
            <add extension=".js" policy="CacheUntilChange" kernelCachePolicy="DontCache" />
          </profiles>
        </caching>
    I do a "precautionary" IISReset, then try accessing my site again. For PNG files, I see the following response:
        Accept-Ranges bytes
        Age 0
        Connection Keep-Alive
        Content-Length 3833
        Content-Type image/png
        Date Thu, 12 Apr 2012 22:02:30 GMT
        Etag "0548c9e2c5dc81:0"
        Last-Modified Tue, 22 Jan 2008 19:26:00 GMT
        Server Microsoft-IIS/7.5
        X-Powered-By ASP.NET
    For CSS and JS files, I see the following responses:
        Accept-Ranges bytes
        Cache-Control no-cache,no-cache
        Connection Keep-Alive
        Content-Encoding gzip
        Content-Length 2680
        Content-Type application/x-javascript
        Date Thu, 12 Apr 2012 22:02:29 GMT
        Etag "0f743af9015c81:0"
        Last-Modified Tue, 23 Oct 2007 16:20:54 GMT
        Server Microsoft-IIS/7.5
        Vary Accept-Encoding
        X-Powered-By ASP.NET

        Accept-Ranges bytes
        Cache-Control no-cache,no-cache
        Connection Keep-Alive
        Content-Encoding gzip
        Content-Length 3831
        Content-Type text/css
        Date Thu, 12 Apr 2012 22:02:29 GMT
        Etag "c3f42d2de318cd1:0"
        Last-Modified Thu, 12 Apr 2012 19:33:45 GMT
        Server Microsoft-IIS/7.5
        Vary Accept-Encoding
        X-Powered-By ASP.NET
    What am I doing wrong? Have I completely misunderstood the features of IIS, or is there a bug? Most importantly, how do I achieve what I want - that is, get the browser to cache only PNG, JS and CSS files?
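
    Worth noting as a sketch, though it may not be the whole story for the discrepancy above: the Output Caching feature controls server-side and kernel caching, while the Cache-Control header the browser sees for static files is driven by the staticContent/clientCache element, which can be scoped to a folder. Something along these lines, assuming the cacheable assets can be grouped under a path such as a "static" folder (the path and max-age are placeholders):

        <location path="static">
          <system.webServer>
            <staticContent>
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </location>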

    Read the article

  • What is it that kills laptop batteries?

    - by Mala
    There are many superstitions on what you must never do lest your battery become worthless - and by worthless I mean hold about 16 - 24 seconds of charge. This has happened to every laptop I have ever owned, and I just got a new one, so please help me sort out fact from fiction. Here are some of the things I've heard: Do not keep your laptop fully charged. You must run it completely down every so often. Do not use your laptop plugged in to the wall. Only plug it in when it needs charge. If you will not be using your laptop for a long period of time, don't leave it at full charge. Do not leave your laptop running 24/7. The first two I know to be complete fiction: this was true of old batteries such as you might have had in an iPod in 2003, but modern batteries function better when kept at or near full charge. Devices even have circuitry to prevent you from completely depleting your battery, as this is dangerous. The third point sounds probable, and I'd be interested to know if it was true. However, it doesn't really apply to me because I'm not really the type to leave my laptop alone for a day, much less a "long period of time" The fourth seems most likely of the above, but only because of causality: I have always done this, and my batteries have always crapped out on me. I've generally treated a laptop like a desktop with a battery backup, and that I can move from one room to another if necessary. The fact that my batteries tend to last less than 30 seconds has further entrenched this behavior. Should I be trying to break this habit? Are there any other things that ruin laptop batteries? I love that I can actually use my new laptop unplugged :) I'd like to keep it that way. Update: Additional question: If the computer will be used for an extended period of time plugged in, does it make sense to remove the battery first? Update 2: I know people with laptops older than mine, who actively use their laptops as much as I do, and their batteries still hold about an hours' charge while mine holds less than 30 seconds, hence my belief that something I'm doing kills them.

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service the needs of my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job oriented German efficiency sort of way. The issue is a simple one - all my drives are SATA drives and my motherboard controller only contains 4 ports. Solving the issue has proven to be an unmitigated nightmare. I would like advice on the purchase of the following: 4 Port internal SATA / 2 Port external eSATA PCI SATA Controller Card that has the following features and/or advantages: It must function. If I plug it in and attach drives, I expect my system to still make it to the Operating System login screen. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle. If hassle is unavoidable, there shall be CLEAR CUT and EASY TO FOLLOW instructions on how to install drivers and other supporting software. I do not need nor want fakeRAID - I will be setting up any RAID configurations from within the operating system. Now, if I am able to find such a mythical device, I would be eternally grateful to whomever would be able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go) and so paying anywhere between 60 to 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for, however, I have no way of verifying that once I plug this baby in that my system will still continue to function once I've attached the drives, or that once I've made it to the OS, I will be able to install whatever drivers or software programs I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about? PCI SATA Controller Cards? It would help if you've had experience with the component before - that is after all why I am asking here - for those who have had experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.

    Read the article

  • pam_unix(sshd:session) session opened for user NOT ROOT by (uid=0), then closes immediately when using TortoiseSVN

    - by codewaggle
    I'm having problems accessing an SVN repository using TortoiseSVN 1.7.8. The SVN repository is on a CentOS 6.3 box and appears to be functioning correctly:
        # svnadmin --version
        # svnadmin, version 1.6.11 (r934486)
    I can access the repository from another CentOS box with this command:
        svn list svn+ssh://[email protected]/var/svn/joetest
    But when I attempt to browse the repository using TortoiseSVN from a Win 7 workstation, I'm unable to do so using the following path:
        svn+ssh://[email protected]/var/svn/joetest
    I'm able to log in via SSH from the workstation using PuTTY. The results are the same if I attempt access as root. I've given ownership of the repository to USER:USER and ran chmod 2700 -R /var/svn/. Because I can access the repository via ssh from another Linux box, permissions don't appear to be the problem. When I watch the log file using tail -fn 2000 /var/log/secure, I see the following each time TortoiseSVN asks for the password:
        Sep 26 17:34:31 dev sshd[30361]: Accepted password for USER from xx.xxx.xx.xxx port 59101 ssh2
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session opened for user USER by (uid=0)
        Sep 26 17:34:31 dev sshd[30361]: pam_unix(sshd:session): session closed for user USER
    I'm actually able to log in, but the session is then closed immediately. It caught my eye that the session is being opened for USER by root (uid=0), which may be correct, but I'll mention it in case it has something to do with the problem. I looked into modifying svnserve.conf, but as far as I can tell it's not used when accessing the repository via svn+ssh; a private svnserve instance is created for each login via this method. From the manual:
        There's still a third way to invoke svnserve, and that's in “tunnel mode”, with the -t option. This mode assumes that a remote-service program such as RSH or SSH has successfully authenticated a user and is now invoking a private svnserve process as that user. The svnserve program behaves normally (communicating via stdin and stdout), and assumes that the traffic is being automatically redirected over some sort of tunnel back to the client. When svnserve is invoked by a tunnel agent like this, be sure that the authenticated user has full read and write access to the repository database files. (See Servers and Permissions: A Word of Warning.)
    It's essentially the same as a local user accessing the repository via file:/// URLs. The only non-default settings in sshd_config are:
        Protocol 2 # to disable Protocol 1
        SyslogFacility AUTHPRIV
        ChallengeResponseAuthentication no
        GSSAPIAuthentication yes
        GSSAPICleanupCredentials yes
        UsePAM yes
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
        AcceptEnv XMODIFIERS
        X11Forwarding no
        Subsystem sftp /usr/libexec/openssh/sftp-server
    Any thoughts?
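
    A quick check that often explains exactly this pattern (password accepted, session opened, then closed): over svn+ssh the client runs "svnserve -t" on the server, and if that binary isn't on the PATH of a non-interactive SSH session the connection drops right after login. A sketch of testing it from any machine with an OpenSSH client (TortoiseSVN itself uses TortoisePlink, but the server-side behaviour is the same); the symlink path is an assumption:

        # should print a version banner rather than "command not found"
        ssh USER@xx.xxx.xx.xxx "which svnserve && svnserve --version"
        # if svnserve lives outside the non-interactive PATH, a symlink is one common fix
        ssh root@xx.xxx.xx.xxx "ln -s /usr/bin/svnserve /usr/local/bin/svnserve"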

    Read the article

  • Dangers of Running Computers w/o Air Conditioning

    - by Daniel Bingham
    I recently moved into an apartment without air conditioning. This is fine most of the time, as I am in upstate New York. It only ever gets above the high 70s during the hottest of the summer months, and when it does, I'm stubborn enough that I'll just deal with wearing minimal clothing around the house. However, I'm worried about my computers. I'm a software developer and gamer, so many of my machines are very high-powered, and at least one of them is a server that must be left on 24/7 (not just a game server - it also serves multiple websites). I've never before had to worry about the heat too much, as I always lived in buildings with central air; the in-building temperature rarely got much above 70 F. All of the machines I built had good enough air cooling that I never saw a problem. Now the temperature in the building is pushing 100F and I'm worried that the machines will not be able to keep themselves cool enough by simply blowing already hot air over themselves. The hottest of them I've turned off. However, the server I cannot. It's an old Dell (not a custom build) that runs on a Pentium 4 (2.2GHz). It only has a single hard drive and integrated video, and it's not running any processor-intensive servers - just basic LAMP. It used to run a MUD server, but that's off for now. So it should be idling most of the time. I haven't been able to find any sort of built-in temperature sensors in the hardware... at least not any that the programs I've found in the Debian repository can read. And it's an inherited machine for which I do not have the full specs, so I don't know the tolerances anyway. How worried should I be about it melting down on me? How worried should I be about the hard drive melting or becoming corrupted? To generalize the question for other people: what are the safe temperature tolerances for most machines? How widely do they vary, and how does one go about determining when a machine is running too hot and needs to be shut down?
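
    For the sensor-reading part, a sketch of the usual Debian tooling - whether it finds anything depends on the chipset, and an old Pentium 4-era Dell may simply not expose sensors lm-sensors can read, but the drive temperature via SMART almost always works (device name is an assumption):

        # motherboard/CPU sensors (sensors-detect is interactive and needs root)
        apt-get install lm-sensors hddtemp smartmontools
        sensors-detect
        sensors
        # hard drive temperature, either tool
        hddtemp /dev/sda
        smartctl -A /dev/sda | grep -i temperature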

    Read the article
