Search Results

Search found 15129 results on 606 pages for 'orientation changes'.


  • VirtualHost on WAMPSERVER not working

    - by Martin C
    I currently have WAMPSERVER 2.2 set up on my PC. I'm trying to set up a new host called pplocal.local. I made the changes in httpd.conf to uncomment this:

        Include conf/extra/httpd-vhosts.conf

    Then I edited httpd-vhosts.conf and added the following:

        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/"
            ServerName localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1>
            DocumentRoot "E:/wamp2/www/pp/"
            ServerName pplocal.local
            <Directory "E:/wamp2/www/pp/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
            CustomLog "E:\wamp2\logs\pplocal-access.log" common
            ErrorLog "E:\wamp2\logs\pplocal-error.log"

    In my Windows 'hosts' file I added:

        127.0.0.1 localhost
        127.0.0.1 pplocal.local

    Then I restarted Apache. If I type localhost in my browser I get the files at E:/wamp2/www/. If I type pplocal.local in my browser I get the files at E:/wamp2/www/ instead of those at E:/wamp2/www/pp/. I have followed several tutorials and can't see what I'm doing wrong. I'm new to editing the files associated with Apache, so any advice is appreciated. Thanks
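
    For comparison, a minimal sketch of a working name-based vhost pair for Apache 2.2 on WAMP, assuming both sites listen on port 80 (not the poster's verified fix): note the explicit :80 on NameVirtualHost and on both <VirtualHost> openings, and the closing </VirtualHost> tag that the pasted config appears to be missing.

        # httpd-vhosts.conf (minimal sketch, values assumed from the post)
        NameVirtualHost 127.0.0.1:80

        <VirtualHost 127.0.0.1:80>
            ServerName localhost
            DocumentRoot "E:/wamp2/www/"
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            ServerName pplocal.local
            DocumentRoot "E:/wamp2/www/pp/"
            <Directory "E:/wamp2/www/pp/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
        </VirtualHost>

    If NameVirtualHost and the <VirtualHost> openings don't name the same address:port pair, Apache falls back to the first vhost for every request, which matches the symptom described.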

    Read the article

  • port forwarding with socks over proxy

    - by Oz123
    I am trying to browse a wiki that runs on a server inside one domain from another domain. The wiki is accessible only on the LAN, but I need to browse it from another LAN, to which I connect with an SSH tunnel. Here is my setup and the steps I did so far. ~/.ssh/config on wikihost:

        Host gateway
            User kisteuser
            Port 443
            Hostname gateway.companydomain.com
            ProxyCommand /home/myuser/bin/ssh-https-tunnel %h %p
            # ssh-https-tunnel: http://ttcplinux.sourceforge.net/tools/stunnel
            Protocol 2
            IdentityFile ~/.ssh/key_dsa
            LocalForward 11069 localhost:11069

        Host server1
            User kisteuser
            Hostname localhost
            Port 11069
            LocalForward 8022 server1:22
            LocalForward 17001 server1:7100
            LocalForward 8080 www-proxy:3128
            RemoteForward 11069 localhost:22

    From wikihost:

        myuser@wikihost: ssh -XC -t gateway.companydomain.com ssh -L11069:localhost:22 server1

    and on another terminal:

        ssh gateway.companydomain.com

    Now, on my companydomain I would like to start Firefox and browse the wiki on wikihost. I did:

        [email protected] ~ $ ssh gateway
        Have a lot of fun...
        kisteuser@gateway ~ $ ssh -D 8383 localhost
        user@localhost's password:
        user@wikiserver:~>

    My .ssh/config on that side looks like this:

        host server1
            localforward 11069 localhost:11069

        host localhost
            user myuser
            port 11069

        host wikiserver
            forwardagent yes
            user myuser
            port 11069
            hostname localhost

    Now, I started Firefox on the server called gateway, and edited the proxy settings to use SOCKSv5, specifying that the proxy should be gateway and use the port 8383:

        kisteuser@gateway ~ $ LANG=C firefox -P --no-remote

    And now I get the following error popping up in the terminal of wikiserver:

        myuser@wikiserver:~> channel 3: open failed: connect failed: Connection refused
        channel 3: open failed: connect failed: Connection refused
        channel 3: open failed: connect failed: Connection refused

    Confused? Me too. Please help me understand how to properly build the tunnels and browse the wiki over the SOCKS protocol. Update: I managed to browse the wiki on wikiserver with the following changes:

        host wikiserver
            forwardagent yes
            user myuser
            port 11069
            hostname localhost
            localforward 8339 localhost:8443

    Now when I ssh gateway I launch Firefox, go to localhost:8339, and hit the start page of the wiki, which is served on port 8443. Now I ask myself: is SOCKS really needed? Can someone elaborate on that?
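
    On the closing question: a SOCKS proxy is only worth the trouble if the browser should reach arbitrary hosts on the remote LAN through a single tunnel; for one known host:port, the plain -L forward from the update is enough. A minimal sketch of the dynamic variant, with the host alias assumed from the post:

        # sketch: one dynamic forward instead of one -L per destination
        ssh -D 1080 -N wikiserver    # local SOCKSv5 proxy on port 1080

    Firefox would then be pointed at SOCKS host localhost, port 1080, SOCKS v5; with the "Proxy DNS when using SOCKS v5" option enabled, name resolution also happens on wikiserver's side of the tunnel.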

    Read the article

  • Word 2007 won't run, tries to reinstall, fails with error 1402.

    - by eidylon
    Okay, this problem has been plaguing this computer for a while now. We tried googling, and none of the answers found helped to solve the problem, so I am now posting the answer here for posterity. Office 2007 Home/Student edition was installed on the computer, running Vista (32-bit). One day, Word just up and stopped working. All the other programs continued to operate as expected, but every time you clicked the icon for Word, it would pop up an install dialog with a message reading "Preparing to install...". After a few minutes of the little progress bar going and going, it errors out and gives error 1402, something to the effect of "unable to access registry key HKEY_Local_Machine\Software\Classes\.wll\...". Searching around, every answer I found had to do with reassigning the permissions on this key, giving full rights to SYSTEM or to Everyone, and propagating the changes down to all sub-keys. Whenever this was attempted, though, it would tell us that we were unable to access the key due to permissions, even though we had run regedit as Administrator and were logged on with an administrative account. We also tried uninstalling Office and reinstalling it, as well as doing a repair install. Both of these attempts also threw the same 1402 error. Also of note: the executable for Word (winword.exe) was MIA and no longer to be found in the Office install directory.
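
    When regedit itself is denied, a commonly suggested route is Microsoft's SubInACL tool, which can take ownership of a key and re-grant rights from an elevated command prompt. A sketch only, assuming SubInACL is installed to its default Resource Kit path; the exact key comes from the 1402 error message:

        :: sketch: take ownership of the key named in the 1402 error, then grant full rights
        cd /d "%ProgramFiles%\Windows Resource Kits\Tools"
        subinacl /subkeyreg "HKEY_LOCAL_MACHINE\Software\Classes\.wll" /setowner=Administrators
        subinacl /subkeyreg "HKEY_LOCAL_MACHINE\Software\Classes\.wll" /grant=Administrators=f /grant=SYSTEM=f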

    Read the article

  • Unable to record using JMeter [help me, very urgent]

    - by krish
    Hi, I am trying to record an HTTP web page using JMeter version 2.3.3. I have set up the JMeter proxy and tried, but it didn't work. I followed the steps below:

        1. Launch JMeter 2.3.3 and add a Thread Group to the Test Plan.
        2. Under WorkBench > Add > Non-Test Elements, add an HTTP Proxy Server. Proxy server settings are port: 9090, target: use Recording Controller, grouping: do not group samplers, type: HTTP Request, with all boxes checked under the HTTP sampler settings.
        3. Save the settings.
        4. In the browser (IE 7.0 or Firefox 3.0.16), under connection settings, set a manual proxy of localhost with port 9090 (no auto-detect settings, only the manual proxy). Settings saved.
        5. In JMeter, start the HTTP proxy server.
        6. Open a browser and hit the web page that needs to be tested.

    The page does not open. In fact, because of the changes made in the browsers, no pages open at all. Whenever I try hitting a page, the request is recorded in JMeter, but without the page opening, how can I test? I am looking for an immediate answer, as my work is blocked. An immediate answer would be appreciated.
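
    A quick way to separate a browser problem from a proxy problem is to send one request through the JMeter proxy from the command line; if this returns the page while the browser does not, the issue is in the browser settings rather than JMeter. A sketch, assuming curl is available and the proxy is listening on 9090:

        # sketch: request a page via the JMeter recording proxy
        curl -v -x http://localhost:9090/ http://example.com/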

    Read the article

  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try to achieve local development parity with our live servers. I've symlinked /var/www/html with the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing using Sublime Text 2 on OS X Mountain Lion. Once I figured out that iptables was tripping me up, all was well and good, until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X. But when I make an edit, for example removing a paragraph, and refresh, the content is truncated, with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made, but these aren't even reflected when I curl localhost. I strongly suspect that it's a problem with my Apache settings (with which I didn't really tinker), as the issue doesn't arise when I stop Apache and run sudo python -m SimpleHTTPServer 80 in the directory to view pages instead. What gives?
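
    This stale/truncated-file symptom is frequently reported when Apache serves files out of VirtualBox shared folders, where the kernel sendfile() path can return stale cached pages. A commonly cited mitigation (a sketch, not confirmed as this poster's fix) is to turn sendfile off in the Apache config and restart:

        # /etc/httpd/conf/httpd.conf on CentOS (exact path is an assumption)
        EnableSendfile Off

    Nginx has an equivalent "sendfile off;" directive for the same shared-folder situation.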

    Read the article

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, Nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, updated to WordPress 3.2.1 recently. I have made no changes to the server (except updating WordPress), and while it was running fine, a couple of days ago I started having downtime. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to timeouts. The server is currently logging this kind of error:

        2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/"
        2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain"

    I even saw a previous Server Fault discussion with a possible solution: edit /etc/php/etc/php-fpm.conf and set request_terminate_timeout=30s instead of ;request_terminate_timeout=0. The server worked for some hours, and then broke again. I edited the file again to leave it as it was, and restarted php-fpm again (service php-fpm restart), but no luck: the server worked for a few minutes and went back to the problem over and over. The strange thing is, although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin: the php-fpm.conf file is here, the /etc/nginx/nginx.conf is here, and the /etc/nginx/sites-available/www.mydomain.com is here. Please help :(
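
    Upstream timeouts with near-zero CPU usually mean every FPM worker is blocked waiting on something (often MySQL or a remote HTTP call inside WordPress) rather than computing. A sketch of the pool and nginx knobs involved; the values are illustrative assumptions, not the poster's config:

        ; php-fpm pool (illustrative values)
        pm = dynamic
        pm.max_children = 8              ; cap workers so a stall is visible, not fatal
        request_terminate_timeout = 30s  ; kill requests stuck longer than this
        slowlog = /var/log/php-fpm.slow.log
        request_slowlog_timeout = 10s    ; dump a backtrace of requests slower than this

        # nginx, in the fastcgi location block (illustrative)
        fastcgi_read_timeout 60;

    The slowlog backtraces are usually the fastest way to see what the workers are actually waiting on.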

    Read the article

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with log shipping on SQL Server 2005, and I am having trouble correcting it, so I will ask for some help from SF's experts. I have a Windows 2003 server (PROD) that ships transaction log backups to two other servers:

        STAND1: Windows 2003 server with SQL Server 2005.
        STAND2: Windows 2008 R2 server with SQL Server 2005.

    The problem is that log shipping to STAND2 breaks for ~90 minutes at certain times of the day and recovers without intervention. The breaks occur at times when the backup file is larger (after reindexing, etc.). I can see the message below logged on the COPY job:

        *** Error: The specified network name is no longer available

    The copy agent was breaking dozens of times a day, only to the STAND2 server, and after the changes below it "only" breaks about twice a day:

        The frequency of the backup job was changed from 5 minutes to 10 minutes.
        Instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database.
        The backup job no longer runs 24 hours a day, only for the 14 hours a day when people are working on the database.
        I configured the SQL Server instances on the three servers to limit their memory, leaving more memory to the OS.

    Now I don't know what to do. Any help will be much appreciated! Thanks!
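
    When chasing intermittent copy failures like this, the log shipping monitor tables in msdb record each error with a timestamp, which makes it easier to correlate the breaks with backup-size or network events. A sketch of the query, assuming it is run on the secondary (the table exists on SQL Server 2005 and later):

        -- sketch: most recent log shipping errors, newest first
        SELECT TOP (50) log_time, agent_type, message
        FROM msdb.dbo.log_shipping_monitor_error_detail
        ORDER BY log_time DESC;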

    Read the article

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and set up as the primary domain controller on my network, with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share with my Mac over Samba as user 'C', which isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it's something to do with file permissions, but being a bit green, I'm without a clue. My /etc/samba/smb.conf:

        # Global parameters
        [global]
            server role = domain controller
            server string = Office Server
            workgroup = LLDOMAIN
            realm = lldomain.local
            netbios name = DUMBO
            passdb backend = samba4
            logon path = \\%L\profiles\%U
            logon drive = L:
            log file = /var/log/samba/%m.log
            max log size = 50
            security = ads
            domain logons = yes
            domain master = auto
            usershare allow guests = no
            valid users = %S

        [netlogon]
            path = /var/lib/samba/sysvol/lldomain.local/scripts
            read only = no
            guest ok = no

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
            path = /data
            comment = Entire Data Volume
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager
            admin users = @LLDOMAIN\LLGrpManager
            browsable = no
            inherit acls = yes
            inherit permissions = yes
        ...

    I've also instructed the system to use the nss winbind library when searching for users or groups, by adding the passwd and group stanzas to /etc/nsswitch.conf:

        passwd: compat winbind
        group: compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
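
    One thing worth noting: drwxrwxrwt on /data is world-readable and world-writable (with the sticky bit), so even where Samba lets user C through, the filesystem would too. A sketch of tightening the directory to the management group, assuming winbind resolves the AD group name on this box:

        # sketch: restrict /data to the AD management group at the filesystem level
        chgrp "LLDOMAIN\\LLGrpManager" /data
        chmod 2770 /data    # rwx for owner/group only; setgid so new files keep the group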

    Read the article

  • Server 2003 Remote Desktop loses its virtual printer image of the local printer

    - by Charles Hart
    Server 2003 Remote Desktop provides service to stores served by several ISPs. The server loses its virtual printer image of the local printer (as seen from the remote store site), and a copy of the original local printer appears on the local computer with a different driver, without notice. Specifically: a remote desktop session is opened on a local computer that has a Brother HL2140 USB printer connected and the associated software installed, with the correct driver shown under the "advanced" button. The server has the same Brother software and driver. An application running on the server attempts to print on the local printer connected to the local computer running Vista Pro or XP Pro. Either it works correctly (good), or it does not print (bad), or it prints on another local printer connected to another local computer logged into the server (bad and odd). When it doesn't print (or prints somewhere else), we ask the customer to look for the (virtual) printer using the Remote Desktop view of the server, and the printer is gone. Then we ask the customer to look at the Printers folder on the local computer. There are several possibilities:

        The printer is there, but the driver has mysteriously changed in the drop-down to "MDX something"; we have the customer select the other (proper) Brother driver, and all is well again. After the change, the virtual printer on the server (which now matches the local printer) appears again, and printing can resume.
        A "copy" of the printer mysteriously appears in the local Printers folder, and after we delete it, the virtual printer on the server appears again and printing can resume.

    Note that in both cases 1 and 2, the server sometimes sends the print job elsewhere, to some other local computer. Meanwhile, endless errors are reported in the log file, and the server eventually crashes, sometimes twice a day. I'm puzzled what changes the local printer driver, and I'm puzzled what loads copy 2 or copy 3 of the printer in the local Printers folder. This entire scenario occurs randomly on any of 40+ local computers in eight different locations on different ISPs, all sharing one domain.

    Read the article

  • DNS Replication on Server 2008 R2

    - by Aaron
    Hi there, I have been trying out public-only-facing DNS servers with Server 2008 R2 Web; I want to set up at least two in a master/slave replication. Using Microsoft DNS, I am able to add the domains into the primary zone on the master DNS server (ns1), add the records OK, and have them visible publicly. On ns2 I can then add the same domain as a secondary zone and get them to replicate / zone transfer fine. Is there a way inside of Windows to have the slave(s) automatically synchronise all the changes from the master? It's fine that I have manually added the domains onto each of the NSes, but if I add a new zone on the master, I have to add it on the slave before it replicates. I installed Simple DNS, and it has a 'Super Master/Slave' feature which takes care of exactly this: if you add a new domain into the primary zone, it is automatically created and kept in sync on ns2, but I would have to buy a licence. All this is non-Active Directory, if that helps. Can anyone advise if it is possible to do this using Microsoft DNS? Many thanks in advance!
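
    Microsoft DNS has no built-in equivalent of that "superslave" behaviour for standalone (non-AD) zones, but secondary zone creation can be scripted with dnscmd, so a scheduled task could keep the slave's zone list in step with the master. A sketch, with server names and zone as placeholders:

        :: sketch: create a secondary zone on ns2 pulling from the master's IP
        dnscmd ns2 /ZoneAdd example.com /Secondary 192.0.2.1
        :: list zones on both servers to diff their zone lists
        dnscmd ns1 /EnumZones
        dnscmd ns2 /EnumZones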

    Read the article

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1 TB of SQL Server databases, and these databases generate about 200 GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day. We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite using our internet connection. I have been trying to get this to work for our SQL Server backups, and have run into a number of problems. I normally like to just use SQL Server to perform backups instead of trying to use an agent, so that is what I tried first. However, the Barracuda device was not able to dedupe these files very well, so it ended up being too much data to try to send offsite and to archive. I then tried installing the Barracuda agent and using it to back up the SQL Server databases. However, the problem I am having there is that on some of the database servers I also have files that need backing up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want to do hourly transaction log backups, I end up doing a file system backup every hour (which is not good), or if I only schedule the backups to run once a night, I either have to do a full backup every night or only do a transaction backup once a day. None of these scenarios are good options. My question is: how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If you do, how do you have it set up?

    Read the article

  • Creating Routes using the second NIC in the box

    - by Aditya Sehgal
    OS: Linux. I need some advice on how to set up the routing table. I have a box with two physical NIC cards, eth0 and eth1, with two associated IPs, IP1 and IP2 (both on the same subnet). I need to set up a route which will force all messages from IP1 towards IP3 (on the same subnet) to go via IP2. I have a raw-socket capture program listening on IP2 (this is not for malicious use). I have set up the routing table as:

        Destination  Gateway  Genmask          Flags  Metric  Ref  Use  Iface
        IP3          IP2      255.255.255.255  UGH    0       0    0    eth1

    If I try to specify eth0 while adding the above rule, I get the error "SIOCADDRT: Network is unreachable". I understand from the manpage of route that if the gateway specified is a local interface, it will be used as the outgoing interface. After setting up this rule, if I do a traceroute (-i eth0), the packet goes first to the default gateway and then to IP3. How do I force a packet originating from eth0 towards IP3 to first go to IP2? I cannot make changes to the routing table of the gateway. Please suggest.
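
    Plain destination routes cannot select a path by source address; that needs policy routing (ip rule) with a separate table. A sketch with placeholder addresses, all hypothetical: 192.0.2.1 for IP1, 192.0.2.2 for IP2, 192.0.2.3 for IP3.

        # sketch: send traffic sourced from IP1 and destined to IP3 out via eth1/IP2
        ip route add 192.0.2.3/32 dev eth1 src 192.0.2.2 table 100
        ip rule add from 192.0.2.1 to 192.0.2.3 lookup 100
        ip route flush cache

    Note that two NICs on one subnet often also need arp_filter/rp_filter sysctl adjustments to behave predictably.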

    Read the article

  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a WordPress install on a Plesk box. I have SSH root access, and FTP access through a separate account. What I've done so far: initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured I'd change them to allow group access and add myself to the group:

        chmod recursive on the theme folder to set everything to 664
        Quickly realised I'd broken it; set the folders to 755, kept files as 664
        Ownership on all files is a mixture of root:root and 500:500 (there is no user nor group with the ID of 500 on the server)
        Added myself to the group 'root' so I could modify the files too

    The problem: this worked OK in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run chown -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in. Questions:

        Files in the install are owned by 500 with group 500, and I've looked in /etc/group and /etc/passwd and there is no user nor group with this ID. Is that left over from another developer's setup or the previous server (they moved recently)?
        Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files? Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!
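
    Two hedged notes: group membership alone doesn't help with uploads unless the directories are group-writable, and FTP writes happen as the FTP user, not as root. On Plesk boxes, web content is conventionally owned by the subscription's system user with group psacln (an assumption about this particular box). A sketch:

        # sketch: let the FTP account own the theme, keep group access for the web server
        chown -R myftpuser:psacln templatefolder/
        find templatefolder/ -type d -exec chmod 775 {} +   # directories need the group w+x bits
        find templatefolder/ -type f -exec chmod 664 {} +
        id myftpuser    # confirm which groups the FTP user actually has

    As for the first question: an owner shown as a bare 500 is just an orphaned numeric UID, which is consistent with files copied from the old server where that user existed.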

    Read the article

  • Completely remove and freshly install MySQL on XP?

    - by Corey Ogburn
    I have read this question and have not found it to be a solution, and I have attempted much more. I've uninstalled MySQL 5.5.18 and deleted:

        C:\Program Files\MySql
        C:\Documents and Settings\All Users\Application Data\MySql

    After uninstalling, I restart the computer. When I reinstall, in the MySQL Server Instance Configuration Wizard I leave everything at the defaults except:

        I add a firewall exception
        I check Launch MySQL Server Automatically
        I check Include BIN directory in Windows path
        Enable root access from remote machines (I'll lock that down later, just debugging for now; I have also tried installing without this option, to no avail)

    I've tried Typical and Complete while installing, as well as with and without strict mode. No combination shows a difference. After all this, it cannot apply security settings, and I get a 10061 error (it also said error number 2003), and this article didn't help. I've tried everything I can to completely uninstall and successfully reinstall so I can start from scratch. I've uninstalled and reinstalled about a dozen times with minor changes (including turning off the firewall at times), each time deleting the above folders and any relevant registry entries, with no success. Note: by success I mean applying security settings and a working remote connection. I can connect locally every time, but it's remote connections that count. I have tried to look for exterior problems, such as port forwarding in the router, and (even though the installer should add it) I double-check the firewall settings, which have always allowed the default port. I'm out of ideas.
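
    One leftover that the folder deletions don't touch is the Windows service registration; a half-removed MySQL service can keep a new install from registering and starting correctly. A sketch of checking for and removing it from a command prompt, assuming the service name is the default "MySQL":

        :: sketch: check for and remove a leftover MySQL service before reinstalling
        sc query MySQL
        net stop MySQL
        sc delete MySQL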

    Read the article

  • Dell XPS 15 L502X hard drive Partition

    - by Mohan Gajula
    I have a situation here. I got my new Dell XPS 15 laptop. The configuration of the hard drive is as below:

        Volume 1 (OEM partition): 133 MB
        Volume 2 (OS, C:): 685.25 GB
        Volume 3 (Recovery): 13.25 GB

    Now, I am trying to re-partition my C: drive into a C: drive with 100 GB and a new drive with 585 GB. Earlier, I tried using Windows 7 Disk Management to shrink and extend the volume. That led to the OS and hard drive not working. Dell tech support tried to fix the issue, but they were not able to fix it online. Later a Dell technician arrived at my place and replaced the hard drive with a new one. Please help me re-partition the C: drive with 100 GB and a new D: drive with 585 GB. I don't want to lose my recovery partition. SOLUTION: As suggested by KCotreau below, I have done exactly that. I resized the C: drive to 100 GB and then applied the changes. Windows restarted, and the partitioning took place on the boot screen; it took around 30 minutes (approx.). After the restart, I can see my C: drive is 100 GB. I then opened EASEUS again and created a new partition from the free space (585 GB); this took 10 seconds. Here is the screenshot after partitioning. Thanks to KCotreau. You are amazing.

    Read the article

  • Recommendation for robust, customizable, open source, Java servlet-based forum software?

    - by Erik Hermansen
    There is a lot of forum software out there, but it seems to me that many of the popular choices are PHP-based, and for my project I'd like something based on Java servlets so my team can make customizations to it. Another important feature is that I can completely change the pages to hide unwanted elements without too much work, so I'm looking either for a template system or for easily editable scripts (i.e. JSPs) with a clean view separation. Just having skin changes or CSS customization is not enough. I understand that with open source I can change anything I want, but my point is that it should be easy and not require mastery of a complex code base. Finally, I want something that has been around for at least a year and has been deployed on some high-traffic sites. Clustering support (one database, multiple web servers) is highly desirable. Up-time is crucial, since I have an SLA to support. What do you think?

    Read the article

  • Enabling NAT forwarding using a second WAN interface and a second gateway on ubuntu

    - by nixnotwin
    I have 3 interfaces:

        eth0 192.168.0.50/24
        eth1 10.0.0.200/24
        eth2 225.228.123.211

    The default gateway is 192.168.0.1, which I want to keep as it is in the changes I want to make. I want to masquerade eth1 (10.0.0.200/24) and enable NAT forwarding to eth2. So I have done this:

        ip route add 225.228.123.208/29 dev eth2 src 225.228.123.211 table t1
        ip route add default via 225.228.123.209 dev eth2 table t1
        ip rule add from 225.228.123.211 table t1
        ip rule add to 225.228.123.211 table t1

    Now I can receive ping replies from any internet host if I do:

        ping -I eth2 8.8.8.8

    To enable NAT forwarding I did this:

        sudo iptables -A FORWARD -o eth2 -i eth1 -s 10.0.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
        sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        sudo iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

    But it isn't working. To test, I used a client PC on the 10.0.0.0/24 network with its gateway set to 10.0.0.200. I want to keep 192.168.0.1 as the default gateway, and the traffic that comes in via eth1 (10.0.0.200/24) should be forwarded out via eth2 (225.228.123.211). I have also enabled forwarding on Ubuntu.
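
    Two things stand out, offered as a hedged sketch rather than a confirmed fix: the MASQUERADE rule targets eth1, but NAT has to be applied on the egress interface (eth2 here), and the ip rules only match the public address, so forwarded packets from 10.0.0.0/24 never consult table t1 (the routing decision happens before POSTROUTING rewrites the source).

        # sketch: masquerade on the egress interface and route LAN sources via table t1
        sudo iptables -t nat -D POSTROUTING -o eth1 -j MASQUERADE
        sudo iptables -t nat -A POSTROUTING -o eth2 -s 10.0.0.0/24 -j MASQUERADE
        sudo ip rule add from 10.0.0.0/24 table t1
        sudo ip route flush cache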

    Read the article

  • Connecting to my home router web interface from work

    - by Joe
    Hi, I'm trying to connect to my home router's web interface from work. I use DynDNS, because I don't have a static IP at home, and it works perfectly from any other place except my workplace (update: I made a mistake, see the edit below). When trying to access the web interface from work, I get a "500 Server Error" with the code SERVER_RESPONSE_RESET. I'm not trying to use any protocols such as remote desktop; I'm only trying to access the web interface. I can access any other web page from my workplace with no problems, and I think my router's web interface is like any other web page, isn't it? I thought maybe my workplace proxy blocks addresses of services like DynDNS, so I also tried another trick. Since I have a web page on my own domain (say www.mydomain.com) which I can access from work, I tried adding a CNAME to my domain which is linked to the DynDNS address (router.mydomain.com). This way, if anyone enters the address router.mydomain.com from anywhere, they reach my home router's web interface, and there's no way of knowing it's a DynDNS address (or is there?). However, it still doesn't work from my workplace (I get the same error message). Any ideas? Edit: I'm sorry to say I made a mistake earlier. I used to be able to access my home router's web interface from my old workplace, and I thought it was still possible, since I don't recall making any configuration changes. However, after reading the replies, I went over to my old workplace and checked, and it doesn't work from there either. I'm very sorry for giving out wrong and misleading information about my problem. So to summarize: my problem is that I can't access my home router's web interface from anywhere.

    Read the article

  • Which version control should I use for my configuration files?

    - by rakete
    I want to store some of my configuration files (~/.emacs.d/, .Xdefaults, and other Linux $HOME stuff) in version control so I can easily sync them between my notebook and workplace, see my past changes, and revert to them should the need arise. So far it seems to me that there are quite a few people using git for this, and I think that I too want to use a distributed VCS for this (if only to get more used to them), but I can't say that I am very experienced with all things DVCS. I did use darcs and git briefly, and so far I can say that I really like the way git handles branches; I think the possibility of having different branches within the same directory is especially useful for my use case. Darcs, on the other hand, has cherry-picking of patches, which is also quite a convenient feature when managing configuration files (at least I assume it is). So, what would you recommend? And what would be your reasoning for your recommendation? What other VCSs with nice features that I haven't mentioned exist and would make a good choice for storing configuration files, and why?
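
    For reference, one widely used git-based pattern keeps $HOME itself as the work tree of a bare repository, so no symlinking into a separate checkout is needed. A sketch; the .dotfiles path and the alias name are arbitrary choices, not a standard:

        # sketch: bare repo with $HOME as the work tree
        git init --bare "$HOME/.dotfiles"
        alias config='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
        config config status.showUntrackedFiles no   # don't list all of $HOME as untracked
        config add ~/.Xdefaults ~/.emacs.d/init.el
        config commit -m "track initial config files"

    Cloning the same bare repo on the notebook and checking it out into $HOME there gives the sync workflow described, with ordinary git branches and history.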

    Read the article

  • When connecting to PPTP Centos via Windows 7 VPN, I get error 2147943625

    - by Charlie Dyason
    The phrase "The remote computer refused the network connection" has been my arch enemy for the past week now. I recently "bought" a VPS server. I gave up trying to configure it with OpenVPN (all the issues were making me lose my mind), so I tried the easier way with PPTP, but I figure both are leading to a dead end. I followed this post (many others too, but this is the unlucky one): http://blog.secaserver.com/2011/10/install-vpn-pptp-server-centos-6/ and it all goes well with the setup. However, I run into this error when connecting to the VPN in Windows 7 (here is a pic of the error: Image). So I do not know what I have done wrong. Before connecting:

        netstat -apn | grep -w 1723
        tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd

    and after the error came, I tried again:

        netstat -apn | grep -w 1723
        tcp 0 0 0.0.0.0:1723 0.0.0.0:* LISTEN 1137/pptpd
        tcp 0 0 41.185.26.238:1723 41.13.212.47:49607 TIME_WAIT -

    iptables:

        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [63:8868]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i eth0 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A INPUT -i eth0 -p gre -j ACCEPT
        -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        -A FORWARD -i eth0 -o ppp+ -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013
        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *nat
        :PREROUTING ACCEPT [96:12732]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [31:2179]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013

    options.pptpd (the only change was the require-mppe):

        # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
        # {{{
        refuse-pap
        refuse-chap
        refuse-mschap
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        require-mschap-v2
        require-mppe
        # Require MPPE 128-bit encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        require-mppe-128
        # }}}

    I checked the iptables, and everything is normal: all the INPUT ACCEPT rules come before the REJECTs. I also checked the username and password in the chap-secrets file. I am really puzzled...
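
    Since TCP 1723 clearly reaches pptpd (the TIME_WAIT entry shows the control handshake happened), the usual next suspect is the GRE leg of PPTP being dropped somewhere between client and server. A hedged diagnostic sketch, run on the server while a connection is attempted (GRE is IP protocol 47):

        # sketch: watch the PPTP control channel and GRE during a connection attempt
        tcpdump -ni eth0 'tcp port 1723 or ip proto 47'

    If the control connection appears but no protocol-47 packets ever do, GRE is being filtered upstream (some NATs and hosting providers drop it), which would produce exactly this Windows error.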

    Read the article

  • .NET 2.0 Application now running slow on IIS 7.5

    - by Valien
    I recently moved (and am still testing) an application from a Windows 2003 Server (physical box) running IIS 6.x to a Windows 2008 R2 Standard (VM) IIS 7.5 server. The application is a .NET Framework 2.0 application and is running under a 2.0 app pool. This site works great except for one thing: it takes forever to get a request back. I've been tracking it with Chrome's Inspect Element; it queries the site and can take up to 45 seconds to answer. When it does, the page(s) render instantly, but it's that initial request that's killing it. I see no errors or issues with the application in the Windows Event Viewer or even the IIS logs, so I'm not sure where to start looking next. One change is that the app previously resided behind a PIX firewall and is now behind a larger network environment in a DMZ zone (and I believe NetScaler is also being used to manage the network). I do not have the rights or ability to look at the network itself, but I can contact the data center folks to look deeper into this; I wanted to make sure first that it's not my application or IIS causing the slowdown. In summary: the .NET 2.0 application works great in IIS 6.x; moved to an IIS 7.5 server, it is now slow on the initial request, but when it does respond, it renders pages instantly. Edit, for the solution: I found out that it was SOAP calls that were slowing the site down. In the new data center my application cannot make the SOAP calls, so they time out after 40-45 seconds or so. Now I'm trying to find out if I can install a proxy server to redirect this...

    Read the article

  • Is it possible to have DisplayLink USB display hotplugging with Xorg 1.13 on kernel 3.4?

    - by lkraav
    keithp seems to be the only one on the interwebs to have written anything about the subject, and he worked with 3.5_rc. I don't want to go above 3.4 at the moment for various stability reasons, and I am trying to see whether I can get this to work. Xorg 1.13 recognizes the display on connection, the "udl" module is loaded, the xorg-video-modesetting driver also loads, and the display lights up. So everything seems to be good. I emerged xrandr-9999 (not many changes on top of 1.3.5):

        $ xrandr --listproviders
        Providers: number : 2
        Provider 0: id: 69 cap: 0x0 crtcs: 2 outputs: 4 associated providers: 0 name:Intel
        Provider 1: id: 338 cap: 0x0 crtcs: 1 outputs: 1 associated providers: 0 name:modesetting

    But I can't get any further, just like this guy:

        $ xrandr --setprovideroutputsource 338 69
        X Error of failed request: BadValue (integer parameter out of range for operation)
          Major opcode of failed request: 139 (RANDR)
          Minor opcode of failed request: 35 ()
          Value in failed request: 0x152
          Serial number of failed request: 11
          Current serial number in output stream: 12

        $ xrandr --setprovideroutputsource 1 0
        X Error of failed request: 148
          Major opcode of failed request: 139 (RANDR)
          Minor opcode of failed request: 35 ()
          Serial number of failed request: 11
          Current serial number in output stream: 12

    Any thoughts?

    Read the article

  • Can I use multiple URLs in the URL field of KeePass?

    - by Sammy
    I am using KeePass version 2.19. What I would like to do is have more than just one URL address associated with a given user name and password. The entry for a given website might look something like this:

        Title     google
        User Name email
        Password  pass
        URL       https://accounts.google.com/ServiceLogin?hl=en&continue=https://www.google.com/
                  https://accounts.google.com/ServiceLogin?hl=sv&continue=https://www.google.com/
                  https://accounts.google.com/ServiceLogin?hl=de&continue=https://www.google.com/

    As you can see, the ?hl=en changes into ?hl=sv and then to ?hl=de for the three different languages in which I wish to view the Google log-in page. But this could of course be something completely different, like different web services from the same provider, such as YouTube and Gmail by Google. Very much like SE, where you have several websites but only use one user name and password. I imagine something along the lines of having multiple entries for one and the same website, where KeePass would actually prompt you to choose which one you want to use. So you can have several user names and passwords that use the same URL; but is it possible to have several URLs using the same user name and password, so that KeePass asks me "which of the following three URLs do you want to auto-log into with this password"?
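
    KeePass 2.x has no multi-valued URL field, but its documented custom string fields plus the {S:FieldName} placeholder give a close approximation: store the alternates as extra fields on the entry's Advanced tab and open them via placeholders. A sketch of how the fields might look (the field names here are arbitrary, not a KeePass convention):

        URL      https://accounts.google.com/ServiceLogin?hl=en&continue=https://www.google.com/
        URL-sv   https://accounts.google.com/ServiceLogin?hl=sv&continue=https://www.google.com/
        URL-de   https://accounts.google.com/ServiceLogin?hl=de&continue=https://www.google.com/

    A placeholder such as {S:URL-sv} can then be used wherever KeePass accepts placeholders, for example in a URL override, to launch the Swedish variant; each alternate still shares the entry's single user name and password.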

    Read the article

  • How do I prevent spawning of zombie-like apache2 processes on Dreamhost VPS?

    - by Jonathan Hayward
    I have had a website for months or longer on a DreamHost VPS, and I have vague memories, from the initial setup, of having to turn off some customized Apache under /dh to get a standard Apache 2.x to work with. Things had been going along on an even keel when I started making some changes lately, and I found that when I tried to bounce Apache (/usr/sbin/apachectl restart), it couldn't bind to port 80, and my site had been converted from a big literature site to a small parking site. I tried to see what was listening on 80, and it was a DreamHost-customized Apache that had spawned. I killed all of them, restarted Apache, and changed the parent directory under /dh to mode 000. That was a day or two ago. I was bouncing Apache again while trying to get a new site to load under HTTPS, and I found that once again DreamHost's Apache had spawned from the directory I had set to mode 000, and once again converted my site to a parking page. I renamed the directory, but I am very skeptical of whether I have permanently killed the DreamHost-customized Apache. Besides duct-tape options like a crontab entry to kill and delete it each minute, how can I permanently turn off the Apache processes that are spawning from a location under /dh and interfering with standard Apache? What should I be doing that I am not? Can DreamHost's technical support stop the interference? Thanks,

    Read the article

  • Boot sequence unlike reboot

    - by samgoody
    When I turn on the computer it acts very differently than when I reboot it. [WinXP Pro, Intel Core2 6600, 2.4 GHz, 2 GB RAM, NVIDIA GeForce]

    Boot:

        Monitor must be plugged into the motherboard or there is no image.
        Screen resolution is 800x600. Changes to the resolution cause only the top half of the screen to be usable, and are lost when I shut down the computer.
        Desktop icons are arranged in neat rows on the left of the desktop.
        Nothing of note in the system tray.
        In Device Manager, Display adapter: Intel(R) Q965/Q963 Express Chipset Family.
        In Device Manager, Monitors: two monitors are listed.
        Hibernate and standby work.

    Reboot:

        Monitor must be plugged into the graphics card or there is no image.
        Screen resolution: 1280x1024.
        Desktop icons are arranged in the cute circle that I put them in.
        The NVIDIA icon shows in the system tray.
        In Device Manager, Display adapter: NVIDIA GeForce 6200LE.
        In Device Manager, Monitors: one monitor is listed.
        Hibernate and standby do not work. When awakened after hibernation it says: "The system could not be restarted from its previous location because the restoration image is corrupt. Delete restoration data & proceed to system boot?"

    Double reboot (inconsistent):

        Monitor must be plugged into the graphics card.
        Screen resolution: 1024x768.
        An odd icon shows in the system tray whose tooltip says "Intel Graphics".

    For a while my morning ritual was to boot, wait, reboot using alt+ctrl+del, ctrl+u, R, and wait, keeping the monitor plugged into the graphics card. But aside from the inefficiency of this method, I sometimes want to standby and can't. On the other hand, the computer is unusable when set to 800x600. Please help, anyone?

    Read the article
