Search Results

Search found 7073 results on 283 pages for 'shared printers'.

  • Secondary backup server

    - by verdy
    I've been given the task of implementing a backup solution for when our website goes down. It is a dedicated server running CentOS 6. From what I've seen on our server, it may go down because of a PHP application crash or a hardware failure. I have a couple of questions: In the first case, is it possible to have the server restart PHP automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself. In the second case, can I redirect requests to a secondary server? How can I do that, and what do I need other than another server? For now it would just be a simple server that shows users a static landing page, and the system would notify us via email that the primary server went down so that we can restart it manually. Is it possible to set up just a VPS, or even shared hosting, as the secondary server, since it will only serve a static page? Thanks, any help would be much appreciated.
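    For the first case, a minimal watchdog along these lines may be enough. This is only a sketch: it assumes a stock CentOS 6 box where the web stack is the httpd service, the site answers on http://localhost/, and ops@example.com is a placeholder alert address; adjust service names and paths to the real setup.

        #!/bin/bash
        # watchdog.sh - run from cron, e.g.: */5 * * * * /usr/local/bin/watchdog.sh
        URL="http://localhost/"
        ALERT="ops@example.com"   # placeholder address

        # Treat anything other than HTTP 200 within 10 seconds as "down".
        STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")

        if [ "$STATUS" != "200" ]; then
            echo "Site returned '$STATUS' at $(date); restarting httpd" \
                | mail -s "Watchdog: restarting web stack" "$ALERT"
            service httpd restart
        fi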

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec KB. I have a virtualised Win 2003 R2 server, 32-bit. It has been provisioned to me with the Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp directories, so it is NOT the corrupt-definitions problem. It does contain directories named with a date pattern, YYYYMMDD.xxx. Some of these folders are 12 months old and I would like to recover the space. The Symantec forums are full of this stuff, but a lot of the postings contain links back to documents that are not specific to the Endpoint Protection client. It appears that I should be able to delete the older folders and all will be OK after a service restart; however, there is a warning about having LiveUpdate Administrator installed. Firstly, I have no idea whether I have this installed - how do I check? And secondly, can I just ditch these old files and restart? Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia.

    Update: for those trying to assist me, thank you. I have followed some instructions found on the Symantec site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am on a provisioned VM from a central IT unit, I cannot run the Symantec commands (such as smc -stop) from the Run prompt with my admin credentials. Basically I need to claw back some disk space from the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. I suppose my last question, for those more experienced than myself, is: can I simply remove, say, the two oldest virus definition folders without completely foobaring Endpoint Protection and the server? Regards, Gus

    Read the article

  • Windows Vista claims wireless key is the wrong length

    - by humble coffee
    A family member of mine is house-sitting and has been given the details of the wifi there. The access point is an AirPort Express, it has WEP encryption (I think), and they've been given a passphrase to use. I know it's a passphrase and not the actual key, as it's an English word. The passphrase is 10 characters long. The problem is that Vista complains that it's not a valid key, as it must be a 5 or 13 character non-hex key or a 10 or 26 character hex key. (From what I've read, this suggests the encryption is WEP?) I've found a couple of suggested solutions, but I'm not actually at the house at the moment, so I want to make sure I have a good chance of getting it to work when I'm there with no internet access to ask for help. Solution 1: Vista needs to be told explicitly what kind of encryption and key is being used - specify in the connection settings that you are using WEP and that it is a "shared key". Solution 2: try converting the passphrase to hexadecimal using an ASCII-hex converter and entering that.
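    For Solution 2, the ASCII-to-hex conversion itself can be done from any shell; a quick sketch follows ('apple' is just a placeholder word). Note the caveat that a 10-character passphrase converts to 20 hex digits, which Vista would also reject, so this trick only helps if the real key turns out to be 5 or 13 ASCII characters.

        # Print the hex form of an ASCII WEP key ('apple' is a placeholder).
        printf '%s' 'apple' | xxd -p
        # -> 6170706c65   (5 ASCII characters become 10 hex digits)

        # Equivalent with od, if xxd is not available:
        printf '%s' 'apple' | od -An -tx1 | tr -d ' \n'; echo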

    Read the article

  • Cannot change the default password on a Linksys router

    - by Jessica M.
    My wireless internet suddenly stopped working today. I have Windows 7 and a Linksys WRT54G router. I tried to log into the Linksys router setup page by typing 192.168.1.1 into Firefox, and it prompted me for a username and password as usual. The problem is that when I tried to enter my regular username and password, it did not work. I finally came across a post whose very last reply solved my problem: it suggested I try username: root / password: admin. For some reason the username and password had been changed. When I tried username: root / password: admin, it worked and let me into the Linksys setup page. The problem is I can't change the username or password anymore; every time I want to log into the Linksys setup page I have to enter username: root / password: admin. I also can't change the "WPA shared key" (password). For the security settings I selected WPA2 Personal + AES. The post also said "If the firmware was upgraded to non-Linksys firmware, the default will be different". The problem is I didn't update anything, and I'm worried that someone installed a virus or somehow changed the firmware on my router. How did I get non-Linksys firmware on my router? EDIT: I figured out how to change the password once logged into the Linksys setup page: Administration -> Management -> Password. I still don't understand whether my router firmware was changed, who changed it, or if it happened by mistake.

    Read the article

  • Enable Soap for PHP 5.5.x on CentOS 6.5

    - by Chris Mancini
    Unfortunately I have to support SOAP on my server for the FedEx web service. I recompiled PHP with SOAP enabled and it works via the CLI but not via PHP-FPM. They both point to the same ini file and both show the module loaded, but only the CLI shows the configuration values.

    Output of php -i | grep -i soap:

        Configuration File (php.ini) Path => /usr/local/etc
        Loaded Configuration File => /usr/local/etc/php.ini
        Configure Command => './configure' '--prefix=/usr/local' '--with-config-file-path=/usr/local/etc' '--with-config-file-scan-dir=/usr/local/etc/php_user/' '--enable-fpm' '--enable-ftp' '--enable-libxml' '--enable-mbstring' '--enable-pdo' '--enable-soap' '--enable-sockets=shared' '--enable-zip' '--with-curl' '--with-fpm-group=nginx' '--with-fpm-user=nginx' '--with-freetype-dir=/usr/lib64/' '--with-gd' '--with-jpeg-dir=/usr/lib64/' '--with-libdir=lib64' '--with-mcrypt' '--with-openssl' '--with-pdo-mysql' '--with-pear' '--with-readline' '--with-tidy' '--with-xsl' '--with-zlib' '--without-pdo-sqlite' '--without-sqlite3'
        soap
        Soap Client => enabled
        Soap Server => enabled
        soap.wsdl_cache => 1 => 1
        soap.wsdl_cache_dir => /tmp => /tmp
        soap.wsdl_cache_enabled => 1 => 1
        soap.wsdl_cache_limit => 5 => 5
        soap.wsdl_cache_ttl => 86400 => 86400

    Output from the PHP-FPM phpinfo():

        Configuration File (php.ini) Path   /usr/local/etc
        Loaded Configuration File           /usr/local/etc/php.ini
        SOAP   Brad Lafountain, Shane Caraveo, Dmitry Stogov

    Please help, I have tried so many things...
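    A couple of things may be worth ruling out first; this is only a sketch. With a --prefix=/usr/local build the FPM binary normally lands at /usr/local/sbin/php-fpm (an assumption here, as are the service name and the docroot path), and FPM only picks up a rebuilt binary or changed ini after it has been restarted.

        # Which ini files does the FPM binary itself parse, and is soap compiled in?
        /usr/local/sbin/php-fpm -i | grep -i 'configuration file'
        /usr/local/sbin/php-fpm -m | grep -i soap

        # Restart FPM so the freshly compiled binary is the one actually running.
        service php-fpm restart 2>/dev/null || /etc/init.d/php-fpm restart

        # Check through the web SAPI with a throwaway script (docroot is a guess).
        echo '<?php var_dump(extension_loaded("soap"));' > /usr/local/nginx/html/soapcheck.php
        curl http://localhost/soapcheck.php
        rm /usr/local/nginx/html/soapcheck.php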

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I have a lightweight Apache with mod_proxy serving static content that from time to time hits a "504 proxy error" when proxying to apache/mod_php. The error log says "error reading status line from remote server 127.0.0.1:8080" and "Error reading from remote server returned by /". This is what the Apache documentation says about it: proxy-initial-not-pooled - If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients. I am really concerned about this performance downgrade, so I started to look at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to spend days studying it just to find out it has the same race condition issue. Is nginx affected by this connection pool race condition? Thanks

    Read the article

  • libstdc++ - compilation fails because of tr1/regex

    - by Radek Šimko
    I have these packages installed on my openSUSE 11.3:

        i | libstdc++45       | Standard shared library for C++              | package
        i | libstdc++45-devel | Contains files and libraries for development | package

    But when I try to compile this C++ code:

        #include <stdio.h>
        #include <tr1/regex>
        using namespace std;

        int main()
        {
            int test[2];
            const tr1::regex pattern(".*");
            test[0] = 1;
            if (tr1::regex_match("anything", pattern) == false) {
                printf("Pattern does not match.\n");
            }
            return 0;
        }

    using g++ -pedantic -g -O1 -o ./main.o ./main.cpp, it outputs these errors:

        ./main.cpp: In function ‘int main()’:
        ./main.cpp:13:43: error: ‘printf’ was not declared in this scope
        radek@mypc:~> nano main.cpp
        radek@mypc:~> g++ -pedantic -g -O1 -o ./main.o ./main.cpp
        /tmp/cc0g3GUE.o: In function `basic_regex':
        /usr/include/c++/4.5/tr1_impl/regex:771: undefined reference to `std::tr1::basic_regex<char, std::tr1::regex_traits<char> >::_M_compile()'
        /tmp/cc0g3GUE.o: In function `bool std::tr1::regex_match<char const*, char, std::tr1::regex_traits<char> >(char const*, char const*, std::tr1::basic_regex<char, std::tr1::regex_traits<char> > const&, std::bitset<11u>)':
        /usr/include/c++/4.5/tr1_impl/regex:2144: undefined reference to `bool std::tr1::regex_match<char const*, std::allocator<std::tr1::sub_match<char const*> >, char, std::tr1::regex_traits<char> >(char const*, char const*, std::tr1::match_results<char const*, std::allocator<std::tr1::sub_match<char const*> > >&, std::tr1::basic_regex<char, std::tr1::regex_traits<char> > const&, std::bitset<11u>)'
        collect2: ld returned 1 exit status

    What packages should I (un)install to make this code work on my PC?

    Read the article

  • Can't ping my Windows 7 machine from within a Windows XP virtual machine

    - by Jonathan Conway
    I have Windows 7 installed as my primary operating system on a laptop that's on my home (wireless) network. I'm using Microsoft Virtual PC 2007 SP1 to run a Windows XP SP3 virtual machine, from which I want to access the Windows 7 host, both to browse a shared folder and to reach the local Apache server. So far I can ping my Windows 7 IPv4 address and access the Apache server over HTTP in the web browser. However, using my machine name never seems to work: pinging it fails, and I can't reach my Apache server with it either. The problem seems to be something to do with my machine's name being registered under IPv6 rather than IPv4. I'm at a loss what to do. Should I try to set up IPv6 on the virtual machine? I'm not sure how to go about that. Or maybe I should somehow get my machine name on Windows 7 to work with IPv4? Although I think it already does, because I can ping it by name from a separate box (running Ubuntu), even though the name seems to be registered only under IPv6.

    Read the article

  • APC not caching many files

    - by tetranz
    Hello, I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory, even though Drupal has hundreds of PHP files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and PHP 5.3.2; the config settings are the same. What could be wrong? I'll paste the stats from apc.php below (taken after hitting multiple parts of Drupal). Thanks.

        APC Version 3.1.6
        PHP Version 5.2.10-2ubuntu6.5
        APC Host xxx.example.com
        Server Software Apache/2.2.12 (Ubuntu)
        Shared Memory 1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time 2010/12/02 11:32:17
        Uptime 3 minutes
        File Upload Support 1

        File Cache Information
        Cached Files 21 ( 1.4 MBytes)
        Hits 169
        Misses 21
        Request Rate (hits, misses) 1.00 cache requests/second
        Hit Rate 0.89 cache requests/second
        Miss Rate 0.11 cache requests/second
        Insert Rate 0.17 cache requests/second
        Cache full count 0

        User Cache Information
        Cached Variables 0 ( 0.0 Bytes)
        Hits 0
        Misses 0
        Request Rate (hits, misses) 0.00 cache requests/second
        Hit Rate 0.00 cache requests/second
        Miss Rate 0.00 cache requests/second
        Insert Rate 0.00 cache requests/second
        Cache full count 0

        Runtime Settings
        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 1M
        apc.mmap_file_mask
        apc.num_files_hint 1000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.shm_segments 1
        apc.shm_size 32M
        apc.slam_defense 1
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 0
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 0
        apc.write_lock 1

    Read the article

  • Windows Share authentication from Active Directory Linux login

    - by Kenny
    I'm using Active Directory to log into RHEL. To do this, I followed the steps outlined here: http://www.markwilson.co.uk/blog/2007/05/using-active-directory-to-authenticate-users-on-a-linux-computer.htm I'd like to be able to read data from Windows servers' shared folders without being prompted for a password. On Windows I log into an AD domain, and when I access Windows file shares on a server on the LAN (also part of the AD domain) I can just access them with no authentication step. I've used smbclient on Linux to access these shares, but it asks for my password. I would like to be able to script access to the data on the shares, but I can't if there's a password prompt in the way. Well, I could, but it's not how I want to do it. Now, since I'm logged in using my Active Directory username and password, can't I just access the shares without jumping through that extra hoop? I know I can mount the share using an fstab entry something like: //192.168.0.5/share /mnt/windows cifs auto,username=steve,password=secret,rw 0 0 but then access doesn't depend on who is logged in... each user logging in should have their own unique AD access privileges. Thanks for reading!
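    One possible route, sketched below, is to mount the share once with Kerberos security and the multiuser option, so each logged-in AD user is authenticated with their own ticket rather than a fixed username/password. This assumes the RHEL kernel's cifs module supports the sec=krb5 and multiuser mount options and that users pick up a Kerberos ticket at login (e.g. via pam_krb5) or run kinit themselves.

        # One-time mount by root (root needs a ticket too, e.g. from a host
        # keytab or kinit); after that each user's own ticket is used:
        mount -t cifs //192.168.0.5/share /mnt/windows -o sec=krb5,multiuser

        # As an ordinary AD user, verify the ticket and the per-user access:
        klist
        ls /mnt/windows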

    Read the article

  • PowerShell script to delete subfolders and files whose creation date is more than 7 days old, but keep parent folders of subfolders and files less than 7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and subdirectories of "$dump_path" that are seven days or older, based upon the creation date and not the modified date. The problem with this script is this: if folder "A" is seven (or more) days old it will be deleted even if its subfolders and files are less than seven days old. What I would like this script to do is this: delete all files from the root and in all subfolders of "$dump_path" that are seven or more days old, but maintain the parent folder(s) of files and folders that are less than seven days old, even if that means the parent folders are more than seven days old. If all subfolders and files within a parent folder are seven days or older, then the parent can be deleted. Slightly obscure problem I know, but the intention is to have a 7-day retention period for all data in a 'sandbox' location of our shared areas. Also, it would be an added bonus if it could generate a log of what it deletes and e-mail it out post-deletion. Thank you for reading and I hope that all makes sense! Mark

        # set folder path
        $dump_path = "c:\temp"

        # set minimum age of files and folders
        $max_days = "-7"

        # get the current date
        $curr_date = Get-Date

        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)

        # delete the files and folders
        Get-ChildItem $dump_path | Where-Object { $_.CreationTime -lt $del_date } | Remove-Item -Recurse

    Read the article

  • Using a Linksys Wireless-G Broadband Router as a Wireless Antenna/Adapter?

    - by Alex
    (Warning: I'm not too computer-smart.) I just moved from a home where I used AT&T DSL to a home with Comcast cable for internet. The old home had the DSL ethernet wired to my only desktop, with a 2Wire router providing the wireless signal for the laptops. In the new home the signal comes from cable via a Netgear (300) wireless router (remotely located) and works fine for my laptops. After searching with the network software, my desktop (which can't be ethernet-wired because of its location) detects no wireless signal. The desktop is a 2-year-old HP Pavilion p6311f (Windows 7). I can't seem to detect any wireless hardware (antenna/adapter) on it. Maybe I don't have any and need to buy a USB wireless adapter? Would the Pavilion come with wireless capability out of the box, maybe something inside the tower? There's no antenna on the back. I happen to have a Linksys wireless router which I plugged into the desktop (trying both the internet and the regular LAN ports) and noticed signal activity on the router's front panel, but no internet on the desktop. Can I use this router as a wireless antenna/adapter for the desktop? Thanks, and sorry if my solution is an easy one I'm just missing. I just want internet on the desktop.

    Read the article

  • Disable IPv6 on Debian VPS (Virtuozzo!)

    - by chris_l
    I have a Debian Lenny VPS that runs virtualized under Parallels/Virtuozzo. Currently the network interface doesn't have an IPv6 address, and that's good, because I don't have an ip6tables configuration. But I assume I could wake up one day and ifconfig will show me an IPv6 address for the interface, because I have no control over the kernel or its modules - they're under the control of the hosting company. That would leave the server completely vulnerable to attacks from IPv6 addresses. What would be the best way to disable IPv6 (for the interface, or maybe for the entire host)? Usually I would simply disable the kernel module, but that's not possible in this case.

    Update: maybe I should add that I can use iptables and everything else normally (I'm root on the VPS), but I can't make changes to the kernel or load kernel modules because of the way Virtuozzo works (shared kernel). lsmod always returns nothing. I can't call ip6tables -L (it says that I need to insmod, or that the kernel would have to be upgraded). I don't think changes to /etc/modprobe.d/aliases would have any effect, or would they?

    Networking config? I thought maybe I can turn IPv6 off from /etc/network/... Is that possible? I also see that they've set up avahi, so I should probably change the setting use-ipv6=yes to "no" in /etc/avahi/avahi.conf(?). Has anybody already tried this solution, and can I rely on it? I don't know too much about avahi. Would it actually have any effect? Or could it even bring my entire interface down once IPv6 is enabled by the kernel?
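    Since modules are out of reach, the usual fallback is the runtime sysctl switch, if the shared Virtuozzo kernel exposes it at all. A sketch follows; on an OpenVZ/Virtuozzo kernel these knobs may simply be absent, in which case this does nothing and you are back to firewalling/avahi settings.

        # Check whether the kernel exposes the IPv6 disable knobs at all:
        ls /proc/sys/net/ipv6/conf/all/ 2>/dev/null | grep disable_ipv6

        # If present, turn IPv6 off for every interface, now and at boot:
        sysctl -w net.ipv6.conf.all.disable_ipv6=1
        sysctl -w net.ipv6.conf.default.disable_ipv6=1
        cat >> /etc/sysctl.conf <<'EOF'
        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1
        EOF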

    Read the article

  • How can I manage AWS VPC ssh access accounts and keys across multiple instances?

    - by deitch
    I am setting up a standard AWS VPC structure: a public subnet, some private subnets, hosts on each, an ELB, etc. Operational network access will be via either an SSH bastion host or an OpenVPN instance. Once on the network (bastion or OpenVPN), admins use SSH to access the individual instances. From what I can tell, all of the docs seem to assume a single user with sudo rights and a single public SSH key. But is that really best practice? Isn't it much better to have each user access each host under their own name? I can deploy accounts and SSH public keys to each server, but that rapidly gets unmanageable. How do people recommend managing user accounts? I've looked at: IAM: it doesn't look like IAM has a method for automatically distributing accounts and SSH keys to VPC instances. IAM via LDAP: IAM doesn't have an LDAP API. LDAP: set up my own LDAP servers (redundant, of course) - a bit of a pain to manage, but still better than managing accounts on every host, especially as we grow. Shared SSH key: rely on the VPN/bastion to track user activities - I don't love it, but... What do people recommend? NOTE: I moved this over after accidentally posting it on Stack Overflow.
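    In the absence of an IAM or LDAP hook, a lot of shops just push per-admin accounts and keys from a small inventory. Purely as a sketch of that approach - hosts.txt, the keys/ directory and the admin names alice/bob are all hypothetical, and it assumes the stock ec2-user account with passwordless sudo and no "requiretty" in sudoers:

        #!/bin/bash
        # push-accounts.sh - naive sketch: create admin accounts and install their
        # SSH public keys on every instance listed in hosts.txt (one IP per line).
        # keys/<user>.pub holds each admin's public key.
        set -e
        ADMIN_USERS="alice bob"   # hypothetical admin list

        while read -r host; do
          for u in $ADMIN_USERS; do
            # ssh's stdin is the key file, so the loop's stdin (hosts.txt) is untouched
            ssh "ec2-user@$host" "
              sudo useradd -m '$u' 2>/dev/null || true
              sudo mkdir -p /home/$u/.ssh
              sudo tee /home/$u/.ssh/authorized_keys >/dev/null
              sudo chown -R $u: /home/$u/.ssh
              sudo chmod 700 /home/$u/.ssh
              sudo chmod 600 /home/$u/.ssh/authorized_keys
            " < "keys/$u.pub"
          done
        done < hosts.txt

    In practice a configuration-management tool (Ansible, Puppet, Chef) or baking the accounts into the AMI via cloud-init does the same job with less hand-rolled glue, which is really what the question is weighing up.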

    Read the article

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with the MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70MB on average). Here's my Apache configuration for the MPM worker:

        KeepAlive ON
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!

    Read the article

  • Sync Two Exchange accounts or Read-Only access to subfolders

    - by cpgascho
    This is two questions, kind of. The situation is as follows: I am running SBS 2008 with Exchange 2007. There is a shared account which has subfolders to keep track of the progress of jobs that are coming into the company (i.e. sales). I need to give other people in the company read access to this mailbox, not full control. When I give read-only access to the root, other users can only see the Inbox and not the subfolders; permissions have to be applied to each folder. One solution I have considered is creating a secondary mailbox that everyone could have full access to, with a one-way sync from the sales mailbox to the secondary mailbox. Then people could see what was happening without messing up the main mailbox by accident (at worst they would mess up the secondary mailbox). Ideally I could find a way to propagate the read-only permissions to all the subfolders. I have tried using PFDAVAdmin to do this but have not been able to get it to connect successfully from Windows 7 to Exchange 2007. Any idea on how to: 1. propagate permissions (get PFDAVAdmin to work?!), 2. sync mailboxes, or 3. another solution? Thanks, Chris

    Read the article

  • After reinstallation, Disk Cleanup disappears when I click OK.

    - by James
    After I reinstalled Windows 7, Disk Cleanup stopped working. I can start Disk Cleanup and select the drive to clean, but when I click on the OK button the window disappears. Any solutions? Here's the data from Windows Logs > Application. The first event was logged with an Information icon:

        EventData 1744235005 1 APPCRASH Not available 0 cleanmgr.exe 6.1.7600.16385 4a5bc5e1 Csi.dll 14.0.4733.1000 4b5662be c0000005 00135213 F:\Users\Jacob\AppData\Local\Temp\WER419.tmp.WERInternalMetadata.xml F:\Users\Jacob\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_cleanmgr.exe_6514b6ecb633f97cbf78e3a5bcae2c4bd74351_0d3b109c 0 75fa9599-41b1-11e0-b864-001966b2bcb6 0

    The one below was logged with an Error icon:

        EventData cleanmgr.exe 6.1.7600.16385 4a5bc5e1 Csi.dll 14.0.4733.1000 4b5662be c0000005 00135213 bbc 01cbd5be36b572bf F:\Windows\system32\cleanmgr.exe F:\Program Files\Common Files\Microsoft Shared\OFFICE14\Csi.dll 75fa9599-41b1-11e0-b864-001966b2bcb6

    I also used Process Explorer: when I started Disk Cleanup, a cleanmgr.exe process appeared under explorer.exe. When I clicked on the "OK" button after selecting the drive, cleanmgr.exe stayed there for a few seconds before it disappeared, and a new process, WerFault.exe, appeared under svchost.exe a few seconds after I clicked "OK". It too disappeared from the process list after some time (I think it disappeared along with cleanmgr.exe).

    Read the article

  • How to back up iTunes on Windows to a folder/share, i.e. without "Back Up to Disc"? No DVD writer available

    - by Chris W. Rea
    (Surprised I didn't already find an answer here to this!) I've got a computer on which I'd like to back up the iTunes library – music, movies, apps, everything. We're talking multiple gigabytes. Unfortunately, it seems that iTunes' own built-in "Back Up to Disc" feature (the only backup feature I can find in iTunes) only functions with a CD or DVD writer/burner. The computer in question does not have a DVD burner. While it has a CD burner, attempting to back up to CDs would require dozens of discs plus more time than I'm willing to spend swapping them. So: What is the recommended way to back up an entire iTunes library on a Windows computer, to a non-CD/DVD location such as an external hard drive or a network shared folder? Then, once such a backup has been performed, what is the process for restoring the library – e.g. after the computer has been repaved with a new version of Windows – so that iTunes is resurrected whole and recognizes devices it syncs with? Thank you!

    Read the article

  • Git push over HTTP (using git-http-backend) with Apache is not working

    - by Ole_Brun
    I have desperately been trying to get push working for Git through the "smart HTTP" mode using git-http-backend. However, after many hours of testing and troubleshooting, I am still left with:

        error: Cannot access URL http://localhost/git/hello.git/, return code 22
        fatal: git-http-push failed

    I am using the latest versions of Ubuntu (12.04), Apache2 (2.2.22) and Git (1.7.9.5) and have followed different tutorials found on the Internet, like this one: http://www.parallelsymmetry.com/howto/git.jsp. My vhost file currently looks like this:

        <VirtualHost *:80>
          SetEnv GIT_PROJECT_ROOT /var/www/git
          SetEnv GIT_HTTP_EXPORT_ALL
          SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER
          DocumentRoot /var/www/git

          ScriptAliasMatch \
            "(?x)^/(.*?)\.git/(HEAD | \
                info/refs | \
                objects/info/[^/]+ | \
                git-(upload|receive)-pack)$" \
            /usr/lib/git-core/git-http-backend/$1/$2

          <Directory /var/www/git>
            Options +ExecCGI +SymLinksIfOwnerMatch -MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
          </Directory>
        </VirtualHost>

    I have changed the ownership of the /var/www/git folder to root.www-data, and for my test repositories I have enabled anonymous push by doing git config http.receivepack true. I have also tried with authenticated users, but with the same outcome. The repositories were created using:

        sudo git init --bare --shared [repo-name]

    Looking at the Apache access.log, it appears to me that WebDAV is being used and that git-http-backend is never fired:

        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/info/refs?service=git-receive-pack HTTP/1.1" 200 207 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "GET /git/hello.git/HEAD HTTP/1.1" 200 232 "-" "git/1.7.9.5"
        127.0.0.1 - - [20/May/2012:23:04:53 +0200] "PROPFIND /git/hello.git/ HTTP/1.1" 405 563 "-" "git/1.7.9.5"

    What am I doing wrong? Is it an issue with the version of Git and/or Apache that I am using, perhaps? BTW: I have read all the Git HTTP related questions on Server Fault and Stack Overflow, and none of them provided me with a solution, so please don't mark this as a duplicate.
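    The PROPFIND and the 405 in the access log suggest the ScriptAliasMatch is never matching, so the client falls back to the dumb/DAV protocol. Before reworking the regex it may be worth confirming the CGI machinery is in place and poking the backend by hand; a hedged checklist, with module names and paths being the stock Ubuntu 12.04 ones:

        # git-http-backend is a CGI program, so the CGI module (cgi or cgid,
        # depending on the MPM), mod_alias and mod_env all have to be enabled:
        sudo a2enmod alias env
        sudo a2enmod cgi || sudo a2enmod cgid
        sudo service apache2 restart

        # Ask for the receive-pack advertisement the way git does; a working smart
        # server answers with Content-Type: application/x-git-receive-pack-advertisement
        curl -i 'http://localhost/git/hello.git/info/refs?service=git-receive-pack' | head -n 20

        # The backend can also be exercised directly, bypassing Apache entirely:
        env GIT_HTTP_EXPORT_ALL=1 GIT_PROJECT_ROOT=/var/www/git \
            REQUEST_METHOD=GET PATH_INFO=/hello.git/info/refs \
            QUERY_STRING=service=git-receive-pack \
            /usr/lib/git-core/git-http-backend | head -n 10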

    Read the article

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue. I have the following server:

        CPU: Intel(R) Xeon(TM) MP CPU 3.16GHz
        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

        RAID card:
        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

        HDD: 10x148Gb SCSI U320 15k - RAID5
        /dev/sdb1 807G 674G 93G 88% /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

        Network cards:
        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0
        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig bond0
        Link encap:Ethernet HWaddr 00:0f:1f:ff:d6:4d
        inet addr:192.168.15.71 Bcast:192.168.15.255 Mask:255.255.255.0
        inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
        RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
        TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:10000
        RX bytes:258867684559 (241.0 GiB) TX bytes:396569192650 (369.3 GiB)

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux (Debian 6)

    This server runs only nfs-kernel-server. Here is what happens: once every day or two the load average goes up, and it can reach around 40, but if I restart nfs-kernel-server everything is OK again. Then the next day, or a little bit later, the load average goes up again. The servers are connected to a D-Link DGS 1016D with 24 Gbit ports. I have tried everything to find out what the problem is and why it's happening, but I still cannot resolve this issue. Any ideas on what is happening here?
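    Not an answer, but some hedged first checks: on a box that only serves NFS, a load spike like this is often either too few nfsd threads or one client hammering the export, and both are quick to rule out. Debian paths are assumed, and iftop is an extra package.

        # How many kernel nfsd threads are running, and are they all busy?
        grep ^th /proc/net/rpc/nfsd          # first number = thread count
        ps ax | grep '\[nfsd\]' | grep -vc grep

        # Raise the thread count (RPCNFSDCOUNT, default 8) if they are saturated:
        grep RPCNFSDCOUNT /etc/default/nfs-kernel-server

        # Per-operation counters - run a few seconds apart to see what is hot:
        nfsstat -s

        # Which client is generating the traffic right now:
        iftop -i bond0 -f 'port 2049'        # needs the iftop package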

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no report library template and no Report Center site template; the only additional functionality available is the Report Viewer web part. Background: a 2-server WSS 3.0 farm, with CA (Central Administration), the WFE (web front end) and the Reporting Services add-in installed on one box, and SQL Server 2005 SP2 with Reporting Services (RS) and all the databases installed on the other. I have a VM environment set up and have rolled this back and repeated the install a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'site settings' I have a 'Reporting Services' heading with a 'manage shared schedules' item/link; I'm not sure if there should be other options. I was under the impression that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article

  • How can I run Gnome or KDE locally in Cygwin?

    - by John Peter Thompson Garcés
    Apparently it is possible to do this using cygwin ports, as can be seen in screenshots. I followed this how-to to get apt-cygports set up, and I used it to install gnome-session. This how-to supposedly gives the commands needed to run Gnome or KDE, but whenever I try to run Gnome, a blank X-window pops up and then quickly disappears. Here is the terminal output:

        $ startx /usr/bin/dbus-launch gnome-session
        xauth: file /home/jpthomps/.serverauth.4168 does not exist

        Welcome to the XWin X Server
        Vendor: The Cygwin/X Project
        Release: 1.10.3.0
        OS: Windows 7 Service Pack 1 [Windows NT 6.1 build 7601] (WoW64)
        Package: version 1.10.3-12 built 2011-08-22

        XWin was started with the following command line:
        /usr/bin/X :0 -auth /home/jpthomps/.serverauth.4168

        (II) xorg.conf is not supported
        (II) See http://x.cygwin.com/docs/faq/cygwin-x-faq.html for more information
        LoadPreferences: /home/jpthomps/.XWinrc not found
        LoadPreferences: Loading /etc/X11/system.XWinrc
        LoadPreferences: Done parsing the configuration file...
        winDetectSupportedEngines - DirectDraw installed, allowing ShadowDD
        winDetectSupportedEngines - Windows NT, allowing PrimaryDD
        winDetectSupportedEngines - DirectDraw4 installed, allowing ShadowDDNL
        winDetectSupportedEngines - Returning, supported engines 0000001f
        winSetEngine - Using Shadow DirectDraw NonLocking
        winScreenInit - Using Windows display depth of 32 bits per pixel
        winFinishScreenInitFB - Masks: 00ff0000 0000ff00 000000ff
        Screen 0 added at virtual desktop coordinate (0,0).
        MIT-SHM extension disabled due to lack of kernel support
        XFree86-Bigfont extension local-client optimization disabled due to lack of shared memory support in the kernel
        (II) AIGLX: Loaded and initialized /usr/lib/dri/swrast_dri.so
        (II) GLX: Initialized DRISWRAST GL provider for screen 0
        winPointerWarpCursor - Discarding first warp: 637 478
        (--) 5 mouse buttons found
        (--) Setting autorepeat to delay=500, rate=31
        (--) Windows keyboard layout: "00000409" (00000409) "US", type 4
        (--) Found matching XKB configuration "English (USA)"
        (--) Model = "pc105" Layout = "us" Variant = "none" Options = "none"
        Rules = "base" Model = "pc105" Layout = "us" Variant = "none" Options = "none"
        winBlockHandler - pthread_mutex_unlock()
        winProcEstablishConnection - winInitClipboard returned.
        winClipboardProc - DISPLAY=:0.0
        winClipboardProc - XOpenDisplay () returned and successfully opened the display.
        xinit: XFree86_VT property unexpectedly has 0 items instead of 1
        xinit: connection to X server lost
        waiting for X server to shut down
        winClipboardProc - winClipboardFlushWindowsMessageQueue trapped WM_QUIT message, exiting main loop.
        winClipboardProc - XDestroyWindow succeeded.
        winClipboardProc - Clipboard disabled - Exit from server
        winDeinitMultiWindowWM - Noting shutdown in progress

    Read the article

  • Compiling Gearman PHP Library for CentOS 5.8

    - by Andrew Ellis
    I've been trying to get Gearman compiled on CentOS 5.8 all afternoon. Unfortunately I am restricted to this version of CentOS by my CTO and how he has our entire network configured; I think it's simply because we don't have enough resources to upgrade our network... But anyway, the problem at hand. I have searched through Server Fault, Stack Overflow and Google, and am unable to locate a working solution. What I have below is what I have pieced together from my searching. The searches said to install the following via yum:

        yum -y install --enablerepo=remi boost141-devel libgearman-devel e2fsprogs-devel e2fsprogs gcc44 gcc-c++

    To get the Boost headers working correctly I did this:

        cp -f /usr/lib/boost141/* /usr/lib/
        cp -f /usr/lib64/boost141/* /usr/lib64/
        rm -f /usr/include/boost
        ln -s /usr/include/boost141/boost /usr/include/boost

    With all of the dependencies installed and the paths set up, I can then download and compile gearmand-1.1.2 just fine:

        wget -O /tmp/gearmand-1.1.2.tar.gz https://launchpad.net/gearmand/1.2/1.1.2/+download/gearmand-1.1.2.tar.gz
        cd /tmp && tar zxvf gearmand-1.1.2.tar.gz
        ./configure && make -j8 && make install

    That works correctly. So now I need to install the Gearman library for PHP. I have attempted it through PECL and by downloading the source directly; both result in the same error:

        checking whether to enable gearman support... yes, shared
        not found
        configure: error: Please install libgearman

    What I don't understand is that I installed the libgearman-devel package, which also installed the core libgearman. The installation installs libgearman-devel-0.14-3.el5.x86_64, libgearman-devel-0.14-3.el5.i386, libgearman-0.14-3.el5.x86_64 and libgearman-0.14-3.el5.i386. Is it possible the package version is lower than what is required? I'm still poking around with this, but figured I'd throw this up to see if anyone has a solution while I continue to research a fix. Thanks!
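    One possibility is that the PECL build is picking up the ancient libgearman 0.14 headers from the EL5 package instead of the 1.1.2 copy that make install put under /usr/local, and 0.14 is far older than the extension expects. A hedged sketch of pointing the build at the newer copy (assuming the extension's configure accepts --with-gearman=DIR, phpize comes from php-devel, and the ini directory matches how PHP was installed):

        # Get the distro's old headers out of the way so they can't be picked up
        # (check first that nothing else on the box depends on them):
        yum -y remove libgearman-devel libgearman

        # Make sure the self-built libgearman under /usr/local is known to the
        # dynamic linker:
        echo '/usr/local/lib' > /etc/ld.so.conf.d/usr-local.conf
        ldconfig

        # Build the PHP extension against that copy explicitly:
        pecl download gearman
        tar zxvf gearman-*.tgz && cd gearman-*/
        phpize
        ./configure --with-gearman=/usr/local
        make && make install

        # Finally enable it (ini path depends on the PHP install):
        echo 'extension=gearman.so' > /etc/php.d/gearman.ini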

    Read the article

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi all -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the hundreds of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications: each site can store files in a shared "namespace", that is, all the files would show up in the same filesystem; each site would not send data over the WAN unless necessary, i.e. there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem; and Linux plus free ($$$) is a must. Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local - all data from the remote sides of the WAN would be copied locally all the time. I have looked into Lustre and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage, and I have dug through the documentation without finding anything that will automatically "prefer" local storage over remote storage. Even something that chose the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?

    Read the article
