Search Results

Search found 41497 results on 1660 pages for 'fault'.


  • How to change local user home folder on Windows 2000 and above

    - by Adi Roiban
    I was using a local account on a Windows 7 desktop that is not joined to any Active Directory domain. After a while I needed to rename the local account. Renaming the account itself was simple using the Local Users and Groups management tool, but the user's home folder was not renamed along with it, and I could not find any information about how to change the user home folder. I found the ProfileList registry key, but maybe there is a command line for making such changes: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList Any help is much appreciated. Thanks!
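
    One approach people use (a sketch only, untested here: log the account off first and back up the key; the SID and folder names below are placeholders) is to rename the folder on disk and then repoint the ProfileImagePath value under the account's SID key in ProfileList:

        rem Sketch only - back up the ProfileList key first; SID and names are placeholders
        ren "C:\Users\oldname" "newname"
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxxxxxx-1001" /v ProfileImagePath /t REG_EXPAND_SZ /d "C:\Users\newname" /f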


  • How to configure Apache so all requests go to single CGI file

    - by fastmonkeywheels
    I'm porting a CGI application from an embedded web server to run under Apache. To change as little as possible, I'm trying to figure out how to configure Apache so that any incoming request goes to my CGI program, which then uses the QueryString environment variable to determine which file needs to be created. I have Apache working now to the point where it will run my CGI file if it's requested directly, i.e. localhost/cgi-bin/cgi_test.out, but I need my application to be called whenever any file is requested: localhost/ - call my application with QueryString set to "" or "/"; localhost/thisFile - call my application with QueryString set to "/thisFile"; etc. I have been doing all of my configuration testing under /etc/apache2/sites-available/mysite, which has been enabled with the default site disabled. Thanks for any help. I've tried the recommendation from http://serverfault.com/questions/56082/configure-apache-to-handle-all-requests-via-single-index-php but I keep getting circular redirects.
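
    A hedged sketch of one way to do this with mod_rewrite (assuming the stock ScriptAlias /cgi-bin/ mapping is in place and the script lives at /cgi-bin/cgi_test.out):

        # Inside the site's VirtualHost - sketch only
        RewriteEngine On
        # Leave direct requests for the CGI itself alone
        RewriteCond %{REQUEST_URI} !^/cgi-bin/
        # Send every other path to the CGI, with the original path as the query string
        RewriteRule ^/?(.*)$ /cgi-bin/cgi_test.out?/$1 [L,QSA]

    The QSA flag appends the original query string, so the requested path plus any arguments arrive in QUERY_STRING. ScriptAliasMatch can achieve something similar if mod_rewrite isn't an option, though in that case the path lands in PATH_INFO rather than the query string.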


  • ServerFault Wiki: How does Subnetting Work?

    - by Kyle Brandt
    How does subnetting work, and how do you do it by hand or in your head? Can someone explain it both conceptually and with several examples? Server Fault gets lots of subnetting homework questions, so we could use an answer to point them to on Server Fault itself. If I have a network, how do I figure out how to split it up? If I am given a netmask, how do I know what the network range is for it? Sometimes there is a slash followed by a number; what is that number? Sometimes there is a subnet mask but also a wildcard mask; they seem like the same thing, but they are different? Someone mentioned something about needing to know binary for this? I'm not looking for links to other sites (unless maybe you have one post with a bunch of good ones). I already know how to subnet; I just thought it would be nice if Server Fault had a generic subnetting answer.
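
    For readers landing here, a minimal worked example of the arithmetic the question is asking about (the /26 network is just an illustration):

        192.168.1.0/26  ->  mask      255.255.255.192  (26 one-bits: 11111111.11111111.11111111.11000000)
                            size      2^(32-26) = 64 addresses
                            range     192.168.1.0 - 192.168.1.63
                            usable    192.168.1.1 - 192.168.1.62  (network and broadcast excluded)
                            wildcard  0.0.0.63  (the inverted mask, used by some ACL syntaxes)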


  • WebDav rename fails on an Apache mod_dav install behind NginX

    - by The Daemons Advocate
    I'm trying to solve a problem with renaming files over WebDav. Our stack consists of a single machine, serving content through Nginx, Varnish and Apache. When you try to rename a file, the operation fails with the stack that we're currently using. To connect to WebDav, a client program must: Connect over https://host:443 to NginX NginX unwraps and forwards the request to a Varnish server on http://localhost:81 Varnish forwards the request to Apache on http://localhost:82, which offers a session via mod_dav Here's an example of a failed rename: $ cadaver https://webdav.domain/ Authentication required for Webdav on server `webdav.domain': Username: user Password: dav:/> cd sandbox dav:/sandbox/> mkdir test Creating `test': succeeded. dav:/sandbox/> ls Listing collection `/sandbox/': succeeded. Coll: test 0 Mar 12 16:00 dav:/sandbox/> move test newtest Moving `/sandbox/test' to `/sandbox/newtest': redirect to http://webdav.domain/sandbox/test/ dav:/sandbox/> ls Listing collection `/sandbox/': succeeded. Coll: test 0 Mar 12 16:00 For more feedback, the WebDrive windows client logged an error 502 (Bad Gateway) and 303 (?) on the rename operation. The extended logs gave this information: Destination URI refers to different scheme or port (https://hostname:443) (want: http://hostname:82). Some other Restrictions: Investigations into NginX's Webdav modules show that it doesn't really fit our needs, and forwarding webdav traffic to Apache isn't an option because we don't want to enable Apache SSL. Are there any ways to trick mod_dav to forward to another host? I'm open to ideas :).
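
    The extended log above says the Destination header arrives as https://hostname:443 while mod_dav wants http://hostname:82. One workaround people use (a sketch, untested, placed in the nginx location that proxies toward Varnish/Apache) is to rewrite the Destination header on the way in so its scheme matches what the backend expects:

        # sketch only - rewrite the WebDAV Destination header before proxying
        set $dest $http_destination;
        if ($http_destination ~* "^https://(.*)$") {
            set $dest http://$1;
        }
        proxy_set_header Destination $dest;

    Depending on the Apache vhost, ServerName webdav.domain plus UseCanonicalName On (or otherwise letting Apache believe it serves the public host/port) may also be needed so the port comparison stops failing; the general idea is simply to make the Destination the client sends agree with the scheme, host and port mod_dav thinks it is serving.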


  • Cannot get libcurl-devl on OpenSUSE 11.3

    - by Dai
    I have a server running OpenSUSE 11.3 that I can't really upgrade to a newer version of OpenSUSE (it's a managed appliance). I have some PHP shell scripts that need to run on the server that have a dependency on both cURL and OpenSSL. I discovered that the PHP 5.3.3 binaries on the server did not include OpenSSL but did include cURL I downloaded the latest PHP sources, extracted them, and ran ./configure --with-openssl --with-zlib --with-bcmath --with-curl --with-readline --with-libxml --enable-sockets This failed: the configure script complained that it couldn't find cURL: checking for cURL support... yes checking for cURL in default path... not found configure: error: Please reinstall the libcurl distribution - easy.h should be in /include/curl/ I tried to install libcurl by running zypper install libcurl-devl This failed too: doom:~/phpworksite/php-5.5.15 # zypper install libcurl-devl Loading repository data... Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server. Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server. Reading installed packages... 'libcurl-devl' not found in package names. Trying capabilities. No provider of 'libcurl-devl' found. Resolving package dependencies... Nothing to do. However, libcurl-devl is listed when I run zypper search curl. doom:~/phpworksite/php-5.5.15 # zypper search curl Loading repository data... Warning: Repository 'Updates for openSUSE 11.3 11.3-1.82' appears to outdated. Consider using a different mirror or server. Warning: Repository 'openSUSE_11.3_Updates' appears to outdated. Consider using a different mirror or server. Reading installed packages... S | Name | Summary | Type --+-----------------------------+----------------------------------------------------------+-------- i | curl | A Tool for Transferring Data from URLs | package | curlftpfs | Filesystem for mounting FTP hosts using FUSE and libcurl | package | libcurl-devel | A Tool for Transferring Data from URLs | package i | libcurl4 | cURL shared library version 4 | package i | perl-WWW-Curl | Perl extension interface for libcurl | package i | php5-curl | PHP5 Extension Module | package | python-curl | Python module interface to the cURL library | package | python-curl-doc | Documentation for python-curl | package | xmms2-plugin-curl | Curl Support for xmms2 | package | xmms2-plugin-curl-debuginfo | Debug information for package xmms2-plugin-curl | package doom:~/phpworksite/php-5.5.15 # Here are the current repositories. doom:~/phpworksite/php-5.5.15 # zypper repos # | Alias | Name | Enabled | Refresh ---+----------------------------------------------+----------------------------------------------+---------+-------- 1 | PHP_extensions_(openSUSE_11.3) | PHP_extensions_(openSUSE_11.3) | No | Yes 2 | Packman_11.3 | Packman_11.3 | Yes | Yes 3 | Updates for openSUSE 11.3 11.3-1.82 | Updates for openSUSE 11.3 11.3-1.82 | Yes | Yes 4 | openSUSE_11.3_OSS | openSUSE_11.3_OSS | Yes | Yes 5 | openSUSE_11.3_Updates | openSUSE_11.3_Updates | Yes | Yes 6 | openSUSE_BuildService_-_devel:languages:perl | openSUSE_BuildService_-_devel:languages:perl | No | Yes 7 | repo-debug | openSUSE-11.3-Debug | No | Yes 8 | repo-non-oss | openSUSE-11.3-Non-Oss | Yes | Yes 9 | repo-oss | openSUSE-11.3-Oss | Yes | Yes 10 | repo-source | openSUSE-11.3-Source | No | Yes BTW, I did try building PHP without cURL, however it broke a lot of things, so apparently I really need cURL. 
My question: how can I install libcurl-devl (or just install cURL) so that I can build PHP?
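
    Worth noting from the zypper search output above: the package is listed as libcurl-devel (with the second "e"), while the install command used libcurl-devl. Assuming the old repos still serve the package, something like this may be all that's needed before re-running configure:

        zypper install libcurl-devel
        # then, if the headers still aren't found, point configure at the prefix, e.g.:
        ./configure --with-openssl --with-zlib --with-bcmath --with-curl=/usr \
                    --with-readline --with-libxml --enable-sockets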


  • Setting up APACHE-JAMES Server

    - by venomrld
    I am trying to set up the Apache James server on Ubuntu and I am getting the annoying error: "JAVA_HOME not defined correctly. We cannot execute". I have already referred to the documentation and set up the PATH and JAVA_HOME variables in the /etc/profile file. When I echo them, I get values in the output: echo $JAVA_HOME -> /usr/local/jdk1.6.0_27; echo $PATH -> $PATH:/usr/local/jdk1.6.0_27/bin. What am I missing? Please help!
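
    One clue in the output above: echo $PATH shows a literal $PATH: prefix, which suggests the entry in /etc/profile was stored without being expanded (single quotes or a missing export). Also, /etc/profile is only read by login shells, so the variables may not be visible to the shell that actually runs the James start script. A hedged sketch of what the profile lines would normally look like (paths as given in the question):

        # /etc/profile - make sure the values are exported and $PATH really expands
        export JAVA_HOME=/usr/local/jdk1.6.0_27
        export PATH=$PATH:$JAVA_HOME/bin

        # test from a fresh, non-login shell too, since /etc/profile only applies to login shells
        sh -c 'echo $JAVA_HOME; which java'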


  • using own mail server with external domain and dns. Now have internal dns. dkim test not working

    - by mojotaker
    I am not very knowledgeable in this area, but I have been able to make great headway. Now I am stuck. I set up my own mail server, e.g. mailbox.example.com, and had the domain's DNS point to the mail server in my office. I was able to get everything working fine, such as the DKIM and SPF records. Recently I decided to set up an internal DNS server in the office so as to resolve some addresses for some development servers internally. The problem now is that my mail server is sitting on the internal DNS server (the mail server is on the same box as the DNS server). It is still able to send and receive email, but I am not sure DKIM is working properly: when I try to do a DKIM test ("amavisd test keys") I get "invalid (public key: not available)", and I know that means I have a DNS issue. So what should I do? I am currently looking at my internal DNS zone file and I don't know what to do (I am using the BIND DNS server on an Ubuntu Server box). Do I configure a DKIM TXT record on the local DNS? Or is there a way to forward DKIM "requests" to the external DNS? Or do I have this whole thing done wrong? To be clear: basically my internal domain name is the same as my external domain name (i.e. example.com); I have a mail server within my internal domain, mailbox.example.com, that uses my external domain DNS (the external DNS has been set up to point to my mail server, which of course is now sitting behind my internal DNS); DKIM I don't think is working, because it fails the DKIM test. I need help determining the proper setup. What is the proper way to set this up? Thank you. Update: Here is my local DNS zone file: ; ; BIND data file for local loopback interface ; $TTL 604800 @ IN SOA webserver.example.com. root.example.com. ( //dns and webserver on the same box 2012030809 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS webserver.example.com. @ IN A 192.168.1.117 @ IN AAAA ::1 ns IN A 192.168.1.117 www IN A xx.xx.xx.xxx // ip of external domain box (bluehost) work around to let local clients access website newsletter IN A xx.xx.xxx.117 // external ip address of local network mailbox.example.com. IN A 192.168.1.111 // internal ip of mailbox (mailserver webserver.example.com. IN A 192.168.1.117 //internal ip of a webserver
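
    Since the mail box now resolves example.com against the internal BIND zone, the key lookup from the DKIM test never reaches the public DNS where the selector record lives. One way through this (a sketch; "mail" is a placeholder selector, and the TXT data must be copied verbatim from the external zone) is to add the same _domainkey TXT record to the internal zone file shown above:

        ; placeholder selector - use the selector amavisd-new was actually configured with
        mail._domainkey IN TXT "v=DKIM1; p=MIGfMA0GCSqGSIb3DQEB...copy-the-public-key-here..."

    After bumping the serial and reloading BIND (rndc reload), the DKIM test should find the key again. An alternative that avoids duplicating records is a forward zone in named.conf for _domainkey.example.com pointing at the external nameservers, so only those lookups leave the office.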


  • vSphere Client vCenter Template Customization Specification Using Windows Sysprep Unattended Answer XML File

    - by Brian
    I'm trying to setup a vSphere Client vCenter v5.0.0 Build 455964 Template Customization Specification using a Windows Sysprep unattended answer XML file for Win2008R2. However I didn't know how Sysprep worked before attempting this so it was a time-consuming nightmare (even after reviewing VMware vSphere ESXi 5's documentation)! I think I've figure out what I'm supposed to be doing, but it's still not working. The biggest problem at this point is that vSphere Client vCenter Customization Specification IP address information is not sticking when I load a Sysprep XML file with just 1 basic setting! This can only be a bug. Here is the process I'm using: PROCESS for Windows - vSphere Client Install Windows OS install VM Tools customize Windows (GPOs can be used to do this after deployment) install Applications (GPOs can be used to do this after deployment too) shutdown the VM convert the VM to a template create a custom Windows Sysprep XML answer file with desired customizations View Management Customization Specifications Manager create "New" Specification for "Target Virtual Machine OS" select Windows check "Use Custom Sysprep Answer File" (ADDS: Custom Sysprep File. KEEPS: Network (IP), Operating System Options (SID, Sysprep /generalize). REPLACES: Registration Information of Owner Name & Organization, Computer Name, Windows License (Key), Administrator Password, Time Zone, Run Once, Workgroup or Domain) name it as "VMwareCS-OS####R#x32/64w/Sysprep-TEST" (CS=Customization Specification) set Description as "Created YYYY/MM/DD by FLast" NEXT import a Sysprep answer file from secure location NEXT Custom settings NEXT click "..." box to right of "Use DHCP" set "Use the following IP settings:" for "IP Address" fill out the first 2 octets set appropriate values for other 2-3 fields set DNS server addresses OK NEXT check "Generate New Security ID (SID)" ALWAYS as template is likely a domain-member computer so it can be updated occasionally NEXT Finish View Inventory VMs and Templates right-click previously completed template Deploy Virtual Machine from this Template provide the new OS name (max15char) select inventory location NEXT select Host/Cluster (wait for validation to succeed) NEXT select Resource Pool (wait for validation to succeed) NEXT select Storage location NEXT check "Power on this virtual machine after creation" select "Customize using an existing customization specification" select desired specification select "Use the Customization Wizard to temporarily adjust the specification before deployment" NEXT NEXT Custom settings? NEXT check "Generate New Security ID (SID)" ALWAYS as template is likely a domain-member computer so it can be updated occasionally NEXT Finish Finish. I know a community member named "brian" (http://serverfault.com/users/25904/brian) has worked with this scenario before, but I couldn't figure out how to contact him directly, so Brian if you see this message could you provide some information to help? Thanks, Brian


  • Error when installing AppFabric 1.1 on Server 2012 64bit

    - by no9
    I am trying to install AppFabric 1.1 on 64bit Windows Server 2012 R2. All updates have been installed and updates are turned ON .NET Framework 4.0 is installed .NET Framework 3.5 is installed IIS is installed Windows Powershell 3.0 should already be included in Server 2012 I am getting the following error: 2014-03-21 11:02:34, Information Setup ===== Logging started: 2014-03-21 11:02:34+01:00 ===== 2014-03-21 11:02:34, Information Setup File: c:\6c4006b0b3f6dee1bf616f1967\setup.exe 2014-03-21 11:02:34, Information Setup InternalName: Setup.exe 2014-03-21 11:02:34, Information Setup OriginalFilename: Setup.exe 2014-03-21 11:02:34, Information Setup FileVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup FileDescription: Setup.exe 2014-03-21 11:02:34, Information Setup Product: Microsoft(R) Windows(R) Server AppFabric 2014-03-21 11:02:34, Information Setup ProductVersion: 1.1.2106.32 2014-03-21 11:02:34, Information Setup Debug: False 2014-03-21 11:02:34, Information Setup Patched: False 2014-03-21 11:02:34, Information Setup PreRelease: False 2014-03-21 11:02:34, Information Setup PrivateBuild: False 2014-03-21 11:02:34, Information Setup SpecialBuild: False 2014-03-21 11:02:34, Information Setup Language: Language Neutral 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup OS Name: Windows Server 2012 R2 Standard 2014-03-21 11:02:34, Information Setup OS Edition: ServerStandard 2014-03-21 11:02:34, Information Setup OSVersion: Microsoft Windows NT 6.2.9200.0 2014-03-21 11:02:34, Information Setup CurrentCulture: sl-SI 2014-03-21 11:02:34, Information Setup Processor Architecture: AMD64 2014-03-21 11:02:34, Information Setup Event Registration Source : AppFabric_Setup 2014-03-21 11:02:34, Information Setup 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1.0 Upgrade module. 2014-03-21 11:02:34, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : Initiating V1 Upgrade pre-install. 2014-03-21 11:02:54, Information Setup Microsoft.ApplicationServer.Setup.Upgrade.V1UpgradeSetupModule : V1.0 is not installed, not taking backup. 2014-03-21 11:02:55, Information Setup Executing C:\Windows\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe with commandline -iru. 2014-03-21 11:02:55, Information Setup Return code from aspnet_regiis.exe is 0 2014-03-21 11:02:55, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Microsoft CCR and DSS Runtime 2008 R3.msi" /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-55).log" 2014-03-21 11:02:57, Information Setup Process.ExitCode: 0x00000000 2014-03-21 11:02:57, Information Setup Windows features successfully enabled. 
2014-03-21 11:02:57, Information Setup Process.Start: C:\Windows\system32\msiexec.exe /quiet /norestart /i "c:\6c4006b0b3f6dee1bf616f1967\Packages\AppFabric-1.1-for-Windows-Server-64.msi" ADDDEFAULT=Worker,WorkerAdmin,CacheService,CacheAdmin,Setup /l*vx "C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1(2014-03-21 11-02-57).log" LOGFILE="C:\Users\Administrator\AppData\Local\Temp\AppServerSetup1_1_CustomActions(2014-03-21 11-02-57).log" INSTALLDIR="C:\Program Files\AppFabric 1.1 for Windows Server" LANGID=en-US 2014-03-21 11:03:45, Information Setup Process.ExitCode: 0x00000643 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Error Setup AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Error Setup 2014-03-21 11:03:45, Information Setup Microsoft.ApplicationServer.Setup.Core.SetupException: AppFabric installation failed because installer MSI returned with error code : 1603 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.GenerateAndThrowSetupException(Int32 exitCode, LogEventSource logEventSource) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.WindowsInstallerProxy.Invoke(LogEventSource logEventSource, InstallMode installMode, String packageIdentity, List`1 updateList, List`1 customArguments) 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.InstallSelectedFeatures() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Installer.MsiInstaller.Install() 2014-03-21 11:03:45, Information Setup at Microsoft.ApplicationServer.Setup.Client.ProgressPage.StartAction() 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup === Summary of Actions === 2014-03-21 11:03:45, Information Setup Required Windows components : Completed Successfully 2014-03-21 11:03:45, Information Setup IIS Management Console : Completed Successfully 2014-03-21 11:03:45, Information Setup Microsoft CCR and DSS Runtime 2008 R3 : Completed Successfully 2014-03-21 11:03:45, Information Setup AppFabric 1.1 for Windows Server : Failed 2014-03-21 11:03:45, Information Setup Hosting Services : Failed 2014-03-21 11:03:45, Information Setup Caching Services : Failed 2014-03-21 11:03:45, Information Setup Hosting Administration : Failed 2014-03-21 11:03:45, Information Setup Cache Administration : Failed 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup Microsoft Update : Skipped 2014-03-21 11:03:45, Information Setup 2014-03-21 11:03:45, Information Setup ===== Logging stopped: 2014-03-21 11:03:45+01:00 ===== I have tried this solution but no success: http://stackoverflow.com/questions/11205927/appfabric-installation-failed-because-installer-msi-returned-with-error-code-1 My system enviroment variable PSModulesPath has this value: C:\Windows\System32\WindowsPowerShell\v1.0\Modules I have also followed this link with no success: http://jefferytay.wordpress.com/2013/12/11/installing-appfabric-on-windows-server-2012/ Any help would be greatly appreciated !


  • BackupExec errors when starting services

    - by blade
    Hi, I've installed backupexec 2010 trial on my server, with an appropriately-privileged AD account, but get errors when starting the required services from the login page: Processing services Start services on server: WIN-HQ7JSCRTTSQ Starting Enterprise Vault Admin Service on WIN-HQ7JSCRTTSQ. The service Enterprise Vault Admin Service is already running on WIN-HQ7JSCRTTSQ. Starting Backup Exec Remote Agent for Windows Systems on WIN-HQ7JSCRTTSQ. The service Backup Exec Remote Agent for Windows Systems is already running on WIN-HQ7JSCRTTSQ. Starting Backup Exec Device & Media Service on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Device & Media Service on WIN-HQ7JSCRTTSQ. Service-specific error code returned: 0x2000e2d3 (536928979) Starting Backup Exec Server on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Server on WIN-HQ7JSCRTTSQ. The dependency service or group failed to start. Starting Backup Exec Job Engine on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Job Engine on WIN-HQ7JSCRTTSQ. The dependency service or group failed to start. Starting Backup Exec Agent Browser on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec Agent Browser on WIN-HQ7JSCRTTSQ. The dependency service or group failed to start. Starting Backup Exec DLO Administration Service on WIN-HQ7JSCRTTSQ. Error starting the service Backup Exec DLO Administration Service on WIN-HQ7JSCRTTSQ. Error code returned: Starting Backup Exec DLO Maintenance Service on WIN-HQ7JSCRTTSQ. The service Backup Exec DLO Maintenance Service is already running on WIN-HQ7JSCRTTSQ. Starting Backup Exec Web Service on WIN-HQ7JSCRTTSQ. The service Backup Exec Web Service is already running on WIN-HQ7JSCRTTSQ. Start services on server WIN-HQ7JSCRTTSQ completed. Processing services completed! How can I resolve this?


  • Nginx, memcached and cakephp: memcached module always misses cache

    - by Tim
    I've got a simple nginx configuration: server{ servername localhost; root /var/www/webroot; location / { set $memcached_key $uri; index index.php index.html; try_files $uri $uri/ @cache; } location @cache { memcached_pass localhost:11211; default_type text/html; error_page 404 @fallback; } location @fallback{ try_files $uri $uri/ /index.php?url=$uri&$args; } location ~ \.php$ { fastcgi_param MEM_KEY $memcached_key; include /etc/nginx/fastcgi.conf; fastcgi_index index.php; fastcgi_intercept_errors on; fastcgi_pass unix:/var/run/php5-fpm.sock; } } I've got a CakePHP helper that saves the view into memcached using the MEM_KEY parameter. I have tested it and it's working; however, nginx always falls through to the @fallback location. How can I go about troubleshooting this behavior? What could the problem be?
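
    A first troubleshooting step (a sketch, assuming memcached is on the default port as in the config above) is to compare the key the PHP helper actually stored with the key nginx will look up, which here is the bare $uri, i.e. the path without the query string:

        # does the page exist under the raw path nginx will ask for?
        echo -e "get /some/page\r\nquit\r" | nc localhost 11211
        # is anything being written to memcached at all?
        echo -e "stats items\r\nquit\r" | nc localhost 11211

    A common mismatch is the helper saving under the full request URI (including ?url=...) or with a prefix, while $memcached_key is just $uri; having the helper save under exactly the MEM_KEY value passed via fastcgi_param above keeps the two in sync.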


  • Windows Network Load Balancing on ESX Cluster with Dell PowerConnect stacks

    - by dunxd
    We recently switched out our Cisco 6500 core switch for a pair of Dell PowerConnect 6248 stacks. Since then, our Network Load Balanced Sharepoint, which runs on two virtual machines on an ESX cluster has been behaving very poorly. The symptoms are that opening and saving documents stored in sharepoint takes a very very long time. There are no errors showing up on the Sharepoint servers or SQL server, just a lot of annoyed users. Initially I thought there was no way NLB could cause this, but as soon as we repointed the DNS records for our intranet to the ip address of one of the web front ends, the problems disappeared. We suspect there is an issue related to multicast in the Dell configs - NLB is configured for multicast, but not IGMP. Has anyone got a similar set up to us and fixed this sort of issue? Sharepoint on VMware ESX, with Dell PowerConnect switches.


  • Have set Expiration time: Still getting "Query string present but no explicit expiration time"

    - by oligofren
    I have one local Apache instance running with mod_cache (+ disk & mem) enabled, and it seems to cache content from my appserver fine. My app server sets Expiration headers and Last-modified. Yet, when deploying on a production server with the same modules enabled, I am getting the following error in my logs: blablabla not cached. Reason: Query string present but no explicit expiration time Any clues on why Apache is not caching content? The only difference is the Apache version. Locally I am running 2.2. This is from my config CacheRoot "/var/cache/apache2/" CacheEnable disk / This is example output < HTTP/1.1 200 OK < Date: Mon, 19 Nov 2012 16:09:13 GMT < Server: Sun GlassFish Enterprise Server v2.1.1 < X-Powered-By: Servlet/2.5 < Expires: Tue Nov 20 05:00:00 CET 2012 < Last-Modified: Mon Nov 19 17:09:13 CET 2012 < Cache-Control: no-transform < Content-Type: application/x-javascript < Transfer-Encoding: chunked
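
    One detail visible in the example response: Expires and Last-Modified are in asctime-with-timezone form (Tue Nov 20 05:00:00 CET 2012) rather than the RFC 1123 date format HTTP expects, so mod_cache may simply be unable to parse them as an explicit expiration - which is exactly what the log message complains about for URLs with a query string. An illustration of the header format that should be accepted (same instants, rewritten):

        Expires: Tue, 20 Nov 2012 04:00:00 GMT
        Last-Modified: Mon, 19 Nov 2012 16:09:13 GMT

    If the app server's date formatting can't be changed, emitting a Cache-Control: max-age=... header is another way to give mod_cache an explicit lifetime, and CacheIgnoreQueryString On is worth a look where that directive exists in the Apache version on the production box.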


  • Adding a user to Samba

    - by JustMaximumPower
    I'm trying to setup some samba shares in my home network on an Ubuntu 12.04 machine. Everything works fine for my user account (max) but I can not add any new user. Every time I try to add new user they can not use the shares. It's likely that the error is very basic to the concept of samba but please don't just tell me to read the docs. I've been trying that for about 2 weeks now. I've set up the server with my user max who can mount transfer and the share max. Than I added the user simon with sudo adduser --no-create-home --disabled-login --shell /bin/false simon because the user should not be able to ssh into the machine. I did an sudo smbpasswd -a simon and set an (samba) password for simon and added an share for simon. I also added simon to transferusers to give him access to the share transfer. But simon can't connect to transfer or simons. ---- output of testparam: ------- Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Processing section "[printers]" Processing section "[print$]" Processing section "[max]" Processing section "[simons]" Processing section "[transfer]" Loaded services file OK. Server role: ROLE_STANDALONE Press enter to see a dump of your service definitions [global] server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d idmap config * : backend = tdb [printers] comment = All Printers path = /var/spool/samba create mask = 0700 printable = Yes print ok = Yes browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/printers [max] comment = Privater share von Max path = /media/Main/max read only = No create mask = 0700 [simons] comment = Privater share von Simon path = /media/Main/simon read only = No create mask = 0700 [transfer] comment = Transferlaufwerk path = /media/Main/transfer read only = No create mask = 0755 ---- The files in /media/Main: ------ drwxrwxr-x 17 max max 4096 Oct 4 19:13 max/ drwx------ 5 simon max 4096 Aug 4 15:18 simon/ drwxrwxr-x 7 max transferusers 258048 Oct 1 22:55 transfer/
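
    Two things worth verifying that follow from the listing above (a sketch; the exact group membership is the assumption being tested): whether simon is really in the transferusers group that owns /media/Main/transfer, and whether his Samba account is enabled at all:

        id simon                                 # transferusers should appear in the group list
        sudo usermod -aG transferusers simon     # add him if it doesn't, then reconnect the share
        sudo pdbedit -L | grep simon             # confirm the samba account actually exists
        sudo smbpasswd -e simon                  # enable it in case it was created disabled

    For the [simons] share itself, the filesystem side already restricts access (drwx------ simon), so if the connection is refused outright it is more likely the Samba account state or group membership than the share definition.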


  • Finding all domains registered in a nameserver

    - by Florian
    Up until now, I was pretty confident that it was pretty much impossible to list all the domains handled by a nameserver. But apparently there exist a couple of websites on the Internet that are able to list all the domains registered on a nameserver. For example: http://www.gwebtools.com/ns-spy/udns1.ultradns.net Or all domains pointing to a specific IP: http://www.robtex.com/ip/190.7.200.92.html (These DNS/IP were picked at random.) Do you know how it's done?


  • Debian, 6rd tunnel, and connection troubles

    - by Chris B
    Long story short, I am having issues with IPv6 using a 6rd tunnel with my ISP, Charter Business. They offer a 6rd tunnel that I think I have set up properly, but the server doesn't reply to every IPv6 request. When the server's network interfaces have been idle with no traffic for about 10 minutes, IPv6 stops accepting inbound connections. To re-allow it, I must go onto the server and make it open an outbound IPv6 connection (normally a ping) to start it back up. What's weird though is that if I run iptraf when it's not working, it still shows an inbound IPv6 packet... the server is just not replying, and I can't figure out why. Also, if I try to access my server over IPv6 from a house about a mile away on the same ISP, it is never able to connect; it always times out, but again iptraf shows an inbound IPv6 packet. Again, it just does not reply. To test whether my server is accessible over IPv6 I always have to use my VZW 4G phone (they use IPv6) or ipv6proxy dot net. Here is all of the configuration information my ISP gives for their tunnel server: 6rd Prefix = 2602:100::/32, Border Relay Address = 68.114.165.1, 6rd prefix length = 32, IPv4 mask length = 0. Here is my /etc/network/interfaces for IPv6 (x's used to mask real addresses): auto charterv6 iface charterv6 inet6 v4tunnel address 2602:100:189f:xxxx::1 netmask 32 ttl 64 gateway ::68.114.165.1 endpoint 68.114.165.1 local 24.159.218.xxx up ip link set mtu 1280 dev charterv6 Here is my iptables config: *filter :INPUT DROP [0:0] :fail2ban-ssh - [0:0] :OUTPUT ACCEPT [0:0] :FORWARD DROP [0:0] :hold - [0:0] -A INPUT -p tcp -m tcp --dport 22 -j fail2ban-ssh -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m multiport -j ACCEPT --dports 80,443,25,465,110,995,143,993,587,465,22 -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT -A fail2ban-ssh -j RETURN -A INPUT -p icmp -j ACCEPT COMMIT And last, here is my ip6tables firewall config: *filter :INPUT DROP [1653:339023] :FORWARD DROP [0:0] :OUTPUT ACCEPT [60141:13757903] :hold - [0:0] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m multiport --dports 80,443,25,465,110,995,143,993,587,465,22 -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT -A INPUT -p tcp -m tcp --dport 5900:5910 -j ACCEPT -A INPUT -p ipv6-icmp -j ACCEPT COMMIT So, summary: 1. iptraf always shows IPv6 traffic, so it is always making it to the server. 2. The server stops replying on IPv6 after no traffic for a while (about 10 minutes) until an outbound connection is made, then the process repeats. 3. The server is NEVER accessible via the same ISP (yet iptraf still shows the IPv6 request). Notes: When I try to access it from the same ISP from across town, even with iptables and ip6tables allowing ALL inbound traffic, this is what iptraf shows: IPv6 (92 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0 ICMP dest unrch (port) (120 bytes) from 24.159.218.xxx to 97.92.18.xxx on eth1 It's strange, like it's trying to forward to the LAN? (eth1 is LAN, eth0 is WAN) even with the IPv6 address being set in the hosts file to the server's domain name. With iptables set up normally with the above configurations it only says this: IPv6 (100 bytes) from 97.92.18.xxx to 24.159.218.xxx on eth0 I'm REALLY stuck on this, and any help would be GREATLY appreciated.
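
    Given that an outbound ping reliably restores inbound IPv6, one stop-gap (a workaround sketch rather than a root-cause fix; the target address is just a well-known public IPv6 host) is to keep the tunnel warm from cron so whatever state is expiring on the ISP or CPE side never goes idle:

        # /etc/cron.d/ipv6-keepalive - sketch; any reliably reachable IPv6 host will do
        */5 * * * * root ping6 -c 3 2001:4860:4860::8888 >/dev/null 2>&1

    This only masks the idle-timeout symptom; the cross-town client failing even while the tunnel is freshly active looks like a separate issue on the ISP side that a keepalive will not help with.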


  • OpenSSL Handshake Failure (14094410) - Erroneous Client Certificate Check from Mobile Phone

    - by Clayton Sims
    I'm running a proxy server through Apache with modssl, which we're using to proxy POSTs from mobile devices to another internal server. This works successfully for most clients, but requests from a specific phone model (Nokia 2690) are showing a bizarre handshake failure. It looks as though OpenSSL is either requesting (or attempting to read an unsolicited) client certificate from the phone (which is especially bizarre because j2me's kssl implementation doesn't support client certs). I've disabled client certificates with the SSLVerifyClient none directive in both the virtual host conf and the modssl conf. The trace from error.log on debug level is (details redacted): [client 41.220.207.10] Connection to child 0 established (server www.myserver.org:443) [info] Seeding PRNG with 656 bytes of entropy [debug] ssl_engine_kernel.c(1866): OpenSSL: Handshake: start [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: before/accept initialization [debug] ssl_engine_io.c(1882): OpenSSL: read 11/11 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows) [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+ [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+ [debug] ssl_engine_io.c(1882): OpenSSL: read 49/49 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90db] (BIO dump follows) [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+ [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+ [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 read client hello A [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server hello A [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write certificate A [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 write server done A [debug] ssl_engine_kernel.c(1874): OpenSSL: Loop: SSLv3 flush data [debug] ssl_engine_io.c(1882): OpenSSL: read 5/5 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d0] (BIO dump follows) [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+ [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+ [debug] ssl_engine_io.c(1882): OpenSSL: read 2/2 bytes from BIO#7fe3fbaf17a0 [mem: 7fe3fbaf90d5] (BIO dump follows) [debug] ssl_engine_io.c(1815): +-------------------------------------------------------------------------+ [debug] ssl_engine_io.c(1860): +-------------------------------------------------------------------------+ [debug] ssl_engine_kernel.c(1879): OpenSSL: Read: SSLv3 read client certificate A [debug] ssl_engine_kernel.c(1898): OpenSSL: Exit: failed in SSLv3 read client certificate A [client 41.220.207.10] SSL library error 1 in handshake (server www.myserver.org:443) [info] SSL Library Error: 336151568 error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure [client 41.220.207.10] Connection closed to child 0 with abortive shutdown (server www.myserver.org:443) I've tried enabling all ciphers and all protocols temporarily with modssl, neither of which seemed to be the issue. The phone should be using RSA_RC4_128_MD5 and SSLv3, all of which are available. Am I missing something more fundamental about what's failing here? It seemed like the certificate request might have been part of a renegotiation failure. 
I tried enabling SSLInsecureRenegotiation On on the virtual host, in case it was an issue of the phone's SSL not supporting the new protocol, but to no avail. Currently running: Apache/2.2.16 (Ubuntu) mod_ssl/2.2.16 OpenSSL/0.9.8o Apache proxy_html/3.0.1
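
    To narrow down whether the cipher/protocol combination the phone needs is actually on offer, reproducing the handshake from a desktop with the same constraints can help (a diagnostic sketch; RC4-MD5 over SSLv3 is what the question says the handset should use):

        # offer only SSLv3 with the single cipher the handset supports
        openssl s_client -connect www.myserver.org:443 -ssl3 -cipher RC4-MD5 -state -msg
        # for comparison, the default negotiation
        openssl s_client -connect www.myserver.org:443 -state

    If that succeeds from a desktop, the failure is more likely on the handset side (certificate chain size and handling is a frequent culprit with J2ME's KSSL) than in the Apache cipher configuration.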


  • Remotely sync Time Machine drives

    - by Off Rhoden
    I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data. I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I want a full copy of the history (it makes an awesome remote backup). I've thought of using the unix command rsync. In fact, that's how I had been doing it, but I was after the compactness that Time Machine is able to achieve. Thanks.
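
    If rsync stays in the picture, Time Machine's Backups.backupdb relies heavily on hard links and HFS+ metadata, so the flags matter; a sketch of the kind of invocation people use for this (paths and hostname are placeholders, and it assumes rsync 3.x on both ends, since the stock Apple rsync 2.6.9 lacks some of these options):

        sudo rsync --archive --hard-links --acls --xattrs --delete \
            admin@xserve.example.com:/Volumes/TimeMachine/Backups.backupdb/ \
            /Volumes/ExternalTM/Backups.backupdb/

    Without --hard-links every snapshot is copied in full and the destination balloons; even with it, whether Time Machine itself will later accept the copied backupdb as its own history isn't guaranteed, so treating the copy as an archive rather than a restartable Time Machine target is the safer expectation.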


  • Lotus Notes 8.5.3 "Unable To Invoke Program"

    - by Ross Doughty
    I've got a little issue with some of my users today: when trying to open an email attachment in Lotus Notes, they get the error "Unable To Invoke Program". They can always just save the attachment and then open it through Windows Explorer, which is fine, but I would rather find a proper solution for this. Knowing almost nothing about Lotus Notes does not help, I know, but I think it may have something to do with the default file association; however, my colleague tells me that Notes uses the Windows file associations, which are working. Any ideas, anyone?


  • Windows Server 2008 R2 LDAPS

    - by Chad Moran
    I have a Server 2008 R2 server with AD DS installed. I'm trying to configure HP's iLO utility to connect to it over SSL. I installed Active Directory Certificate Services, but after doing so I'm still not able to connect to LDAP over SSL. I checked the event log and it's showing warnings with Event ID 36886 saying that there aren't default credentials yet. I'm not too sure why this is happening. I haven't done anything with AD CS other than installing the service; do I need to create a certificate for the server?
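
    Installing AD CS by itself doesn't necessarily put a usable certificate on the domain controller; for LDAPS the DC generally needs a certificate in the machine's Personal store issued to its FQDN with the Server Authentication EKU. A couple of hedged checks:

        rem list the local machine Personal store and look for a cert issued to the
        rem DC's FQDN with Enhanced Key Usage "Server Authentication"
        certutil -store My

        rem once a suitable cert is present, test the SSL port with the LDP tool:
        rem ldp.exe -> Connection -> Connect -> port 636, tick "SSL"

    With an Enterprise CA, a suitable Domain Controller certificate is usually picked up through autoenrollment (gpupdate /force followed by certutil -pulse is a common nudge), after which Schannel should stop logging 36886.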


  • mono: application not running when web.config in subfolder

    - by Quandary
    I run ASP.NET on Mono. I wanted to run BlogEngine: http://www.dotnetblogengine.net/ I downloaded it, put it in /var/www/blogengine, went to /var/www/blogengine on the console and started xsp2. I went to http://localhost:8080 and it ran without problems. Then I stopped xsp2, went to /var/www and started xsp2 again. When I went to http://localhost:8080/blogengine it ended with a strange error: The section <blogProvider> can't be defined in this configuration file (the allowed definition context is 'MachineToApplication'). (/var/www/blogengine/Web.Config line 9) The problem seems to be that it stops working as soon as the xsp2 root folder is anything other than the application root folder... Do I have to configure anything? Or what else is wrong?
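
    By default xsp2 treats the directory it is started from as a single application root, so /var/www/blogengine served from /var/www is just a sub-directory of one big application, and the application-level <blogProvider> section in its Web.Config gets rejected. A sketch of mapping it explicitly as its own application (paths as in the question):

        cd /var/www
        xsp2 --port 8080 --applications /:/var/www,/blogengine:/var/www/blogengine

    Under mod_mono the equivalent idea is declaring the app with its own MonoApplications/AddMonoApplications mapping, for the same reason.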


  • ulimit not reflected for jenkins slave

    - by techastute
    Problem: I got java.io.IOException: Too many open files during Solr indexing through Jenkins. Some googling said we have to raise the ulimit on the box where the job runs. So I set the ulimit on a Linux box (Linux x86_64 GNU/Linux) in both of the following ways: ulimit -n 1000000, and in /etc/security/limits.conf: userx soft nofile 1000000 userx hard nofile 1000000 where userx is the user the Jenkins job runs as. When I ssh to the box as userx manually through a terminal and check ulimit -n, I get 10000000. Question: But when the same ulimit -n is executed through a Jenkins job, I only get 1024, which is the default. Any advice would be much appreciated.
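
    limits.conf is applied by PAM on login, but a Jenkins slave launched as a long-running process (over SSH without a PAM session, or from an init script) inherits whatever limit its parent had, so the 1024 default survives. Two hedged workarounds, sketched:

        # 1) raise the limit inside the job itself, at the top of the "Execute shell" step
        ulimit -n 100000 || true
        # ...then run the solr indexing command

        # 2) or raise it where the slave JVM is started, e.g. in the slave's startup script
        ulimit -n 100000
        java -jar slave.jar ...

    If the slave is connected over SSH, enabling UsePAM yes in sshd_config (so pam_limits runs for that session), restarting sshd and relaunching the slave is another route.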


  • Virtualhosts, maintain existing applications

    - by savageguy
    I host several applications by IP in subfolders (http://ip/app). I would now also like to host a domain. I've been able to set up the virtual hosts so that the domain loads properly from its document root; however, the rest of my applications stop working and point to the same document root as the domain's virtual host. How do I maintain my existing setup so all other requests behave the same?
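
    When the first name-based VirtualHost is added it becomes the default for every request that doesn't match a ServerName, which is why the IP-based /app URLs start landing in the new domain's document root. One way to keep both behaviours (a sketch; the paths and names are assumptions):

        NameVirtualHost *:80

        # first vhost = catch-all default: requests by IP keep the old document root
        <VirtualHost *:80>
            ServerName catchall.localdomain   # placeholder name
            DocumentRoot /var/www
        </VirtualHost>

        # the new domain gets its own root
        <VirtualHost *:80>
            ServerName www.example.org
            DocumentRoot /var/www/example
        </VirtualHost>

    The order matters: the first matching vhost for the address/port acts as the default, so putting the old document root first preserves the existing IP-based URLs.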

