Search Results

Search found 14590 results on 584 pages for 'entity manager'.

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just set up Active Directory in my company and imported all Linux users and groups. On the Linux client (configured to query LDAP in nsswitch.conf): if I do a plain ldapsearch against the AD LDAP server, I get the complete number of about 2580 users. But if I run the following, it only gets a part of all users, 1221 in number:

        getent passwd | wc -l

    Running it with strace shows what looks like an attempt to reconnect. My ideas were: does the Linux authentication procedure run ldapsearch with a parameter incompatible with AD LDAP? Or perhaps it is an encoding issue; the Windows users are entered in AD with all kinds of characters. Maybe someone could shed light on this and give a hint how to debug it further!?

    Here's our ldap.conf:

        host audc01.mycompany.de audc03.mycompany.de
        base ou=location,dc=mycompany,dc=de
        ldap_version 3
        binddn cn=manager,ou=location,dc=mycompany,dc=de
        bindpw Password
        timelimit 120
        idle_timelimit 3600
        nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub
        nss_base_group ou=location,dc=mycompany,dc=de?sub

        # RFC 2307 (AD) mappings
        nss_map_objectclass posixAccount User
        # nss_map_objectclass shadowAccount User
        nss_map_objectclass posixGroup Group
        nss_map_attribute uid sAMAccountName
        nss_map_attribute cn sAMAccountName
        # Display Name
        nss_map_attribute gecos cn
        ## nss_map_attribute homeDirectory unixHomeDirectory
        nss_map_attribute loginShell msSFU30LoginShell

        # PAM attributes
        pam_login_attribute sAMAccountName
        # Location based login
        pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de
        pam_member_attribute msSFU30PosixMember
        ## pam_lookup_policy yes
        pam_filter objectclass=User
        nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data

    And here is the trace from strace getent passwd:

        poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}])
        read(4, "0\204\0\0\0A\2\1", 8) = 8
        read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63
        stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0
        geteuid32() = 12560
        getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0
        getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0
        time(NULL) = 1297684722
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        munmap(0xb7617000, 1721) = 0
        close(3) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0
        rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0
        write(4, "0\5\2\1\5B\0", 7) = 7
        shutdown(4, 2 /* send and receive */) = 0
        close(4) = 0
        shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor)
        close(-1) = -1 EBADF (Bad file descriptor)
        exit_group(0) = ?
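
    A hedged line of investigation, since a plain ldapsearch sees everything but getent stops short: Active Directory limits how many entries one search may return (MaxPageSize, 1000 by default), and an NSS client that never requests paged results can silently stop partway. Comparing an unpaged and a paged search isolates that variable (the -E pr= control is standard OpenLDAP ldapsearch; the filter is illustrative):

        # unpaged: subject to the server-side size limit
        ldapsearch -x -H ldap://audc01.mycompany.de -D "cn=manager,ou=location,dc=mycompany,dc=de" -W \
            -b "cn=users,cn=import,ou=location,dc=mycompany,dc=de" "(objectClass=User)" dn | grep -c '^dn:'

        # paged: fetches 1000 entries per page until done
        ldapsearch -x -H ldap://audc01.mycompany.de -D "cn=manager,ou=location,dc=mycompany,dc=de" -W \
            -b "cn=users,cn=import,ou=location,dc=mycompany,dc=de" -E 'pr=1000/noprompt' "(objectClass=User)" dn | grep -c '^dn:'

    If the paged search returns the full set, the nss_ldap side of this (an assumption about which options your build supports) would be adding "nss_paged_results yes" and "pagesize 1000" to ldap.conf.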

  • High CPU from httpd process

    - by KHWeb
    I am currently getting high CPU on a server that is running just a couple of sites with very low traffic. One of the sites is still in development, going live soon. However, this site is very, very slow. When browsing through its pages I can see that CPU for httpd goes from 30% to 100% (see top output below). I have tuned httpd, MySQL, Apache Solr, and Tomcat for high performance, and I am using APC. Not sure what to do from here or how to find the culprit, as I have a bunch of messages in the httpd log and have been chasing dead ends for some time; any help is greatly appreciated.

    Server: AuthenticAMD, Quad-Core AMD Opteron(tm) Processor 2352, RAM 16GB, Linux 2.6.27 64-bit, CentOS 5.5, Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17, Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8, Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr

    top:

        17595 apache 20 0 1825m 507m 10m  R 100.4  3.2   0:17.50 httpd
        17596 apache 20 0 1565m 247m 9936 R  83.1  1.5   0:10.86 httpd
        17598 apache 20 0 1430m 110m 6472 S  54.5  0.7   0:08.66 httpd
        17599 apache 20 0 1438m 124m 12m  S  37.2  0.8   0:11.20 httpd
        16197 mysql  20 0 13.0g 2.0g 5440 S   9.6 12.6 297:12.79 mysqld
        17617 root   20 0 12748 1172 812  R   0.7  0.0   0:00.88 top
         8169 tomcat 20 0 4613m 268m 6056 S   0.3  1.7   6:40.56 java

    httpd error_log:

        [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem)
        [info] mod_fcgid: Process manager 17593 started
        [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse
        [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*)
        [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse
        [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized
        [notice] child pid 22782 exit signal Segmentation fault (11)
        [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed
        [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com
        [info] APR LDAP: Built with OpenLDAP LDAP SDK
        [info] LDAP: SSL support available
        [info] Init: Seeding PRNG with 256 bytes of entropy
        [info] Init: Generating temporary RSA private keys (512/1024 bits)
        [info] Init: Generating temporary DH parameters (512/1024 bits)
        [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory
        [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory()
        [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes
        [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow
        [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F
        [debug] ssl_scache_shmcb.c(623): division_offset = 96
        [debug] ssl_scache_shmcb.c(625): division_size = 15997
        [debug] ssl_scache_shmcb.c(627): queue_size = 2136
        [debug] ssl_scache_shmcb.c(629): index_num = 133
        [debug] ssl_scache_shmcb.c(631): index_offset = 8
        [debug] ssl_scache_shmcb.c(633): index_size = 16
        [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8
        [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853
        [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()
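
    A hedged first step while a child is pinned at 100%: attach to the busy PID from the top output and see which calls it is looping on, and check the scoreboard if mod_status is available (the Location block is a sketch to add to the Apache config, not something already in it):

        # trace the spinning child for a few seconds, then inspect the log
        strace -tt -p 17595 -o /tmp/httpd-17595.trace

        # optional: expose the scoreboard (requires mod_status)
        # <Location /server-status>
        #     SetHandler server-status
        #     Order deny,allow
        #     Deny from all
        #     Allow from 127.0.0.1
        # </Location>
        # curl http://localhost/server-status?auto

    The segfault and the apr_global_mutex_lock(jk_log_lock) failure in the error_log may be worth chasing separately; a child that died holding a mutex can leave the others spinning.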

  • Why is Excel 2010/2013 taking 10 seconds to open any file?

    - by jbkly
    I have a fast Windows 7 PC with two SSDs and 16 GB of RAM, so I'm used to programs loading very fast. But recently, for no reason I can figure out, Excel has started taking way too long to open Excel files of any size, even blank files. This is occurring with Excel 2010, and with Excel 2013 after I upgraded hoping to solve the problem. Here are a couple of scenarios:

    - If I start Excel directly, it opens almost instantly. No problem there.
    - If I start Excel directly and then open any Excel file (.xls or .xlsx), it loads almost instantly. Still no problem.
    - BUT if I attempt to open any Excel file directly, with Excel not running, it consistently takes 10-11 seconds for Excel to start. I get no error messages, just a spinning cursor for 10-11 seconds, and then the file opens.

    During the delay while Excel is trying to start, I don't see any discernible spike in CPU or memory usage, other than explorer.exe. This problem occurs only with Excel, not Word or any other program I'm aware of.

    I've searched around quite a bit on this question and found various others who have experienced it, but the solutions that worked for them are not working for me. For a few people it was a problem with scanning network drives, but my problem is purely with local files; I have no network drives, and the problem persists even with all network connections disabled. Some people suggested worksheets with corrupted formulas or links, but I'm experiencing this with ANY Excel file, even blank worksheets. Others thought it was a problem with add-ins, but I have all Excel add-ins disabled (as far as I can tell). One person solved it by disabling a "clipboard manager" process running in the background, but I don't have that.

    I've disabled as many startup and background processes as I can, but the problem persists. I've run malware scans, disk cleanup, and CCleaner, and installed Excel 2013. I've deleted temporary files, enabled SuperFetch, and edited registry keys. I still can't get rid of the problem. Any ideas?

    My system details: Windows 7 Professional SP1 64-bit, Excel 2013 32-bit, 16 GB RAM, all programs installed on SSD.
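
    A hedged suggestion that fits the symptom (fast from within Excel, slow only via double-click): the delay usually sits in the shell-to-Excel handoff, i.e. the file association or its DDE step, rather than in Excel itself. Excel can rebuild its own shell registration; the install path below is the usual default for 32-bit Office 2013 on 64-bit Windows and is an assumption:

        REM run from an elevated command prompt; adjust the path to your install
        cd "C:\Program Files (x86)\Microsoft Office\Office15"
        excel.exe /unregserver
        excel.exe /regserver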

  • Which package should I choose to install virtualenv for Python?

    - by hugemeow
    pip search returns so many matches that I am confused about which package I should choose to install. Should I only install virtualenv, or had I better also install virtualenv-commands and the like? I really don't know exactly what virtualenv-commands is...

        mirror0@lab:~$ pip search virtualenv
        virtualenvwrapper - Enhancements to virtualenv
        virtualenv - Virtual Python Environment builder
        veh - virtualenv for hg
        pyutilib.virtualenv - PyUtilib utility for building custom virtualenv bootstrap scripts.
        envbuilder - A package for automatic generation of virtualenvs
        virtstrap-core - A bootstrapping mechanism for virtualenv+pip and shell scripts
        tox - virtualenv-based automation of test activities
        virtualenvwrapper-win - Port of Doug Hellmann's virtualenvwrapper to Windows batch scripts
        everyapp.bootstrap - Enhanced virtualenv bootstrap script creation.
        orb - pip/virtualenv shell script wrapper
        monupco-virtualenv-python - monupco.com registration agent for stand-alone Python virtualenv applications
        virtualenvwrapper-powershell - Enhancements to virtualenv (for Windows). A clone of Doug Hellmann's virtualenvwrapper
        RVirtualEnv - relocatable python virtual environment
        virtualenv-clone - script to clone virtualenvs.
        virtualenvcontext - switch virtualenvs with a python context manager
        lessrb - Wrapper for ruby less so that it's in a virtualenv.
        carton - make self-extracting virtualenvs
        virtualenv5 - Virtual Python 3 Environment builder
        clever-alexis - Clever redhead girl that builds and packs Python project with Virtualenv into rpm, deb, etc.
        kforgeinstall - Virtualenv bootstrap script for KForge
        pypyenv - Install PyPy in virtualenv
        virtualenv-distribute - Virtual Python Environment builder
        virtualenvwrapper.project - virtualenvwrapper plugin to manage a project work directory
        virtualenv-commands - Additional commands for virtualenv.
        rjm.recipe.venv - zc.buildout recipe to turn the entire buildout tree into a virtualenv
        virtualenvwrapper.bitbucket - virtualenvwrapper plugin to manage a project work directory based on a BitBucket repository
        tg_bootstrap - Bootstrap a TurboGears app in a VirtualEnv
        django-env - Automaticly manages virtualenv for django project
        virtual-node - Install node.js into your virtualenv
        django-environment - A plugin for virtualenvwrapper that makes setting up and creating new Django environments easier.
        vip - vip is a simple library that makes your python aware of existing virtualenv underneath.
        virtualenvwrapper.django - virtualenvwrapper plugin to create a Django project work directory
        terrarium - Package and ship relocatable python virtualenvs
        venv_dependencies - Easy to install any dependencies in a virtualenviroment(without making symlinks by hand and etc...)
        virtualenv-sh - Convenient shell interface to virtualenv
        virtualenvwrapper.github - Plugin for virtualenvwrapper to automatically create projects based on github repositories.
        virtualenvwrapper.configvar - Plugin for virtualenvwrapper to automatically export config vars found in your project level .env file.
        virtualenvwrapper-emacs-desktop - virtualenvwrapper plugin to control emacs desktop mode
        bootstrapper - Bootstrap Python projects with virtualenv and pip.
        virtualenv3 - Obsolete fork of virtualenv
        isotoma.depends.zope2_13_8 - Running zope in a virtualenv
        virtual-less - Install lessc into your virtualenv
        virtualenvwrapper.tmpenv - Temporary virtualenvs are automatically deleted when deactivated
        isotoma.plone.heroku - Tooling for running Plone on heroku in a virtualenv
        gae-virtualenv - Using virtualenv with zipimport on Google App Engine
        pinvenv - VirtualEnv plugins for pin
        isotoma.depends.plone4_1 - Running plone in a virtualenv
        virtualenv-tools - A set of tools for virtualenv
        virtualenvwrapper.npm - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any npm installed globaly when the venv is activated
        d51.django.virtualenv.test_runner - Simple package for running isolated Django tests from within virtualenv
        difio-virtualenv-python - Difio registration agent for stand-alone Python virtualenv applications
        VirtualEnvManager - A package to manage various virtual environments.
        virtualenvwrapper.gem - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any gems installed when the venv is activated
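
    For what it's worth, a hedged reading of that list: virtualenv itself is the core tool, and nearly everything else shown is a third-party add-on (virtualenvwrapper being the most common convenience layer); virtualenv-commands is one such add-on, not a requirement. A minimal start needs only the first package:

        pip install virtualenv
        virtualenv myenv              # "myenv" is just an example name
        source myenv/bin/activate     # on Windows: myenv\Scripts\activate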

  • Windows explorer locks files

    - by John Prince
    I'm using Office 2010 and Windows 7 Home Premium 64-bit. My problem starts when I attempt to save e-mail messages to my PC that I have received via Outlook (my ISP is Comcast). I'm using the default .msg file extension option when I save these e-mails. The resulting files are locked and do not show the normal "envelope" icon; instead, it's a "blank page" icon with the right upper corner folded in. These files refuse to open, either by double-clicking on them or by right-clicking and trying to open them with Outlook. And when I return to Outlook, I discover that Outlook is now hung and I have to close it via Task Manager.

    To make matters worse, I've also discovered that every e-mail message I've saved on my PC over the years has somehow become locked as well, and their original "envelope" icon has been replaced with the "blank page" icon. I found and installed an application called LockHunter; as a result, when I right-click on a saved and locked e-mail message, I'm given an option to find out what's locking it. Each time, I'm told that the culprit is Windows explorer.exe. When I unlock the file, the normal envelope icon is sometimes displayed (but not always), but at least the file can then be opened. The file is still "squirrely", though, as it can't be moved or saved to a folder until it's unlocked again; on this second attempt, LockHunter says it's now locked by Outlook.exe. By the way, I don't have this issue when I save Word, Excel, and PowerPoint files; only with Outlook.

    I've exhausted every remedy I can think of, including:

    - making sure that the file and folder options are set to always show icons and not thumbnails;
    - running the Windows 7 and Office 2010 repair options, which find nothing amiss;
    - running a complete system scan with the Windows Malicious Software Removal Tool, with negative results;
    - verifying that Outlook is the default for opening e-mails;
    - updating all of my applications via Secunia Personal Software Inspector;
    - uninstalling every application that I felt was unnecessary;
    - doing a registry cleanup via CCleaner;
    - having Microsoft Security Essentials always on (it did find one Java trojan recently, which was quarantined and then deleted);
    - uninstalling a bunch of non-Microsoft shell extensions;
    - and deactivating all of the Outlook add-ins and then re-activating each one.

    None of this solves the problem. I'd welcome any advice on how to resolve this.
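
    A hedged diagnostic, since LockHunter already points at explorer.exe: Explorer commonly holds .msg files open through the preview pane or a preview/property handler rather than through anything in the file itself, so confirming the exact owner of the handle can narrow things down. Sysinternals Handle shows the same information from a command prompt (the file path below is a placeholder):

        REM list every open handle to the file, with owning process and PID
        handle.exe -a "C:\Users\John\Mail\example-message.msg"

    If the handle belongs to Explorer's preview machinery, turning the preview pane off (Organize > Layout > Preview pane) before saving .msg files is a quick way to test the theory.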

  • Using Diskpart in a PowerShell script won't allow script to reuse drive letter

    - by Kyle
    I built a script that mounts (attaches) a VHD using Diskpart, cleans out some system files, and then unmounts (detaches) it. It uses a foreach loop and is supposed to clean multiple VHDs using the same drive letter. However, after the first VHD it fails. I also noticed that when I try to manually attach a VHD with diskpart, diskpart succeeds and Disk Manager shows the disk with the correct drive letter, but within the same PowerShell instance I cannot connect (Set-Location) to that drive. If I do a manual diskpart when I first open PowerShell, I can attach and detach all I want and I get the drive letter every time. Is there something I need to do to reset diskpart in the script? Here's a snippet of the script I'm using:

        function Mount-VHD {
            [CmdletBinding()]
            param (
                [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
                [string]$Path,
                [Parameter(Position=1,Mandatory=$false,ValueFromPipeline=$false)]
                [string]$DL,
                [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
                [switch]$Rescan
            )
            begin {
                function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }
                ## Validate Operating System Version ##
                if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."}
            }
            process {
                ## Diskpart Script Content ## Here-String statement purposefully not indented ##
        @"
        $(if ($Rescan) {'Rescan'}) Select VDisk File="$Path" `nAttach VDisk
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
                Start-Sleep -Seconds 3
        @"
        Select VDisk File="$Path"`nSelect partition 1 `nAssign Letter="$DL"
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
            }
            end {
                Remove-Item -Path $DiskpartScript -Force ; ""
                Write-Host "The VHD ""$Path"" has been successfully mounted." ; ""
            }
        }

        function Dismount-VHD {
            [CmdletBinding()]
            param (
                [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
                [string]$Path,
                [switch]$Remove,
                [switch]$NoConfirm,
                [string]$DiskpartScript = "$env:SystemDrive\DiskpartScript.txt",
                [switch]$Rescan
            )
            begin {
                function InvokeDiskpart { Diskpart.exe /s $DiskpartScript }
                function RemoveVHD {
                    switch ($NoConfirm) {
                        $false {
                            ## Prompt for confirmation to delete the VHD file ##
                            "" ; Write-Warning "Are you sure you want to delete the file ""$Path""?"
                            $Prompt = Read-Host "Type ""YES"" to continue or anything else to break"
                            if ($Prompt -ceq 'YES') {
                                Remove-Item -Path $Path -Force
                                "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                            } else {
                                "" ; Write-Host "Script terminated without deleting the VHD file." ; ""
                            }
                        }
                        $true {
                            ## Confirmation prompt suppressed ##
                            Remove-Item -Path $Path -Force
                            "" ; Write-Host "VHD ""$Path"" deleted!" ; ""
                        }
                    }
                }
                ## Validate Operating System Version ##
                if (Get-WmiObject win32_OperatingSystem -Filter "Version < '6.1'") {throw "The script operation requires at least Windows 7 or Windows Server 2008 R2."}
            }
            process {
                ## DiskPart Script Content ## Here-String statement purposefully not indented ##
        @"
        $(if ($Rescan) {'Rescan'}) Select VDisk File="$Path"`nDetach VDisk
        Exit
        "@ | Out-File -FilePath $DiskpartScript -Encoding ASCII -Force
                InvokeDiskpart
                Start-Sleep -Seconds 10
            }
            end {
                if ($Remove) {RemoveVHD}
                Remove-Item -Path $DiskpartScript -Force ; ""
            }
        }
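
    A hedged guess at the failure mode, with a sketch: in PowerShell 2.0 the FileSystem provider caches its drive list, so a letter that diskpart has just (re)assigned outside PowerShell may not be visible to Set-Location until the provider re-enumerates. Forcing a refresh after each mount is a common workaround (Get-PSDrive is a standard cmdlet; the rest is illustrative):

        # after InvokeDiskpart assigns $DL, refresh PowerShell's view of the drives
        $null = Get-PSDrive -PSProvider FileSystem
        Set-Location "$($DL):\"   # should now resolve inside the same session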

  • Windows 7 + Deep Freeze - I'm stuck in an endless reboot loop

    - by myermian
    I have the following setup: Windows 7 Ultimate with Deep Freeze. I "thawed" my machine last night and performed a Windows Update. The update is having issues: it gets stuck at 32%, fails, and restarts my machine. When it reboots it attempts the update again, and again, and again, etc. (endless loop). I looked online and found some solutions, but none of them seem to be working:

    - When I run Safe Mode, Safe Mode w/ Networking, or Safe Mode w/ Command Prompt, it attempts to revert the Windows Update changes. However, the problem is that with Deep Freeze on (and now in "Frozen" mode) the reverted changes don't stay, and I'm back in the loop of death. Side note: "Safe Mode w/ Command Prompt" does not actually take me to a command prompt window; perhaps because it is attempting to complete the Windows Update changes first?
    - I have tried selecting the option NOT to restart when a Windows error occurs, but it still does.
    - I tried the remainder of the options in the F8 screen.
    - The only other option left is to find my Windows 7 media disc (I can't find it right now) and use it to repair Windows (because for some reason the repair option does not show up in the F8 screen).

    Is there a way to disable Deep Freeze from loading? When I selected "Safe Mode w/ Command Prompt" I noticed that it loads the DpFrz.sys file. I know that when I'm in the Windows Boot Manager, if I press F10 instead of F8 (while highlighting Windows 7), it takes me to an "Edit Boot Options" screen:

        Edit Windows boot options for: Windows 7
        Path: \Windows\system32\winload.exe
        Partition: 2
        Hard Disk: 8e90e329
        [ /NOEXECUTE=OPTIN (I CAN EDIT THIS LINE) ]

    Update: I found my Windows 7 media disc, and it did not help. The laptop had "System Restore" as a partition on the HDD. I later received (in the mail) a Windows 7 upgrade disc from Sony to upgrade my system from Windows Vista to Windows 7 Ultimate. I placed the disc into the DVD drive and it does not come up as a bootable disc. I'm going to try to find an alternative disc to see if I can get into a command prompt.

    Update 2: I got a Windows repair disc and got into a command prompt window. I got into the registry and disabled Deep Freeze. Also:

    - I renamed the Pending.xml file to Pending.old
    - I cleared out the Windows Temp directory

    I am still stuck in the loop (though it isn't an issue with Deep Freeze anymore, because I can make changes to the hard drive and they persist). Not sure what to do at this point?

    Update 3: I ran the repair option; it couldn't repair, but it did point me to something. It says the error was due to a driver that was failing. I have a feeling it is my UPEK fingerprint scanner.
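
    A hedged recovery path from the repair disc's command prompt: Windows 7's offline servicing can discard the stuck update outright instead of renaming files by hand (the drive letter is an assumption; under WinRE the installed Windows is not always C:):

        dism /image:C:\ /cleanup-image /revertpendingactions

    This targets the same pending-operations state as renaming Pending.xml, but lets the servicing stack unwind it consistently.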

  • Outlook Anywhere inconsistencies with authentication methods

    - by gravyface
    So I've read this question and attempted just about every other workaround I've found online. The problem seems completely illogical to me. Anyway: SBS 2011, vanilla install; I haven't touched anything in IIS or Exchange outside of what's been done through the checklist (brand new domain, completely new customer), except to import an existing wildcard certificate for *.example.com (which is valid; Remote Web Workplace and Outlook Web Access work fine).

    On the two test machines and one production machine, running a mixture of Windows XP Pro, Windows 7, and Outlook 2003 through 2010, I've had no problem saving the password after configuring Outlook Anywhere using the wrong authentication method. I repeat: I have had no issues using the wrong authentication method on these machines. The password saves the first time, no problem; I can verify it exists in the credentials manager (Start > Run > control userpasswords2), close Outlook, reboot, go make a sammie, come back, and the credentials are still saved. When I say wrong, it's because I was choosing NTLM while Exchange (under Exchange Console > Server Configuration > Client Access) was set by default to use Basic.

    On two completely different machines set up by a co-worker, they had (under my guidance) used NTLM as well... except that, frustratingly, Outlook would always ask for a password. One machine was Windows XP with Outlook 2010, the other Windows 7 with Outlook 2003. When these two machines were set to use Basic (the correct setting), the option to save was there and now works without issue.

    Puzzled by how my machines could possibly work with the wrong authentication, I then went into one of them and changed the authentication method to Basic. Now here's where it gets a little crazy: if I go into Outlook and change the authentication to the correct setting (Basic), it fails to save the password and Outlook prompts every time (without a "remember me" checkbox). I have not had a chance to change it to Basic on the other two machines to see if this is just a fluke, but something just isn't right here. My two hunches are either a missing/installed KB update or perhaps a local security policy. I should add that none of the 5 test machines in the equation here have ever been joined to the domain.
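
    A hedged way to see (and set) the server-side expectation from the Exchange Management Shell, since SBS 2011 ships Exchange 2010 (the identity string is the usual default; parameter names vary slightly between service packs):

        Get-OutlookAnywhere | fl ServerName,ClientAuthenticationMethod,IISAuthenticationMethods
        Set-OutlookAnywhere -Identity "SERVER\Rpc (Default Web Site)" -ClientAuthenticationMethod Basic

    Clients whose profile setting disagrees with this value can behave exactly as described: some cache credentials anyway, others re-prompt on every connection.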

  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr to work on RedHat 6 and Tomcat 6 (using these instructions), but I get this error when browsing to the admin section, http://localhost:8080/solr-example/admin:

        HTTP Status 404 - missing core name in path
        type Status report
        message missing core name in path
        description The requested resource (missing core name in path) is not available.

    http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows:

        tomcat6: /etc/tomcat6
        Solr: /app/solr/example

    I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads:

        <?xml version="1.0" encoding="utf-8"?>
        <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true">
          <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/>
        </Context>

    I don't see anything in the logs (/var/log/tomcat6); the only entries in catalina.out concern the starting and stopping of tomcat6. My questions are:

    1. What else do I need to do to get "Solr Admin" to work under Tomcat?
    2. Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml:

        <solr persistent="false">
          <!-- adminPath: RequestHandler path to manage cores.
               If 'null' (or absent), cores will not be manageable via request handler -->
          <cores adminPath="/admin/cores" defaultCoreName="collection1">
            <core name="collection1" instanceDir="." />
          </cores>
        </solr>

    3. How do I go about ensuring that logs are working correctly? I can't find logs that contain any mention of the 404 above.

    Update in response to @quanta's comment: I downloaded the former (apache-solr-3.4.0.tgz). dataDir was not set; it is now set to:

        <dataDir>${solr.data.dir:../solr/data}</dataDir>

    JAVA_OPTS:

        /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

    catalina.out contains no indication of the above error.
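
    A hedged check that may separate questions 1 and 2: the CoreAdmin handler (the adminPath in solr.xml) can report which cores the webapp actually loaded, and in a multicore 3.x layout the per-core admin UI lives under the core name rather than at /admin. Both URLs below assume the context path from the question:

        # ask the CoreAdmin handler what it loaded
        curl "http://localhost:8080/solr-example/admin/cores?action=STATUS"

        # per-core admin UI for the core named in solr.xml:
        # http://localhost:8080/solr-example/collection1/admin/

    If the STATUS call itself 404s, the webapp likely never found solr/home (and hence solr.xml), which would also explain the missing-core error.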

  • Can't install new database in OpenLDAP 2.4 with BDB on Debian

    - by Timothy High
    I'm trying to install an OpenLDAP server (slapd) on a Debian EC2 instance. I have followed all the instructions I can find, and am using the recommended slapd-config approach to configuration. It all seems to be just fine, except that for some reason it can't create my new database.

    ldap.conf.bak (renamed to ensure it's not being used):

        ##########
        # Basics #
        ##########
        include /etc/ldap/schema/core.schema
        include /etc/ldap/schema/cosine.schema
        include /etc/ldap/schema/nis.schema
        include /etc/ldap/schema/inetorgperson.schema
        pidfile /var/run/slapd/slapd.pid
        argsfile /var/run/slapd/slapd.args
        loglevel none
        modulepath /usr/lib/ldap
        # modulepath /usr/local/libexec/openldap
        moduleload back_bdb.la

        database config
        #rootdn "cn=admin,cn=config"
        rootpw secret

        database bdb
        suffix "dc=example,dc=com"
        rootdn "cn=manager,dc=example,dc=com"
        rootpw secret
        directory /usr/local/var/openldap-data

        ########
        # ACLs #
        ########
        access to attrs=userPassword
            by anonymous auth
            by self write
            by * none
        access to *
            by self write
            by * none

    When I run slaptest on it, it complains that it couldn't find the id2entry.bdb file:

        root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d
        bdb_db_open: database "dc=example,dc=com": db_open(/usr/local/var/openldap-data/id2entry.bdb) failed: No such file or directory (2).
        backend_startup_one (type=bdb, suffix="dc=example,dc=com"): bi_db_open failed! (2)
        slap_startup failed (test would succeed using the -u switch)

    Using the -u switch it works, of course. But that merely creates the configuration; it doesn't resolve the underlying problem:

        root@server:/etc/ldap# slaptest -f ldap.conf.bak -F slapd.d -u
        config file testing succeeded

    Looking in the database directory, the basic files are there (with right ownership, after a manual chown), but the bdb file wasn't created:

        root@server:/etc/ldap# ls -al /usr/local/var/openldap-data
        total 4328
        drwxr-sr-x 2 openldap openldap    4096 Mar 1 15:23 .
        drwxr-sr-x 4 root     staff       4096 Mar 1 13:50 ..
        -rw-r--r-- 1 openldap openldap    3080 Mar 1 14:35 DB_CONFIG
        -rw------- 1 openldap openldap   24576 Mar 1 15:23 __db.001
        -rw------- 1 openldap openldap  843776 Mar 1 15:23 __db.002
        -rw------- 1 openldap openldap 2629632 Mar 1 15:23 __db.003
        -rw------- 1 openldap openldap  655360 Mar 1 14:35 __db.004
        -rw------- 1 openldap openldap 4431872 Mar 1 15:23 __db.005
        -rw------- 1 openldap openldap   32768 Mar 1 15:23 __db.006
        -rw-r--r-- 1 openldap openldap    2048 Mar 1 15:23 alock

    (Note that, because I'm doing this as root, I also had to change ownership of some of the files created by slaptest.)

    Finally, I can start the slapd service, but it dies in the attempt (text from syslog):

        Mar 1 15:06:23 server slapd[21160]: @(#) $OpenLDAP: slapd 2.4.23 (Jun 15 2011 13:31:57) $#012#011@incagijs:/home/thijs/debian/p-u/openldap-2.4.23/debian/build/servers/slapd
        Mar 1 15:06:23 server slapd[21160]: config error processing olcDatabase={1}bdb,cn=config:
        Mar 1 15:06:23 server slapd[21160]: slapd stopped.
        Mar 1 15:06:23 server slapd[21160]: connections_destroy: nothing to destroy.

    I manually checked the olcDatabase={1}bdb file, and it looks fine to my amateur eye; all my specific configs are there. Unfortunately, syslog isn't reporting a specific error in this case (if it were a file permission error, it would say). I've tried uninstalling and reinstalling slapd, changing permissions, and Googling my wits out, but I'm tapped out. Any OpenLDAP genius out there would be greatly appreciated!
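
    A hedged sketch of one likely gap: with back-bdb, id2entry.bdb is only created once the database has been populated, so a freshly configured but empty directory can fail exactly this way. Loading just the suffix entry offline is usually enough (the LDIF below is a minimal example for dc=example,dc=com):

        cat > base.ldif <<'EOF'
        dn: dc=example,dc=com
        objectClass: top
        objectClass: dcObject
        objectClass: organization
        o: Example
        dc: example
        EOF
        slapadd -f ldap.conf.bak -l base.ldif
        chown -R openldap:openldap /usr/local/var/openldap-data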

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many web servers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to the central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc. This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect", and the Win32 status means "Logon failure: unknown user name or bad password."

    This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it, and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even went to the virtual directory's Directory Security tab, and under Authentication and Access Control, I tried using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to serve static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!
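
    A hedged way to take IIS out of the equation: verify from the development machine that the LIVE credentials can open the share at all, since a cross-domain logon that fails here will fail identically for the application pool (the UNC path is a placeholder):

        REM map with explicit foreign-domain credentials; * prompts for the password
        net use \\storage.example.net\media /user:LIVE\MediaUser *
        dir \\storage.example.net\media

    If that works interactively but IIS still returns 500.16, the difference is often the logon type: the UNC account may need "Access this computer from the network" rights on the storage host, and an unjoined machine cannot always obtain a LIVE logon token the same way the farm servers do.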

  • Why is user asked to choose their workgroup?

    - by Clinton Blackmore
    We are running Mac OS X Server 10.5.8 with Mac OS X 10.5.8 clients. Students use network logins to, well, log in. I've been asked to deny internet access to a specific user. I was told that a good way to do it is to create a user workgroup called "No Internet Access" and manage settings there. (Specifically, I told Parental Controls to allow access to no sites, and blacklisted all the installed web browsers.)

    Now, when the user authenticates to log in, they are greeted with this dialog:

        Workgroups for <username>
            Grade 7 Students
            No Internet Access

    It is unlikely that the student would willingly choose "No Internet Access" as their base group. Looking at the student's record in Workgroup Manager, it shows their primary group ID is the grade 7 group, and "No Internet Access" is listed as another group they belong to.

    I looked at the managed preferences for all the computers pertaining to logins. They are set to their defaults. Specifically, the computer groups' preference for Logins - Access has the defaults:

        [unchecked] Ignore workgroup nesting
        [checked]   Combine available workgroup settings

    Based on my reading of Tips and Tricks for Mac Administrators, this should be correct: the user should not be asked which group they belong to, and settings from all applicable groups should be applied. How can I achieve that result?

    Edit: I've decided to add some additional information from the Tips and Tricks for Mac Management white paper (via Apple in Education, via the author's site). On page 21, it says:

        With Leopard MCX, workgroup preference settings are combined by default into a single set of values. This means that instead of having to choose between the Math, Science, or Language Arts workgroups when logging in, a user can just authenticate and be taken directly to the desktop. All the settings for each of those workgroups are composited together, providing you with all the Dock items and a composite of all the other settings.

    On page 40, an example is given in which settings are combined from different 'domains': one computer group, two (user) workgroups, and one individual user's settings:

        [When johnd logs into a Leopard client,] the items staged in the Dock from left to right are: computer group, first workgroup alphabetically, second workgroup, user. Items within the workgroup are staged alphabetically.

    Nowhere is there an indication that groups are nested; indeed, I can see no sensible (non-flat) hierarchy for groups like Math, Science, and Language Arts. I strongly believe that there is a way to apply settings from two unrelated user workgroups such that a user of OS X 10.5.x or newer does not need to choose their workgroup. This is what I seek to achieve.

  • Creating a hierarchy of terminals or workspaces

    - by intuited
    <rant> This question occurred to me ('occurred' meaning 'whispered seductively in my ear for the 100th time') while using GNU screen, so I'll make that my example. However, this is a much more general question about user interfaces and what I perceive as a flaw (a missing feature) in every implementation I've yet seen.

    I'm wondering if there is some way to create a hierarchy/tree of terminals in a screen session. E.g. I'd like to have something like:

        1 bash
        1.1 bash
        1.2 bash
        2 bash
        3 bash
        3.1 bash
        3.1.1 bash
        3.1.2 bash

    It would be good if the terminals could be labelled instead of having to be navigated to via some arrangement that I suspect doesn't exist. Then you could jump to one using e.g. ^A:goto happydays or ^A:goto dykstra.angry.

    So to generalize that: Firefox, Chrome, Internet Explorer, gnome-terminal, roxterm, konsole, yakuake, OpenOffice, Microsoft Office, Mr. Snuffaluppagus's Funtime Carousel™, and Your Mom's Jam Browser™ all offer the ability to create a flat set of tabs containing documents of an identical nature: web pages, terminals, documents, fun rideable animals, and jams. GNU screen implements the same functionality without using tabs. Linux and OS X window managers provide the ability to organize windows into an array of workspaces, which amounts to, again, the same deal. Over the past few years this has become a more or less ubiquitous concept which has been righteously welcomed into the far reaches of the computer-interface funfest. Heavy users of these systems quickly encounter a problem with it: the set of entities is flat. In the case of workspaces, an option may be available to create a 2D array. However, none of these applications furnish their users with the ability to create hierarchies, similar to filesystem directory structures, containing instances of their particular contained type. I, for one, am consistently bothered by this, and am wondering if the community can offer some wisdom as to why this has not happened in any of the foremost collections of computational functionality our culture has yet produced. Or if perhaps it has, and I'm just an ignorant savage.

    I'd like to be able not only to group things into a tree structure, but also to create references (aka symbolic links, aka pointers) from one part of the structure to another, as well as apply properties (e.g. default directory, colorscheme, ...) recursively downward from a given node. I see no reason why we shouldn't be able to save these structures as known sessions, and apply tags to particular instances. Then you could sort through them by tag, find them by name, or just use the arrow keys (with an appropriate modifier) to move left or right and in or out of a given level. Another key combo would serve to create a branch in the place of the current terminal/webpage/lifelike statue/spreadsheet/spreadsheet sheet/presentation/jam and move that entity into the new branch, then create a fresh one as a sibling to it: a second leaf node within the same branch node. They would get along well.

    I find it a bit astonishing that this hasn't happened yet, and the only reason I can venture as a guess is that the creators of these fine systems do not consider such functionality useful to a significant portion of their userbase. I posit that the probability that such an assumption would be correct is pretty low. On the other hand, given the relative ease with which such structures can be implemented using modern libraries/languages, it doesn't seem likely that difficulty of implementation would be a major roadblock. If it could be done in 1972 or whenever, within the constraints of a filesystem driver, it should be relatively painless to implement in 2010 in a full-blown application. Given that all of these systems are capable of maintaining a set of equivalent entities, it seems unlikely that a major infrastructure overhaul would be necessary to enable a navigable hierarchy of them. </rant>

    Mostly I'm just looking to start a discussion and/or brainstorming on this topic. Any ideas, examples, criticism, or analysis are quite welcome.

        * Mr. Snuffaluppagus's Funtime Carousel is a registered trademark of Children's Television Workshop Inc.
        * Your Mom's Jam Browser is a registered trademark of Your Mom Inc.
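
    For comparison, the closest thing GNU screen offers today is a flat namespace with hierarchical-looking labels; a sketch using standard screen flags (no real nesting, just naming discipline):

        screen -S dykstra.angry -d -m     # create detached, labelled sessions
        screen -S dykstra.happy -d -m
        screen -ls                        # list the pseudo-tree of names
        screen -r dykstra.angry           # "goto" a node by label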

  • What is a good method to solve cabal install problems?

    - by sp3ctum
    I've used the cabal package manager for Haskell programs to install libraries and new projects that I've cloned from various repositories. More often than not, I keep running into problems. Most projects make installing them seem super easy, but in my case that's not always true; sometimes they are very hard to get running. Some are so hard, in fact, that I've lost interest in the project solely because of not being able to install it. So instead of complaining, I'd like to ask what I should do to improve this situation.

    I'd like to use my most recent problem as an example. I'm interested in trying out the Gitit project, a promising-looking personal wiki that runs on various version control systems. So here's what I've done: clone it from GitHub and run cabal install in the project directory, as I'm told on the project's install page:

        mika@eka:~/git/gitit$ ls
        BLUETRIP-LICENSE  CHANGES  HCAR-gitit.tex  LICENSE  Network  README.markdown  RELANN-0.6.1
        Setup.lhs  TANGOICONS  YUI-LICENSE  data  expireGititCache.hs  gitit.cabal  gitit.hs  plugins
        mika@eka:~/git/gitit$ cabal install
        Resolving dependencies...
        cabal: cannot configure happstack-server-7.0.7. It requires base64-bytestring ==1.0.*
        For the dependency on base64-bytestring ==1.0.* there are these packages:
        base64-bytestring-1.0.0.0. However none of them are available.
        base64-bytestring-1.0.0.0 was excluded because gitit-0.10 requires base64-bytestring ==0.1.*
        mika@eka:~/git/gitit$

    So now I'm thinking: well, I'll install happstack-server on its own; maybe that will work:

        mika@eka:~/git/gitit$ cabal install happstack-server
        Resolving dependencies...
        Warning: happstack-server.cabal: Ignoring unknown section type: test-suite
        Configuring happstack-server-7.0.7...
        cabal: At least the following dependencies are missing:
        blaze-html ==0.5.*, hslogger >=1.0.2, monad-control ==0.3.*, network >=2.2.3,
        sendfile >=0.7.1 && <0.8, system-filepath >=0.3.1, text >=0.10 && <0.12,
        threads >=0.5, transformers-base ==0.4.*
        cabal: Error: some packages failed to install:
        happstack-server-7.0.7 failed during the configure step. The exception was:
        ExitFailure 1

    So it looks like there are some dependencies missing. But isn't installing these dependencies the whole point of using cabal in the first place? What should I do: file bug reports (to which project?), install the dependencies manually, or something else? Bonus points for explaining what causes these kinds of problems.
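
    A hedged triage step for this class of failure: the two version constraints genuinely conflict (gitit-0.10 pins base64-bytestring 0.1.* while happstack-server-7.0.7 wants 1.0.*), so no install order can satisfy both; that points at a stale package index or an upstream packaging bug rather than user error. Seeing the full solver plan without building anything helps decide which:

        cabal update                      # refresh the package index first
        cabal install --dry-run gitit     # show the chosen versions and conflicts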

  • IIS can't load Oracle.Web assembly (for ASP.NET membership provider)

    - by Konamiman
    I am trying to configure an IIS web site to use an Oracle database for ASP.NET membership, but I can't get it to work. IIS doesn't seem to be able to load the assembly containing the Oracle membership provider. That's what I have so far:

    - An Oracle 10g database, online and with all the tables for ASP.NET membership created.
    - Windows 2008 R2 Standard with the web server role installed, including support for ASP.NET.
    - Oracle 11g Release 2 ODAC 11.2.0.1.2 installed. The installed components are: Oracle Data Provider for .NET, Oracle Providers for ASP.NET, Oracle Instant Client.

    The default web site on IIS (I am using that for testing) has the following web.config file:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.web>
            <membership defaultProvider="OracleMembershipProvider">
              <providers>
                <remove name="SqlMembershipProvider" />
                <add name="OracleMembershipProvider"
                     type="Oracle.Web.Security.OracleMembershipProvider, Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342"
                     connectionStringName="OracleServer" />
              </providers>
            </membership>
          </system.web>
        </configuration>

    (Additional attributes on the "add" element are omitted for brevity. Also, the connection string is defined for the whole server.)

    The Oracle.Web.dll file is in the GAC (a screenshot of the relevant part of the C:\Windows\Assembly folder accompanied this question). The web site's application pool is configured for .NET 2.0 and has 32-bit applications enabled. I have allowed untrusted providers in IIS's administration.config file (just for the sake of testing; I'll explicitly add the assembly to the trusted providers list later).

    With all of this set up, when I click the ".NET Users" icon in the IIS manager, I get a warning about the provider having too many privileges, and when I accept I get the following message:

        There was an error while performing this operation.
        Details: Could not load file or assembly 'Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. The system cannot find the file specified.

    So, what am I missing? How can I get the Oracle membership provider to work? Thank you!

    Update: It seems that the problem is not with IIS itself, but with the IIS manager only. When using the web site configuration tool provided by Visual Studio, everything works fine.
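
    A hedged check for the usual cause of "works via the Visual Studio tool, fails in the IIS manager": the IIS manager is a 64-bit process, so it cannot load a 32-bit-only Oracle.Web even though the 32-bit application pool can. Listing the GAC entry shows the processor architecture (the SDK path below is a typical location and an assumption):

        "%ProgramFiles%\Microsoft SDKs\Windows\v7.0A\bin\gacutil.exe" /l Oracle.Web

    If the entry reports processorArchitecture=x86, the site itself should keep working; only the manager's .NET Users page cannot use the provider.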

  • Exchange server not serving mobile devices - how to troubleshoot?

    - by chickeninabiscuit
    Our Exchange server has suddenly stopped serving mobile devices. Attempts to connect result in our ActiveSync server returning HTTP 500. It is serving Outlook clients fine. Our server is Windows 2003 SBS (Exchange 6.5 SP2). There are no abnormal events in the system log.

    I've noticed an abnormality in the Exchange properties: the Log File Directory field shows

        Access denied.
        Facility: Win32
        ID no: 80070005
        Exchange System Manager

    (as shown in a screenshot that accompanied this question). I think it may be related to a recent issue we had here: http://serverfault.com/questions/40222/windows-server-2003-suddenly-unable-to-connect-to-anything. We followed a procedure to reinstall TCP/IP: http://support.microsoft.com/kb/325356

    I ran the "Exchange ActiveSync with AutoDiscover" connectivity test at https://www.testexchangeconnectivity.com/:

        Attempting to Resolve the host name mail.immersive.com.au in DNS.
          Host successfully Resolved
          Additional Details: IP(s) returned: 221.133.203.229
        Testing TCP Port 443 on host mail.immersive.com.au to ensure it is listening/open.
          The port was opened successfully.
        Testing SSL Certificate for validity.
          The certificate passed all validation requirements.
          Test Steps:
            Validating certificate name
              Successfully validated the certificate name
              Additional Details: Found hostname mail.immersive.com.au in Certificate Subject Common name
            Validating certificate trust for Windows Mobile Devices
              Certificate is trusted and all certificates are present in chain
              Additional Details: Certificate is trusted for Windows Mobile 5 and Later platforms. Root = [email protected], CN=Thawte Server CA, OU=Certification Services Division, O=Thawte Consulting cc, L=Cape Town, S=Western Cape, C=ZA
            Testing certificate date to ensure validity
              Date Validation passed. The certificate is not expired.
              Additional Details: Certificate is valid: NotBefore = 1/5/2009 4:00:00 PM, NotAfter = 1/11/2010 3:59:59 PM
        Testing Http Authentication Methods for URL https://mail.immersive.com.au/Microsoft-Server-Activesync/
          Http Authentication Methods are correct
          Additional Details: Found all expected authentication methods and no disallowed methods. Methods Found: Basic
        Attempting an ActiveSync session with server
          Errors were encountered while testing the ActiveSync session
          Test Steps:
            Attempting to send OPTIONS command to server
              OPTIONS response was successfully received and is valid
              Additional Details: Headers received:
                MicrosoftOfficeWebServer: 5.0_Pub
                Pragma: no-cache
                Public: OPTIONS, POST
                Allow: OPTIONS, POST
                MS-Server-ActiveSync: 6.5.7638.1
                MS-ASProtocolVersions: 1.0,2.0,2.1,2.5
                MS-ASProtocolCommands: Sync,SendMail,SmartForward,SmartReply,GetAttachment,GetHierarchy,CreateCollection,DeleteCollection,MoveCollection,FolderSync,FolderCreate,FolderDelete,FolderUpdate,MoveItems,GetItemEstimate,MeetingResponse,ResolveRecipients,ValidateCert,Provision,Search,Notify,Ping
                Content-Length: 0
                Date: Thu, 16 Jul 2009 01:07:27 GMT
                Server: Microsoft-IIS/6.0
                X-Powered-By: ASP.NET
            Attempting FolderSync command on ActiveSync session
              FolderSync command test failed

  • Passive FTP on Windows Server 2008 R2 using the IIS7 FTP-Server

    - by ntor
    Hello ServerFault community! During the last few days I have been setting up a Windows Server 2008 R2 in VMware. I installed the standard FTP server on it via the Web Server (IIS) role. Everything works fine when accessing my FTP site with ftp://localhost in Firefox, and I can also get access via the local IP of my server. Actually, everything works fine on my LAN.

    But here's my problem: I want to get access "from outside", using the external IP or a DynDNS URL. I have a Linksys router in front of my server, so I'm forwarding all the important ports. If you're now thinking "this idiot has probably forgotten some ports", I must disappoint you: it even works getting access to my server's website and messing around in some web interfaces. The problem is passive FTP (active works for me). I always get a timeout when, e.g., FileZilla waits for a response to the LIST command.

    The one big thing I don't get is why my server responds to the PASV command by naming a port like 40918, even though I have restricted the data port range for passive FTP (in the IIS manager) to e.g. [5000-5009]. I simply don't want to open and forward all possible data ports! Another thing is that I can't specify a static external IP address for my server, since I don't own one.

    I hope I have explained my problem in a comprehensible way. If not, simply ask by posting a comment!

    LG ntor

    PS: I have already mainly tried these articles: "Out Of Band FTP 7 shows 'Operation timed out'", "How to Configure Windows Firewall for a Passive Mode FTP Server", and the ServerFault question "Passive ftp on Server 2008".

    Edit: There is one idea rising up in my mind. When I use FileZilla to connect in passive mode, I always get something like this:

        227 Entering Passive Mode (192,168,1,102,160,86)

    According to a Rhinosoft article, FileZilla then tries to connect on port 160 * 256 + 86 = 41046, although I have restricted the data ports as mentioned above. Could this be caused by the router not forwarding outbound ports directly, but using different ones? (The IP address given is the local one, since I'm not able to define a static external one in the IIS manager.)
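
    A hedged sketch of the two settings that drive PASV replies in the IIS 7.x FTP service, since the server, not the router, chooses the advertised port (the site name and address are placeholders; without a static external address the IP line would have to be re-run whenever it changes):

        rem server-wide passive data-channel range
        %windir%\system32\inetsrv\appcmd set config /section:system.ftpServer/firewallSupport /lowDataChannelPort:5000 /highDataChannelPort:5009

        rem per-site external address used in 227 replies
        %windir%\system32\inetsrv\appcmd set config -section:system.applicationHost/sites "/[name='Default FTP Site'].ftpServer.firewallSupport.externalIp4Address:203.0.113.10"

        rem restart the FTP service so the new range takes effect
        net stop ftpsvc & net start ftpsvc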

  • Backup of KVM VMs running on Ubuntu 12.04.1 (Precise) from a remote machine

    - by Dr. Death
    I am creating a library API that will take backups of all the VMs running on a KVM hypervisor. My VMs can be of any type. I am taking this backup from a remote machine and need to put the backup on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are normal VMs running on KVM. I researched and found an excellent Perl script for taking the backup (http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/), but since I am developing this library in C++ I cannot use it; it has, however, given me a good understanding of how it should work.

    One thing I have not been able to sort out: if my VMs were not created using virt-manager or any other tool, then the virsh system list command does not include them in the list of running VMs, even though they are running perfectly on my KVM server. Is there a way to list these VMs anyhow?

    Secondly, when I am taking the backup from the remote machine, I am dropped out of my SSH session as soon as each libvirt command finishes, and for every command I need to ssh again. Is there a way to avoid ssh-ing every time? I have already set up an RSA key for SSH, but once my command finishes, control moves back to the remote machine, which then tries to find the source VM location on its own local drives, and that fails. This is the main problem I am facing.

    Also, for the LVM-based VMs I am able to take a live backup, but for the non-LVM-based ones my machines get suspended, and I am not able to take a live backup. Since my library will run on the remote machine only, I might not know the VMs' configuration on the KVM server, so I need to make the process consistent for all VMs. Please share anything related to this issue so that I may be able to take live backups of the non-LVM VMs as well. I'll keep you all updated on my progress and any research findings. Thanks in advance for your suggestions in this regard.
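
    Two hedged sketches for the side questions. First, a VM started by hand is often registered against the session libvirt daemon rather than the system one, so comparing both connections shows whether the "missing" guests are simply elsewhere; second, OpenSSH connection multiplexing lets many commands reuse one authenticated connection:

        # compare the two libvirt scopes
        virsh -c qemu:///system list --all
        virsh -c qemu:///session list --all

        # ~/.ssh/config on the backup machine (the host alias is a placeholder)
        # Host kvmhost
        #     ControlMaster auto
        #     ControlPath ~/.ssh/cm-%r@%h:%p
        #     ControlPersist 10m

    One caveat: if the guests were started with plain qemu-kvm rather than through libvirt, neither connection will list them; they would first have to be defined with virsh define.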

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here (http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/) to free up space on the hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service, as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" checkbox in the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky to me at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw.

    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" / "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\.

    Here is where I enter the twilight zone. I tried disconnecting the I:\ USB hard drive, but XP showed that the J:\ drive had disconnected instead, while I:\ was still there. So I disconnected both drives and restarted the computer. I then connected one drive, but it listed the contents of the other drive at the root level. I tried connecting the drives vice versa and the same thing happened. I took one of the hard drives to another computer, and when I connected it there, it listed not its own contents but the contents of the other hard drive, and gave the same error as above when I tried to access any of the folders (even folders at the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos). And no, this is not me mixing up my drive letters.

    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500 GB, the other is 640 GB), but the drive name, and the contents, are those of the opposite drive. Also, one drive was full of data and the other almost empty, but each incorrectly shows the free-space status of the other. Somehow the USB drives seem to have switched file tables, file records, boot records, or something; extremely weird! Even weirder, if I try to create a text file or folder on such a drive, it works fine (accessing it, saving, whatever, all good), but accessing any other data on the drive gives me an error.

    Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos?

    cheers, linni

  • Weird random application hang problem

    - by haridsv
    I am trying to understand an application hang problem that started up lately on my Windows XP system. The system runs fine for days (sometimes) without ever shutting down or being put to sleep, but the problem first shows up as one of the apps hanging. The application's UI stops responding, or one or more background threads hang, so even though the GUI is responding, it is not doing anything (e.g., in VirtualDub's case, the UI responds fine, but the job doesn't progress and I can't even abort it).

    The weirdness comes from the fact that if I try to kill such an app, the program used to kill it goes into the same state (i.e., it hangs instead of the original). E.g., if I use Process Explorer to kill it, the original program exits, but procexp now hangs. If I use another instance of procexp to kill the one that is hanging, this repeats, so there is always at least one program hanging in that state. This is not specific to procexp; I tried the native Task Manager and even the "End Process" dialog from Windows Explorer that shows up when you try to close a non-responsive GUI (in this case, Explorer itself hangs). The only program that didn't hang after the kill is the command-line taskkill; however, in that case Explorer hangs instead of taskkill.

    Also, once this problem starts manifesting, it soon ends up freezing the whole system to the extent that even a clean shutdown is not possible, so I have learned to reboot as soon as I notice it. However, this is very inconvenient, as I often have encoding batch jobs going on which can't continue after the restart. The longer I leave the system running after seeing this problem, the more applications get into this state.

    I have tried a repair install, but that didn't make any difference. I also uninstalled some of the newer installs, but again no difference. I tried to search online but got inundated with results for generic hang and crash problems. Though I couldn't spot any clear pattern, it seems the problem is more frequent if I have some video encoding going on at the time. I have had the system running for days when I only do browsing and internet audio/video chat, before I decide to start encoding something and the problem starts to show up. I am not sure the encoding program is the first to hang, though I have almost always noticed it hanging too (like VirtualDub ceasing to make progress). I also had to reboot three times in one day when I was heavily experimenting with encoding.

    I would appreciate any help in narrowing down this problem and saving me the trouble of reinstalling; I don't especially want to lose my GOTD installs.
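
    A hedged way to capture evidence before the next reboot: dump the first process that hangs while it is still hung, so the blocked call stack is preserved for a debugger (Sysinternals ProcDump; the PID is a placeholder):

        REM -h writes a dump when the process has a hung window
        procdump -h 1234 first-hang.dmp

    The way each killer process inherits the hang suggests a shared resource (a wedged filter driver or shell hook) rather than the applications themselves, which is exactly what a dump could confirm or rule out.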

    Read the article

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page - the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get:

        Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed. The current instance of the SQL Server Analysis Services service cannot be upgraded because the Analysis Services service is disabled or not online. Please start the service and then run the upgrade rules check again.

    Simple enough - just need to start the service. Here's where it gets troublesome. When I open Services and try to start the SQL Server Analysis Services (MSSQLSERVER) service, it gives me the following message:

        The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.

    Trying from the command line as Administrator yields:

        PS C:\Windows\System32> net start MSSQLServerOLAPService
        The SQL Server Analysis Services (MSSQLSERVER) service is starting...
        The SQL Server Analysis Services (MSSQLSERVER) service could not be started.
        The service did not report an error.
        More help is available by typing NET HELPMSG 3534.

    I've tried changing the logon setting of this service to Administrator, a user with admin privileges, and both the Local System and Network Service accounts - nothing works. In addition, when I look at the service through SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting for the service results in the message:

        The server threw an exception. [0x80010105]

    I have no need for Analysis Services itself - all I need is for this one service to run long enough to do the R2 upgrade; then it can shut down again. Any thoughts on how to get the Analysis Services service running?

    Update: Checking the event log, I found an error logged to the Application log from MSSQLServerOLAPService. It has event ID 0, task category (289), and says:

        The service cannot be started: XML parsing failed at line 1, column 4: Unrecognized input signature.
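    A short sketch for checking what the service is actually launched with, assuming the default instance; the Config path below is an example and may differ on this machine:

        sc qc MSSQLServerOLAPService      REM shows BINARY_PATH_NAME, including the -s <config folder> argument
        sc query MSSQLServerOLAPService   REM current state and last exit code
        notepad "C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Config\msmdsrv.ini"   REM the config the service parses at startup

    Given the "XML parsing failed at line 1, column 4" event, one plausible cause (an assumption, not confirmed by the post) is a corrupted or re-encoded msmdsrv.ini; restoring that file from a backup may get the service far enough to satisfy the upgrade rule.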

    Read the article

  • configure squid3 to set up a web proxy in ubuntu12.04

    - by Gnijuohz
    I am on a LAN and have to use a given proxy to access the web in a very limited way; I can't even use Google, github.com or SE sites. However, I can use ssh to log into a server on which I have root access, so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy and visit sites through it.

    I tested connectivity on the server using ssh -vT git@github.com, which gave a proper response; on my own computer the same test fails. I also tried downloading something from gnu.org using wget, which fails on my computer too, and it succeeded on the server. I don't know if that's enough to say that the server has full access to the Internet, but I assumed so and installed squid3 on it. After some experimenting, I have failed to get it working. This is what I got after running squid3 -k parse:

        2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0)
        2012/07/06 21:45:18| Processing: acl manager proto cache_object
        2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1
        2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network
        2012/07/06 21:45:18| Processing: acl SSL_ports port 443
        2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http
        2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp
        2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https
        2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher
        2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais
        2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports
        2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt
        2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http
        2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker
        2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http
        2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT
        2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport
        2012/07/06 21:45:18| Starting Authentication on port [::]:3128
        2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled)
        2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled)
        2012/07/06 21:45:18| Processing: cache_mem 1000 MB
        2012/07/06 21:45:18| Processing: cache_swap_low 90
        2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3
        2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080
        2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440
        2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
        2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320
        2012/07/06 21:45:18| Processing: ipcache_high 95
        2012/07/06 21:45:18| Processing: http_access allow all

    I deleted some allow and deny rules and added http_access allow all so that all requests would be allowed. After configuring my computer to use the proxy, I got this error:

        Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect.

    And the log on the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how do I configure squid on the server so that I can use it as a proxy to surf the Internet? My computer and the server both run Ubuntu 11.04.
Thanks for any help~
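    One thing the parse output shows is interception enabled on port 3128 ("transparent vhost vport"), a mode meant for traffic redirected by a firewall, not for a browser configured to use the proxy explicitly. A minimal forward-proxy sketch, assuming the client really is in the 10.1.0.0/16 range that appears in the log:

        # /etc/squid3/squid.conf - minimal sketch, not a drop-in config
        http_port 3128                      # plain proxy port: no 'transparent vhost vport'
        acl localnet src 10.1.0.0/16        # adjust to the client's real source range
        http_access allow localnet
        http_access allow localhost
        http_access deny all                # a final deny is safer than 'allow all'

    If squid keeps fighting back, a simpler alternative under the same assumptions is a SOCKS proxy over the existing ssh session (ssh -D 1080 user@server) with the browser pointed at localhost:1080.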

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives in a Linux machine. I'm looking for a storage set-up that will give me all of this:

    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, the ability to grow and shrink that limit (offline if need be).
    - Avoidance of out-of-kernel modules.
    - R/W or read-only COW snapshots. If they are block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - The ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution).

    Not required but nice to have:

    - Transparent compression, per-file or per-subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data.
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good).
    - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks).
    - Storage tiering, either automatic (based on caching rules) or via user-defined rules (yes, I have all-identical disks now, but this will let me add a read/write SSD cache in the future). If it's user-defined rules, they should be able to promote to SSD at the file level, not the block level.
    - Space-efficient packing of small files.

    I tried ZFS on Linux, but the limitations were:

    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager.
    - Write IOPS do not scale with the number of devices in a raidz vdev.
    - You cannot add disks to raidz vdevs.
    - You cannot put selected data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks.

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
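    On the LVM2 side, per-LV RAID levels on the same three disks are possible with the dm-raid segment types. A minimal sketch, assuming a recent lvm2 with raid support; device names, sizes, and LV names are examples:

        pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
        vgcreate vg0 /dev/sda1 /dev/sdb1 /dev/sdc1
        lvcreate --type raid1   -m 2 -L 50G  -n critical vg0   # 3-way mirror (3xRAID1)
        lvcreate --type raid5   -i 2 -L 200G -n bulk     vg0   # 2 data + 1 parity (3xRAID5)
        lvcreate --type striped -i 3 -L 100G -n scratch  vg0   # plain striping (3xRAID0)
        lvextend -L +20G vg0/bulk && resize2fs /dev/vg0/bulk   # grow a size limit, assuming ext4 on the LV

    This covers the mixed-RAID-level and resize requirements; whether RAID LVs can be redistributed onto newly added PVs is exactly the part worth testing first, since reshape support for RAID LVs has historically lagged behind plain linear and striped LVs.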

    Read the article

  • Xubuntu login hangs after Cancel Button click

    - by akester
    I'm running Xubuntu 12.04 (installed using the alternate installer) in VirtualBox 4.1.20. My issue is with the login screen (lightdm-gtk-greeter). It usually runs just fine and allows users to log in and out, but it hangs if the user presses the Cancel button. The interface is still partly working (i.e., the shutdown menu is still available and I can switch to a different tty), but the username or password field (depending on when the button is hit) is disabled. Restarting lightdm resets the screen, but the problem remains. The issue is only with the Cancel button; the login, session, and language buttons/menus, as well as the accessibility and shutdown menus, appear to work normally.

    I've modified some of the config files for lightdm-gtk-greeter - specifically /etc/lightdm/lightdm-gtk-greeter.conf to change the background image, and /etc/lightdm/lightdm.conf to disable the user list. I did not check whether the error existed before those changes. The changes have since been reverted to the default settings, but the problem persists.

    Here is the output of /var/log/lightdm/lightdm.log when the screen is hung:

        [+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log
        [+0.00s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=2072
        [+0.00s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
        [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
        [+0.00s] DEBUG: Registered seat module xlocal
        [+0.00s] DEBUG: Registered seat module xremote
        [+0.00s] DEBUG: Adding default seat
        [+0.00s] DEBUG: Starting seat
        [+0.00s] DEBUG: Starting new display for greeter
        [+0.00s] DEBUG: Starting local X display
        [+0.02s] DEBUG: Using VT 7
        [+0.02s] DEBUG: Activating VT 7
        [+0.03s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+0.04s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+0.04s] DEBUG: Launching X Server
        [+0.05s] DEBUG: Launching process 2078: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        [+0.05s] DEBUG: Waiting for ready signal from X server :0
        [+0.05s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
        [+0.05s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
        [+0.28s] DEBUG: Got signal 10 from process 2078
        [+0.28s] DEBUG: Got signal from X server :0
        [+0.28s] DEBUG: Connecting to XServer :0
        [+0.29s] DEBUG: Starting greeter
        [+0.29s] DEBUG: Started session 2082 with service 'lightdm', username 'lightdm'
        [+0.36s] DEBUG: Session 2082 authentication complete with return value 0: Success
        [+0.36s] DEBUG: Greeter authorized
        [+0.36s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
        [+0.36s] DEBUG: Session 2082 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter
        [+0.58s] DEBUG: Greeter connected version=1.2.1
        [+0.58s] DEBUG: Greeter connected, display is ready
        [+0.58s] DEBUG: New display ready, switching to it
        [+0.58s] DEBUG: Activating VT 7
        [+1.04s] DEBUG: Greeter start authentication for andrew
        [+1.04s] DEBUG: Started session 2137 with service 'lightdm', username 'andrew'
        [+1.09s] DEBUG: Session 2137 got 1 message(s) from PAM
        [+1.09s] DEBUG: Prompt greeter with 1 message(s)
        [+17.24s] DEBUG: Cancel authentication
        [+17.24s] DEBUG: Session 2137: Sending SIGTERM
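    Since the log ends right after "Cancel authentication", the hang appears to be on the greeter side. A reasonable first step, assuming this is a greeter bug rather than a configuration problem (an assumption, not a confirmed fix), is to check the greeter's own log and pull any pending package fix:

        grep -i -e error -e warn /var/log/lightdm/x-0-greeter.log    # greeter-side messages around the Cancel press
        apt-cache policy lightdm lightdm-gtk-greeter                 # installed vs. candidate versions
        sudo apt-get update && sudo apt-get install --only-upgrade lightdm lightdm-gtk-greeter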

    Read the article

  • Messages stuck in SMTP queue - Exchange 2003

    - by Diav
    I need your help, people ;-) I have a problem with messages coming into our Exchange server and ones going out through it. Basically, the messages are stuck in the SMTP queue. A message will come into the server, and I can see it listed under Exchange System Manager, but if you list the properties of the message queue, it says something like:

        00:10 SMTP Message queued for local delivery
        00:10 SMTP Message delivered locally to [email protected]
        00:10 SMTP Message scheduled to retry local delivery
        00:11 SMTP Message delivered locally to [email protected]
        00:11 SMTP Message scheduled to retry local delivery
        etc. etc.

    For an outgoing message, the list looks like this:

        10:55 SMTP: Message Submitted to Advanced Queuing
        10:55 SMTP: Started Message Submission to Advanced Queue
        10:55 SMTP: Message Submitted to Categorizer
        10:55 SMTP: Message Categorized and Queued for Routing
        10:55 SMTP: Message Routed and Queued for Remote Delivery

    And that's the end - since then the status hasn't changed; the message sits in the queue, and I force the connection from time to time without any effect. I checked the connection to the smarthost (using telnet) and everything seems to work correctly, so the problem is probably on the Exchange side. I am using Exchange Server 2003 running on Small Business Server 2003. I don't have any antivirus installed on the server. The remaining free space on each partition is over 3 GB; on the partition with the databases it is over 12 GB. Everything worked without problems from 2005 until mid-June this year, when messages started getting stuck almost randomly (I don't see a pattern yet: some go out, some do not, some go out after several hours). I don't know what to do or what more to check, so please, any ideas? Best regards, D.

    edit: priv1.edb is 14.5 GB and priv1.stm is 2.6 GB - together those files take more than 16 GB. Could that be the reason? If yes, then what? Indeed, I hadn't thought it could be related to my problem, but several users reported recent problems with Outlook Web Access: they can log in and see the list of their mails, but they can't see the content of their emails. When they connect with Outlook 2003/2007 there is no such problem; only with OWA.

    edit2: So... it works now, and I have to admit I am not really sure what the problem was (I hope it won't come back). What I did:

    - Cleaned up some mailboxes to reduce their size
    - Dismounted the Information Store
    - Defragmented the database files (I used eseutil, run from c:\program files\exchsrvr\bin: eseutil /d "g:\data base\Exchsrvr\MDBDATA\priv1.edb")
    - Mounted the Information Store again

    ...and before I managed to do anything else, my queue started moving; items that had been stuck for days started moving, and after a few minutes everything was sent, both externally and locally. But priv1.edb is still big (13,884,203,008 bytes), as is priv1.stm (2,447,384,576 bytes), so the file size was probably not the issue. If not that, then what was it? And if it was the file size, it will soon happen again - is there anything I can do to avoid it?
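    For reference, a sketch of the offline-maintenance sequence described above with the usual sanity checks added; the paths are the ones from the post, SERVERNAME is an example, and eseutil /d needs free temp space of roughly 110% of the database size:

        cd /d "c:\program files\exchsrvr\bin"
        eseutil /mh "g:\data base\Exchsrvr\MDBDATA\priv1.edb"    REM header dump: confirm "State: Clean Shutdown" while dismounted
        eseutil /d  "g:\data base\Exchsrvr\MDBDATA\priv1.edb"    REM offline defragmentation
        isinteg -s SERVERNAME -fix -test alltests                REM logical integrity check before remounting

    As for whether the size could be the reason: Exchange Server 2003 Standard (the edition included in SBS 2003) enforces a 16 GB private-store limit before SP2 (18 GB by default, configurable up to 75 GB, with SP2), and a store that trips the limit is dismounted, which would match both the stuck queue and the OWA symptoms. Checking the service pack level and the Application event log for store-size events seems worthwhile.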

    Read the article
