Search Results

Search found 6875 results on 275 pages for 'crash reports'.

Page 213/275

  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB harddisk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It's always worked fine using the DevEject command-line utility. This week it failed for some reason:

        DevEject 1.0 2003 c't/Matthias Withopf
        Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
        Error ejecting device USB Mass Storage Device, vetoed (15,5)!

    Worse yet, using the SRH tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive.

    I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? Short of a restart, which I can't do today before the backup operator goes home, how do I fix it?

    Read the article

  • SSRS2008R2 report times out, but the underlying query executes in the Management Studio

    - by Matthew Belk
    A customer of mine recently moved servers and the new server has SQL2008R2. His old server was SQL2005. The new server has substantially better CPU, RAM, and disk performance than the old, but several reports time out while executing. When I run the underlying query in the SQL Management Studio, the query executes in sub-second time. The exact error message returned via the Report Manager UI is:

        An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. (rsReportServerDatabaseError)
        Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It must be noted that this database is not just analytical; it's also fairly transactional, although the transaction volume is not exceptionally high. What can I do to improve the performance of the SSRS query engine? Are there settings in the data source I can adjust, or in the SSRS config files?
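    For what it's worth, here is a sketch of how I've been trying to narrow down where the report time goes, using the ExecutionLog2 view in the ReportServer database; the server name is a placeholder and I'm not certain this is the right place to look:

        # Query SSRS's execution log to see whether time is spent in data
        # retrieval, processing, or rendering (values are in milliseconds).
        # Server name and authentication below are placeholders.
        sqlcmd -S MYSERVER -d ReportServer -E -Q "
            SELECT TOP 20
                   ItemPath,
                   TimeDataRetrieval,
                   TimeProcessing,
                   TimeRendering,
                   Status
            FROM   ExecutionLog2
            ORDER  BY TimeStart DESC;"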

    Read the article

  • Windows 2008 64bit: applications and explorer always hang

    - by Phil Farthing
    I set up a couple of Windows 2008 64-bit systems about 5 months ago. Initially all seemed well. Now, however, for no apparent reason, things are dog slow: apps hang, Explorer hangs, and just clicking on something can cause a CPU spike of 100%, often with Explorer eating it up. As I have two on identical hardware and they experience the same problem, it doesn't seem related to add-on software. The only thing these have in common is Kaspersky, and I've tried disabling/uninstalling it to no avail. There are no useful error messages in the event logs. Actually, the system never even reports app hangs.

    Sometimes it's similar to what I've seen on Windows 7 systems where the screen goes milky and asks if I want to troubleshoot, but that's only when I get impatient and click-happy. The really odd thing is that it will NOT do this for a few minutes at a time, and then starts up again. For example, I will click on the Start menu and browse for the Admin Tools; the Start menu will hang at some point and I'll have to wait about a minute, then it's OK. The next time I do this, a few seconds later, it's fine. Every click seems to hang the first time around, then be OK the second time if I do the exact same thing.

    If anyone has any suggestions, please PLEASE let me know! thanks =)

    Read the article

  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a Linux machine running Fedora 16 in a cupboard. It has no screen or keyboard. I connect to it using a combination of VNC and ssh. Recently, after an update, I have had issues with authentication on the machine. If I VNC to it, the KDE desktop pops up an error dialog every few minutes saying "Authorization failed. Failed to obtain authentication." If I plug in a USB drive it fails to mount; Dolphin reports an authentication issue again.

    I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user so it prevents authentication. This is the output from ck-list-sessions:

        $ ck-list-sessions
        Session5:
            unix-user = '1000'
            realname = 'steve'
            seat = 'Seat6'
            session-type = ''
            active = FALSE
            x11-display = ':1'
            x11-display-device = ''
            display-device = ''
            remote-host-name = ''
            is-local = FALSE
            on-since = '2012-09-16T08:07:03.137011Z'
            login-session-id = '1'

    I have tried to update my .vnc/xstartup script to include ck-launch-session as follows:

        $ cat ~/.vnc/xstartup
        #!/bin/sh
        exec ck-launch-session vncconfig -iconic &
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        OS=`uname -s`
        if [ $OS = 'Linux' ]; then
          case "$WINDOWMANAGER" in
            *gnome*)
              if [ -e /etc/SuSE-release ]; then
                PATH=$PATH:/opt/gnome/bin
                export PATH
              fi
              ;;
          esac
        fi
        if [ -x /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session /etc/X11/xinit/xinitrc
        fi
        if [ -f /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session sh /etc/X11/xinit/xinitrc
        fi
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        exec ck-launch-session xsetroot -solid grey
        exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        exec ck-launch-session twm &

    This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?
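    In case it helps anyone suggest a fix, here is the kind of minimal xstartup I was thinking of trying next: wrapping the whole desktop in a single ck-launch-session (plus a D-Bus session) instead of prefixing individual commands. startkde is an assumption because the box runs KDE; I haven't verified that this works.

        #!/bin/sh
        # Minimal ~/.vnc/xstartup sketch (untested): start ONE ConsoleKit
        # session and run the whole desktop inside it, so every child
        # process shares the same CK session instead of spawning several.
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS

        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        vncconfig -iconic &

        # startkde is assumed here because the desktop is KDE on Fedora 16.
        exec ck-launch-session dbus-launch --exit-with-session startkde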

    Read the article

  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1   /         ext3    defaults  0 0
        none        /proc     proc    defaults  0 0
        none        /sys      sysfs   defaults  0 0
        none        /dev/shm  tmpfs   defaults  0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (The above may contain errors: there's no practical way to copy and paste.) The partition at /dev/hda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing/resolving this?

    Edit: I can remount with mount -o remount,rw / and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc   /proc  proc   [...]
        sysfs  /sys   sysfs  [...]

    Now mount -a doesn't complain, but it doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.
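    For anyone wanting more detail, here is roughly what I've been poking at from the running system so far; nothing conclusive yet (device names match my setup above):

        # Is the kernel told to mount root read-only? (the usual "ro" flag)
        cat /proc/cmdline

        # Any ext3/journal errors that would force a read-only mount?
        dmesg | grep -iE 'ext3|remount|read-only'

        # Does the filesystem itself carry an error flag?
        tune2fs -l /dev/sda1 | grep -i 'state'

        # Manual remount works, so the fstab entry itself is usable:
        mount -o remount,rw /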

    Read the article

  • How to debug slow queries in Django+Postgres

    - by lacker
    My database queries from Django are starting to take 1-2 seconds and I'm having trouble figuring out why. Not too big a site, about 1-2 requests per second (that hit Django; static files are just served from nginx). The thing that confuses me is, I can replicate the slowness in the Django shell using debug mode. But when I issue the exact same queries at an SQL prompt they are fast. It takes about a second for a query to return, but when I check connection.queries it reports the time as under 10 ms. Here's an example (from the Django shell):

        >>> p = PlayerData.objects.get(uid="100000521952372")
        >>> a = time.time(); p.save(); print time.time() - a
        1.96812295914
        >>> for d in connection.queries: print d["time"]
        ...
        0.002
        0.000
        0.000

    How can I figure out where this extra time is being spent? I'm using Apache+mod_wsgi in daemon mode, but this happens with just the Django shell as well, so I figure it is not Apache-related.
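    Since connection.queries says the SQL itself is quick, I've been trying to separate connection overhead from query time from the same box Django runs on; a rough sketch of what I mean (host, database, user and log paths are placeholders for my setup):

        # Time a brand-new connection plus a trivial query. If this takes
        # ~1-2s, the time is going into connection setup (DNS, auth, SSL)
        # rather than the query itself.
        time psql -h dbhost -U appuser -d appdb -c "SELECT 1;"

        # Compare with query time as Postgres sees it: log anything slower
        # than 100 ms in postgresql.conf, reload, then watch the log while
        # reproducing the slow save().
        #   log_min_duration_statement = 100
        tail -f /var/lib/pgsql/data/pg_log/*.log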

    Read the article

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers for no conflict, etc. Testing started well: MX7 over regular and SSL still worked and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, FireFox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues re setting up SSL and CF9? Anyone had success with it? Dave
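    In case it helps, here's the kind of checking I can do from the command line on any machine that has keytool and OpenSSL available; the keystore path is a placeholder for wherever {jrun.rootdir}/lib/mykey actually resolves on my box:

        # List what's actually in the keystore JRun is pointed at; a
        # keystore with no private key entry would plausibly explain an
        # internal error on the server side. Path is a placeholder.
        keytool -list -v -keystore "C:/JRun4/lib/mykey"

        # Watch the handshake from the client side; a server-side alert
        # shows up here with a bit more detail than Firefox gives.
        openssl s_client -connect localhost:9101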

    Read the article

  • Windows7 shows a drive as full in summary but files, including backup folder, shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as 2 drives: C:\ and D:\. Windows 7 shows D:\ as full in the graphical summary in the 'My Computer' view of all the drives: the bar graph indicates that nearly all of the drive's capacity, 108 GB, is used. So I go into the D:\ drive to look at the files. I see several folders. I select them all and use the right-click Properties menu to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But Properties shows the files are very small, KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too, and counted these in the Properties total. Something invisible is holding the space.

    What is happening here? I'm afraid to delete anything if it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?

    Read the article

  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Hi, Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS, detects used space OK and also the drive name, but the PM browser (right click on partition, browse...) won't show anything, as if the disk were empty. Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'.

    PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least). Also tried:

    - photorec-6.11.3 - it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options)
    - Find and Mount - the intellectual scan went well and the only partition on the disk was detected, but when I tried to mount it as p: I got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted files/folders structure intact?

    I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance
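    If it helps, I do have access to a Linux machine; something along these lines is what I was planning to try next, on the theory that an image plus a read-only ntfs-3g mount might get the files off with names and folders intact (device name and paths are guesses for my setup):

        # Take a sector-level image first so recovery tools never have to
        # touch the failing disk again. /dev/sdb1 and the paths are
        # placeholders for my drive and a big enough destination.
        ddrescue /dev/sdb1 /mnt/space/samsung.img /mnt/space/samsung.log

        # Then try a read-only ntfs-3g mount of the image; if it works,
        # files come off with their original names and folder structure.
        mkdir -p /mnt/rescue
        loopdev=$(losetup --find --show --read-only /mnt/space/samsung.img)
        ntfs-3g -o ro "$loopdev" /mnt/rescue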

    Read the article

  • MySQL Server hitting 100% unexpectedly (Amazon AWS RDS)

    - by Luc
    Please help! We've been struggling with this one for months. This week we upped our RDS instance to the highest-performing instance, and although the occurrences have been reduced, our DB still all of a sudden hits 100%. It comes out of nowhere: sometimes 2am, sometimes midday. I've ruled out a DoS - our page access logs show normal traffic. I've ruled out memcached suddenly dying (hits and misses continue as normal). SHOW PROCESSLIST while we are having issues reports about 500 queries in the queue. If I kill them off or restart the server, they just keep coming back, and then eventually, out of nowhere, the server resumes back to normal - sometimes after up to 3 hours.

    Our badly performing queries take .02 seconds to execute when the server eventually returns to normal, but while we're in this 100% CPU psycho phase, those queries never finish executing. Please help!!!!! Anybody know anything about MySQL query optimization? Could it be the server deciding to use different indexes all of a sudden, which puts it into a spiral?
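    Here's roughly what I've been capturing when an episode happens, in case someone can tell me what else to grab (host and credentials are placeholders):

        # Snapshot what the ~500 queued queries actually are and what state
        # they're stuck in (Sending data, Waiting for table lock, ...).
        mysql -h myinstance.rds.amazonaws.com -u admin -p \
              -e "SHOW FULL PROCESSLIST\G" > processlist.txt

        # InnoDB's view: lock waits, long-running transactions, history.
        mysql -h myinstance.rds.amazonaws.com -u admin -p \
              -e "SHOW ENGINE INNODB STATUS\G" > innodb-status.txt

        # For a suspect query, compare its plan during an episode with the
        # plan when things are healthy; a flipped index choice shows here.
        mysql -h myinstance.rds.amazonaws.com -u admin -p \
              -e "EXPLAIN SELECT ...;"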

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing.

    If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows.

    Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands.

    Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds, in about half of my cut mpgs - so I'm back to square one. Anyone?
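    For the scripting side, this is the sort of loop I had in mind - feed it a plain-text list of start times and durations and let ffmpeg do stream copies. The chapter list filename and output names are made up, and given the desync I ran into it clearly needs more testing:

        #!/bin/sh
        # chapters.txt contains one "START DURATION" pair per line, e.g.:
        #   00:30:11 00:03:14
        # Each line becomes one stream-copied chunk of raw.vob.
        n=1
        while read start duration; do
            ffmpeg -i raw.vob -ss "$start" -t "$duration" \
                   -vcodec copy -acodec copy "chapter-$n.vob"
            n=$((n + 1))
        done < chapters.txt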

    Read the article

  • How to install IE9 when KB2120976 is not applicable to my Windows 7 x86 Ultimate edition?

    - by CVertex
    I'm trying to install IE9: I visit http://beautyoftheweb.com, download it, and the installer then says it's downloading prerequisites. After a minute, the installer says there's a problem and directs me to "Prerequisites for installing Internet Explorer 9 Beta". I click on the x86 installers one by one... most say "already installed", but http://support.microsoft.com/kb/2120976/ says "The update is not applicable to your computer". Lame. So I try the IE9 install again and go through the whole process with the same result.

    I discovered there's an IE9 install log at C:\Windows\IE9_main.log which logs the install process and reports an error for KB2120976:

        00:06.630: ERROR: Error installing prerequisite file (C:\Users\Vijay\AppData\Local\Temp\IE98036.tmp\KB2120976_x86.msu): 0x80240017 (2149842967)
        00:06.677: INFO: PauseOrResumeAUThread: Successfully resumed Automatic Updates.
        00:12.090: INFO: Link clicked, opening URL in new window:'http://go.microsoft.com/fwlink/?LinkId=185111'
        00:12.106: INFO: Setup exit code: 0x00009C47 (40007) - Required updates are missing from the system.

    Any idea why KB2120976 is inapplicable to my LEGAL Windows 7 Ultimate system? Any help is greatly appreciated.

    Read the article

  • install zenoss on ubuntu raises "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss, and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. When installing Zenoss, I switched to the zenoss user and then ran install.sh under zenoss-4.2.0/inst. When it tries to run the tests, this error occurred:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

        Results :

        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
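    One thing I'm not sure about is whether the variable actually reaches the install environment, since ~/.bashrc isn't read by every kind of shell; this is the sort of check I had in mind (paths match my setup above):

        # Does ZENHOME survive the way I become the zenoss user? A login
        # shell (su -) reads the profile; a plain su -c does not read it.
        su - zenoss -c 'echo "login shell: ZENHOME=$ZENHOME"'
        su zenoss -c 'echo "plain su:    ZENHOME=$ZENHOME"'

        # Exporting it explicitly right before running the installer
        # removes any doubt about which dotfile gets read:
        export ZENHOME=/usr/local/zenoss
        sh install.sh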

    Read the article

  • How to discover true identity of hard disk?

    - by F21
    I have 2 fake external hard drives that claim to have a storage capacity of 2TB. I pulled the enclosure apart and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows Disk Management and tried to create a new one and format it. Once I get to this point, the drives make a noise that is common to dying drives.

    I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer/serial number/model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise.

    Pictures of the drives and the CrystalDiskInfo screens (not reproduced here) show that the serial numbers are the same (these are 2 different drives!). How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.
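    If the drives will talk at all, this is the kind of query I'd try from a Linux live environment (the device name is a placeholder); the drive's own identify data often disagrees with a doctored capacity or label:

        # Ask the drive itself for model, serial and firmware revision,
        # independent of whatever the label or enclosure claims.
        smartctl -i /dev/sdb

        # Identify data again, plus the native capacity the drive reports;
        # an HPA (host protected area) clipping the size shows up here.
        hdparm -I /dev/sdb
        hdparm -N /dev/sdb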

    Read the article

  • How to set up the jdbc driver to connect to hsqldb from libreoffice?

    - by rumtscho
    I am trying to "split" a LibreOffice .odb file into a HSQL database and an OpenOffice document containing forms and macros. I am trying to follow the instructions from this thread:

        Within a few minutes you can convert your embedded HSQLDB to a stand-alone HSQLDB which is just a very fine database engine.
        1) Download and extract the current version from http://hsqldb.org/ and point the Java class path in Tools > Options > Java to the new hsqldb.jar
        2) Extract the database folder from your embedded database and rename the files data, properties, script to name.data, name.properties, name.script where "name." is an arbitrary name prefix.
        3) Connect a Base document to an existing JDBC database such as
           jdbc:hsqldb:file:/home/chenier/hsqldb/name;default_schema=true;shutdown=true;hsqldb.default_table_type=cached;get_column_name=false
           (again, "name" refers to your own file name prefix). This local single-user connection gives you much more than the embedded HSQLDB.
        4) Copy queries, forms and reports from the old database over to the new one.

    The wizard presents me with a window expecting two inputs: a "Datasource URL" and a "JDBC driver class". As far as I can tell, the tutorial above only tells me what to put into the Datasource URL. As for the JDBC driver class, I have no idea what to write into this field. I tried the fully-qualified name of the Java class, org.hsqldb.jdbc.JDBCDriver, as given in the HSQLDB documentation. When that failed, I tried the physical path /var/lib/hsqldb/lib/hsqldb.jar (although that should have been unnecessary, because first I pointed to this path as described under 1 and then restarted LibreOffice). In both cases, "Test class" failed with the message "The JDBC driver could not be loaded". OpenOffice's documentation doesn't say anything sensible about the field; it was something like "enter the JDBC driver in this box". Any ideas what I should enter there to get the connection working?
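    One check I can run myself is to confirm which driver class the jar I'm pointing at actually ships, since (as far as I understand it) the older HSQLDB 1.8 jars use org.hsqldb.jdbcDriver while the 2.x jars use org.hsqldb.jdbc.JDBCDriver; the jar path below is the one from my machine:

        # List the driver classes contained in the jar LibreOffice is
        # supposed to load; the class name typed into the dialog has to
        # match one of these exactly (with dots instead of slashes).
        unzip -l /var/lib/hsqldb/lib/hsqldb.jar | grep -i 'jdbcdriver'

        # Expected output is one of:
        #   org/hsqldb/jdbcDriver.class          (HSQLDB 1.8.x)
        #   org/hsqldb/jdbc/JDBCDriver.class     (HSQLDB 2.x)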

    Read the article

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11.

    My puppet.conf file at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created at /etc/puppet/hieradata, and within it:

    /etc/puppet/hieradata/common.yaml:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the master. Thanks in advance, Cole

    Read the article

  • Configured Samba to join our domain, but logon fails from Windows machine

    - by jasonh
    I've configured a Fedora 11 installation to join our domain. It seems to join successfully (though it reports a DNS update failure), but when I try to access \\fedoraserver.test.mycompany.com I'm prompted for a password. So I enter adminuser and the password and that fails, so I try test.mycompany.com\adminuser and that too fails. What am I missing?

    EDIT (Update 9/1/09): I can now connect to the machine and see the shares on it (see my response to djhowell's answer), but when I try to connect, I get an error saying The network path was not found. I checked the log entry on the Fedora computer for the computer I'm connecting from (/var/log/samba/log.ComputerX) and it reads:

        [2009/09/01 12:02:46, 1] libads/cldap.c:recv_cldap_netlogon(157)
          no reply received to cldap netlogon
        [2009/09/01 12:02:46, 1] libads/ldap.c:ads_find_dc(417)
          ads_find_dc: failed to find a valid DC on our site (Default-First-Site-Name), trying to find another DC

    Config files as of 9/1/09:

    smb.conf:

        [global]
        Workgroup = TEST
        realm = TEST.MYCOMPANY.COM
        password server = DC.TEST.MYCOMPANY.COM
        security = DOMAIN
        server string = Test Samba Server
        log file = /var/log/samba/log.%m
        max log size = 50
        idmap uid = 15000-20000
        idmap gid = 15000-20000
        windbind use default domain = yes
        cups options = raw
        client use spnego = no
        server signing = auto
        client signing = auto

        [share]
        comment = Test Share
        path = /mnt/storage1
        valid users = adminuser
        admin users = adminuser
        read list = adminuser
        write list = adminuser
        read only = No

    I also set the krb5.conf file to look like this:

        [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log

        [libdefaults]
        default_realm = test.mycompany.com
        dns_lookup_realm = false
        dns_lookup_kdc = false
        ticket_lifetime = 24h
        forwardable = yes

        [realms]
        TEST.MYCOMPANY.COM = {
          kdc = dc.test.mycompany.com
          admin_server = dc.test.mycompany.com
          default_domain = test.mycompany.com
        }

        [domain_realm]
        dc.test.mycompany.com = test.mycompany.com
        .dc.test.mycompany.com = test.mycompany.com

        [appdefaults]
        pam = {
          debug = false
          ticket_lifetime = 36000
          renew_lifetime = 36000
          forwardable = true
          krb4_convert = false
        }

    I realize that there might be an issue with EXAMPLE.COM in there, however if I change it to TEST.MYCOMPANY.COM then it fails to join the domain with a preauthentication failure. As of 9/1/09, this is no longer the case.
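    For reference, these are the checks I can run from the Fedora box (the domain admin account is a placeholder); I'm happy to post the output if anyone thinks it's relevant:

        # Verify Kerberos against the DC with the account I'm testing:
        kinit adminuser@TEST.MYCOMPANY.COM
        klist

        # Confirm the machine account / secure channel is still good:
        net ads testjoin

        # Can winbind check the trust secret and enumerate domain users?
        wbinfo -t
        wbinfo -u

        # What do the shares look like from the box itself?
        smbclient -L //fedoraserver.test.mycompany.com -U adminuser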

    Read the article

  • SQL Server Reporting Services - website blank, builder works

    - by Keith
    We have a few reports in SQL Server Reporting Services. For some reason, when we run a report from the website, it doesn't return any data. When I run the same report from Report Builder, it returns data. I looked in the logs and the only errors I could find are:

        ReportingServicesService!library!8!6/15/2012-08:12:33:: i INFO: Current DB Version Unknown, Instance Version C.0.8.54.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights., ;Info: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Exception caught while starting service. Error: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.

    I'm not really sure why it would be a different version. It's all SQL Server 2008 R2 and I haven't made any changes to it since it's been running.

    Read the article

  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us; in the past, we've already commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*...

    While it feels comfortable to think our system is perfectly secure (and it was certainly comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right?

    Does someone have any experience in hiring an "ethical hacker"? How do you find one? How much would it cost?

    *The only recommendation they made was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.

    Read the article

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and exists without issue on another machine.
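    A sketch of how I've been trying to see what the postinst actually receives (assuming debconf-utils is installed); the complaint about an absolute home dir suggests one of the preseeded answers isn't landing where I set it:

        # Show the questions the gitolite package defines and the values
        # debconf currently holds for them, so the preseed key names and
        # values can be compared against what the postinst will read.
        debconf-show gitolite
        debconf-get-selections | grep '^gitolite'

        # Re-run just the configuration step with debconf tracing enabled,
        # so the failing adduser call is visible in context.
        DEBCONF_DEBUG=developer dpkg --configure gitolite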

    Read the article

  • Set postfix to send email but not to receive it

    - by CodeShining
    I'm using Google Apps to handle the personal email addresses for my domain name, and I set up the DNS as Google suggests. All works fine. Now, since I need an SMTP server to send emails from my e-commerce site, I installed Postfix on the server. It works fine when I send emails to addresses at other domains, but it doesn't deliver to addresses at my own domain name. Let's say my domain is example.com and I set Postfix up using example.com: if I try to reset a password using [email protected], Postfix doesn't send the message and instead reports in mail.log:

        Sep 20 01:09:52 ip-10-54-26-162 postfix/pickup[6809]: B09A3415D8: uid=33 from=<www-data>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/cleanup[6854]: B09A3415D8: message-id=<20120920010952.B09A3415D8@ip-10-54-26-162.eu-west-1.compute.internal>
        Sep 20 01:09:52 ip-10-54-26-162 postfix/qmgr[30978]: B09A3415D8: from=<[email protected]>, size=4234, nrcpt=1 (queue active)
        Sep 20 01:09:52 ip-10-54-26-162 postfix/local[6856]: B09A3415D8: to=<[email protected]>, relay=local, delay=0.01, delays=0.01/0/0/0, dsn=5.1.1, status=bounced (unknown user: "myaccount")

    Of course it cannot find a local user "myaccount", since that account is on Google Apps... How can I tell Postfix to send the email and not search for a local user?
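    This is what I'm planning to check and change, based on my reading of the relay=local line in the log (the domain is the placeholder example.com from above); corrections welcome:

        # Which domains does Postfix consider "local"? If example.com shows
        # up here, mail for it is delivered to local unix accounts instead
        # of being sent out via the MX records (Google Apps).
        postconf mydestination

        # Keep only the machine itself as a local destination, then reload,
        # so example.com recipients are resolved through DNS like any
        # other remote domain.
        postconf -e 'mydestination = localhost'
        postfix reload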

    Read the article

  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother and have had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables, but MySQL status shows 512 open:

        mysqladmin status
        Uptime: 212854  Threads: 1  Questions: 43041  Slow queries: 7  Opens: 2605  Flush tables: 1  Open tables: 512  Queries per second avg: 0.202

    I've read that MyISAM opens extra file descriptors, and a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be or what I should be looking into?

    I've also been using mysqltuner and it's been helpful, but it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times. It hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction or narrow down Google searches, or just generally help me be less clueless? Thanks!
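    For anyone who wants numbers, these are the counters I've been comparing (run against the same server; credentials omitted). A table can be open more than once at a time, so the open-tables count can legitimately exceed the number of tables, and 512 may simply be the cache ceiling:

        # Compare the open-tables counter against the table cache setting
        # (named table_cache or table_open_cache depending on version).
        mysql -e "SHOW VARIABLES LIKE 'table%cache%';"
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%tables';"

        # Fragmentation roughly as mysqltuner sees it: free space trapped
        # inside the data files, per table.
        mysql -e "SELECT table_name, engine, data_free
                  FROM information_schema.tables
                  WHERE table_schema NOT IN ('information_schema','mysql')
                    AND data_free > 0;"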

    Read the article

  • testdisk - recover partition table

    - by Evaggelos Balaskas
    I destroyed the partition table of my laptop. TestDisk reports the following:

        Disk laptop.img - 250 GB / 232 GiB - CHS 30402 255 63 (RO)
             Partition                 Start         End    Size in sectors
        >P MS Data                    435868      456606           20739  [NO NAME]
         P MS Data                  19232600    19235479            2880  [NO NAME]
         D MS Data                  41945087    83890143        41945057
         D MS Data                  57151486   168579069       111427584
         D MS Data                  67637246   141037565        73400320
         D MS Data                 151523326   193466365        41943040
         D MS Data                 170617328   170618223             896
         D MS Data                 170631168   170634047            2880
         D MS Data                 171338232   171344405            6174  [Boot]
         D MS Data                 172008235   172231918          223684  [NO NAME]
         P MS Data                 193466368   214437887        20971520
         D MS Data                 217321375   225321678         8000304  [root]
         D MS Data                 224923646   308809725        83886080  [media]
         D MS Data                 308809728   420237311       111427584
         D MS Data                 418910206   481824765        62914560  [vmimages]

    My partition table had 3 primary partitions: 1. WinXP Home, 2. /boot, 3. LVM. Inside LVM I had 9 or 10 LVM partitions. One of them was my home (encrypted with LUKS).

    TestDisk can't recover my partition table or any other partition. The partitions marked [P] don't have any useful data. I want to use dd to extract the partitions and try to recover as many files as I can. Any ideas how I can extract e.g. the [root] LVM partition from the above TestDisk report? I am afraid that my disk was also corrupted.
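    To make the dd question concrete, this is the kind of command I think I need, using the [root] row above (start sector 217321375, length 8000304 sectors, 512-byte sectors assumed); please tell me if the arithmetic or flags are wrong:

        # Copy exactly the sectors TestDisk lists for the [root] entry out
        # of the disk image into a standalone partition image.
        dd if=laptop.img of=root.img bs=512 skip=217321375 count=8000304

        # If that really is an LVM physical volume (or a LUKS container),
        # these should recognise it without modifying anything:
        file root.img
        losetup --find --show --read-only root.img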

    Read the article

  • What steps to take when CPAN installation fails?

    - by pythonic metaphor
    I have used CPAN to install Perl modules on quite a few occasions, but I've been lucky enough to just have it work. Unfortunately, I was trying to install Thread::Pool today and one of the required dependencies, Thread::Conveyor::Monitored, failed its tests:

        Test Summary Report
        -------------------
        t/Conveyor-Monitored02.t (Wstat: 65280 Tests: 89 Failed: 0)
          Non-zero exit status: 255
          Parse errors: Tests out of sequence.  Found (2) but expected (4)
                        Tests out of sequence.  Found (4) but expected (5)
                        Tests out of sequence.  Found (5) but expected (6)
                        Tests out of sequence.  Found (3) but expected (7)
                        Tests out of sequence.  Found (6) but expected (8)
        Displayed the first 5 of 86 TAP syntax errors.
        Re-run prove with the -p option to see them all.
        Files=3, Tests=258, 6 wallclock secs ( 0.07 usr 0.03 sys + 4.04 cusr 1.25 csys = 5.39 CPU)
        Result: FAIL
        Failed 1/3 test programs. 0/258 subtests failed.
        make: *** [test_dynamic] Error 255
          ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
          /usr/bin/make test -- NOT OK
        //hint// to see the cpan-testers results for installing this module, try:
          reports ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz
        Running make install
          make test had returned bad status, won't install without force
        Failed during this command:
          ELIZABETH/Thread-Conveyor-Monitored-0.12.tar.gz: make_test NO

    What steps do you take to start seeing why an installation failed? I'm not even sure how to begin tracking down what's wrong.
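    A sketch of the kind of digging I'd expect an answer to suggest (the build directory suffix will differ on your machine); I'm listing it mostly so someone can tell me which steps actually matter:

        # The unpacked build directory that cpan used (exact suffix varies):
        cd ~/.cpan/build/Thread-Conveyor-Monitored-0.12-*

        # Re-run the failing test file verbosely to see the raw TAP output
        # rather than just the summary (-b adds blib to @INC after make):
        perl Makefile.PL && make
        prove -vb t/Conveyor-Monitored02.t

        # Last resort: force the install despite the failing tests.
        cpan -f -i Thread::Conveyor::Monitored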

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host under VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host providing VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.

    Host: Ubuntu Desktop Edition 10.04 (I know, again, not my choice) running VMware Zimbra
    - 8GB of RAM
    - On-board RAID1 of two 320GB Seagate Barracuda drives for the OS
    - Software RAID5 of four 500GB WD Caviar Black drives on MDADM for bulk storage (sorry, I don't know the model #)
    - A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck)

    Guest: Windows Small Business Server 2011
    - 4GB of RAM
    - Host-equivalent CPU allocation
    - VDI file for the OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID

    For some reason when running, the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at.

    Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*
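    If anyone wants data, here's what I was planning to collect ("SBS2011" is a placeholder for whatever the VM is registered as); since top reports a single process across all host cores, values above 100% just mean several threads are busy, and I'd like to see which ones and whether hardware virtualization and nested paging are actually enabled:

        # How is the VM configured? vCPU count, hardware virtualization,
        # nested paging, and friends.
        VBoxManage list runningvms
        VBoxManage showvminfo "SBS2011" | grep -iE 'cpu|paging|vt-x|acceleration'

        # Per-thread CPU usage of the VirtualBox process on the host, to
        # see whether one emulation thread is spinning or all vCPUs are.
        top -H -p "$(pgrep -f VirtualBox | head -n1)"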

    Read the article
