Search Results

Search found 4279 results on 172 pages for 'crystal reports 8 5'.

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    Boss asked me to check to see if I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIS Mailbox Store, category General, Event 9690. The message said (edited to make it generalized):

        Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this database (the logical size equals the physical size of the .edb file and the .stm file minus the logical free space in each) is 22GB. This database size has exceeded the size limit of 22 GB. This database will be dismounted immediately.

    Hmm...happened at five in the morning, and I'm thinking this is a pretty good hint that this leads to the culprit. Thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.

  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run the file -bi command against these files, I'm given this output (same output for each file):

        application/octet-stream; charset=binary

    Each file is clearly plain-text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated.

    Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block), removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as I open the file. Perhaps I can get vim to force the encoding?
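
    For what it's worth, a minimal way to check whether a byte-order mark or a Windows codepage/UTF-16 encoding is what is confusing file (a hedged sketch; the filename is a placeholder):

        # A UTF-8 BOM shows up as "ef bb bf", UTF-16 LE as "ff fe" at offset 0
        head -c 16 widget.cpp | hexdump -C

        # If stray non-UTF-8 bytes are the problem, convert a copy and re-test:
        iconv -f WINDOWS-1252 -t UTF-8 widget.cpp > widget.utf8.cpp
        file -bi widget.utf8.cpp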

  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB hard disk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It's always worked fine using the DevEject command-line utility. This week it failed for some reason:

        DevEject 1.0
        2003 c't/Matthias Withopf
        Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
        Error ejecting device USB Mass Storage Device, vetoed (15,5)!

    Worse yet, using the Safely Remove Hardware tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive. I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? Short of a restart, which I can't do today before the backup operator goes home, how do I fix it?

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
    I'm querying some signal level values measured in dBm, and the SNMP host on the remote device reports the values as negative numbers, e.g. -90 dBm. However, check_snmp seems to be incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 -w 10:,~:-85 -c 15:,~:-80 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
          oidname: DEVICE-MIB::AverageReceiveSNR.0
          response: = INTEGER: 25
        Processing line 2
          oidname: DEVICE-MIB::CurrentNoiseFloor.0
          response: = INTEGER: -97
        SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
        Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct, but it'll never generate a notification when out of range:

        root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
        /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
        DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
        Processing line 1
          oidname: DEVICE-MIB::CurrentNoiseFloor.0
          response: = INTEGER: -97
        SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's -85 to -80 dBm, and an OK when it's -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
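
    In case the threshold parser in this build really can't take negatives, one hedged workaround is a tiny wrapper plugin that negates the reading and compares against positive limits. This is only a sketch: the community string is a placeholder, and the host and OID are the ones from the output above.

        #!/bin/sh
        # noise_floor_check.sh -- illustrative wrapper, not a drop-in plugin
        VALUE=$(snmpget -v1 -c public -m ALL -Ovq 192.168.1.100 DEVICE-MIB::CurrentNoiseFloor.0)
        NEG=$(( 0 - $VALUE ))            # -97 dBm becomes 97
        if [ "$NEG" -le 80 ]; then       # noise floor is -80 dBm or higher
            echo "NOISE CRITICAL - ${VALUE} dBm | noise_floor=${VALUE}"
            exit 2
        elif [ "$NEG" -le 85 ]; then     # between -85 and -80 dBm
            echo "NOISE WARNING - ${VALUE} dBm | noise_floor=${VALUE}"
            exit 1
        else                             # -85 dBm or lower
            echo "NOISE OK - ${VALUE} dBm | noise_floor=${VALUE}"
            exit 0
        fi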

  • SSRS2008R2 report times out, but the underlying query executes in the Management Studio

    - by Matthew Belk
    A customer of mine recently moved servers and the new server has SQL 2008 R2. His old server was SQL 2005. The new server has substantially better CPU, RAM, and disk performance than the old, but several reports time out while executing. When I run the underlying query in SQL Server Management Studio, the query executes in sub-second time. The exact error message returned via the Report Manager UI is:

        An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. (rsReportServerDatabaseError)
        Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    It must be noted that this database is not just analytical; it's also fairly transactional, although the transaction volume is not exceptionally high. What can I do to improve the performance of the SSRS query engine? Are there settings in the data source I can adjust, or in the SSRS config files?

  • Windows 2008 64bit: applications and explorer always hang

    - by Phil Farthing
    I set up a couple of Windows 2008 64-bit systems about 5 months ago. Initially all seemed well. Now, however, for no apparent reason, things are dog slow: apps hang, Explorer hangs, just clicking on something can cause a CPU spike of 100%, and often it's Explorer that is eating it up. As I have two on identical hardware, and they experience the same problem, it doesn't seem related to add-on software. The only thing these have in common is Kaspersky, and I've tried disabling/uninstalling it to no avail. There are no useful error messages in the event logs. Actually, the system never even reports app hangs. Sometimes it's similar to what I've seen on Windows 7 systems, where the screen goes milky and asks if I want to troubleshoot, but that's only when I get impatient and click-happy. The really odd thing is that it will NOT do this for a few minutes at a time, and then it starts up again. For example, I will click on the Start menu and browse for the Admin Tools; the Start menu will hang at some point and I'll have to wait about a minute, then it's OK. The next time I do this, a few seconds later, it's fine. Every click seems to hang the first time around, then be OK the second time if I do the exact same thing. If anyone has any suggestions, please PLEASE let me know! Thanks =)

  • Authenticate VNC session with ConsoleKit?

    - by lori
    I have a Linux machine running Fedora 16 in a cupboard. It has no screen or keyboard. I connect to it using a combination of VNC and SSH. Recently, after an update, I have had issues with authentication on the machine. If I VNC to it, the KDE desktop pops up an error dialog every few minutes saying "Authorization failed. Failed to obtain authentication." If I plug in a USB drive it fails to mount; Dolphin reports an authentication issue again. I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user, so it prevents authentication. This is the output from ck-list-sessions:

        $ ck-list-sessions
        Session5:
            unix-user = '1000'
            realname = 'steve'
            seat = 'Seat6'
            session-type = ''
            active = FALSE
            x11-display = ':1'
            x11-display-device = ''
            display-device = ''
            remote-host-name = ''
            is-local = FALSE
            on-since = '2012-09-16T08:07:03.137011Z'
            login-session-id = '1'

    I have tried to update my .vnc/xstartup script to include ck-launch-session as follows:

        $ cat ~/.vnc/xstartup
        #!/bin/sh
        exec ck-launch-session vncconfig -iconic &
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        export XKL_XMODMAP_DISABLE=1
        OS=`uname -s`
        if [ $OS = 'Linux' ]; then
          case "$WINDOWMANAGER" in
            *gnome*)
              if [ -e /etc/SuSE-release ]; then
                PATH=$PATH:/opt/gnome/bin
                export PATH
              fi
              ;;
          esac
        fi
        if [ -x /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session /etc/X11/xinit/xinitrc
        fi
        if [ -f /etc/X11/xinit/xinitrc ]; then
          exec ck-launch-session sh /etc/X11/xinit/xinitrc
        fi
        [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
        exec ck-launch-session xsetroot -solid grey
        exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
        exec ck-launch-session twm &

    This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?

  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single-board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1   /          ext3    defaults   0 0
        none        /proc      proc    defaults   0 0
        none        /sys       sysfs   defaults   0 0
        none        /dev/shm   tmpfs   defaults   0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (The above may contain errors: there's no practical way to copy and paste.) The partition at /dev/hda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing/resolving this?

    Edit: I can remount with mount -o remount,rw / and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc    /proc   proc    [...]
        sysfs   /sys    sysfs   [...]

    Now mount -a doesn't complain, but it doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.
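
    A few low-risk checks that might narrow this down (a sketch, not a definitive diagnosis): see whether the kernel is being told to keep root read-only and whether ext3 is reporting errors that would explain the odd mount options.

        # Root is normally mounted ro by the kernel and remounted rw by an init
        # script later in boot; check what the kernel command line asks for
        cat /proc/cmdline

        # Any ext3 complaints or unexpected remounts logged at boot?
        dmesg | grep -iE 'ext3|remount|read-only'

        # From a rescue/live environment, with the partition unmounted,
        # force a full filesystem check
        e2fsck -f /dev/sda1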

  • How to debug slow queries in Django+Postgres

    - by lacker
    My database queries from Django are starting to take 1-2 seconds and I'm having trouble figuring out why. Not too big a site, about 1-2 requests per second (that hit Django; static files are just served from nginx). The thing that confuses me is that I can replicate the slowness in the Django shell using debug mode, but when I issue the exact same queries at an SQL prompt they are fast. It takes about a second for a query to return, but when I check connection.queries it reports the time as under 10 ms. Here's an example (from the Django shell):

        >>> p = PlayerData.objects.get(uid="100000521952372")
        >>> a = time.time(); p.save(); print time.time() - a
        1.96812295914
        >>> for d in connection.queries: print d["time"]
        ...
        0.002
        0.000
        0.000

    How can I figure out where this extra time is being spent? I'm using Apache+mod_wsgi in daemon mode, but this happens with just the Django shell as well, so I figure it is not Apache-related.
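
    One way to tell whether that time is really spent inside PostgreSQL or somewhere between Django and the database is to have Postgres log slow statements itself. A hedged sketch only: the config path shown is the Debian-style default and the threshold is arbitrary.

        # Log any statement the server itself takes longer than 200 ms to run
        echo "log_min_duration_statement = 200" >> /etc/postgresql/8.4/main/postgresql.conf
        /etc/init.d/postgresql reload

        # Then repeat the p.save() in the Django shell and compare the duration
        # recorded in the Postgres log with the ~2 s seen on the client side.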

  • MySQL Server hitting 100% unexpectedly (Amazon AWS RDS)

    - by Luc
    Please help! We've been struggling with this one for months. This week we upped our RDS instance to the highest-performing instance and although the occurrences have reduced, we're still having our DB all of a sudden hit 100%. It comes out of nowhere: sometimes 2am, sometimes midday. I've ruled out a DoS - our page access logs show normal traffic. I've ruled out memcached suddenly dying (hits and misses continue as normal). SHOW PROCESSLIST while we have issues reports about 500 queries in the queue. If I kill them off or restart the server, they just keep coming back, and then eventually, out of nowhere, our server resumes back to normal - sometimes after up to 3 hours. Our badly performing queries take .02 seconds to execute when the server eventually returns to normal, but while we're in this 100% CPU psycho phase, those queries never finish executing. Please help!!!!! Anybody know anything about MySQL query optimization? Could it be the server deciding to use different indexes all of a sudden, which puts it into a spiral?
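
    A hedged suggestion for the next time it spikes: before killing the queue, capture what those ~500 queries are actually waiting on. These are standard MySQL commands; the endpoint and user below are placeholders.

        # Snapshot the full process list and InnoDB state while the CPU is pegged
        mysql -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
              -e "SHOW FULL PROCESSLIST\G" > processlist.txt
        mysql -h mydb.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p \
              -e "SHOW ENGINE INNODB STATUS\G" > innodb_status.txt

    The State column of the process list (e.g. "Copying to tmp table", "Waiting for table level lock") and the TRANSACTIONS/SEMAPHORES sections of the InnoDB status usually show whether it's a lock pile-up or a runaway query plan.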

  • Windows7 shows a drive as full in summary but files, including backup folder, shown on drive are very small

    - by Rob
    I have a drive partitioned so it is seen by Windows as 2 drives: C:\ and D:\. Windows 7 shows D:\ as full in the graphical summary of all the drives in 'My Computer', i.e. the bar graph indicates full and nearly all of the drive's capacity, 108 GB, is used. So I go into the D:\ drive to look at the files and I see several folders. I select them all and use the right-click menu's Properties to count their size, expecting the value to be about the same as what Windows reports in the summary, i.e. nearly 108 GB. But the Properties dialog shows the files are very small - KBs and MBs, nowhere near 108 GB. One of the folders is a backup, but its size is very small. I've checked the folder options to show all system files and hidden files too, and counted these in the Properties. Something invisible is holding the space. What is happening here? I'm afraid to delete anything in case it removes valuable backups. Have I got huge backups here? Why can't I see them? How do I see them?

  • unreadable corrupted ntfs partition - lost clusters reported

    - by Eduardo Martinez
    Hi, Partition Magic is reporting multiple 'bad file record signature' and 'lost clusters' errors on my 250 GB Samsung SATA disk (connected via USB on an XP SP3 machine). Unfortunately PM is unable to fix them. PM shows the drive as being NTFS, and detects the used space and the drive name OK. But the PM browser (right-click on partition, Browse...) won't show anything, as if the disk were empty. Windows Explorer is not even picking up the drive name and reports 'the file or directory is corrupted and unreadable'. The PTDD Partition Table Doctor demo tells me the boot sector is fine, and I can see all the disk content in its browser - but crucially I cannot copy that content over to a new disk (the PTDD browser is pretty arid, to say the least).

    Also tried:

    - photorec-6.11.3: it actually started to extract files but wouldn't keep file names or any folder structure (maybe I missed something in the configuration options).
    - Find and Mount: the intellectual scan went well, the only partition on the disk was detected, then I tried to mount it as p: but got this error in Windows Explorer: 'p:\ is not accessible. The media is write protected'. Find and Mount allows you to create an image from the partition, but I don't have a disk big enough at hand. Does anyone know if this will keep the extracted file/folder structure intact?

    I'm starting to think the disk is pretty screwed and my chances to recover this data are slim. Please someone enlighten me with that marvellous piece of software I am missing :-) Thanks in advance.
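
    If the disk can be attached to a Linux box, a common first step (sketched here; device names and paths are illustrative) is to image the failing partition once with ddrescue and do all further recovery against the copy:

        # Copy the raw partition to an image, logging and skipping bad areas
        ddrescue -n /dev/sdb1 /mnt/bigdisk/samsung.img /mnt/bigdisk/samsung.log

        # Then try mounting the image read-only and pulling the files off it
        mkdir -p /mnt/recovered
        mount -o loop,ro -t ntfs-3g /mnt/bigdisk/samsung.img /mnt/recovered

    That keeps file names and folder structure if the NTFS metadata is still readable, and it means the flaky original only has to survive one more full read.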

  • Migrating to CF9: trouble getting JRun working with SSL

    - by DaveBurns
    I have a client on MX7 who wants to migrate to CF9. I have a dev environment for them on my WinXP machine where I've configured MX7 to run with JRun's built-in web server. I've had that working for a long time with both regular and SSL connections. I installed CF9 yesterday side-by-side with the existing MX7 install to start testing. The install was smooth and detected MX7, adjusted CF9's port numbers for no conflict, etc. Testing started well: MX7 over regular and SSL still worked and CF9 worked over regular HTTP. But I can't get CF9 to work with SSL. I installed a new certificate with keytool, FireFox (v3.6) complained about it being unsigned, I added it to the exception list, and now I get this:

        Secure Connection Failed
        An error occurred during a connection to localhost:9101.
        Peer reports it experienced an internal error.
        (Error code: ssl_error_internal_error_alert)

    I've been Googling that in all variations but can't find much help to get past this. I don't see any info in any log files either. FWIW, here's my SSL config from SERVER-INF/jrun.xml:

        <service class="jrun.servlet.http.SSLService" name="SSLService">
          <attribute name="enabled">true</attribute>
          <attribute name="interface">*</attribute>
          <attribute name="port">9101</attribute>
          <attribute name="keyStore">{jrun.rootdir}/lib/mykey</attribute>
          <attribute name="keyStorePassword">*deleted*</attribute>
          <attribute name="trustStore">{jrun.rootdir}/lib/trustStore</attribute>
          <attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
          <attribute name="deactivated">false</attribute>
          <attribute name="bindAddress">*</attribute>
          <attribute name="clientAuth">false</attribute>
        </service>

    Anyone here know of any issues re setting up SSL and CF9? Anyone had success with it? Dave
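
    If the keystore itself turns out to be the problem, a minimal known-good one can be generated with keytool for comparison (a sketch: alias, password, validity and the -dname values are illustrative, and the path should match the keyStore attribute above):

        # Generate a fresh 2048-bit RSA key pair into the keystore JRun points at
        keytool -genkey -alias cf9ssl -keyalg RSA -keysize 2048 \
                -keystore /path/to/jrun4/lib/mykey -storepass changeit \
                -validity 365 -dname "CN=localhost, OU=Dev, O=Example, C=US"

        # Double-check what the keystore actually contains
        keytool -list -v -keystore /path/to/jrun4/lib/mykey -storepass changeit

    Worth noting as an assumption rather than a certainty: keytool's historical default key algorithm is DSA, and some browsers negotiate badly with DSA certificates, which is one reason to spell out -keyalg RSA.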

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows.

    Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands.

    Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds, in about half of my cut mpgs - so I'm back to square one. Anyone?
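
    For the "feed in start times and durations, get out a series of files" part, a rough sketch of a stream-copy loop (assuming a chapters.txt of "START DURATION" pairs in seconds; cuts will snap to keyframes, and the file names are placeholders):

        #!/bin/sh
        # chapters.txt, one pair per line, e.g.:
        #   1811 194
        #   2005 337
        n=1
        while read start dur; do
            ffmpeg -i raw.vob -ss "$start" -t "$dur" \
                   -acodec copy -vcodec copy "chapter$(printf '%02d' "$n").mpg"
            n=$((n + 1))
        done < chapters.txt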

  • Why does notepad crash on desktop files in the save-as dialog?

    - by deepc
    Here's a puzzling problem - maybe somebody has an idea. Right now I am out of ideas. On Win7 64-bit, the following crashes Notepad:

    1. On the Desktop, right-click, select "New | Text Document". This creates "New Text Document.txt".
    2. Right-click on that file, select "Edit". This opens Notepad with the empty file.
    3. Select "File | Save as": Notepad crashes and Win7 reports that "Notepad has stopped working".

    Now, move the file to c:\temp and repeat steps 2 and 3: no crash this time and the save-as dialog appears normally. I can create similar steps for the "open" dialog.

    Things I have tried:

    - Safe mode - does not work, same problem
    - Create a new user and try again logged in as that user - no crash
    - Name the file differently, or create it elsewhere and then move it to the desktop - same problem
    - Use WordPad instead - same problem
    - Review shell extensions with ShellExView - nothing extraordinary here
    - Stare at the event log entries for each of the crashes. Does not enlighten me.
    - At the time of the crash, look at the Process Explorer stack view. Hangs at a function "TaskDialog".
    - sfc.exe /scannow repaired some files but the problem persists.

    This is what the event log entries look like:

        Log Name:      Application
        Source:        Application Error
        Date:          14.12.2010 00:33:48
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Description:
        Faulting application name: NOTEPAD.EXE, version: 6.1.7600.16385, time stamp: 0x4a5bc9b3
        Faulting module name: COMCTL32.dll, version: 6.10.7600.16661, time stamp: 0x4c6f6e4b
        Exception code: 0xc000041d
        Fault offset: 0x00000000000db770
        Faulting process id: 0x198
        Faulting application start time: 0x01cb9b1e140ab92a
        Faulting application path: C:\Windows\system32\NOTEPAD.EXE
        Faulting module path: C:\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16661_none_fa62ad231704eab7\COMCTL32.dll

    What else should I try, short of dumping my user profile and starting over with a new one? Thanks...

  • Installing Zenoss on Ubuntu raises "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. When installing Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst; when it tries to run the tests, this error occurred:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

        Results :

        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
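
    One thing worth ruling out (a hedged guess, not a confirmed cause): ~/.bashrc is only read by interactive shells, so child processes spawned by install.sh (such as the Maven/surefire test JVMs) may never see ZENHOME even though an interactive echo does. A quick check, and one way to export it from a file that login shells always source, might look like this:

        # Does a fresh login shell for the zenoss user get the variable?
        sudo su - zenoss -c 'echo ZENHOME=$ZENHOME'

        # And a plain non-interactive invocation, closer to what install.sh spawns?
        sudo -u zenoss env | grep ZENHOME

        # If it is missing there, export it from a login-wide profile snippet
        echo 'export ZENHOME=/usr/local/zenoss' | sudo tee /etc/profile.d/zenoss.sh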

  • How to install IE9 when KB2120976 is not applicable to my Windows 7 x86 Ultimate edition?

    - by CVertex
    I'm trying to install IE9: I visit http://beautyoftheweb.com, download it, and the installer then says it's downloading prerequisites. After a minute, the installer says there's a problem and directs me to "Prerequisites for installing Internet Explorer 9 Beta". I click on the x86 installers one by one... most say "already installed", but http://support.microsoft.com/kb/2120976/ says "The update is not applicable to your computer". Lame. So I retry the IE9 install and go through the whole process again with the same result. I discovered there's an IE9 install log at C:\Windows\IE9_main.log which logs the install process and reports an error for 2120976:

        00:06.630: ERROR: Error installing prerequisite file (C:\Users\Vijay\AppData\Local\Temp\IE98036.tmp\KB2120976_x86.msu): 0x80240017 (2149842967)
        00:06.677: INFO: PauseOrResumeAUThread: Successfully resumed Automatic Updates.
        00:12.090: INFO: Link clicked, opening URL in new window:'http://go.microsoft.com/fwlink/?LinkId=185111'
        00:12.106: INFO: Setup exit code: 0x00009C47 (40007) - Required updates are missing from the system.

    Any idea why KB2120976 is inapplicable to my LEGAL Windows 7 Ultimate system? Any help is greatly appreciated.

  • How to discover true identity of hard disk?

    - by F21
    I have 2 fake external hard drives that claim to have a storage capacity of 2 TB. I pulled the enclosures apart and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil. However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2 TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows Disk Management and tried to create a new one and format it. Once I get to this point, the drives make the kind of noise that is common to dying drives. I am not interested in using these drives for production, but I am interested in finding their true identity (manufacturer/serial number/model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise. The attached pictures of the drives and the CrystalDiskInfo screenshots show that the reported serial numbers are the same (these are 2 different drives!). How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.
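
    If the drives can be attached directly to a SATA port on a Linux machine (rather than through the USB bridge, which can itself rewrite the reported identity), a hedged way to ask the drive firmware who it claims to be:

        # Model, serial and capacity as reported by the drive itself
        smartctl -i /dev/sdX
        hdparm -I /dev/sdX | head -n 25

    If the firmware has genuinely been reflashed these may still repeat the fake ST2000DL003 identity, but a mismatch between the two tools, or between the SATA and USB results, at least shows where the lie is being told.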

  • How to test server throughput

    - by embwbam
    I've always used ApacheBench (ab) to try to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run ApacheBench against a simple hello-world server it can handle 2500 requests per second or so. However, if I put a timeout in the hello-world function, so that it responds after 2 seconds, ApacheBench reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, it goes up. This makes sense, because ApacheBench is basically sending out requests in batches of 100, which come back every 2 seconds:

        100 requests / 2 seconds = 50 requests / second

    If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit; I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Any way I can get a good guess about how many requests my server can handle? I want to make sure the test computer isn't the one causing the problem.
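
    A hedged sketch of pushing the client side harder without tripping descriptor limits (the URL and numbers are illustrative, and both the shell running ab and the shell that launched the node process need the raised limit):

        # Raise the per-process open-file limit for this shell first
        ulimit -n 65536

        # Keep enough requests in flight that the 2 s handler delay isn't the cap:
        # at concurrency 1000, roughly 1000 req / 2 s = 500 req/s if nothing else limits it
        ab -n 20000 -c 1000 http://127.0.0.1:8000/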

  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, followed the instructions on http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training Advanced Puppet manual. When I run a test against it, the results always come back as 'nil' and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11.

    My puppet.conf file at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created at /etc/puppet/hieradata, and within it:

    /etc/puppet/hieradata/common.yaml

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have tried even restarting the Master. Thanks in advance, Cole
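
    A quick way to take Puppet itself out of the loop is the hiera command-line tool, pointed at the same config file (a sketch; the key is one of the ones from common.yaml). It may also be worth noting, as a guess rather than a confirmed cause, that keys written as :smtp_server: are parsed as Ruby symbols, while hiera lookups use plain string keys, so a copy of the data files with un-prefixed keys (smtp_server: relay.internalfoo.com) is worth comparing against.

        # Look a key up directly through the CLI, with debug output showing
        # which data files are consulted
        hiera -c /etc/puppet/hiera.yaml smtp_server -d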

  • SQL Server Reporting Services - website blank, builder works

    - by Keith
    We have a few reports in SQL Server Reporting Services. For some reason, when we run a report from the website, it doesn't return any data. When I run the same report from Report Builder, it returns data. I looked in the logs and the only errors I could find are:

        ReportingServicesService!library!8!6/15/2012-08:12:33:: i INFO: Current DB Version Unknown, Instance Version C.0.8.54.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights., ;Info: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.
        ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Exception caught while starting service. Error: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.

    I'm not really sure why it would be a different version. It's all SQL Server 2008 R2 and I haven't made any changes to it since it's been running.

  • Configured Samba to join our domain, but logon fails from Windows machine

    - by jasonh
    I've configured a Fedora 11 installation to join our domain. It seems to join successfully (though it reports a DNS update failure), but when I try to access \\fedoraserver.test.mycompany.com I'm prompted for a password. So I enter adminuser and the password and that fails, so I try test.mycompany.com\adminuser and that too fails. What am I missing?

    EDIT (Update 9/1/09): I can now connect to the machine and see the shares on it (see my response to djhowell's answer), but when I try to connect, I get an error saying The network path was not found. I checked the log entry on the Fedora computer for the computer I'm connecting from (/var/log/samba/log.ComputerX) and it reads:

        [2009/09/01 12:02:46, 1] libads/cldap.c:recv_cldap_netlogon(157)
          no reply received to cldap netlogon
        [2009/09/01 12:02:46, 1] libads/ldap.c:ads_find_dc(417)
          ads_find_dc: failed to find a valid DC on our site (Default-First-Site-Name), trying to find another DC

    Config files as of 9/1/09:

    smb.conf:

        [global]
        Workgroup = TEST
        realm = TEST.MYCOMPANY.COM
        password server = DC.TEST.MYCOMPANY.COM
        security = DOMAIN
        server string = Test Samba Server
        log file = /var/log/samba/log.%m
        max log size = 50
        idmap uid = 15000-20000
        idmap gid = 15000-20000
        windbind use default domain = yes
        cups options = raw
        client use spnego = no
        server signing = auto
        client signing = auto

        [share]
        comment = Test Share
        path = /mnt/storage1
        valid users = adminuser
        admin users = adminuser
        read list = adminuser
        write list = adminuser
        read only = No

    I also set the krb5.conf file to look like this:

        [logging]
        default = FILE:/var/log/krb5libs.log
        kdc = FILE:/var/log/krb5kdc.log
        admin_server = FILE:/var/log/kadmind.log

        [libdefaults]
        default_realm = test.mycompany.com
        dns_lookup_realm = false
        dns_lookup_kdc = false
        ticket_lifetime = 24h
        forwardable = yes

        [realms]
        TEST.MYCOMPANY.COM = {
            kdc = dc.test.mycompany.com
            admin_server = dc.test.mycompany.com
            default_domain = test.mycompany.com
        }

        [domain_realm]
        dc.test.mycompany.com = test.mycompany.com
        .dc.test.mycompany.com = test.mycompany.com

        [appdefaults]
        pam = {
            debug = false
            ticket_lifetime = 36000
            renew_lifetime = 36000
            forwardable = true
            krb4_convert = false
        }

    I realize that there might be an issue with EXAMPLE.COM in there; however, if I change it to TEST.MYCOMPANY.COM then it fails to join the domain with a preauthentication failure. As of 9/1/09, this is no longer the case.
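
    A few standard checks, sketched here, that usually narrow down where the trust is failing (run on the Fedora box, with the names adjusted to the real domain):

        # Is the machine account join still valid?
        net ads testjoin          # or: net rpc testjoin, matching "security = domain"

        # Can winbind talk to and authenticate against the DC?
        wbinfo -t
        wbinfo -u | head

        # Does Kerberos work for the admin account?
        kinit adminuser@TEST.MYCOMPANY.COM
        klist

        # Finally, list the shares the same way a Windows client would
        smbclient -L //fedoraserver -U 'TEST\adminuser'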

  • How to set up the JDBC driver to connect to HSQLDB from LibreOffice?

    - by rumtscho
    I am trying to "split" a LibreOffice .odb file into an HSQL database and an OpenOffice document containing forms and macros. I am trying to follow the instructions from this thread:

        Within a few minutes you can convert your embedded HSQLDB to a stand-alone HSQLDB which is just a very fine database engine.
        1) Download and extract the current version from http://hsqldb.org/ and point the Java class path in Tools > Options > Java to the new hsqldb.jar
        2) Extract the database folder from your embedded database and rename the files data, properties, script to name.data, name.properties, name.script where "name." is an arbitrary name prefix.
        3) Connect a Base document to an existing JDBC database such as jdbc:hsqldb:file:/home/chenier/hsqldb/name;default_schema=true;shutdown=true;hsqldb.default_table_type=cached;get_column_name=false (again, "name" refers to your own file name prefix). This local single-user connection gives you much more than the embedded HSQLDB.
        4) Copy queries, forms and reports from the old database over to the new one.

    The wizard presents me with a window expecting two inputs: a "Datasource URL" and a "JDBC driver class". As far as I can tell, the tutorial above only tells me what to put into the Datasource URL. As for the JDBC driver class, I have no idea what to write into this field. I tried the fully-qualified name of the Java class, org.hsqldb.jdbc.JDBCDriver, as given in the HSQLDB documentation. When that failed, I tried the physical path /var/lib/hsqldb/lib/hsqldb.jar (although that should have been unnecessary, because first I pointed to this path as described under 1 and then restarted LibreOffice). In both cases, "Test class" failed with the message "The JDBC driver could not be loaded". OpenOffice's documentation doesn't say anything sensible about the field; it was something like "enter the JDBC driver in this box". Any ideas what I should enter there to get the connection working?
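
    Not knowing which driver class this particular hsqldb.jar ships, one low-tech check is to ask the jar itself (a sketch; the path is just wherever the new HSQLDB was extracted):

        # List the driver classes inside the jar; the class name (with '/'
        # replaced by '.') is what belongs in the "JDBC driver class" box
        unzip -l /home/chenier/hsqldb/lib/hsqldb.jar | grep -i 'jdbc.*driver'

        # Typical hits: org/hsqldb/jdbcDriver.class        (HSQLDB 1.8.x)
        #               org/hsqldb/jdbc/JDBCDriver.class   (HSQLDB 2.x)

    If the class listed there still fails to load, the class-path entry under Tools > Options > Java is the other usual suspect, since that is where LibreOffice looks for the jar.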

  • Is it worth hiring a hacker to perform some penetration testing on my servers?

    - by Brann
    I'm working in a small IT company with paranoid clients, so security has always been an important consideration for us. In the past, we've commissioned two penetration tests from independent companies specialized in this area (Dionach and GSS). We've also run some automated penetration tests using Nessus. Those two auditors were given a lot of insider information, and found almost nothing*... While it feels comfortable to think our system is perfectly secure (and it was certainly comfortable to show those reports to our clients when they performed their due diligence work), I have a hard time believing that we've achieved a perfectly secure system, especially considering that we have no security specialist in our company (security has always been a concern, and we're completely paranoid, which helps, but that's as far as it goes!). If hackers can hack into companies that probably employ at least a few people whose sole task is to ensure their data stays private, surely they could hack into our small business, right? Does anyone have experience in hiring an "ethical hacker"? How do you find one? How much would it cost?

    *The only recommendation they made was to upgrade our remote desktop protocols on two Windows servers, which they were able to access because we gave them the correct non-standard port and whitelisted their IP.

  • Why does this preseed for gitolite fail?

    - by troutwine
    I'm installing gitolite on a Debian Squeeze box with the following preseed:

        gitolite gitolite/gituser string git
        gitolite gitolite/adminkey string ssh-rsa AAAAB3ECT
        gitolite gitolite/gitdir string /var/lib/git

    On installation:

        # debconf-set-selections /var/cache/debconf/gitolite.preseed
        # apt-get install gitolite
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          git-daemon-run gitweb
        The following NEW packages will be installed:
          gitolite
        0 upgraded, 1 newly installed, 0 to remove and 26 not upgraded.
        Need to get 0 B/114 kB of archives.
        After this operation, 348 kB of additional disk space will be used.
        Preconfiguring packages ...
        Selecting previously deselected package gitolite.
        (Reading database ... 24715 files and directories currently installed.)
        Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
        Setting up gitolite (1.5.4-2+squeeze1) ...
        adduser: The home dir must be an absolute path.
        dpkg: error processing gitolite (--configure):
         subprocess installed post-installation script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         gitolite
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Why? The preseed was extracted from a manually configured installation, per here, and works without issue on another machine.
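
    A way to see what debconf actually recorded before the postinst ran (a sketch; debconf-get-selections comes from the debconf-utils package):

        # What values does debconf currently hold for gitolite?
        debconf-show gitolite

        # Or dump them in preseed format for a byte-for-byte comparison
        debconf-get-selections | grep '^gitolite'

    If gitolite/gitdir shows up empty or unset there, the adduser complaint about a non-absolute home directory would at least be explained.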
