Search Results

Search found 5836 results on 234 pages for 'report builder2 0'.

Page 179/234 | < Previous Page | 175 176 177 178 179 180 181 182 183 184 185 186  | Next Page >

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-Eventlog -logname system -newest 10 -computer fs1 | fl

    I got events back, however the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found.
                             The local computer may not have the necessary registry information or message
                             DLL files to display the message, or you may not have permission to access them.
                             The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the event ID property it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer both locally and remotely. Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1
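
    A workaround often suggested for this symptom (a sketch; it assumes PowerShell remoting/WinRM is enabled on fs1) is to format the events on the remote machine itself, since event descriptions are rendered from the message DLLs registered on whichever machine does the formatting:

        Invoke-Command -ComputerName fs1 -ScriptBlock {
            Get-EventLog -LogName System -Newest 10 | Format-List
        }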

    Read the article

  • Tools to (privately) annotate/markup a website for maintenance

    - by rob
    I've been tasked with updating a website. Rather than proofreading and updating each page (one at a time), I want to make a single pass over the entire website, marking graphics/images/videos that need to be rewritten, removed, or updated. I thought about taking screenshots, marking those up, and putting them in our bug-tracking database, but that seems like an extremely tedious solution. Some of the content is similar on various pages across the website, and the entire site itself is localized into several languages (so any changes made to the English version will have corresponding changes for other languages). I also want all of my markup to remain private (that is, if it's stored online somewhere, I should be the only person who can see my comments). I found an article that lists several website annotation services, but it's not clear whether they allow private annotations, or whether these tools are even appropriate for website maintenance (many of them look more geared toward social networking). I've started making a list of some necessary and desired features below, and may add more as necessary:

      - Annotations/markup/comments remain private (only visible to me)
      - Comment history/tagging (so I can reuse the same comment for shared footers, items requiring similar updates, etc.)
      - Ability to print/export a list or report of all comments for the entire website
      - Ability to produce a categorized list of changes (e.g., to produce a list of images that need updating, which I can send to the graphic designer)

    What processes and tools do you use to keep track of all the changes that need to be made to a website? What features are painfully absent from the tools you use?

    Read the article

  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM, and I have a strange issue. Before explaining it, here is my setup: I'm trying to install a VM on my host, an Acer 5720 laptop with an Intel T7500 processor. The CPU flags indicate that virtualization is supported. It runs Ubuntu 10.04 (Lucid), which comes with KVM.

    Now, the issue: I don't get any errors while executing "sudo modprobe kvm-intel", so I presume my processor does indeed support hardware virtualization. I use virt-manager and create a VM, on which I install Ubuntu from an *.iso file. When I start the VM it says it is running; there are no signs of any trouble, and I can see the domain in "virsh list". But when I try to connect to the VM through VNC, all I get is a blank screen (no cursor), and there is no response to any key press. I changed the video mode etc. and tried all the different combinations, but none work.

    Strangely, if I shut down the VM and virt-manager and then unload the module with "sudo modprobe -r kvm-intel", everything works fine: I can see the screen via VNC, and I am able to install the OS and so on.

    So what does this mean? Is hardware virtualization not supported? How come there is no error anywhere? "dmesg | grep kvm" doesn't report anything. Can someone throw light on what exactly is happening?
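
    A few read-only checks that can help narrow this down (a sketch; everything here is safe to run on a stock Ubuntu 10.04 install):

        # Confirm the CPU advertises VT-x (non-zero count expected)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Confirm both kvm and kvm_intel are loaded while the problem occurs
        lsmod | grep kvm

        # Look for hypervisor-related messages, e.g. "kvm: disabled by bios"
        dmesg | egrep -i 'kvm|vmx'

    If the modules load cleanly but guests only display properly with kvm-intel removed, the classic culprit is VT-x being disabled in the BIOS, which some laptops report only in dmesg rather than as a modprobe error.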

    Read the article

  • Postgresql server will not start

    - by Claudiu
    I'm on Windows 7. I restarted my computer, then tried to connect to the database and got an error; I don't remember which one in particular, but it was some connection issue. I decided to restart the server, so I clicked "Restart server" from the Start menu. This blocked. After a few minutes I killed the process and tried again, only to get a "The service is starting or stopping. Please try again later." message. I rebooted the computer again, tried to start again, and got the same error. I killed the pg_ctl process and tried starting it manually, but that didn't work either:

        C:\Users\DrClaud>cscript "C:\Program Files\PostgreSQL\8.3\scripts\serverctl.vbs" start wait
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        The PostgreSQL Server 8.3 service is starting.............................................
        The PostgreSQL Server 8.3 service could not be started.

        The service did not report an error.

        More help is available by typing NET HELPMSG 3534.

        The start command returned an error (2)

    Any ideas?
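
    One thing worth checking (a sketch; the service name "pgsql-8.3" and the install path are assumptions based on the default 8.3 installer): a hard-killed postmaster can leave a stale pid file behind, which blocks the next startup without a useful error from the service wrapper. From an elevated command prompt:

        :: Check the server's own log for the real startup error
        type "C:\Program Files\PostgreSQL\8.3\data\pg_log\*.log"

        :: If the log complains about an existing postmaster.pid, remove it and retry
        del "C:\Program Files\PostgreSQL\8.3\data\postmaster.pid"
        net start pgsql-8.3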

    Read the article

  • Why does my Belkin wireless router have eMule ports open?

    - by Jeremy Powell
    I have a Belkin F6D4230-4 v1 router. When I port scan it with nmap I get the following:

        $ sudo nmap -sS -A -T5 192.168.2.1 -p-

        Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-17 11:40 CDT
        Interesting ports on 192.168.2.1:
        Not shown: 65532 closed ports
        PORT     STATE    SERVICE VERSION
        80/tcp   open     http    Belkin 2307 wifi router http config (IP_SHARER httpd 1.0)
        |_ html-title: '+i1+'
        4661/tcp filtered unknown
        4662/tcp filtered edonkey
        MAC Address: 00:22:75:5D:52:D8 (Belkin International)
        Device type: WAP|broadband router|firewall|printer|specialized|webcam
        Running (JUST GUESSING) : Linksys embedded (95%), TRENDnet embedded (95%), Netgear embedded (92%), Canon embedded (89%), On Time RTOS (89%), Symantec embedded (89%), D-Link embedded (86%), Polycom embedded (85%)
        Aggressive OS guesses: Linksys WRT54GC or TRENDnet TEW-431BRP wireless broadband router (95%), TRENDnet TW100-BRF114 broadband router (95%), Netgear FR114P ProSafe VPN firewall (92%), Canon PIXMA MX850 printer (89%), On Time RTOS (89%), Symantec Firewall/VPN 100 (89%), D-Link DI-714P+ wireless broadband router (86%), Polycom ViewStation video conferencing system (85%)
        No exact OS matches for host (test conditions non-ideal).
        Network Distance: 1 hop
        Service Info: Device: WAP

        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 21.57 seconds

    Why do ports 4661 and 4662 (the eDonkey/eMule ports) show up as filtered? This is a basic, out-of-the-box installation.
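
    A quick way to confirm what nmap is actually seeing on just those two ports (a sketch; --reason asks nmap to print why it classified each port, typically "no-response" for probes that are silently dropped):

        sudo nmap -sS --reason -p 4661,4662 192.168.2.1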

    Read the article

  • Latency between IIS and SQL on the same physical host, two VMs

    - by Jerad Rose
    I have a single server (2x4-core CPUs, 32GB RAM) that is a Windows Server 2012 Hyper-V host, and it hosts two guest VMs (also Windows Server 2012 instances). One of them is a web server; the other is a SQL server. When hitting a page that loops over 50 records, there is noticeable latency. I capture and report the timings of each iteration of the loop, and each iteration is about 20-30 milliseconds. Of course, this amounts to over a second of latency for the whole loop.

    I thought maybe SQL needed to be tuned, but running Profiler on it, the queries show almost 0 duration, so it seems the bottleneck is in transit between the two VMs. I have both VMs configured to use the actual NIC (vs. using a VNIC), so maybe that's part of my problem. Also, this is a classic ASP site, so it's using the SQL OLE DB provider, and I'm wondering if that is part of the problem.

    This is a new server setup, migrated from an existing Windows 2003/IIS6 setup where both web and DB ran on the same server instance (no virtualization). On that setup, there is no such latency when looping over the cursor like this. But there are so many variables, I'm not sure where to start ruling things out.
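
    One way to separate raw network latency from per-call provider overhead (a sketch; the host name is a placeholder):

        :: From the web VM, measure the round-trip time to the SQL VM
        ping -n 20 sql-vm

    If the round trips come back well under a millisecond, 20-30ms per iteration is unlikely to be the virtual network itself, which would shift suspicion toward per-row round trips in the classic ASP cursor/OLE DB path (one network hop per record rather than one set-based query).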

    Read the article

  • New 3TB HDD, can see full 2.7TB in Linux and Windows, but shows up as 801.6GB in BIOS

    - by Ben Lee
    I recently purchased a Seagate Barracuda 3TB drive (ST3000DM001). After installing it, my BIOS recognized it but reported the size as 801.6GB. I went ahead and booted into Linux anyway (Ubuntu 11.10 64-bit), which saw it as a 2.7TB drive. Following some online instructions (I don't have the link handy, unfortunately), it looked like converting this drive to GPT was recommended. So I used GParted to do that, then formatted it to NTFS, also using GParted. (I'm using NTFS because my machine is dual-boot and I want to have access to the drive in Windows too.) I rebooted into Windows (Windows 7 64-bit), and Windows also sees the drive with 2.7TB free. Everything seems to be working fine.

    The only issue is that my BIOS is still reporting the drive as 801.6GB. My motherboard is an ASRock 770 Extreme3 and the BIOS is the latest version. Since everything seems to be working with the new drive anyway, I'm hoping that the BIOS reporting the wrong size is not an actual problem. But honestly, I don't really know. Is anyone out there more familiar with this, and could it potentially cause any problems in the future? Is there any way to get the BIOS to report the correct size?
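
    For what it's worth, the odd 801.6GB figure is consistent with a 32-bit LBA truncation in the BIOS (a plausible explanation, not a confirmed diagnosis). A 3TB ST3000DM001 has 5,860,533,168 sectors of 512 bytes; a BIOS that keeps only the low 32 bits of the sector count sees:

        5,860,533,168 - 2^32 = 5,860,533,168 - 4,294,967,296 = 1,565,565,872 sectors
        1,565,565,872 sectors x 512 bytes = 801,569,726,464 bytes, i.e. about 801.6GB

    Since the OS reads the capacity from the drive directly rather than from the BIOS, this would be cosmetic as long as the machine never needs to boot from that drive.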

    Read the article

  • Problems installing Trac using apt-get on Ubuntu Jaunty

    - by Ben Waine
    Hi, I'm having some issues getting apt to install Trac correctly on my Ubuntu Jaunty box. Using the command 'apt-get install trac' I get the following output:

        root@myserver:~# apt-get install trac
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        Since you only requested a single operation it is extremely likely
        that the package is simply not installable and a bug report against
        that package should be filed.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          trac: Depends: python-setuptools (> 0.5) but it is not installable
                Depends: python-pysqlite2 (>= 2.3.2) but it is not going to be installed
                Depends: python-subversion but it is not installable
                Depends: libjs-jquery but it is not installable
                Recommends: python-pygments (>= 0.6) but it is not installable or
                            enscript but it is not installable
                Recommends: python-tz but it is not installable
        E: Broken packages

    I have successfully used the command on my Karmic Koala desktop machine and am able to create new projects etc. I thought I might be able to solve the problem by installing all Python-related extensions; this produced a very similar output. I have main, universe and multiverse repositories enabled. It's a remote machine and I have no access to the GUI. Hope someone can help; googling failed to solve the issue or find a solution! Thanks, Ben
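
    "Not installable" usually means no enabled repository carries the package at all, which points at the sources list rather than at Trac. A couple of checks that narrow this down (a sketch; nothing here modifies the system except refreshing the package lists):

        # Confirm which repositories the box actually has enabled
        grep -rh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

        # Refresh the lists, then see where apt thinks the missing deps live
        sudo apt-get update
        apt-cache policy python-setuptools python-subversion libjs-jquery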

    Read the article

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being properly executed.

    Here are the configuration settings: users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:

        <Location /svn/repos>
          DAV svn
          SVNPath /home/svn/repos
          AuthType Basic
          AuthName "Subversion Repository"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

    I created a post-commit hook file that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        /bin/echo 'worked' > ${REPOS}/postcommit.log

    I set the entire repository to be owned by www-data (the Apache user) and assigned 755 permissions to the post-commit script. When I test the post-commit script using the www-data user in an empty environment, it works:

        sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7

    But when I commit on a client machine, the commit is successful, yet the post-commit script does not seem to be executed. I also tried running a simple script for the pre-commit hook, and I get an error, even with an empty pre-commit script:

        Commit failed (details follow):
        Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied

    I did a few searches on Google for this error and I presume it is an issue with the Apache user (www-data) not having adequate permissions, specifically to open /dev/null. I also read that the reason post-commit fails silently is that it doesn't report via stdout. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository, and edited the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:

        <Directory />
          Options FollowSymLinks
          AllowOverride None
          Order allow,deny
          Allow from all
        </Directory>

    Any ideas/suggestions would be greatly appreciated! Thanks
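
    Given that exact error, the first thing worth ruling out is a broken /dev/null; "Can't create null stdout" means Subversion failed to open /dev/null for the hook's output (a sketch; the repair commands are only needed if the listing looks wrong):

        # Should print something like: crw-rw-rw- 1 root root 1, 3 ... /dev/null
        ls -l /dev/null

        # If it is a regular file or not world-writable (some scripts
        # accidentally clobber it), recreate the device node:
        sudo rm /dev/null
        sudo mknod -m 666 /dev/null c 1 3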

    Read the article

  • Explorer and open file dialog not responding (Vista)

    - by rohancragg
    Any explorer window opened for the first time on my machine displays the folder tree and folder path in the address bar immediately, but the file/folder list pane is blank and the window shows 'Not Responding' in the title bar; this hangs for up to a minute or more. Any file dialog likewise shows 'Not Responding' in the title bar, with the file list eventually displayed after a few seconds or more.

    Steps to repro:

      - Close all open instances of explorer
      - Windows Key | Run | [enter a folder path such as 'c:\temp']
      - Or within any app: use a file open / save dialog

    Once there is at least one open instance of explorer the performance is still fairly poor, but not nearly so bad, and file lists are displayed in a timely fashion.

    What I've tried:

      - Cleaned up the registry with the CCleaner tool, and uninstalled all other unused software
      - Checked nothing unwanted is running at startup with Autoruns
      - Removed any ISO burner/recorder/mount software

    Still to try:

      - Get the latest version of everything, especially stuff with shell extension behaviour such as TortoiseSVN

    Anyone have any other suggestions? Thanks a lot.

    Update: I'm wondering if this is related; I'll try the hotfix when I get home and report back: KB972685 - FIX: Explorer.exe hangs when using a shell extension written using MFC

    Update 2: Before I got a chance to try the hotfix, it seems one of the above actions fixed this for me; either the removal of IsoRecorder or TortoiseHg (which I was no longer using anyway).

    Update 3: A similar issue with Explorer.exe has come back since installing TortoiseHg 1.01 :-(

    Read the article

  • One host on a network can't connect to one other host

    - by Max Williams
    I'm on a local network with a few other people. One of the hosts is a virtual machine running in VirtualBox on a Mac, and it has the IP address 192.168.0.35 (the VM, that is, not the Mac host). Everyone except one guy can connect (i.e. ping, ssh, etc.) to that machine. When that one guy tries to ping it he gets:

        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        Request timeout for icmp_seq 2

    which I understand is just how certain Mac OS versions report an unreachable host. He can ping all the other hosts on the network, i.e. our computers, and we can all ping the VM fine and connect to it with no problems. His IP is 192.168.0.17. I ssh'd onto his machine (as a new user 'anon') and saw the same problems. I can ssh onto the 192.168.0.35 VM as well; from there, I can ping other users, but when I ping the problem guy, it's unreachable that way round as well. He restarted his Mac and was fine for a while. Then it just stopped working again, and he's got a different IP to before. Any ideas, anyone? I don't know enough about this stuff to even diagnose the problem. Thanks, max
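
    Two low-level checks that can help localize this (a sketch; run them from the machine that cannot connect, and note the interface name en0 is a typical default that may differ):

        # Does the failing host ever learn a MAC address for the VM?
        # An "(incomplete)" entry means ARP itself is failing, which is a
        # layer-2 problem rather than an IP routing one.
        arp -a | grep 192.168.0.35

        # Compare netmasks; a host with a mismatched netmask can reach some
        # LAN addresses but not others.
        ifconfig en0 | grep netmask

    Since the symptom is symmetric (neither side can reach the other) and moves with DHCP leases, an ARP-level conflict or a duplicate IP on the LAN would also be worth ruling out.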

    Read the article

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need to do is have a permanent link to a printer, normally only accessible through Terminal Services printer redirection, so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number each time, and therefore the Sage layout sees it as a different printer.

    History behind the question:

      - Users connect via Terminal Services to a Sage server on a different site
      - That server runs Sage Line 50 v15
      - Users want to print invoices (Sage layouts) locally
      - The Sage server cannot see the users' local printers; to get around this, users use the printer redirection feature of Terminal Services
      - The individual reports can be edited to point to a specific printer by default, so the user just has to select an invoice, click print, then select the layout/report wanted, and it automatically prints that invoice to the default printer specified

    The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)"; note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session and then reconnects in the morning and goes to print, the layout has lost its connection to that printer. I understand why it fails: the printer exists on a per-session basis, so the layout cannot hold on to a connection from a previous session.

    Thanks in advance for any assistance...

    Read the article

  • smartctl not actually running self tests?

    - by canzar
    I want to run the smartctl self-tests to check the health of the drives in my RAID array (PERC 5/i). The array is on sda and comprises six drives. I can check the status using:

        sudo smartctl /dev/sda -d megaraid,0 -a

    and I see that SMART is available and enabled on all the drives. I have tried to run self-tests using:

        sudo smartctl /dev/sda -d megaraid,0 -t short
        sudo smartctl /dev/sda -d megaraid,0 -t long

    I have also tried it on all of the drives, 0-5. No matter what I try, when I run:

        sudo smartctl /dev/sda -d megaraid,0 -l selftest

    I always get the same result, which seems to report that I have never run a self-test:

        /dev/sda [megaraid_disk_00] [SAT]: Device open changed type from 'megaraid' to 'sat'
        === START OF READ SMART DATA SECTION ===
        SMART Self-test log structure revision number 1
        No self-tests have been logged.  [To run self-tests, use: smartctl -t]

    From what I read, I should have no problem running the short and long self-tests on the array while it is mounted. Does anyone else have experience running these tests on a PERC 5/i RAID array who could lend some insight into what is causing the problem? (smartmontools release 5.40, dated 2009-12-09 at 21:00:32 UTC)
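
    One thing worth confirming (a sketch; read-only and harmless to run) is whether each physical disk actually claims self-test support on its own megaraid ID, and what polling time it reports:

        # Health status plus self-test capabilities for each of the six disks
        for i in 0 1 2 3 4 5; do
            echo "=== disk $i ==="
            sudo smartctl /dev/sda -d megaraid,$i -H -c
        done

    The -c (capabilities) output shows whether the drive advertises self-test support and the expected test duration; if a short test is actually accepted, the self-test execution status in that output should read "in progress" while it runs, before an entry ever lands in the log.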

    Read the article

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this but have found nothing that answers my question. I have a system with 6GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the kernel parameter dom0_mem=min:512M,max:512M in /etc/default/grub as follows:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports in all applications a RAM amount of 422M. cat /proc/meminfo gives the following:

        $ cat /proc/meminfo
        MemTotal:           432472 kB
        MemFree:             54144 kB
        Buffers:             17640 kB
        Cached:             220104 kB
        SwapCached:          30172 kB
        Active:             136500 kB
        Inactive:           167780 kB
        Active(anon):         6156 kB
        Inactive(anon):      60516 kB
        Active(file):       130344 kB
        Inactive(file):     107264 kB
        Unevictable:            52 kB
        Mlocked:                52 kB
        SwapTotal:         1794044 kB
        SwapFree:          1682012 kB
        Dirty:                   0 kB
        Writeback:               0 kB
        AnonPages:           39572 kB
        Mapped:               8048 kB
        Shmem:                 136 kB
        Slab:                44324 kB
        SReclaimable:        22012 kB
        SUnreclaim:          22312 kB
        KernelStack:          1280 kB
        PageTables:           3840 kB
        NFS_Unstable:            0 kB
        Bounce:                  0 kB
        WritebackTmp:            0 kB
        CommitLimit:       2010280 kB
        Committed_AS:       329192 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:        313988 kB
        VmallocChunk:   34359417340 kB
        HardwareCorrupted:       0 kB
        AnonHugePages:           0 kB
        HugePages_Total:         0
        HugePages_Free:          0
        HugePages_Rsvd:          0
        HugePages_Surp:          0
        Hugepagesize:         2048 kB
        DirectMap4k:        524696 kB
        DirectMap2M:             0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because the onboard graphics were borrowing some memory, but I have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take an amount this high relative to the amount of RAM available?
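
    A way to see both quantities side by side (a sketch; requires the Xen toolstack to be installed in dom0): MemTotal in /proc/meminfo is what remains after the kernel reserves memory for its own code, page structures and reserved regions, while the hypervisor's view shows the full allocation:

        # What Xen actually assigned to Domain-0 (should show 512)
        sudo xl list

        # What the dom0 kernel exposes after its own reservations
        grep MemTotal /proc/meminfo

    If xl reports 512 for dom0, the missing ~90MB is kernel overhead rather than memory lost to Xen or to ballooning.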

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up, so if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

      - All NFS-based VMs remain up and working perfectly, through the failover and after
      - All VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only

    To get the VMs working again I have to do the following:

      1. Force shutdown of the "broken" VMs
      2. Detach the iSCSI SR
      3. Re-attach the iSCSI SR
      4. Boot the VM on a different server (5 in my pool)

    If I don't boot on a different server I get the error: Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!"). The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
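
    For reference, these are the open-iscsi settings usually tuned for controller takeover (a sketch; the values shown are illustrative stock defaults, not recommendations, and XenServer may manage this file itself):

        # /etc/iscsid.conf
        # How long the initiator queues I/O waiting for the session to
        # recover before failing it up to the filesystem
        node.session.timeo.replacement_timeout = 120

        # How aggressively the initiator probes the target with NOP-outs
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 10

    If the guests' filesystems go read-only before the takeover completes, the usual first step is raising replacement_timeout so it comfortably exceeds the measured failover window, so queued I/O is held rather than errored back to the guest.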

    Read the article

  • Windows 7 Apache Crashes on ANY request

    - by Dan
    I have XAMPP installed and am running Windows 7. I have WordPress installed so that I may tweak it and test things locally before putting them 'live' on a remote server. I just installed BuddyPress. The installation was rather seamless. I activated the plugin and almost immediately Apache crashed. I have Apache running as a service, so it immediately restarted itself and was running, BUT if I so much as refresh the page (or create any other request), down it goes. Here is the error report as generated by Windows 7:

        Problem signature:
          Problem Event Name:       APPCRASH
          Application Name:         apache.exe
          Application Version:      2.2.4.0
          Application Timestamp:    45ebef86
          Fault Module Name:        ZendOptimizer.dll
          Fault Module Version:     0.0.0.0
          Fault Module Timestamp:   45ea8fee
          Exception Code:           c0000005
          Exception Offset:         0004dc22
          OS Version:               6.1.7600.2.0.0.256.1
          Locale ID:                1033
          Additional Information 1: 1ec0
          Additional Information 2: 1ec0fd70d07d060e5bfcf53c69ad1739
          Additional Information 3: 2c48
          Additional Information 4: 2c48940de5e7d1cb2e131ad6a0ca2feb

    Help?
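
    The fault module in that report points at Zend Optimizer rather than at Apache or BuddyPress itself. A low-risk experiment (a sketch; the section name and DLL path are assumptions that depend on the XAMPP version) is to disable the extension in php.ini and restart Apache:

        ; php.ini: comment out the Zend Optimizer lines, e.g.
        ;[Zend]
        ;zend_extension_ts = "C:\xampp\php\zendOptimizer\lib\ZendExtensionManager.dll"

    If Apache stops crashing with the optimizer disabled, the access violation is an incompatibility between that Zend Optimizer build and the PHP build XAMPP ships, and updating or dropping the optimizer is the fix.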

    Read the article

  • How to multiseat with HW 3d accel on CentOS 6.3 Final?

    - by user35070
    I would like to set up a multiseat configuration on CentOS 6.3 (two video cards, two keyboards, two mice, two monitors) and have hardware-accelerated 3D on both monitors. 3D hardware acceleration rules out Xephyr. I saw somewhere that recent versions of GDM (3.3 and newer?) don't support multiseat, so do I have to install KDM to make this work? If I just create a duplicate section with new device identifiers in my xorg.conf file, will this 'just work'?

    Using different ports on the same video card and separate keyboards, mice, and displays, the result was a desktop which spanned both monitors, with both keyboards and mice acting as the same input in the GUI. I will power down, put in the new video card, and report on the results soon. Both video cards are Nvidia.

    UPDATE: After putting in another Nvidia video card, the default behavior (before changing xorg.conf) is that one screen works normally, with both mice and keyboards connected to it. After changing xorg.conf and the display manager to KDM, and following the directions at https://help.ubuntu.com/community/MultiseatX#Ubuntu_10.04_.28Lucid.29 , I have two mirrored screens connected to separate video cards, DRI enabled, and two mice both connected to the same pointer. The keyboards don't do anything, however; I probably just need to fix a setting in xorg.conf. I would still like to get multiseat functionality, i.e. separate screens with separate input devices. I have verified that the separate X processes are running (see the page above) using 'ps aux | grep X'.
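
    For reference, the "duplicate sections" approach usually looks something like the following (a sketch; the BusID values and identifiers are placeholders that must match lspci output on the actual machine):

        Section "Device"
            Identifier "card0"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"
        EndSection

        Section "Device"
            Identifier "card1"
            Driver     "nvidia"
            BusID      "PCI:2:0:0"
        EndSection

        Section "Screen"
            Identifier "screen0"
            Device     "card0"
        EndSection

        Section "Screen"
            Identifier "screen1"
            Device     "card1"
        EndSection

        Section "ServerLayout"
            Identifier "seat0"
            Screen     "screen0"
        EndSection

        Section "ServerLayout"
            Identifier "seat1"
            Screen     "screen1"
        EndSection

    The key point is that duplicating sections inside one layout only extends a single seat; classic multiseat needs the display manager to start two X servers, each with its own layout (-layout seat0 / -layout seat1) and its own input devices, which is what the KDM multi-server configuration on the linked page arranges.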

    Read the article

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers': one running FreeNAS off a USB stick, using three 500GB HDDs in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general-purpose 'tinker' box that I can ssh into, etc.

    The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones, but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there, and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box.

    My big question is: most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. Should I presume that is simply fake RAID, like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
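
    If the enclosure does expose all four disks individually, which hinges on the eSATA controller supporting port multipliers, the software side is straightforward (a sketch; device names are placeholders, and these commands destroy any data on the disks):

        # Build the RAID5 array out of the four enclosure disks
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Layer LVM on top, then create and format a volume
        sudo pvcreate /dev/md0
        sudo vgcreate backupvg /dev/md0
        sudo lvcreate -l 100%FREE -n backuplv backupvg
        sudo mkfs.ext4 /dev/backupvg/backuplv

    The enclosure's own RAID modes are indeed firmware/fake RAID and best left on JBOD so mdadm sees raw disks; whether Linux sees one drive or four through a single eSATA cable depends on port multiplier support in both the enclosure and the eSATA card, so that is the spec worth checking before buying.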

    Read the article

  • Acer Aspire One -- strange battery problem, charges only up to ~90%

    - by houbysoft
    I have this strange problem on the Acer Aspire One D250. It happened once before, stayed for about two weeks, and then "fixed itself". The problem is as follows: the battery can't seem to get fully charged, i.e. the indicator is stuck at about 90% (it's probably not a software problem: I have Arch Linux and Windows 7 installed and both report exactly the same) and it never passes that value, but it still shows the status as "charging". I tried everything I could think of: leaving it charging for extremely long amounts of time, doing a few complete charge/discharge cycles, removing and reinserting the battery, cleaning the connectors, even updating the BIOS, and nothing helped. Also, when it is getting charged, it charges pretty fast until about 70% and then progresses extremely slowly.

    The battery normally holds whatever charge appears on the indicator; I just can't get it past the 90%. At first I thought this was a simple battery failure (even if the computer is not that old, about 6-7 months), but as I mentioned, it happened once before and then one day fixed itself. I tried contacting Acer about this, but the support was not helpful, seemingly using canned responses, the usual. Any thoughts on how to fix this?

    Read the article

  • Task Scheduler Crashing MMC

    - by Valrok
    I've been getting errors whenever I try to run the Task Scheduler on Windows 2008 R2. Each time I try to run it, the Task Scheduler crashes and reports the following:

        Problem signature:
          Problem Event Name:    CLR20r3
          Problem Signature 01:  mmc.exe
          Problem Signature 02:  6.1.7600.16385
          Problem Signature 03:  4a5bc808
          Problem Signature 04:  System.Windows.Forms
          Problem Signature 05:  2.0.0.0
          Problem Signature 06:  50c29e85
          Problem Signature 07:  151f
          Problem Signature 08:  18
          Problem Signature 09:  Exception
          OS Version:            6.1.7601.2.1.0.16.7
          Locale ID:             1033

    I've been looking online, but so far I keep finding mixed results on what could be the fix, and was wondering if anyone here has ever run into this issue before. I read that this issue could be caused by Security Update for Microsoft Windows (KB2449742) and that uninstalling it would fix the problem, however I was not able to locate it anywhere on the server. Here's the link if interested. Patch-wise, everything is up to date. I also tried running hotfix KB2688730 after doing some research online, however the hotfix is not applicable to the computer. If anyone could provide some information on how to fix this and get the Task Scheduler running again, it would be extremely helpful!
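
    Two commands that can settle whether KB2449742 is actually installed on the box (a sketch; run from an elevated command prompt):

        :: List installed updates and look for the suspect one
        wmic qfe get HotFixID | findstr /i "2449742"

        :: If it is present, uninstall it
        wusa /uninstall /kb:2449742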

    Read the article

  • Emails from web site sometimes blank or gibberish

    - by John Gardeniers
    Our company has one web site with an online store based on osCommerce. The system sends emails for various reasons, such as password changes, order confirmations, etc., using PHP's mail() function. We occasionally have customers report that the email they received is either blank (when the email is in plain-text format) or gibberish (when the email is in HTML format). In the latter case it's really just HTML being displayed as raw text, but of course the customers can't read it; the first opening tag's <, and sometimes a few more characters, has gone missing.

    In an attempt to determine whether this was happening only for certain customers or email systems, I configured the web site to send a CC of each message to a service account at my end. Those CC'd messages always arrive intact and display correctly in Outlook. For what it's worth, it seems to happen a little more frequently to Hotmail users, but it is certainly not limited to them. As the web site is on a shared (Debian) host there's precious little I can do about debugging things from that end, although if I made the right request I feel the hosting company staff would help me, even though they have limited resources to spend on such matters. Any suggestions on what else I might do to try and determine just why those emails are not being received correctly by some customers, yet a CC copy arrives just fine?
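
    One usual suspect with mail() and HTML bodies is missing or malformed MIME headers: if the Content-Type header is absent or mangled, some receiving systems guess at the format and render the HTML as raw text. A minimal sketch of explicit headers (illustrative only, not osCommerce's actual mail code; addresses are placeholders):

        <?php
        // Explicit MIME headers so receivers don't have to guess the format;
        // header lines must be separated by CRLF per RFC 2822
        $headers  = "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: text/html; charset=ISO-8859-1\r\n";
        $headers .= "From: shop@example.com\r\n";

        mail('customer@example.com', 'Order confirmation',
             '<html><body><p>Thank you for your order.</p></body></html>',
             $headers);

    Comparing the full raw source of a mangled Hotmail copy against the intact CC copy (headers included) would show whether the corruption happens before or after the shared host's MTA hands the message off.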

    Read the article

  • Intermittent PHP error: Undefined function <core function>

    - by Daniel
    In the last week I've been coming across an incredibly annoying error on one of my Slicehost slices. It appears that every now and then PHP will fail with a fatal error, saying a certain function is undefined. The function changes, but it is always a core PHP function, e.g. defined(), version_compare(), etc. This problem has occurred while using several different PHP applications (phpMyAdmin, my own custom-built apps, etc.), leading me to believe that the problem is not specific to the running code. Here are some details:

      - Debian Lenny
      - Apache 2.2.9
      - PHP 5.2.6-1+lenny4 with Suhosin-Patch (running eAccelerator 0.9.6)

    Apache and PHP are installed from Debian packages. Error logs show nothing out of the ordinary. I thought memory might be an issue, but free -m reports upwards of 100MB free almost all the time. Another thing I'm trying to investigate is whether the problem might be related to eAccelerator, but testing this theory is incredibly hard because the issue doesn't appear very often, and I've been using eAccelerator for months on this install without any problems up until now. Has anyone ever come across anything like this? Why would PHP report undefined core functions?
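
    A cheap way to test the eAccelerator theory without uninstalling it (a sketch; the cache path is a common Debian default set via eaccelerator.cache_dir and may differ on this install):

        # Stop Apache, clear the compiled-bytecode cache, start again
        sudo /etc/init.d/apache2 stop
        sudo rm -rf /var/cache/eaccelerator/*
        sudo /etc/init.d/apache2 start

    A corrupted bytecode cache fits the symptom: the "undefined" function is whatever call happens to sit at the damaged spot in a cached script, which would explain why the function name changes from one failure to the next while the code itself is fine.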

    Read the article

  • Unable to connect remotely to Vsftpd server set up on CentOS VirtualBox

    - by ryekayo
    I have set up a vsftpd server using the instructions provided here, and even went as far as following the commentary at the bottom, but I am unable to connect remotely. When I attempt to use FileZilla or my Ubuntu terminal, I always get:

        ryan@ryan-Galago-UltraPro:~$ ftp 10.0.x.xx
        ftp: connect: Connection timed out
        ftp>

    I have checked and re-checked the iptables conf file and made sure that port 21 is being accepted, and it is. I looked this up on the web and decided to try a port scan with nmap, and this is what I get:

        ryan@ryan-Galago-UltraPro:~$ nmap -PN 10.0.xx.xx

        Starting Nmap 6.40 ( http://nmap.org ) at 2014-08-19 15:01 EDT
        Nmap scan report for 10.0.xx.xx
        Host is up.
        All 1000 scanned ports on 10.0.xx.xx are filtered
        Nmap done: 1 IP address (1 host up) scanned in 201.38 seconds

    Is there anything else that I should do or check for?

    UPDATE: I have tried to ping from the virtual machine to my IP address on Ubuntu and was able to successfully. I cannot ping the virtual machine from Ubuntu. I have narrowed this down to possibly being a firewall-related issue on Ubuntu's side, but why would I be unable to connect from FileZilla?
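
    "All 1000 scanned ports are filtered" plus one-way ping on a VirtualBox guest usually points at the network mode rather than at vsftpd (an observation, not a confirmed diagnosis): a NAT-attached guest can reach out but is not reachable from the LAN without port forwarding. A few checks (the VM name is a placeholder):

        # On the CentOS guest: is vsftpd actually listening?
        sudo netstat -tlnp | grep :21

        # On the guest: is the INPUT chain accepting port 21?
        sudo iptables -L INPUT -n --line-numbers | grep 21

        # On the host: which mode does the VM's adapter use?  "Bridged"
        # is reachable from the LAN; plain "NAT" is not.
        VBoxManage showvminfo "CentOS-FTP" | grep -i nic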

    Read the article

  • 'txn-current-lock': Permission denied [500, #13] - Subversion + Apache Configuration Issue

    - by wfoster
    Current setup: Fedora 13 32-bit, Apache 2.2.16, Subversion repositories set up under /var/www/svn. I have two different repositories under this directory, so my /etc/httpd/conf.d/subversion.conf is set up this way:

        LoadModule dav_svn_module   modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so

        <Location /svn>
          DAV svn
          SVNListParentPath on
          SVNParentPath /var/www/svn
          <LimitExcept GET PROPFIND OPTIONS REPORT>
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/httpd/.htpasswd
            Require valid-user
          </LimitExcept>
        </Location>

    After copying over my repos and using:

        chmod 755 -R /var/www/svn
        chcon -R -t httpd_sys_content_t /var/www/svn
        chown apache:apache -R /var/www/svn

    I can browse my repos fine through the browser, and I can update all my working copies. However, when I try to check in from anywhere, I get the same error:

        Can't open file '/var/www/svn/repo/db/txn-current-lock': Permission denied

    I have been working on this issue for a while now and can't seem to find a solution. It might be of some use to know that the repo existed on a different server before this; it has now been moved to this new server. Everything I have read seems to indicate that the permissions for Apache are incorrect; however, Apache is set to run as user apache and group apache, so as far as I can tell my setup is correct. The behavior is not, though. Any ideas?

    Solution: The only way I was able to get this to work was to disable SELinux. It could also have been done by setting the proper booleans with SELinux via setsebool and getsebool; since this is just a home server, I decided to disable SELinux and am reaping the benefits now.
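
    For anyone who would rather keep SELinux enabled: the chcon in the question labels the repository httpd_sys_content_t, which is a read-only type for Apache, and that alone explains why reads (browsing, updating) work while writes (creating txn-current-lock on commit) are denied. A sketch of the targeted fix:

        # Label the repository with the read-write content type instead
        sudo chcon -R -t httpd_sys_rw_content_t /var/www/svn

        # Or make the labeling persistent across filesystem relabels
        sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/svn(/.*)?"
        sudo restorecon -Rv /var/www/svn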

    Read the article

  • Moving automatically spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder and am not sure how. I have a Linux box providing email access; the MTA is Postfix, IMAP is Courier, and as the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working ok. SpamAssassin overwrites subjects with [--- SPAM 14.3 ---] Viagra... and also adds headers:

        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
            DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
            RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL
            autolearn=no version=3.2.5
        X-Spam-Report:
            *  0.0 URIBL_RED Contains an URL listed in the URIBL redlist
            *      [URIs: myimg.de]
            *  3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100%
            *      [score: 1.0000]
            *  0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL
            *      [113.170.131.234 listed in zen.spamhaus.org]
            *  3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
            *  0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server
            *      [113.170.131.234 listed in dnsbl.sorbs.net]
            *  3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
            *  0.0 HTML_MESSAGE BODY: HTML included in message
            *  1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
            *  1.5 URIBL_SBL Contains an URL listed in the SBL blocklist
            *      [URIs: myimg.de]
            *  0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

    I want to automatically move spam messages to a folder. Ideally (not sure if this is possible), I only want to move messages with a score of 5.0 or more to the folder; spam scored between 2.0 and 5.0 I want stored in the Inbox. (I plan to switch autolearn on later.)

    After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I'm not sure which is best or whether there is another one:

      1. Put a rule in SquirrelMail (dirty solution?)
      2. Use procmail

    Which is the best option? Do you have any updated howto about it? Thanks
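
    For the procmail route, a minimal recipe keyed on the X-Spam-Level stars handles the threshold (a sketch; the Maildir path assumes Courier's default folder layout, and procmail must be wired into Postfix, e.g. via mailbox_command):

        # ~/.procmailrc
        # Five or more stars means score >= 5.0: file into the Spam folder
        :0:
        * ^X-Spam-Level: \*\*\*\*\*
        $HOME/Maildir/.Spam/

        # Everything else, including 2.0-5.0 "tagged" mail, falls through
        # to the normal delivery location, i.e. the Inbox

    Matching on X-Spam-Level rather than parsing score= out of X-Spam-Status keeps the threshold test to a simple regex, which is why most SpamAssassin howtos use the star count.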

    Read the article
