Search Results

Search found 9952 results on 399 pages for 'more details'.

Page 81 of 399

  • Issues Installing PHP Memcache module

    - by smith
    I am trying to install memcache on my VPS. When I type $ pecl install memcache I get this error:

        checking whether the C compiler works... configure: error: cannot run C compiled programs.
        If you meant to cross compile, use `--host'.
        See `config.log' for more details.
        ERROR: `/root/tmp/pear/memcache/configure --enable-memcache-session=yes' failed

    Any ideas what the issue could be?
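
    A minimal diagnostic sketch (not part of the original post), assuming an RPM-based VPS and that the failure is a missing or broken C toolchain rather than a genuine cross-compile; package names and the config.log path (taken from the error message) are assumptions:

        # Check whether a working compiler is present at all
        gcc -v || cc -v

        # On RPM-based systems, install the build tools and PHP headers
        # (Debian/Ubuntu would use build-essential and php5-dev instead)
        yum install -y gcc make php-devel

        # The real reason is usually spelled out near the end of config.log
        tail -n 40 /root/tmp/pear/memcache/config.log

        # Retry once the toolchain can compile a trivial program
        pecl install memcache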

    Read the article

  • Connecting to the Internet from FreeBSD on VirtualBox

    - by Alex Farber
    I just installed FreeBSD in VirtualBox running on an Ubuntu host, and need instructions to enable Internet access from FreeBSD. My guess is that I need to run /usr/sbin/sysinstall and do something there, but I need exact instructions. Details: Host: Ubuntu 9.10, connected to the Internet through a LAN. Sun VirtualBox 3.1.4. FreeBSD 8.0 running in VirtualBox.
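
    A minimal sketch of the usual fix, assuming the default VirtualBox NAT adapter, which FreeBSD 8 typically detects as em0 (the interface name is an assumption; check ifconfig first):

        # List detected interfaces; the VirtualBox Intel PRO/1000 NIC is usually em0
        ifconfig

        # Request an address via DHCP right now
        dhclient em0

        # Make it permanent across reboots
        echo 'ifconfig_em0="DHCP"' >> /etc/rc.conf

        # Verify connectivity
        ping -c 3 8.8.8.8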

    Read the article

  • Exchange user receives "Undeliverable" from postmaster after every sent message

    - by Alexander Pavluchenko
    This happens after a user's mailbox was moved from one Exchange server to another (within one organization, but different domains). Details:

        From: postmaster
        Sent: Saturday, April 03, 2010 8:43 AM
        To: USER1
        Subject: Undeliverable: subject of message

        Delivery has failed to these recipients or distribution lists:
        IMCEAex-_O=DOMAIN_OU=First+20Administrative+20Group_cn=Recipients_cn=USER1@domain

        A problem occurred during the delivery of this message. Microsoft Exchange will not try to redeliver this message for you. Please try resending this message later, or provide the following diagnostic text to your system administrator.

    Read the article

  • Why did my flash drive become "read only" and (how) can I fix it?

    - by Bob
    I have a brand new flash drive (one week old) that has become marked as read only by Windows, Kubuntu and a bootable partitioner. Why did this happen? Is it fixable? If it is, how can I fix this?

    The problem

    Firstly, this drive is new. It's certainly not been used enough to die from normal wear and tear, though I would not discount defective components. The drive itself has somehow become locked in a read only state. Windows' Disk management / Diskpart output:

        Generic Flash Disk USB Device
        Disk ID: 33FA33FA
        Type : USB
        Status : Online
        Path : 0
        Target : 0
        LUN ID : 0
        Location Path : UNAVAILABLE
        Current Read-only State : Yes
        Read-only : No
        Boot Disk : No
        Pagefile Disk : No
        Hibernation File Disk : No
        Crashdump Disk : No
        Clustered Disk : No

    What really confuses me is "Current Read-only State : Yes" and "Read-only : No".

    Attempted solutions

    So far, I've tried:

    - Formatting it in Windows (in Disk management, the format options are greyed out when right clicking).
    - DiskPart Clean (CLEAN - Clear the configuration information, or all information, off the disk.):

        DISKPART> clean
        DiskPart has encountered an error: The media is write protected.
        See the System Event Log for more information.

      There was nothing in the event log.
    - Windows command line format:

        >format G:
        Insert new disk for drive G:
        and press ENTER when ready...
        The type of the file system is FAT32.
        Verifying 7740M
        Cannot format. This volume is write protected.

    - Windows chkdsk: see below for details.
    - Kubuntu fsck (through VirtualBox USB passthrough): see below for details.
    - Acronis True Image to format, to convert to GPT, to destroy and rebuild MBR, basically anything: failed (could not write to MBR).

    Details (and a nice story)

    Background: This was a brand new, generic, 8GB flash drive I wanted to create a multiboot flash drive with. It came formatted as FAT32, though oddly a little larger than most 8 GIGAbyte flash drives I've come across. Approximately 127MB was listed as "used" by Windows. I never discovered why. The end usable space was about what I normally expect from an 8GB drive (approx 7.4 GIBIbytes). I had thrown quite a few Linux distros on, along with a copy of Hiren's. They would all boot perfectly. They were put on with YUMI. When I tried to put the Knoppix DVD on, YUMI added an odd video option to its boot command which caused Knoppix to boot with a black screen on X. ttys 1 through 6 still worked as text only interfaces. A few days later, I took some time to take that odd video option off, making the boot command match the one that comes with Knoppix. On the attempt to boot, Knoppix reported some form of LZMA corruption.

    Leading up to the current issue: I was thinking the Knoppix files may have been corrupted somehow, so I tried reloading it. The drive was nearly full (45MB free), so I deleted a generic ISO that also was not booting. That went fine. I then went through YUMI to 'uninstall' Knoppix, i.e. delete files and remove from the menus. The files went first, then the menus were cleared successfully. However, the free space was stuck at about 700MB, same as it was before removing Knoppix. In the old Knoppix folder, there was a 0 byte file named KNOPPIX that could not be deleted. I tried reinserting the drive to delete this file - without safely removing, if that made a difference (hey, first time for everything). Running the standard Windows chkdsk scan without /r or /f reported errors found. Running with /r just got it stuck.

    I decided to give fsck a shot, so I loaded up my Kubuntu VM and attached the drive to it with VirtualBox's USB 2.0 passthrough. I umounted it (/dev/sda1) and ran a fsck. There are differences between the boot sector and its backup. I chose "No action". It told me the FATs differ and asked me to select either the first or second FAT. Whichever I selected, I got a notice of "Free cluster summary wrong". If I chose "Correct", it gave a list of incorrect file names. To try to fix something, at least, I ran it with the -p option. Halfway through fixing the files, the VM froze - I ended its process about ten minutes later.

    Cause? My next attempt was to use YUMI, again, to rebuild the whole drive. I used YUMI's built in reformat (to FAT32) option and installed a Kubuntu ISO (700MB). The format was successful; however, the extract and copy of Kubuntu (which YUMI uses a 7zip binary for) froze at about 60% done. After waiting for about fifteen minutes (longer than the 3.5GB Knoppix ISO took last time), I pulled the drive out. The drive at this point was already formatted, SYSLINUX already installed, just waiting on the unpacking of an ISO and the modifying of the boot menus. Plugging it back in, it came up as normal - however, any write action would fail. Disk management reported it as read only. On reconnect, it would come up as normal but a write operation would cause it to go read only again. After a few attempts, it started coming up as read only on insertion.

    Attempts to fix

    This is when I ran through the attempts listed above, to try and reformat it in case of a faulty format. However, the inability to do so even on a bootable disk indicated something more serious is wrong. chkdsk now reports nothing is wrong, and fsck still reports MBR inconsistencies, but now always chooses the first FAT automatically after telling me the FATs differ. It still does the same "Free cluster summary wrong" afterwards. I cannot run with -p anymore because it is now marked as read only. It also managed to corrupt my VM's disk somehow on the first attempt (yes, I'm sure I chose sda, which is mapped to a 7.4GB drive - I triple checked). Thank god for snapshots?

    I'm just about out of ideas. To my inexperienced mind it looks like something in the drive's firmware set it to read only "permanently" somehow - is there any way to reset this? I don't particularly care about keeping data, considering I've reformatted it twice. Also, fixes that keep me in Windows are better; it reduces the risk of me accidentally nuking my main hard drive.

    Update 1: I pulled apart the drive out of curiosity. As you can see, there are no obvious write protect switches. There is an IC on the other side, ALCOR branded, labelled AU6989HL, if that matters. If there appears to be no way to fix this, I'll probably pull out the (glued down) card and put it in a card reader to check if it's the card or the controller that died.

    Update 2: I've pulled the card off; Windows detects the drive as a card reader now. The contacts on the card don't appear to be used, and there are several rows of holes on the card itself. Putting it into the card reader only detects about 30MB total, RAW. It's probably either the reader incorrectly reporting the card as faulty (as if a real SD card's write protect was switched on) or a bad contact somewhere. If nothing else, I have a spare 8GB Micro SD card now... as soon as I figure out how to format it as 8GB.
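
    For the Linux side of the diagnosis, a short sketch (not from the original post; the device node /dev/sdb is an assumption, so double-check with lsblk before touching anything). If the controller firmware has switched itself to write-protected mode, none of this will stick, which is itself a useful data point:

        # Confirm which device node is the flash drive
        lsblk -o NAME,SIZE,RO,MODEL

        # Ask the kernel whether the block device itself is flagged read-only
        blockdev --getro /dev/sdb

        # Try to clear the read-only flag at the kernel level
        blockdev --setrw /dev/sdb
        hdparm -r0 /dev/sdb

        # Check kernel messages; locked controllers usually log
        # write-protect errors here on the first failed write
        dmesg | tail -n 20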

    Read the article

  • A few questions on BitTorrent

    - by user23950
    Details: Torrent client: BitLord. Is it possible to continue torrent downloads on another computer? I'm at about 38% of the 8 GB file that I'm downloading, and my ISP suddenly dropped my speed from 90-100 kbps to 40-48 kbps (maybe because I'm really producing lots of traffic with the 8 hrs/day of downloading). If it's possible, what do I need to do in order for the download to be continued on the other computer?
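
    In broad strokes: copy the .torrent file and the partially downloaded data to the other machine, add the torrent there pointing at the same download folder, and let the client re-check the existing pieces. A rough sketch with placeholder paths and hostnames (assumptions, not from the original post):

        # Copy the partial download and the .torrent file to the other computer
        rsync -avP ~/Downloads/big-file.iso user@otherpc:~/Downloads/
        rsync -avP ~/torrents/big-file.torrent user@otherpc:~/torrents/

        # On the other computer: open the .torrent in the client, point it at
        # ~/Downloads, and use the client's "force re-check" option so the
        # existing 38% is verified instead of downloaded again.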

    Read the article

  • What other protocols must not be fire-walled for FTP to work?

    - by Chris
    my Netgear router randomly reset itself the other day, losing all of my config settings: DSL details, firewall rules, the lot! So I set about restoring all of the details manually, but when it came to configuring the firewall I wanted to improve the security by explicitly setting 'deny' rules for everything that I figured is 'non-essential', and (although not necessary) while I was at it I set explicit 'allow' rules for the 'essential' protocols. I'll admit now I didn't really know what I was doing and everything was just 'my best guess', but I enabled only DNS, HTTP, HTTPS, FTP, SFTP and TFTP, with everything else blocked. This did not work for me as I could not access 99% of web sites (although strangely Google worked!), so I played around a bit more and found that (oddly) if I disabled just the explicit 'allow' rules then everything worked fine, for browsing anyway.

    Today I came to work on some web sites via FTP and just could not get a consistent connection; it kept dropping out after a few files, or being blocked by the server, or simply not connecting. It would authenticate okay but then stop when retrieving the initial directory listing! e.g.:

        Status: Delaying connection for 1 second due to previously failed connection attempt...
        Status: Resolving address of ftp.domain.co.uk
        Status: Resolving address of ftp.domain.co.uk
        Status: Connecting to 123.123.123.123:21...
        Status: Connecting to 123.123.123.123:21...
        Status: Connection established, waiting for welcome message...
        Status: Connection established, waiting for welcome message...
        Response: 421 Too many connections (8) from this IP
        Error: Could not connect to server
        Status: Delaying connection for 5 seconds due to previously failed connection attempt...
        Response: 421 Too many connections (8) from this IP
        Error: Could not connect to server
        Status: Delaying connection for 5 seconds due to previously failed connection attempt...

    I've checked and re-checked the FTP settings (they worked before anyway), and I have Googled the I.T. out of the various protocols that I have blocked in the firewall, but none seem essential to FTP (other than FTP/SFTP etc., which I have passively enabled). I'm (clearly) no server engineer or protocols/firewall expert, so I was hoping that someone could maybe shed some light on why my FTP is failing. I've been wondering if I ought to be allowing BGP, BOOTP and/or IDENT (or any others)? What other protocols are required for FTP? Thanks in advance!
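
    FTP needs more than port 21: every directory listing and file transfer opens a second data connection on a port negotiated inside the control channel, which is why a rule set that only allows "FTP" tends to hang right after login. A hedged iptables sketch of the idea (the Netgear UI exposes this differently, so treat it as an illustration of what the rules must achieve, not literal router config):

        # Load the FTP connection-tracking helper so the firewall can follow
        # the data-port negotiation inside the control channel
        modprobe nf_conntrack_ftp

        # Allow outbound FTP control connections
        iptables -A OUTPUT -p tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT

        # Allow the related data connections (passive or active) that the
        # helper has matched to an existing FTP session
        iptables -A INPUT  -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    Separately, the "421 Too many connections (8) from this IP" response is the FTP server capping concurrent sessions per client IP, so limiting the FTP client to one or two simultaneous connections is worth trying regardless of the firewall rules.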

    Read the article

  • Zimbra server not syncing with LDAP server

    - by shreedhar.bh
    Please help me out; I am intermediate on Linux. 1) I have a Zimbra mail server on Ubuntu with an LDAP server, and an external OpenLDAP server at an internal location. 2) Last week we renewed the SSL certificate on the Zimbra server. 3) After renewing the SSL certificate (valid for 10 years) on the Zimbra server, it is no longer able to sync the LDAP details with the internal OpenLDAP server. Please help me fix this issue. Thanks in advance. Regards, Shreedhar.BH
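
    A first diagnostic pass, assuming shell access to the Zimbra box; the hostname, bind DN and base DN below are placeholders, not from the original post. The idea is to confirm that an LDAPS connection still negotiates cleanly with the renewed certificate and that the Zimbra services themselves are healthy:

        # Show the certificate the internal LDAP listener is actually presenting
        openssl s_client -connect ldap.internal.example.com:636 -showcerts </dev/null

        # Try a simple bind against the internal OpenLDAP server
        ldapsearch -x -H ldaps://ldap.internal.example.com:636 \
            -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com" -s base

        # Check overall Zimbra service health
        su - zimbra -c "zmcontrol status"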

    Read the article

  • Using Google Analytics to track Usernames

    - by DrStalker
    We have a SharePoint Installation (MOSS, IIS 7.0, Windows Authentication, Windows 2008) and Google Analytics has been installed to track site usage. The site is an intranet site, and all users are authenticated before gaining access. Is there any way in Google Analytics to track user information so we can see details of the login names of who is accessing content?

    Read the article

  • Windows 7 Premium Error

    - by Lynn
    I am getting the following error details:

        Problem signature:
        Problem Event Name: BlueScreen
        OS Version: 6.1.7601.2.1.0.768.3
        Locale ID: 1033

        Additional information about the problem:
        BCCode: 9f
        BCP1: 0000000000000003
        BCP2: FFFFFA8004281A10
        BCP3: FFFFF80000B9C518
        BCP4: FFFFFA8007BD5C10
        OS Version: 6_1_7601
        Service Pack: 1_0
        Product: 768_1

        Files that help describe the problem:
        C:\Windows\Minidump\121212-74334-01.dmp
        C:\Users\brianna\AppData\Local\Temp\WER-251161-0.sysdata.xml

    Anyone have any suggestions? Thanks!

    Read the article

  • A few questions on packet sniffers/analyzers

    - by user37652
    I have a few questions about packet sniffers. I'm using a Zyxel P-600 series modem and a hub to distribute the Internet connection. 1) Can I use a packet sniffer here to determine if a user is downloading something? 2) Can I determine if a user is downloading a file based on the modem alone (the lights blink faster)? 3) Is there an application that I could use on the modem or the hub to limit or prevent direct downloads? Details: OS: Ubuntu 9.10 and Windows 7.
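
    Because a hub repeats every frame to every port, any machine plugged into it can observe the other users' traffic. A minimal tcpdump sketch on the Ubuntu box (the interface name and the host address are assumptions):

        # Watch traffic from a specific LAN host; sustained streams of large
        # packets usually indicate a download in progress
        sudo tcpdump -i eth0 -n host 192.168.1.23

        # Or summarise who is talking the most over a short capture
        sudo tcpdump -i eth0 -n -c 2000 -q | awk '{print $3}' | sort | uniq -c | sort -rn | head

    Actually limiting or blocking downloads is a different job: a dumb hub cannot do it, so that part would need a router or a Linux box acting as gateway with traffic-shaping rules.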

    Read the article

  • How to set multiple whitespace characters (e.g. tabs) as delimiters in bash's `cut`

    - by Idlecool
    I want to retrieve the CPU usage/free percentage from mpstat output. bash's cut can be used to retrieve such details, but I don't know what the delimiter should be, viz.:

        [idlecool@archbitch proc]$ mpstat | grep "all" | cut -d '$x' -f11

    What should $x be so that I can skip the whitespace and select the value corresponding to %idle? Output of mpstat:

        [idlecool@archbitch proc]$ mpstat
        Linux 2.6.36-ARCH (archbitch)   01/14/11        _i686_  (2 CPU)

        19:58:53     CPU   %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
        19:58:53     all   5.51    0.01    2.96    0.84    0.00    0.01    0.00    0.00   90.66
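
    cut only accepts a single-character delimiter and counts every consecutive space as a new field, so the usual workarounds are to squeeze the whitespace first or to switch to awk, which splits on runs of whitespace by default. A sketch (the field position assumes the mpstat layout shown above):

        # Squeeze runs of spaces into one, then cut; %idle is field 11 here
        mpstat | grep "all" | tr -s ' ' | cut -d ' ' -f 11

        # Simpler and layout-independent: take the last column with awk
        mpstat | grep "all" | awk '{print $NF}'

        # Free percentage is %idle; CPU usage is 100 - %idle
        mpstat | grep "all" | awk '{printf "%.2f\n", 100 - $NF}'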

    Read the article

  • Custom built machine has much higher power consumption than expected

    - by foraidt
    I built a machine according to the specs of a computer magazine (c't, Germany). According to the magazine, the power consumption should be at around 10W. I don't want to go into the specifics of the hardware but rather ask for general advice on where to look: I updated the BIOS/UEFI to the latest version, installed all the recommended drivers and unplugged all hardware that's not necessary to boot into Windows. All that was left is the power supply, mainboard, CPU, CPU cooler and one SSD drive. But still I measured a power consumption of 50W, which is 40W more than it should be. I tried booting Linux Mint from a USB stick, so I don't think it's some Windows-related problem. Where else could I look?

    Update 1: I didn't want the question to get closed for being too localized, but if more details are necessary, here they are: The system is a desktop PC. The power consumption is measured using a Brennenstuhl PM 231 device, which was also tested by c't and found to be quite accurate. The PSU is an Enermax ETL300AWT, the mainboard an Intel DH87RL (Socket 1150) and the CPU an Intel G3220 (Haswell).

    Update 2: There is no online version of the article*. The most details I found can be read on its project page (in German, though...). (*) You can pay for downloadable PDFs, however. There is an English translation of that project page.

    Update 3: Regarding the sceptics: it may sound ridiculous, but apparently 10W idle consumption is possible with Intel's Haswell architecture. As a kind of proof, there's an additional blog article explicitly listing the steps needed to reduce the idle consumption to 10W. Additional hardware: I measured the consumption without the HDD, and as expected the usage dropped by around 10W. I have no chassis fans and the CPU fan is a "Scythe Mugen 4" model. It runs at around 600rpm so I think it won't draw much. When stripping off all my extra components I should be at 10W, but I'm not getting anywhere near that. I would be happy to see "just" 15W in the stripped-down version, but currently I'm not getting below 50W no matter which component I remove. As I see it, this cannot be explained by the PSU being less efficient at lower consumption. I also waited half an hour or so (and checked that no Windows updates were running in the background) and the consumption didn't drop by more than a few watts.

    Read the article

  • OpenSSL x509 Purpose flag "Any Purpose": what is this?

    - by Nick
    Looking at the details of a certificate using the following:

        openssl x509 -noout -text -purpose -in mycert.pem

    I find a bunch of purpose flags (which I've discovered are set by the various extensions attached to a certificate). One of these purpose flags is "Any Purpose". I can't seem to find ANY documentation on this flag and why or why not it is set. Do any of you know where I can find more information on this purpose and what it means? Thanks,
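
    As far as the OpenSSL sources go, "Any Purpose" corresponds to X509_PURPOSE_ANY, a catch-all purpose whose check performs no extension tests at all, so it reports Yes for essentially every certificate; the other rows are the ones actually driven by keyUsage and extendedKeyUsage. A quick sketch for comparing the purpose table against the extensions that feed it (assuming a PEM-encoded certificate):

        # Show the key-usage extensions the specific purpose checks look at
        openssl x509 -in mycert.pem -noout -text | grep -A 2 "Key Usage"

        # Show the full purpose table for comparison
        openssl x509 -in mycert.pem -noout -purpose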

    Read the article

  • How to address a recurring low temperature error seen at every boot-up?

    - by GregC
    After updating to the latest controller firmware, I started receiving the following error messages:

        LSI 2208 ROC: Temperature sensor below error threshold on enclosure 1, Sensors 5 thru 7

    Is this something I should worry about, or is it a red herring? Details: I have a Sans Digital NexentaSTOR 24-disk JBOD enclosure connected to an LSI 9286-8e RAID-on-Chip controller with two SAS cables. Seagate ES.2 3TB SAS hard drives populate every bay in the enclosure.

    Read the article

  • I want to install and get to building a personal MySQL DB on 64 bit Ubuntu [closed]

    - by Ari Hall
    So how do I go from installing MySQL from the Software Center to inputting data into fields and bringing in a comma-delimited file? I've only had brief experience with MS Access and OOo Base a long time ago, so details are appreciated; I just want to get up and running. I have Ubuntu 10.10, 64-bit, if that affects much. If you can link me to a howto that does exactly what I'm looking for, that would work. Again, thanks!
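
    A minimal end-to-end sketch on Ubuntu; the database name, table layout and CSV path below are invented purely for illustration and should be adapted to the real file:

        # Install server and client (the Software Center installs the same packages)
        sudo apt-get install mysql-server mysql-client

        # Create a database and a table, then load the comma-delimited file
        mysql -u root -p <<'SQL'
        CREATE DATABASE personal;
        USE personal;
        CREATE TABLE contacts (
            id INT AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(100),
            email VARCHAR(100)
        );
        LOAD DATA LOCAL INFILE '/home/ari/contacts.csv'
        INTO TABLE contacts
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
        (name, email);
        SQL

    For a graphical workflow closer to Access or Base, a front-end such as MySQL Workbench or phpMyAdmin can sit on top of the same server.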

    Read the article

  • How can I troubleshoot Virtualbox port forwarding from Windows guest to OSX host not working?

    - by joe larson
    There are a plethora of questions about VirtualBox port forwarding problems, but none with my specific details. I have a Windows install living in VirtualBox, hosted within OSX. I've got several webservers running on localhost on different ports within the Windows install. I cannot for the life of me get port forwarding to work so I can access those webservers from OSX. My settings look like this (yes, I have a NAT adapter), and in my vbox configuration file the relevant portion looks like this:

        <NAT>
          <DNS pass-domain="true" use-proxy="false" use-host-resolver="false"/>
          <Alias logging="false" proxy-only="false" use-same-ports="false"/>
          <Forwarding name="RLPWeb" proto="1" hostport="7084" guestip="127.0.0.1" guestport="7084"/>
          <Forwarding name="UtilWeb" proto="1" hostport="4040" guestip="127.0.0.1" guestport="4040"/>
          <Forwarding name="WCARLP" proto="1" hostport="8084" guestip="127.0.0.1" guestport="8084"/>
          <Forwarding name="WCAUtil" proto="1" hostport="4848" guestip="127.0.0.1" guestport="4848"/>
        </NAT>

    I've turned off the Windows firewall to ensure it is not interfering, and I am not running a firewall on OSX. Anyway, when I attempt to go to, for example, http://127.0.0.1:4040/ in any of my OSX browsers, it will eventually time out. The log file for this VM shows that it is correctly reading the settings, implying it's doing the right thing here:

        00:00:08.286 NAT: set redirect TCP host port 4848 => guest port 4848 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 8084 => guest port 8084 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 4040 => guest port 4040 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 7084 => guest port 7084 @ 127.0.0.1
        00:00:08.290 Changing the VM state from 'LOADING' to 'SUSPENDED'.
        00:00:08.290 Changing the VM state from 'SUSPENDED' to 'RESUMING'.
        00:00:08.290 Changing the VM state from 'RESUMING' to 'RUNNING'.
        00:00:08.337 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=000000012017d000 w=1834 h=929 bpp=32 cbLine=0x1CA8, flags=0x1
        00:00:09.139 AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
        00:00:13.454 NAT: DHCP offered IP address 10.0.2.15

    I've tried setting the Host IP to 127.0.0.1, and I've tried setting the Guest IP blank and also 10.0.2.15. None of these seem to help. What else can I look at to troubleshoot this issue? Details of setup: OSX 10.6.8, Windows 7 Professional 64-bit, VirtualBox 4.1.2.
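
    One frequent culprit with this exact setup: the NAT engine forwards to an address inside the guest's network, and a server bound only to the guest's own 127.0.0.1 is not reachable at the guest's NAT address (10.0.2.15). A hedged checklist sketch using VBoxManage and standard tools; the VM name is a placeholder:

        # Inside the Windows guest, confirm what address each server is bound to:
        #   netstat -ano | findstr :4040
        # 127.0.0.1:4040 means loopback-only; 0.0.0.0:4040 means all interfaces.

        # On the OSX host, with the VM powered off, recreate one rule with the
        # guest IP left empty so it targets the guest's NAT address instead of
        # guest loopback
        VBoxManage modifyvm "Windows 7" --natpf1 delete "UtilWeb"
        VBoxManage modifyvm "Windows 7" --natpf1 "UtilWeb,tcp,127.0.0.1,4040,,4040"

        # Then test from the host
        curl -v http://127.0.0.1:4040/

    If the webservers can be reconfigured to listen on 0.0.0.0 inside the guest, the original rules as written should also start working.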

    Read the article

  • Getting a hardware profile from Fedora 12 after installation

    - by Chris T
    I've installed Fedora 12 on a computer I found in the trash (bulk garbage day!) and I'm trying to figure out what's under the hood (it seems like a stock Dell, but I'd like to know the details). Is there a way to get a hardware profile on Fedora after it's already installed on the hard drive? I saw an option at install but I skipped over it.
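
    A post-install sketch using standard command-line tools (lshw may need to be installed first; package availability is an assumption for this Fedora release):

        # Install and run the general hardware lister
        sudo yum install lshw
        sudo lshw -short

        # PCI and USB devices, CPU and memory
        lspci
        lsusb
        cat /proc/cpuinfo
        free -m

        # DMI/SMBIOS data: vendor, model, BIOS; handy for confirming "stock Dell"
        sudo dmidecode -t system
        sudo dmidecode -t bios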

    Read the article

  • Measuring network reliability

    - by xpugur
    Hi, I have an exam to do at home but there is a question that I can't solve. Can anybody help me with it? I've Googled it in different ways but couldn't find an answer. Question: How can network reliability be measured (give at least 4 factors)? Explain 3 of the factors in detail, with their advantages and disadvantages.

    Read the article

  • Any issues with computer on one domain in a separate forest and user account in another domain/forest?

    - by TheCleaner
    I have a few of my sites with a trust relationship between two different forests, with a single domain in each AD forest. I'll skip all the politics and details that don't matter and just ask the question: will having a machine with a computer account in one domain and the user's account in another cause any issues? (Besides GPO behavior that would need to be understood, such as the computer getting GPOs applied from the computer's domain and the user account getting GPOs applied from the user's domain.)

    Read the article

  • Advantages of using .msi files?

    - by Frode Lillerud
    What are the advantages of using .msi files over regular setup.exe files? I have the impression that deployment is easier on machines where users have few permissions, but I'm not sure about the details. What features does msiexec.exe have that make deployment easier than setup.exe scenarios? Any tips or tricks when deploying .msi applications?

    Read the article

  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" which has a soon-to-expire certificate. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). The email containing links to download the certificate mentioned:

        Certificate Details:
        SSL Type : InCommon SSL
        Server   : OTHER

    I forwarded the email over to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had this issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed certificate (working) and the new one I just got signed by InCommon (not working).

    $ openssl x509 -in sso-current.cer -text shows, with irrelevant information omitted:

        Data:
            Version: 3 (0x2)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected]
            Validity
                Not Before: Oct  1 00:00:00 2009 GMT
                Not After : Nov 28 23:59:59 2012 GMT
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 CRL Distribution Points:
                    Full Name:
                      URI:http://crl.thawte.com/ThawteServerPremiumCA.crl
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                Authority Information Access:
                    OCSP - URI:http://ocsp.thawte.com
        Signature Algorithm: sha1WithRSAEncryption

    and $ openssl x509 -in sso-new.cer -text shows:

        Data:
            Version: 3 (0x2)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA
            Validity
                Not Before: Nov  8 00:00:00 2012 GMT
                Not After : Nov  8 23:59:59 2014 GMT
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                    Exponent: 65537 (0x10001)
            X509v3 extensions:
                X509v3 Authority Key Identifier:
                    keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D
                X509v3 Subject Key Identifier:
                    18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02
                X509v3 Key Usage: critical
                    Digital Signature, Key Encipherment
                X509v3 Basic Constraints: critical
                    CA:FALSE
                X509v3 Extended Key Usage:
                    TLS Web Server Authentication, TLS Web Client Authentication
                X509v3 Certificate Policies:
                    Policy: 1.3.6.1.4.1.5923.1.4.3.1.1
                      CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf
                X509v3 CRL Distribution Points:
                    Full Name:
                      URI:http://crl.incommon.org/InCommonServerCA.crl
                Authority Information Access:
                    CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt
                    OCSP - URI:http://ocsp.incommon.org

    Nothing jumps out at me as the reason one would not work, so I don't have a specific request for the signer for what to do differently when re-signing.
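
    The "server software" field on a CA's order form normally only affects which download bundle or format is offered, not the certificate itself, so a sensible first check is whether the new certificate matches the private key and chains to something the server trusts. A sketch with placeholder filenames (sso.key and the chain file are assumptions):

        # The modulus of the certificate and of the private key must be identical
        openssl x509 -in sso-new.cer -noout -modulus | openssl md5
        openssl rsa  -in sso.key     -noout -modulus | openssl md5

        # Verify the new certificate against the InCommon chain the CA provided
        openssl verify -CAfile incommon-chain.pem sso-new.cer

        # Once installed, check what the server is actually presenting
        openssl s_client -connect sso.example.com:443 -showcerts </dev/null

    If the server side uses an Oracle Wallet, a common stumbling block is that the new issuing chain (here the InCommon intermediate and root, which differ from the old Thawte chain) has to be imported as trusted certificates before the user certificate will import; that would explain a failure that looks certificate-specific but isn't.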

    Read the article

  • Methods for preventing large-scale data scraping from a REST API

    - by Simon Kenyon Shepard
    I know the immediate answer to this is going to be that there is no 100% reliable method of doing this. But I'd like to create a question that details the different possibilities, the difficulty of implementing them, and their success rates. I would like to go from simple software IP/request-rate analysis up to high-end, sophisticated software/hardware tools, e.g. neural networks, with the goal of predicting and preventing bogus requests and attempts to scrape the service. Many thanks.
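
    At the simple end of that spectrum, per-IP request-rate limiting can be applied before traffic even reaches the application. A hedged iptables sketch (the port and threshold are arbitrary placeholders; real deployments more often do this in the reverse proxy or API gateway, where authenticated API keys can be throttled individually and offenders logged rather than silently dropped):

        # Drop clients that exceed 60 requests/minute to the API port,
        # tracked per source IP
        iptables -A INPUT -p tcp --dport 443 \
            -m hashlimit --hashlimit-above 60/minute \
            --hashlimit-mode srcip --hashlimit-name api_rate \
            -j DROP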

    Read the article
