Search Results

  • Skyrim: Heavy Performance Issues after a couple of location changes

    - by Derija
    Okay, I've tried different solutions: ENB Series, removing certain mods, checking my FPS rate, monitoring my resources, .ini tweaks. It all looks fine; I don't see what I'm missing.

    A couple of days ago, I bought Skyrim. Before I bought the game, I admit I had a pirated copy, because my girlfriend actually wanted to buy me the game as a present, then said she didn't have enough money. Sick of waiting, I decided to buy the game myself. The ridiculous part is, it worked better cracked than it does now uncracked. As the title suggests, after entering and leaving houses a couple of times, my performance drops dramatically.

    My build is fine: an Intel i5 quad-core processor and a Gigabyte NVIDIA GTX 560 Ti, factory overclocked but manually downclocked to stock settings using the appropriate Gigabyte software, which fixed the CTD issues I had before with both Skyrim and BF3. I have 4 GB of RAM. A website about game tweaks suggested that my HDD may be too slow: a screenshot of a Windows Experience Index sample captioned "This is likely to cause issues" showed an HDD with a score of 5.9, exactly what mine has. So I have been toying with the thought of buying an SSD, loading the games that really need it (like Skyrim) onto it, and hoping it does the trick. Unfortunately, SSDs are quite expensive compared to "normal" HDDs...

    I'm really getting desperate about this. My save is gone because the patches made it impossible to load saves from the unpatched version, and I have already saved more than 80 times despite being only level 8, just because every time I interact with a door leading to another location I'm scared the performance will drop again. I can't even play for 30 minutes straight anymore; it's just no fun at all. I researched for a couple of days before deciding to post my question here. Any help is appreciated; I don't want to regret having bought the game, since it is possibly the best game I've ever played. Sincerely.

    P.S.: I don't think it's necessary to say, but of course I'm playing on PC.

    P.P.S.: After monitoring my PC's resources, including CPU, HDD and GPU usage, I don't see any changes even after the said event.

    P.P.P.S.: This question was originally posted elsewhere, where I was advised to ask here.

  • Email delivery management grievances

    - by joxl
    The question I have may be more about principle than anything else, but here's my dilemma. I manage an email system for a small company (about 20 email users). We own a plain-letter .com domain name through Network Solutions. Our email service is hosted by Google Apps.

    Recently (Feb. 2011) we've been having customers report that they aren't getting our emails. Upon further investigation it seems that the failed emails are all to a common (well-known) domain. We have not received any bounce messages for the emails. We've also contacted a few of the intended recipients, who have reported that the messages are not in their spam box; they simply did not receive anything. In these cases we re-sent the same email to an alternate address on another domain, which was successfully received. One customer contacted their email provider about the issue. The provider recommended that we submit a form to be whitelisted by their domain.

    Here's where my problem begins. I feel like this is heading down a slippery slope. Doesn't this undermine the very principle of email? If this is the appropriate action to take in these situations, where will it end? In theory (following this model) it could be argued that eventually one will first need to "whitelist" (or, more appropriately termed, "authenticate") themselves with an email host before actually sending any messages. More to the point, what keeps the "bad" spammers from doing the same thing...? We've just gone full circle.

    I know avoiding anti-spam measures is a big cat-and-mouse game, but I think this is the wrong way of "patching" the problem. Email standards say that messages should not just disappear silently. I have a problem supporting a model that says "you must do X to make sure your emails aren't ignored". I have a notion to call the provider and voice my complaint, although I have a feeling it will probably fall on deaf ears. Am I missing something here? Is this an acceptable approach to email spam problems? What should I do?
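
    For what it's worth, the usual machine-checkable form of "authenticating yourself" as a sender is an SPF record, which many receiving servers consult automatically. A minimal sketch for a Google Apps-hosted domain (the domain name here is a placeholder; _spf.google.com is Google's published include):

        example.com.    IN    TXT    "v=spf1 include:_spf.google.com ~all"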

  • Can't send email through Comcast SMTP to my domains

    - by Midnight Oil
    I am a Comcast customer with 3 computers and 3 computer users in the house. There are 2 fully updated Macs and one PC running Windows 7. We use Mail on the Macs and Outlook on Windows 7. All computer accounts are configured to send mail through port 587 of smtp.comcast.net. I also have two personal domains registered with Network Solutions. For the sake of this discussion, call my domains myOwnDomain1.com and myOwnDomain2.com. I have email addresses at both domains. They are of the form [email protected] and [email protected].

    Until recently, our email worked as expected. However, sometime between September 13, 2012 and September 19, 2012, we lost the ability to send email through Comcast's SMTP server to the email addresses at my personal domains. If we attempt to send email through Comcast's SMTP to those addresses, the email never arrives. Furthermore, the email clients give no indication of failure; the email just never arrives. The result is the same on all 3 computers and with all accounts on those computers.

    We can successfully send email through Comcast's SMTP from any of our accounts on any of our computers to any email address other than my email addresses at my personal domains! However, I receive email at those domains that is not sent through smtp.comcast.net. For example, I can successfully send email from my Gmail and Yahoo accounts to my email addresses at my personal domains. Furthermore, I can successfully send email through smtp.myOwnDomain1.com to [email protected] and through smtp.myOwnDomain2.com to [email protected].

    Comcast says the problem must be at Network Solutions. According to Network Solutions, their logs show they are not blocking reception of the email, and our IP address is not flagged as a spam source. They say the email is simply not arriving. Does anyone have any ideas why we can't send email through Comcast's SMTP server to my domains?

    As an odd coincidence, we recently noticed a change in Comcast's SMTP service: there is now a 5-minute delay on all outbound mail. Comcast's SMTP server seems to sit on the mail with a 5-minute timer.
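
    One way to narrow down where a message dies is to drive the SMTP conversation by hand and watch each response code. A minimal sketch (the sender and recipient addresses are placeholders; Comcast's port 587 will also demand authentication before accepting a RCPT, and any rejection text it returns is the useful clue):

        openssl s_client -starttls smtp -crlf -connect smtp.comcast.net:587
        EHLO example.org
        MAIL FROM:<me@example.org>
        RCPT TO:<test@myOwnDomain1.com>
        QUIT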

  • How do I troubleshoot the root cause of a hung Windows (2003) server?

    - by GregW
    I have a pair of Windows (2003 Server) servers, both running MS SQL Server (2008 EE), that each hang every few months. This has been occurring intermittently :( for the past 15 months, pretty much since we started using the servers. The symptoms are as follows:

    - I cannot remote desktop in to troubleshoot; when I attempt to, I get stuck on a blank black screen and am never offered a login prompt.

    - I can still ping the servers.

    - I can still open a SQL connection to the server, and, CURIOUSLY/BIZARRELY, when I do a "select getdate()", the time it returns appears to be stuck on the exact fraction of a second when (I presume) the server hung. Repeated attempts to do "select getdate()" keep getting that same date, suggesting that the clock is frozen.

    - File-sharing attempts to connect to the hung server fail with the error message: "\\ServerName is not accessible. You might not have permissions to use this network resource. Contact the administrator of this server to find out if you have access permissions. The server's clock is not synchronized with the primary domain controller's clock." This is consistent with a frozen clock.

    - Post-reboot, if I investigate the Windows Event Viewer logs, I can see many security accesses (coming from me and others) that I recognize were login attempts during the "down" period, but all of them in the security log are associated with that same timestamp of when the server hung. This also suggests the clock is frozen. There is not a clear cause in the Application or System event logs.

    I have a local Admin account on the server and am in the process of getting a domain-credentialed Admin account for better remote admin access. HP is supposed to be supporting these machines and has some low-level iLO 2 access, but they seem incapable of finding the root cause. A reboot will "fix" the problem, but I would like to get to the root cause and solve the issue.

    Has anyone ever seen something like this odd clock behavior?! (If it were just one server I'd perhaps suspect a bad hardware clock, but two?) Can anyone advise me on how to troubleshoot this sort of situation to find the root cause (or what I should tell HP to try)?
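
    Since the frozen getdate() is the clearest symptom, one cheap tripwire is to poll it from another machine and log when the returned time stops advancing, which pins down the moment of the hang. A rough sketch, assuming the SQL client tools are available in a POSIX shell; HUNGSRV01 is a placeholder server name:

        while true; do
            date
            sqlcmd -S HUNGSRV01 -E -h -1 -Q "SELECT GETDATE();"
            sleep 60
        done >> getdate-watch.log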

  • Resolving CloudFlare DNS related mail delivery problems

    - by Andy Castles
    I recently started using CloudFlare and am having a few teething problems. Our domain is netlanguages.com, and while we have a lot of sub-domains listed, we are currently only trialling a few of the servers through the CloudFlare CDN (for example, www.netlanguages.com is enabled for the CDN, netlanguages.com is not). The actual CDN service seems to be reliable, but the problem that we are having is with DNS, and specifically with mail delivery.

    The background is that we have contact forms on our web site which use PHP mail() to send the details to end users' email addresses, with the "from" address of the messages being [email protected], which is a valid address on our mail server. Most of the mails are arriving correctly, but a few specific people are not receiving them. The web server uses qmail to deliver the messages, and the qmail log files show us some of the errors that the receiving mail servers return when they reject the delivery attempt. Two examples:

        Connected to 94.100.176.20 but sender was rejected./Remote host said: 421 DNS problem (interdominios.netlanguages.com). Try again later
        Connected to 213.186.33.29 but sender was rejected./Remote host said: 451 DNS temporary failure (#4.3.0)

    From what I can tell, the receiving SMTP server is doing a DNS lookup of some description on either the host of the "from" email address (netlanguages.com) or the server name given in the EHLO command of the SMTP conversation (in the first example above, interdominios.netlanguages.com), both of which should resolve to non-CloudFlare IP addresses.

    I've read that the CloudFlare DNS service is very reliable and fast, but both of the problems above seem to point to remote servers being unable to do DNS lookups. I should also point out that we changed our DNS to CloudFlare on 6th Feb, and since then started experiencing these mail delivery problems. On 22nd Feb we moved our DNS away from CloudFlare to see if the issues were related to CloudFlare, and after a few hours delivery began to work. Then on 26th Feb I moved the DNS back to CloudFlare again and the delivery problems started again. The issue definitely seems to be related to DNS, but I don't know if it's a configuration issue or something else.

    Finally, I should say that our two DNS MX records point to non-CDN A record IP addresses, and interdominios.netlanguages.com (the web and qmail server) also points to a non-CDN A record IP address.

    Does anyone know what the problem could be here? Any light you can shed on this will be most appreciated. Many thanks, Andy
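
    To see what the receiving side sees, it helps to query the records through a public resolver rather than your own caches. A quick sketch with dig (the -x address is a placeholder for your mail server's real IP; many receivers also check that forward and reverse lookups agree):

        dig A  interdominios.netlanguages.com @8.8.8.8 +short
        dig MX netlanguages.com @8.8.8.8 +short
        dig -x 203.0.113.10 @8.8.8.8 +short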

  • Areca 1280ml RAID6 volume set failed

    - by Richard
    Today we hit some kind of worst-case scenario and are open to any kind of good ideas. Here is our problem: we are using several dedicated storage servers to host our virtual machines. Before I continue, here are the specs:

    - Dedicated server machine
    - Areca 1280ml RAID controller, firmware 1.49
    - 12x Samsung 1TB HDDs

    We configured one RAID6 set with 10 discs that contains one logical volume. We have two hot spares in the system. Today one HDD failed. This happens from time to time, so we replaced it. Upon rebuilding, a second disc failed. Normally this is no fun. We stopped heavy IO operations to ensure a stable RAID rebuild. Sadly the hot-spare disc failed while rebuilding and the whole thing stopped. Now we have the following situation:

    - The controller says that the RAID set is rebuilding.
    - The controller says that the volume failed.

    It is a RAID6 system and two discs failed, so the data has to be intact, but we cannot bring the volume online again to access the data. While searching we found the following leads. I don't know whether they are good or bad:

    - Mirroring all the discs to a second set of drives, so we would have the possibility to try different things without losing more than we already have.
    - Trying to rebuild the array in R-Studio. But we have no real experience with the software.
    - Pulling all drives, rebooting the system, changing into the Areca controller BIOS, and reinserting the HDDs one by one. Some people say they brought the system online this way. Some say the effect is zero. Some say they blew the whole thing up.
    - Using undocumented Areca commands like "rescue" or "LeVel2ReScUe".
    - Contacting a computer forensics service. But whoa... primary estimates by phone exceeded €20,000. That's why we would kindly ask for help.

    Maybe we are missing the obvious? And yes, of course, we have backups. But some systems lost one week of data; that's why we'd like to get the system up and running again. Any help, suggestions and questions are more than welcome.
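
    If you go the mirror-first route, GNU ddrescue is the usual tool: it copies what it can and keeps a map file so interrupted passes can resume. A minimal sketch (device names are placeholders; the source must be the original disk, the destination a blank disk of at least the same size):

        # first pass: copy the easy areas, skip the bad ones
        ddrescue -f -n /dev/sdX /dev/sdY sdX-copy.map
        # second pass: retry the bad areas a few times
        ddrescue -f -r3 /dev/sdX /dev/sdY sdX-copy.map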

  • Unable to use TweetDeck on Windows due to "Ooops, TweetDeck can't find your data" and "Sorry, Adobe AIR has a problem running on this computer"

    - by Matt
    I'm running Adobe AIR 1.5.2 (latest) on Windows 7 (64-bit RTM) and downloaded TweetDeck 0.31.1 (latest). When I run TweetDeck I get the following errors: "Ooops, TweetDeck can't find your data" and "Sorry, Adobe AIR has a problem running on this computer". Other AIR applications install and run fine. I've uninstalled both TweetDeck and AIR and reinstalled. Following the uninstalls I've also removed all on-disk references to both TweetDeck and AIR, but no luck.

    UPDATE: Using Process Monitor I did a trace of TweetDeck from the moment it launched until the first error occurred. I saw the following in the trace output:

        1 5:22:18.6522338 PM TweetDeck.exe 5580 CreateFile D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??\d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak NAME INVALID Desired Access: Generic Write, Read Attributes, Disposition: OverwriteIf, Options: Synchronous IO Non-Alert, Non-Directory File, Attributes: N, ShareMode: Read, Write, AllocationSize: 0

    In this trace output, TweetDeck.exe is trying to create the file D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??\d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak, but the path specified is invalid. Looking at the path, you can see that it is indeed invalid. First, there's the "??" portion, which can't exist in the file system since "?" is an invalid character in Windows/NTFS file names. Additionally, the path actually seems to be composed of two parts (is the "??" a delimiter?):

        Part 1: D:\ProgramData\Microsoft\Windows\Start Menu\Programs\rs\??
        Part 2: d:\Use\myusername\AppData\Roaming\Adobe\AIR\ELS\TweetDeckFast.F9107117265DB7542C1A806C8DB837742CE14C21.1\PrivateEncryptedDatak

    (The problem here is that d:\Use... doesn't even exist.) What seems to be happening is that TweetDeck is looking for the user credentials (the "PrivateEncryptedDatak" file) but in the wrong place; it can't find the file, hence the error TweetDeck gives. I'm trying to determine how TweetDeck is getting this path. I searched the contents of all files on my hard disk hoping to find some TweetDeck or Adobe AIR configuration file containing this incorrect path, but I was unable to find anything.

    UPDATE: See Carl's comment regarding directory junctions and symbolic links under my accepted answer. This ended up being the problem.

    Edit by Gnoupi: People, the answer section is there to provide an actual ANSWER, not to say you have the same issue. It doesn't help anyone that you have the same problem. If you think this is really worth mentioning, put it as a comment under the question. But simply, if what you want to add is not an answer to the question, then don't post it as an answer. This is not a forum; I recommend new users read the FAQ: http://superuser.com/faq
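
    Since the culprit turned out to be directory junctions and symbolic links, a quick way to see which reparse points an application might be walking through is to list them with the built-in dir command; a sketch (drive letters and paths are illustrative):

        :: list all junctions/symlinks under a tree
        dir /aL /s D:\ProgramData
        dir /aL /s D:\Users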

  • Email Tracking - By IP

    - by disasm
    I am trying to track an Email I received to see where it originated from (harmful scam Email). Here is the full header: Received-SPF: pass (domain of aol.com designates 205.188.109.203 as permitted sender) ZSwgeW91ciBpbmZvcm1hdGlvbiBoYXMgYmVlbiBhY2NlcHRlZCBhbmQgYXBw cm92ZWQgYnkgb3VyIFJlY3J1aXRpbmcgRGVwYXJ0bWVudC4gWU9VUiBGSVJT VCBBU1NJR05NRU5UIERFVEFJTFMuIFdlc3Rlcm4gVW5pb24gTW9uZXkgVHJh bnNmZXIgaGFkIHJlcG9ydHMgYWJvdXQgbGFwc2VzIGluIHRoZSBzZXJ2aWNl cyBvZiBzb21lIG9mIHRoZWlyIG91dGxldHMgYW5kIHNvbWUgbwEwAQEBAQN0 ZXh0L3BsYWluAwMwAgN0ZXh0L2h0bWwDAzIx X-YMailISG: HeDj35sWLDvNbQaO2B72kIm7hhiubE2qUysyBFo2m3GE4wsk FOs6uaYWwBVUBbL_ubGEs5Gitm1b86QFPuQYdS3g5s2f_uY3dyHhXh7DcBIB Ad4mJBzeRozs5.0s6vbqhsIEYlaKI4EDrsocJEbDbTUiUq2UyxZ7Ery8Iqow _sBVN0msHJvcI09KwmaXUteV_qCL7qFlj7WNSmdMM.wVUm3pyiWzw1VUZlyD nwoEzdEImQdwmJqoTM1YE8XU6BE8IkmUkh1Z8XkfLtHqmAsPi1_.Msbi8ubk QD71BcTABjb7ixELg5NfomsyZKVN.9G.TnuISlX5umByjS701ITyQ2PUYXai hoVCWg37bKWNR9MAWdogUK8PIV3MWPv5gdglNAKuPdS5Z7.01J39UGyH7R60 aIiIWdAsQ7_3VQBgIi9Seg2YM2j1U5g9QtdcJxBe0.1oigmj7G2sC9.YXNGX 3abQ1EcWVlJLuSuBbQ4Flpbe_Y3_ssz8nZIK2YjKy0U8WWe77vfnxdEBsknf w2OA_PAzHtuAAuxETnAOU_MeMIssgRAtihKC_26Au1LnKYCGPGADFBLaLNHF 30itI.kBvUjdvUfqV11dnGe50kFVzBGDMJFm8mXvb5WtIKq6qU1ZZmUroCew EgXjVZ.JKbux2KQmHh2zZbIJO3nOmLGkzuRczYiWCUNBDtmUZE6imuIQ4P6S RjusSbMITf2fIL_xe.qFCnW563sOdc4u.uXLDx.lq30740l8lWkkLX6KaDMF k9TY0VQKsMynqa4vXKpkTVNdukAcGd0p2i3newxY4q_9eZLn9czsJimfpKNW SX1bqjs0iCQHb4FTydf1Zpa2b.6lIhdjVlIM8tiWhfGhlUeM267T3njEM6nz 0vxyjparR_G_s0VnIVhSeLw2F5KpAL226w2yA.WBcqoG2ROSa7fK.0ZYwy36 Qcmk8C.HKj8Fng1qFLtEfaI4F66rCEJi7h1d6EK0Jk4a_TJnBBub1VQVoU.s SJ2ehs8aDjDqJw27_Ia4vYekKhIU8Oak0vYSmMXhZ5IvJfGfOHYVy4ebkoQf IDE3lSfex1nHZqcMqq0agPOZUOdznSIGJVx4T8m6MGwrEouvL.grhT6KUJQ5 g8UX6DVTgj.8lHuTyOzj3A3NRwDFs2JqicprOMJRS4UWYX8eQ1y4j.4ora36 LnWYm7k1n6X0lDBW5ZdZlsLy7.0al0G1uCIAZwBNo7FnHr6q2mQNwgFaPkNO FOiykqFHu0khLO_cZw87MpDslZO_3lFbJGlnchSs81hkESSQsldUxqdNkIV. yWsS58p1uuwVNksp4NB.QW41wfBtY5FU.Q80g8KiOZIz0daou3GlzoahcHoQ GPgSa86GKtSo.ew2xEUKk6c.ffAT9RjqNh5fzyhBdzEYURxJBYgMlL5DQp2G yYIGhlIS5h9JzPFVkk2XhBoY2NgEAAfJfAfqKoMNNKIW.bEwbgNa9xtSzHNg YdmDfOSkYkAGZDqwa.uONguq5.jqtnWDnx3GDyuoVg-- X-Originating-IP: [205.188.109.203] Authentication-Results: mta1343.mail.bf1.yahoo.com from=aol.com; domainkeys=neutral (no sig); from=mx.aol.com; dkim=pass (ok) Received: from 127.0.0.1 (EHLO omr-d06.mx.aol.com) (205.188.109.203) by mta1343.mail.bf1.yahoo.com with SMTP; Fri, 18 Oct 2013 10:11:15 -0700 Received: from mtaomg-da05.r1000.mx.aol.com (mtaomg-da05.r1000.mx.aol.com [172.29.51.141]) by omr-d06.mx.aol.com (Outbound Mail Relay) with ESMTP id ACFAD7012DFB4 for <.confidential.; Fri, 18 Oct 2013 13:11:15 -0400 (EDT) Received: from core-mfb002c.r1000.mail.aol.com (core-mfb002.r1000.mail.aol.com [172.29.47.199]) by mtaomg-da05.r1000.mx.aol.com (OMAG/Core Interface) with ESMTP id 2ADB6E000089 for <.confidential.; Fri, 18 Oct 2013 13:11:15 -0400 (EDT) References: <[email protected] <[email protected] <[email protected] To: .confidential. Subject:.confidential. In-Reply-To: <[email protected] X-MB-Message-Source: WebUI MIME-Version: 1.0 From: .confidential. 
X-MB-Message-Type: User Content-Type: multipart/alternative; boundary="--------MB_8D09A3C2FD3105D_1338_33A66_webmail-d257.sysops.aol.com" X-Mailer: AOL Webmail 38109-STANDARD Received: from 66.199.226.81 by webmail-d257.sysops.aol.com (205.188.17.42) with HTTP (WebMailUI); Fri, 18 Oct 2013 13:11:15 -0400 Message-Id: <[email protected] X-Originating-IP: [66.199.226.81] Date: Fri, 18 Oct 2013 13:11:15 -0400 (EDT) x-aol-global-disposition: G DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mx.aol.com; s=20121107; t=1382116275; bh=9TXLF90L8beaMnjNzoKDwcv3Eq06jiZGN40YTBw2YOI=; h=From:To:Subject:Message-Id:Date:MIME-Version:Content-Type; b=xHVjoH5AccrOpPZoZZW+b41uJ7nzHDrryGsO6WzvtBOFGWX3xJMO3RB1ILFlJAsF6 P9olk8Gz6LDydX9SOZ4w/yPI8y8eU6z1AauwOPxw9F1lu82goIGwK3jIcvOv72koB5 Izq9By7L6PESEmmJ5nFc4ko9vH2CBMcJKPV95HTg= x-aol-sid: 3039ac1d338d52616bb37d53 Content-Length: 26445 The Email was from the aol domain, so I understand the IP of aol. My question is, looking at 66.199.226.81 , would it be safe to say that the Email originated from "Access Integrated Technologies"? Thanks for any help!
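
    To see who operates that address block, a whois lookup is the usual first step, bearing in mind that it identifies the network operator behind the X-Originating-IP (the client that submitted the message to AOL webmail), not the individual sender. A quick sketch (the grep pattern matches ARIN-style fields):

        whois 66.199.226.81
        whois 66.199.226.81 | grep -E 'OrgName|NetName|NetRange|Country'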

  • Brand new Mac Pro tower fan suddenly runs full-tilt

    - by Caffeine Coma
    My quad-core Mac Pro tower is two days old. Initially, I was impressed with how quiet it was compared to my older MacBook Pro. Then on day two, for some reason, it started running very loudly. It's not just a "little" loud: my wife walked into the room and asked what the noise was. At first I thought this was just because I was hitting the CPU a bit (importing my iPhone library into iLife '09 and running Eclipse). But now that that's done, Activity Monitor shows a virtually idle CPU; there's nothing running that ought to be causing this, as far as I can tell. I tried powering it off and letting it cool down for a few minutes, to no avail; about 10 seconds after powering up, the box gets loud again. I took a look at it with the side cover off, and it seems to be the fan near the top middle, between the power supply and the disk drive. It can't be a dust issue, as the machine is only 2 days old (and I peeked inside anyway just to be sure: clean). I did do a software update over the past 24 hours or so, but I can't say the noise started immediately after that. I also did a migration of my old apps and data from my MBP, for what it's worth. Why is it suddenly so loud? How can I monitor the fan speed and various system temperatures? Here's a link to my temps and fan speeds.

    UPDATE 1: Took it to the Apple Store. They took it in the back (where it's presumably quieter) and ran a fan diagnostic; no problems were found. The guy also told me that it was "a little loud", but normal. I don't buy it. It was virtually silent the first 24 hours I was using it. They would not replace/service it in the store (grrr... that's why I went there, as directed by AppleCare) but said I could get a replacement from the online store, as it was just purchased. I think I will try that.

    UPDATE 2: Apple is letting me send it back for a replacement. Glad to see so many responses to this question mentioning that the Mac Pros are usually silent; it's not all just in my head. :-)

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have a Windows 2008 Server Enterprise machine with 25 user CALs. It hosts a domain, all the users, and a network-shared HP printer. The server has two network cards, and both cards as well as all client machines are on the 192.168.1.* addressing scheme with subnet mask 255.255.255.0. Of the two network cards, 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS; the second card, 192.168.1.233, has its default gateway and DNS address set to 192.168.1.231. The server has three hard disks with capacities of 500 GB, 500 GB and 1 TB, partitioned as (C, D, E), (F, G) and (K), with partition K holding all user data in various shared folders. Each of these folders (on partition K) is mapped onto each user's computer according to the access rights given to them.

    The problem: the server was installed about 6 months ago, and to date it has not hung or given any problem even once. All the client computers are able to run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when a client computer tries to browse the network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days.

    On each client's machine, the network settings are as follows:

    - IP address: 192.168.1.* (where * is 1, 2, 3, ...)
    - Subnet mask: 255.255.255.0
    - Default gateway: 192.168.1.231 (a server card and the DNS address)
    - Preferred DNS server: 192.168.1.231

    In the Advanced tab under WINS, LMHOSTS lookup is unticked and the default option is selected. Ideally, I would have loved to disable NetBIOS over TCP/IP, because disabling NetBIOS would drastically reduce the NetBIOS broadcast traffic sent to all the computers on the net for name resolution, but some network printers do not get accessed if that option is enabled (i.e. radio-buttoned).

    On the server, I have WINS running; I have scavenged records, verified database integrity, removed tombstoned records, etc. The critical errors, shown only once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse mapped network shared folders on the K drive. Kindly advise.
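
    When one of these hangs occurs, it can help to check whether name resolution or the SMB session itself is the part that stalls. A quick sketch to run from an affected client (the host name is a placeholder):

        :: is it NetBIOS name resolution? inspect and purge the local cache
        nbtstat -c
        nbtstat -R
        :: does DNS still answer?
        nslookup myserver 192.168.1.231
        :: does an SMB connection by IP still work while browsing by name hangs?
        net view \\192.168.1.231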

  • How to diagnose and resolve: /usr/lib64/libz.so.1: no version information available

    - by matchew
    I had a hell of a time installing lxml for Python 2.7 on CentOS 5.6. For some background, Python 2.7 is an alternative installation of Python on CentOS 5.6, which ships with Python 2.4. It was built from source per its instructions:

        ./configure
        make
        make altinstall

    However, after about 20 hours of trying I managed to find a workable solution and was able to install lxml. Until I noticed the following error at the top of the interpreter:

        python2.7: /usr/lib64/libz.so.1: no version information available (required by python2.7)
        Python 2.7.2 (default, Jun 30 2011, 18:55:26)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        >>> print 'Sheeeeut!'

    This error is printed out every time I run a script. For example:

        $ ./test.py
        /usr/local/bin/python2.7: /usr/lib64/libz.so.1: no version information available (required by /usr/local/bin/python2.7)

    The script runs flawlessly, but the error is bothersome. After some digging, I have come to believe I have a wrong version of libz installed, either an older version or one built for a different platform. I'm not quite sure how; as far as I know, I've only installed libz through yum, although I can't quite remember every little thing I tried in my twenty hours of trying. You may also be interested in what my lib64 folder looks like:

        $ ls -ltrh libz*
        -rwxr-xr-x 1 root root  84K Jan  9  2007 libz.so.1.2.3
        -rwxr-xr-x 1 root root 107K Jan  9  2007 libz.a
        -rwxr-xr-x 1 root root 154K Feb 22 23:30 libzdb.so.7.0.2
        lrwxrwxrwx 1 root root   13 Apr 20 20:46 libz.so.1 -> libz.so.1.2.3
        lrwxrwxrwx 1 root root   15 Jun 30 18:43 libzdb.so.7 -> libzdb.so.7.0.2
        lrwxrwxrwx 1 root root   13 Jul  1 11:35 libz.so -> libz.so.1.2.3
        lrwxrwxrwx 1 root root   15 Jul  1 11:35 libzdb.so -> libzdb.so.7.0.2

    Notice: the items dated Jul 1 or Jun 30 are from me. I had initially moved these files into a backup folder as they seemed to be (1) duplicates and (2) dated after/during the lxml problems I alluded to earlier.

    One inclination is to completely remove Python 2.7 and reinstall; I think having it install to /usr/local/ was a poor default choice. However, with no make uninstall option present, that seems a time-consuming task for a solution I am not sure would solve my problem.
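
    The "no version information available" message means the binary was linked against a libz that exported versioned symbols (ZLIB_1.2.x and so on), while the libz it loads at runtime exports none. A rough way to check both sides, assuming standard binutils are installed:

        # which libz the interpreter actually resolves at runtime
        ldd /usr/local/bin/python2.7 | grep libz
        # does that library define any symbol versions? (empty output means none)
        readelf -V /usr/lib64/libz.so.1.2.3 | grep ZLIB
        strings /usr/lib64/libz.so.1.2.3 | grep '^ZLIB_'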

  • Suspend only works once after full power cycle with ASUS P7P55D-E Pro

    - by John Chadwick
    This one is strange. I can't seem to get suspend working more than once per power cycle. When I say "power cycle," I mean that the only way to get one proper suspend is to cut power at the power supply and boot back up cold. After the proper suspend, I get a failed suspend, and all suspends fail after any reboot or cold boot until power is cut again.

    I'm using an ASUS P7P55D-E Pro with a Sandy Bridge Core i7, running on Ubuntu Precise repositories and UEFI. I'm running Nouveau from the repository (and Gallium3D compiled from git, but that does not come into this, since I can avoid OpenGL and it still happens the same way) with a GTX 285 (nv50). I had to build a custom kernel (3.3) in order for ACPI 5.0 to be supported and make suspend work at all. I compiled it using the latest Ubuntu kernel's config file with the additional entries set to the default options. All packages are up to date. I know these are relatively exotic settings, but I'm hoping maybe I can get some help anyway.

    The behavior when suspend fails is strange. Upon a proper suspend, all fans turn off and the only LED left on, the power LED, is blinking. Upon a failed suspend:

    1. USB power remains.
    2. The power LED stays on solid.
    3. All fans seem to still be on.
    4. I can hear what I believe is the primary hard drive shutting off.
    5. Despite USB power remaining, the USB-powered keyboard does not respond to anything, and its indicator LEDs shut off.

    Pressing the power button does nothing, and of course I have not to date found a way to wake it up. When troubleshooting the first round of suspend issues not too long ago, I ended up building a list of modules to disable upon sleeping. Here's my config file for them, in /etc/pm/config.d/01modules:

        SUSPEND_MODULES="uhci_hd ehci_hd button"

    All of my other pm configuration files are stock. In case it's any help, here are my relevant BIOS settings. Thanks.

  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old, running pretty much 24/7, an HP server of some description. We have an Ultrium 448 cycling LTO2 400GB tapes daily, incrementally backing up approximately 100 GB worth of data (20 GB C:\ and system state, 40 GB Exchange, 40 GB database for some crap marketing software) with Backup Exec 10d.

    As of 5 months ago, the backups have been consistently failing with IO errors, bad reads and some write errors. When I say consistent, I mean every time: we haven't had a proper backup for the entire 5 months. So if the server explodes tomorrow, 7 years' worth of data will just cease to exist.

    I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to a USB 2.0 external drive. It was excruciatingly slow; in fact it was so slow it took 40 hours and still wasn't finished. I ended up cancelling it and reconfiguring the selections to reduce the file size. This, however, isn't a permanent solution.

    I concluded that the IO errors come either from a faulty tape drive (which has a tape stuck in it right now that won't come out) or from a dying SCSI controller. Neither is good news and both are extremely expensive to fix. I'm operating on an extremely low budget, so I have been looking at outsourcing the backups. A company in Sydney (where I'm located) offers incremental online backups via a NAS. It costs almost double the price of a new tape drive but offers monthly repayments, which will let us get through times when cash flow is minimal. It seems like a sweet deal, but it is still a little bit pricey.

    So I'm looking for a cheaper yet reliable solution, maybe an in-house NAS or something offsite. The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future, and I must be able to restore it to another server with different hardware.

  • Installing OpenLDAP: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

    - http://www.howtoforge.com/openldap_fedora7
    - http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    - http://www.howtoforge.com/linux_ldap_authentication
    - http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    - http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed the rest with yum:

        yum install openldap-servers openldap-devel

    Then I created a basic slapd.conf file in /etc/openldap:

        database  bdb
        suffix    "dc=sniejana-sandbox,dc=com"
        rootdn    "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw    {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user.

    I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file configures the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then started the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

        tcp    0    0 *:ldap    *:*    LISTEN    4148/slapd
        tcp    0    0 *:ldap    *:*    LISTEN    4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap    4148    1    0 15:22 ?    00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces "config file testing succeeded".

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
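
    One thing worth ruling out is any ambiguity in how the bind reaches the server and which mechanism it uses. A sketch that makes the simple bind, the URI, and the search base all explicit, plus a temporary fallback to a plain-text rootpw to exclude hash trouble (revert it afterwards; "changeme" is the example password from above):

        ldapsearch -x -H ldap://localhost \
            -D "cn=root,dc=sniejana-sandbox,dc=com" -W \
            -b "dc=sniejana-sandbox,dc=com" "(objectclass=*)"
        # in slapd.conf, temporarily, then restart slapd:
        #   rootpw changeme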

  • Restore single users Exchange 2003 mailbox from backup

    - by Campo
    I take weekly full backups of Exchange. I also take complete weekly backups of the entire server. It is a Server 2003 R2 box with AD and Exchange 2003 all in one. One user's inbox has disappeared. She now has 19,000+ junk items; it is possible the inbox got mixed into the junk. Regardless, it is such a huge mess that she is not going to go through all of it, so I want to restore her mailbox from the backup.

    I followed this MS KB: http://support.microsoft.com/kb/823176. I had to use Method 3. I have a VM of Server 2003 R2 with Exchange, but I am having failures on the restore from NTBackup. The backup log just states to check the application log... and the application log points back to the backup log... The only information is "failed to restore". The only error I can find is in the Application log: "Information Store Database not found". All the others just say that the backup failed. The only thing different is the computer name. Any assistance is greatly appreciated.

    UPDATE: I have successfully proven I can restore the DB into a recovery storage group in my VM. Unfortunately, because the actual account is on a different store, I am unable to do the recovery. The error is:

        The attempt to log on to the Microsoft Exchange Server computer has failed. The MAPI provider failed. Microsoft Exchange Server Information Store ID no: 8004011d-0512-00000000

    Two questions:

    QUESTION 1: Should I repeat my steps on the production Exchange server in a recovery storage group, then merge into her original account? I am just concerned about doing a recovery like that on the live server...

    QUESTION 2: Is there any way I can extract her .PST from my recovery VM and then import it into her Outlook? On the recovery VM, I restored the raw DB from my full backup, repaired it with ESEUTIL, then mounted it in the recovery store. I was thinking I could just repeat that and mount it in the main store on the VM?

    Thanks for the suggestions.
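
    For what it's worth, the usual sanity check before mounting a restored Exchange 2003 database is to read its header state first and repair only if it reports a dirty shutdown. A sketch with the standard tools (paths are illustrative):

        :: dump the database header; look for "State: Clean Shutdown"
        eseutil /mh "C:\recovery\priv1.edb"
        :: only if the state is dirty and no matching log files are available:
        eseutil /p "C:\recovery\priv1.edb"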

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and ASUS motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm+DRBD+LVM to share 1.5 TB of storage between the two machines. I mount LVM images to KVM for all of the virtual machines, and I currently have the ability to do live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with MS SQL Server).

    I am currently using Proxmox's built-in vzdump utility to take snapshots of the VMs and store them on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup.

    This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that lets me instantly take the difference between two points in time (say, what was written from 6 AM to 7 AM), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip2 or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible.

    I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor, or other?
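
    For reference, the ZFS idea maps directly onto the primitives described above: snapshot two points in time, send only the delta, compress in the pipe. A minimal sketch (pool, dataset and host names are placeholders):

        zfs snapshot tank/vms@0600
        zfs snapshot tank/vms@0700
        zfs send -i tank/vms@0600 tank/vms@0700 \
            | bzip2 -c \
            | ssh backup@offsite.example "cat > /backups/vms-0600-0700.zfs.bz2"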

  • TCP/IP & throughput between FreeNAS (BSD) server & other LAN machines

    - by Tim Dickerson
    I have a question for someone who knows BSD a bit better than I do, regarding my LAN setup at home/work here outside Chicago. I can't seem to fully optimize my network's (LAN) throughput via my FreeNAS (BSD-based) file server. It runs the latest FreeBSD release, modified to support several protocols for file transfers and more. Every machine behind my Smoothwall (Linux-based) router is on the usual 192.168.0.x subnet and for the most part works just fine. Behind the Smoothwall box, all machines are connected to a gigabit HP unmanaged switch. I host a large WISP here and have an OC-3 connection at home/work, and have no issues downloading/uploading from/to the net.

    My problem is with throughput. When I try to transfer large files (really any, for that matter) between any of the machines and the FreeNAS server via FTP, the maximum throughput I can achieve, say between a Win 7 or a Linux box, is ~65 Mbit/sec. All machines are running Intel PRO/1000 gigabit NICs and all cable is CAT6. Each is set to auto-negotiation and each shows 1500 MTU, full duplex at 1 Gb, so I know the hardware is okay. I have not adjusted the MTU on any machine, as I understand it to be pointless unless certain configurations are used (I assume mine is not one of those).

    My settings for the FreeNAS machine are the following:

        # FreeNAS /etc/sysctl.conf - pertinent settings shown
        kern.ipc.maxsockbuf=262144
        kern.ipc.nmbclusters=32768
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.inflight.enable=0
        net.inet.tcp.path_mtu_discovery=0
        net.inet.tcp.recvbuf_auto=1
        net.inet.tcp.recvbuf_inc=524288
        net.inet.tcp.recvbuf_max=16777216
        net.inet.tcp.recvspace=65536
        net.inet.tcp.rfc1323=1
        net.inet.tcp.sendbuf_inc=16384
        net.inet.tcp.sendbuf_max=16777216
        net.inet.tcp.sendspace=65536
        net.inet.udp.recvspace=65536
        net.local.stream.recvspace=65536
        net.local.stream.sendspace=65536
        net.inet.tcp.hostcache.expire=1

    From what I can tell, that looks to be a somewhat optimized profile for a typical BSD machine acting as a LAN server. I might be wrong and just wanted to find out from someone who knows BSD better than I do whether that is indeed okay, or whether something is out of tune, or what.

    Are there other approaches I would find better for P2P file transfers? I honestly do not know what I SHOULD be seeing for throughput between the NAS box and another client when transferring files via FTP, but I am told that what I get on average (40-70 MB/sec) is too low for what it could be. I have thought about adding another NIC in both the FreeNAS box and the Win 7 machine and using a crossover cable via a static route, but wanted to check with someone first to see if that might be worth it. I don't know if doing that would bypass the HP GB switch and allow a machine-to-machine transfer anyway. The FTP client I use is FileZilla, and I have tried both active and passive modes with no real gain over each other. The NAS box runs ProFTPD.
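
    Before tuning sysctls further, it helps to separate raw TCP throughput from FTP and disk throughput; iperf measures the network path alone. A quick sketch (the NAS address is a placeholder; iperf is available in FreeBSD ports and for Windows/Linux clients):

        # on the FreeNAS box:
        iperf -s
        # on a client, a 30-second test with 4 parallel streams:
        iperf -c 192.168.0.5 -t 30 -P 4

    If iperf shows near-gigabit rates, the bottleneck is the disks or the FTP daemon, not the TCP stack.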

  • Certain programs cannot access the internet

    - by Cindy
    Operating system: Windows 7 (x64). Problem: certain programs are unable to access the internet. They claim that there is no connection even when you are already connected.

    Hello, before we start, just letting you know I'm new here and very new to Windows 7; I installed it two days ago. I play World of Warcraft, as well as a variety of other games, and when I first attempt to log into the game, I get a Windows error message. But it doesn't stop there. I thought World of Warcraft had become corrupted during the upgrade, but it seems I am unable to access the internet from other online games as well. Most say something along the lines of "Cannot connect to patch server, try again later." I cannot use a downloader either.

    Also, I have Internet Explorer. The x32 version of the browser cannot connect to the internet, and when I try to enter "google.com", it says the same thing. I'm only accessing this site through Internet Explorer x64, which I would have been fine with if it were compatible with Adobe Flash. The only things that seem to connect to the internet are Internet Explorer x64 and Windows Live Messenger.

    Here are the steps I have taken, but none worked:

    1. Disabled Windows Firewall.
    2. Enabled Windows Firewall, but allowed the specific programs to access the internet, and allowed all incoming access.
    3. Disabled UAC, ran the programs as an admin, and set compatibility to Vista.
    4. Uninstalled an anti-virus program (McAfee Security Suite 2010).
    5. Reinstalled the programs.
    6. Reinstalled Windows 7.
    7. Repeated the steps on the Administrator account.

    Please assist me with this problem. I need to get back into the game. Thanks so much in advance.
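
    A pattern where some programs fail while browsers partly work is often a stale per-machine WinHTTP or Winsock proxy setting left behind by removed software. A sketch of checks and resets from an elevated command prompt (reboot afterwards):

        netsh winhttp show proxy
        netsh winhttp reset proxy
        netsh winsock reset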

  • Resizing Partitions on Live RHEL/cPanel Server

    - by Timothy R. Butler
    I've resized many partitions over the years on Linux, Windows and Mac OS X, but always using a GUI. However, the time has come where the preset partition sizes my data center placed on my server aren't the right sizes, and I need to resize a production server's disks. I could fiddle with it and probably do OK, but given that it is a production server, I wanted to get some advice about the right way to do this. I do have KVM-over-IP access, so if it is best to take the server offline and boot off a rescue partition, I can do that.

        root [/var/lib/mysql]# df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2             9.9G  2.1G  7.3G  23% /
        tmpfs                 7.8G     0  7.8G   0% /dev/shm
        /dev/sda1              99M   77M   18M  82% /boot
        /dev/sda8             884G  463G  376G  56% /home
        /dev/sda3             9.9G  8.0G  1.5G  85% /usr
        /dev/sda5             9.9G  9.1G  308M  97% /var
        /usr/tmpDSK           2.0G   38M  1.8G   3% /tmp

    As you can see, /var and /usr are quite close to being full, and I've actually had to symlink some logs on /usr to directories in /home to balance things out. What I would like to do is add 6-10 GB each to /usr and /var, presumably taking the space from /home.

    As I think about how the disk is arranged, the best plan I've come up with is to reduce /home by 16 GB, say, move /var into the spot freed up, then allocate /var's old space to /usr. However, that would put /var at the far end of the disk, which seems less than ideal, given that MySQL has all of its data on that partition. I'd love to take the space out of the closer end of /usr, but I assume that would require a very arduous (and perhaps risky) process of moving all of the data in /usr around; I seem to recall having such a process fail on a computer in the past. The other option might be to merge / and /usr, since / is underutilized, though I'm not sure if that's a good idea.

    Do you have any suggestions, both on the best reallocation plan and on the commands to use to accomplish it?

    UPDATE: I should add -- here's the partition table. There's one unused partition which, if memory serves, was the original tmp location before I created a tmp image:

        Name    Flags    Part Type    FS Type                [Label]    Size (MB)
        --------------------------------------------------------------------------
                Unusable                                                    1.05*
        sda1    Boot     Primary      Linux ext2                          106.96*
        sda2             Primary      Linux ext3                        10737.42*
        sda3             Primary      Linux ext3                        10737.42*
        sda5    NC       Logical      Linux ext3                        10738.47*
        sda6    NC       Logical      Linux swap / Solaris               2148.54*
        sda7    NC       Logical      Linux ext3                         1074.80*
        sda8    NC       Logical      Linux ext3                       964098.53*
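
    Whatever plan is chosen, the shrink step has a fixed shape: from a rescue boot with verified backups, check the filesystem, shrink it to a bit less than the final partition size, adjust the partition boundary, then grow the filesystem to fill the partition exactly. A rough sketch for ext3 (sizes and devices are illustrative only, not a tested recipe for this disk):

        umount /home
        e2fsck -f /dev/sda8
        # shrink the filesystem below its eventual partition size
        resize2fs /dev/sda8 860G
        # then shrink sda8's boundary and rearrange neighbors with fdisk/parted,
        # and finally grow the filesystem to fill the new partition:
        resize2fs /dev/sda8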

  • Cannot access website from inside network

    - by musclez
    I have a website running on my internal network, available at (example IP) 192.168.1.5. When I type this into the browser, it redirects to my domain name, i.e. "example.com", and gives me: Error code: ERR_CONNECTION_REFUSED. Any other machine inside the network can access the website. The website is also accessible from outside the network. Other services from the server, like file sharing or FTP, are available to all machines in the network, including the one I'm having HTTP issues with.

    The issue may be linked to a proxy service, but to my understanding the service has been completely disabled and its executables have been uninstalled from the machine. I am wondering if there is some residual proxy information remaining on the machine that blocks the connection. I'm fairly positive that "example.com" is what is being blocked by the local machine, not an IP address, and that it's not a faulty connection. When I examine the hosts file, there are no redirects to the local machine for "example.com". There was a rule, as on my other machines within the network:

        192.168.1.5 example.com

    but I have since removed it for troubleshooting purposes. What intrigued me is that when I use the actual IP, the browser redirects to the domain and THEN says ERR_CONNECTION_REFUSED.

    Server-side results. The server logs report this:

        example.com ::1 - - [Date & time] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Unix) (internal dummy connection)"

    However, this seems to be irrelevant, as it is not triggered when I try to connect to the server from the machine in question.

    Fiddler results:

        Host: example.com
        Proxy-Connection: keep-alive
        Chrome-Side [Fiddler] The connection to 'example.com' failed.
        Error: ConnectionRefused (0x274d). System.Net.Sockets.SocketException
        No connection could be made because the target machine actively refused it 01.23.45.67:80

    01.23.45.67:80 would be the external IP, which the server and the machine in question both share. I have done some reading into 0x274d, and it keeps coming back with .NET web.config information; I am still at a loss as to what to do with that. I have Wireshark running as well. There is a lot of sensitive information in the readout and I'm not sure what to extract from it. Either way, I can access that information if anyone would like me to. Thanks for the help!
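
    To separate DNS/hosts effects from the TCP connection itself, it can help to request the site by IP while forcing the Host header. A quick sketch with curl, run from the affected machine:

        curl -v http://192.168.1.5/
        curl -v -H "Host: example.com" http://192.168.1.5/
        # if both are refused, the block is at the socket level, not name resolution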

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a specific little problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle; a typical smaller-shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another company's network, combined via a VPN they set up. This means:

    - They control the router that is used for this network, and
    - They can reconfigure things so that they can access the machines in this network.

    The network is physically split on our end through some VLAN-capable switches, as it covers three locations. At one end is the router the other company controls. I need/want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my Active Directory domain; the people working on those machines are part of my company. BUT: I need to do so without compromising the security of my company network against outside influence. Any sort of integration using the externally controlled router is therefore out.

    So, my idea is this: we accept that the IPv4 address space and network topology in this network are not under our control, and we seek alternatives to integrate those machines into our company network. The two concepts I came up with are:

    - Use some sort of VPN: have the machines log into a VPN. Thanks to their running modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop connects to.
    - Alternatively, establish IPv6 routing to this Ethernet segment. But, and this is the trick, block all IPv6 packets in the switch before they hit the third-party-controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would not get a single packet. The switch can do that nicely by pulling all IPv6 traffic coming to that port into a separate VLAN (based on Ethernet protocol type).

    Does anyone see a problem with using the switch to isolate the outside router from IPv6? Any security hole? It is sad that we have to treat this network as hostile (it would be a lot easier not to), but the support personnel there is of "known dubious quality" and the legal side is clear: we cannot fulfill our obligations if we integrate them into our company while they are under a jurisdiction we don't have a say in.
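
    For illustration, the "filter by ethertype" idea is expressible in a couple of rules on a Linux bridge; managed switches have equivalent ACLs in their own vendor syntax, so this is only a sketch of the concept, not your switch's configuration:

        # drop every frame whose ethertype is IPv6 before it can cross the bridge
        ebtables -A FORWARD -p IPv6 -j DROP
        ebtables -A INPUT   -p IPv6 -j DROP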

  • Buying a new printer instead of replacing ink?

    - by Kelsey
    With prices of basic printers being around $40-$50, and an ink cartridge being around $20-$30 each for black AND color, it costs me more to replace the printer's ink than to just buy a brand-new printer. This seems like a total waste of materials, though (I have 4 printers sitting in my basement with no ink). I know the ink cartridges are smaller (not as full) in a new printer, but I only go through them in about 1 to 1.5 years, and by then my $40 gets me a better printer to boot. Also, with certain printers the heads are not part of the ink cartridge (Epson used to do this and still might), so I get new heads as well.

    Is this a bad practice? Are retailers making this a reality by selling working hardware cheaper than replacement parts? Is there something more I should be considering?

    Edit: Some background. Long ago I bought an Epson printer which I used to print documents, and very rarely at that. The ink started running low, so I bought two new carts for around $60, if I recall. The printer then stopped working, so I put in the new carts, but the head was dead on the black, and it was not worth repairing. I bought a new HP printer for $49. This lasted around 1.5 years, and then the ink ran out. I went to buy new carts, and the guy at the store got me to buy a new printer instead (smaller, faster, higher DPI, etc.) that was cheaper than replacing the ink. When the ink ran out on that one, I bought a new printer again, and so on. The printer gets used maybe once a week at most, and I never print photos or anything; it normally just sits stored away, unplugged, accumulating dust.

    People say to buy a laser printer, but they are much larger, do not print color (in the price range I am looking at), and might have the exact same issues. The problem I see is that the manufacturers are making my behaviour possible by selling new printers at a loss, hoping to cash in on the ink later. How can they produce a printer so cheaply which HAS ink in it, when the refills cost more than the unit? It doesn't add up.

  • Hibernating and booting into another OS: will my filesystems be corrupted?

    - by Ryan Thompson
    Suppose I have Windows and Linux installed on the same computer. If I hibernate Windows, can I boot into Linux without corrupting the Windows filesystem when I resume Windows? What about the other way around? What if I hibernate one, boot into the other, and mount the hibernated filesystem read/write? Read-only? If this is unsafe, is there any way to detect the hibernated state of the other OS and prevent mounting its filesystem? Basically, how far can I push this before it breaks, and how dangerous is it near the edge?

    I think I know the answers to some of the above questions, but for others I have no idea, and for obvious reasons I have not tested this on my own computer. If someone has tested these, please enlighten the rest of us. I'm not necessarily looking for a specific answer to every question; I'll accept any response that answers a reasonable portion.

    EDIT: Let me clarify that when I say "hibernate," I mean the process of writing the contents of RAM to the hard disk and completely powering down the computer. In this state, powering the computer back on brings you through the BIOS and bootloader again, and you could theoretically select another operating system on a multi-boot system. Anyway, on with the original question.

    RESULTS: OK, after everyone's assurances that this would work, I tested it for myself. I set up Ubuntu to remount all NTFS filesystems and external drives read-only before hibernating. There was no need for a similar Windows setup, because Windows does not read Linux filesystems. Then I tried alternately hibernating one operating system and resuming the other, back and forth a few times. I even tried mounting the Windows filesystem from Ubuntu read/write and creating a few files; Windows didn't complain when I resumed.

    So, in conclusion, you can more or less freely hibernate in a dual-boot Windows/Linux scenario. Note that I did not test a dual Linux/Linux co-hibernation situation. If you have two or more Linux installs and you hibernate one of them, you might be able to corrupt the filesystem by mounting it from another.
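
    The "remount read-only before hibernating" step can be automated with a pm-utils hook. A minimal sketch (the file name and mount point are hypothetical; the script must be executable):

        #!/bin/sh
        # /etc/pm/sleep.d/10-remount-windows
        case "$1" in
            hibernate) mount -o remount,ro /media/windows ;;
            thaw)      mount -o remount,rw /media/windows ;;
        esac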

  • How to make a local drive available in Apache localhost

    - by Ronald Allan
    How can I make my "Drive D:" and "Drive E:" available in localhost? I'm running Apache on my BackTrack machine. My default document root is /var/www/. Every directory I create inside /var/www/ is available and works fine. Let's say I create /var/www/PENTEST/; the contents of that PENTEST directory can be accessed through localhost/PENTEST/. How can I make this work: localhost/media/DATA/? The /media/DATA/ is my Drive D:.

    I edited this:

        ServerAdmin webmaster@localhost
        DocumentRoot /media/DATA/
        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>
        <Directory /media/DATA/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        ErrorLog /var/log/apache2/error.log
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn
        CustomLog /var/log/apache2/access.log combined
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>

    Still not working; I'm getting a 404.

    I figured it out, thanks to the post by "RiggsFolly", which can be found here: http://forum.wampserver.com/read.php?2,89163. In the same configuration as above, I just had to change this:

        DocumentRoot /media/DATA/
        <Directory /media/DATA/>

    into this:

        DocumentRoot D:/media/DATA/
        <Directory D:/media/DATA/>
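
    As an aside: on a Linux Apache, the more conventional way to expose a second mount without changing DocumentRoot is an Alias. A sketch in the same 2.2 syntax as the config above (the /DATA/ URL prefix is arbitrary, and Apache needs read permission on the mount):

        Alias /DATA/ "/media/DATA/"
        <Directory "/media/DATA/">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>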

  • How to enable caching on Apache / Ubuntu Linux?

    - by Jim Mischel
    I have a large (several megabytes) XML file that's updated rather frequently (every 10 minutes or less) and gets a lot of traffic. I'd like to implement some caching to reduce bandwidth and server load. Looking at the Apache documents, I see a dizzying array of configuration options that involve various combinations of mod_expires, mod_headers, and mod_cache (and variants). I end up running in circles and the results aren't what I expect. I'm comfortable editing the various configuration files if I have some idea what I'm supposed to change, but at the moment I'm poking around in the dark, and that's never a comfortable feeling. So perhaps if I describe what I want, somebody here can take me by the hand and say, "This is what you need to do."

    Periodically this file, call it "stuff.xml", is updated and a new version is copied to the directory. The external URL would be, for example, http://example.com/stuff.xml. Understand, this part works: whenever I request the file, I get the expected result.

    But the file is big and I want to save bandwidth, so first I'd like to implement conditional GET semantics with the If-Modified-Since header. How do I do this? I've enabled mod_headers and mod_expires and added the <FilesMatching> section to my httpd.conf as recommended in countless examples I've seen online, but that didn't change the behavior when I made a conditional GET request. I always get a status 200 with the entire document. So how the heck do I implement this? That'll cut down on needless transfers.

    I'd also like to limit the amount of data transferred. Seeing as this is XML, gzipping it should save me 50% or more. My next step would be to somehow gzip the file and, if it's not too difficult, store it in memory. That'll cut down on per-access data transfer, and also reduce disk transfers. So how do I implement this type of caching? Thanks in advance.
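
    For reference, a minimal sketch of the two pieces in Apache 2.2 syntax: mod_deflate for compression and mod_expires for a cache lifetime matching the update interval. (Note the directive is spelled <FilesMatch>, and for plain files on disk Apache emits Last-Modified and answers a well-formed If-Modified-Since with 304 by itself, so a 200 on every request suggests the client isn't actually sending the header or something upstream strips it.)

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/xml application/xml
        </IfModule>
        <IfModule mod_expires.c>
            <FilesMatch "^stuff\.xml$">
                ExpiresActive On
                ExpiresDefault "access plus 10 minutes"
            </FilesMatch>
        </IfModule>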
