Search Results

Search found 8274 results on 331 pages for 'offtopic but important'.

  • How do I use a Zyxel P660 router as just a modem so that I can connect a WRT54GL router in cascade?

    - by Kenji Kina
    I have a Zyxel P660HW-T1 v2 router (which has a DSL port) and a WRT54GL router (which does not), and the exact same situation as in this thread. (Update: the connection between the two devices is the important part, since I have been able to set the Zyxel to act as a bridge by itself quite nicely. I have accessed my internet connection directly from a PC using PPPoE without any problems; the issues arise when I try to connect the WRT54GL between the Zyxel "modem" and my PCs.) I've been trying to use my Zyxel P660 as a modem only:

    1. Set the P660 to bridge mode.
    2. Changed the WRT54GL's IP address to 192.168.2.1 to avoid a conflict on the network.
    3. Configured the PPPoE settings on the WRT54GL as required.

    The thing is that when I connect the Zyxel modem/router to the WRT54GL's internet port, the light doesn't turn on. I can confirm that this port has been working OK, so I'm not really sure what's going on between the devices. I checked several settings (IP addresses, disabling DHCP on the Zyxel and the Linksys, the firewall on both) and still nothing. Also, I tried connecting the Zyxel in bridge mode directly to a computer and dialed successfully. I have even posted a question here before, thinking that what I asked there was the only thing I needed to get this done. Unfortunately it wasn't, and the guy who solved his issue didn't give enough details in his post (and is quite unlikely to add more, since he was an anonymous user). For one, I don't know how to do this part: "connected to the Zyxel through telnet and forced LAN port 1 to be at 100mb as well". I can't find the option that does this on the Zyxel router, neither through telnet nor in the web admin. Can anyone help me solve this?
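
    The missing link light suggests the two LAN ports fail to negotiate a common speed/duplex, which matches the "forced LAN port 1 to be at 100mb" remark. One way to test this theory without the Zyxel CLI (a sketch; assumes a Linux PC with its NIC on eth0) is to put a PC where the WRT54GL would go and force the link parameters:

        # see what the Zyxel's LAN port negotiates against a known-good NIC
        sudo ethtool eth0
        # force 100 Mbit/s full duplex with autonegotiation off, then watch for link
        sudo ethtool -s eth0 speed 100 duplex full autoneg off

    If the link only comes up with autonegotiation off, a small fixed-speed switch placed between the Zyxel and the WRT54GL is a cheap workaround.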

  • Windows 8.1 upgrade fails with error code 0xC1900101 - 0x20017

    - by cmorse
    I just tried to install Windows 8.1 on my laptop, but it fails to install with the message:

        Sorry we couldn't complete the update to Windows 8.1. We restored your previous version of Windows to this PC.
        0xC1900101 - 0x20017

    It looks like, on the first boot, the laptop goes to the PC restore screen (it asks what kind of keyboard I have, and then what repair options I would like to take). Thus far I have just been selecting "Continue to Windows 8."

    I'm running a Lenovo X220i tablet with 43 GB of free disk space. The update installed just fine on my desktop. The primary differences between the two machines are that the desktop has Media Center installed and isn't using TrueCrypt.

    Full WindowsUpdate.log: http://pastebin.com/hGmAW4Q1

    Most important portion of WindowsUpdate.log:

        2013-10-17 10:41:06:671 964 694 Agent *************
        2013-10-17 10:41:06:671 964 694 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2013-10-17 10:41:06:671 964 694 Agent *********
        2013-10-17 10:41:06:671 964 694 Agent * Online = No; Ignore download priority = No
        2013-10-17 10:41:06:671 964 694 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"
        2013-10-17 10:41:06:671 964 694 Agent * ServiceID = {7971F918-A847-4430-9279-4A52D1EFE18D} Third party service
        2013-10-17 10:41:06:671 964 694 Agent * Search Scope = {Machine & All Users}
        2013-10-17 10:41:06:671 964 694 Agent * Caller SID for Applicability: S-1-5-18
        2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {AD47FBDC-F7F9-4E7F-BAF4-DBA3784C7101} 2013-10-17 10:41:06:436-0600 1 202 [AU_REBOOT_COMPLETED] 102 {00000000-0000-0000-0000-000000000000} 0 0 AutomaticUpdates Success Content Install Reboot completed.
        2013-10-17 10:41:07:233 964 870 Report REPORT EVENT: {8D4E7A67-9526-4702-A897-5BE5F97497AF} 2013-10-17 10:41:06:639-0600 1 204 [AGENT_INSTALLING_FAILED_POST_REBOOT] 101 {8951E70D-4332-4F7C-B92D-D9362E384959} 1 c1900101 WSAcquisition Failure Content Install Installation Failure Post Reboot.
        2013-10-17 10:41:07:249 964 870 Report CWERReporter::HandleEvents - WER report upload completed with status 0x8
        2013-10-17 10:41:07:249 964 870 Report WER Report sent: 7.8.9200.16715 0xc1900101(0x20017) 8951E70D-4332-4F7C-B92D-D9362E384959 Install 101 Unmanaged
        2013-10-17 10:41:07:249 964 870 Report CWERReporter finishing event handling. (00000000)
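
    The WindowsUpdate.log excerpt only records that the upgrade rolled back after the reboot; the actual failure reason usually lands in the setup logs the 8.1 installer leaves behind. A place to look (these are the standard staged-upgrade log locations, worth verifying on your machine; run from an elevated prompt):

        rem driver/setup errors from the staged upgrade
        notepad C:\$WINDOWS.~BT\Sources\Panther\setuperr.log
        rem actions taken during the rollback
        notepad C:\$WINDOWS.~BT\Sources\Rollback\setupact.log

    0xC1900101 codes are generally driver related, so given the one difference you list, a boot-time filter driver such as TrueCrypt's is a plausible first suspect to rule out.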

  • Cygwin - Repo with Separate Git/Working Dir Doesn't Work

    - by Kyle Lacy
    Since I switched to OS X and Vim, I've found it easiest to manage all of my 'dotfiles' (my configuration files and miscellaneous scripts) with Git. Having already set up my dotfiles in a repo following this tutorial, I figured it would be easy enough to migrate all of my settings into my Cygwin setup on my Windows partition. The repo already being on GitHub, I simply cloned it and moved all of the files over to my home directory, making it a mirror of my OS X home directory. Unfortunately, I cannot seem to use the actual repo any further within Cygwin.

    The problem is that I cannot use my dotfiles repo with git within Cygwin. The setup is unusual in that the working directory and the git directory are in different locations. Specifically, the working directory is $HOME (/Users/kyle on OS X, /home/kyle in Cygwin), and the git directory is $HOME/.dotfiles.git. So, if I wanted to get the status of the repo, for example, I would type the following command (aliased to reduce typing, of course):

        git --work-tree=$HOME --git-dir=$HOME/.dotfiles.git status -uno

    While this works fine on OS X, it refuses to work within Cygwin. Regardless of whether or not I use my alias, or whether or not I substitute $HOME by hand, I get the following git error:

        fatal: Not a git repository: /home/Kyle/dotfiles/.git/modules/.build/git

    I don't understand where this error comes from, but /home/Kyle/dotfiles was the original location of the git repo when I initially cloned it. Additionally, it's important to note that the repo relies heavily on submodules. If specifics are necessary, the repo in question can be found on GitHub. The commands I ran to set up the repo in Cygwin can also be found in its README file.
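
    The path in the error looks like a stale absolute "gitdir:" pointer: each submodule working tree contains a .git file recording where its real git dir lives, and those paths were written when the repo sat at /home/Kyle/dotfiles. A sketch of how to check and repair them (the sed target assumes the module dirs really live under $HOME/.dotfiles.git; verify with the first command before rewriting anything):

        # see where the submodules think their git dirs are
        find "$HOME" -maxdepth 3 -name .git -type f -exec cat {} +
        # rewrite pointers still referencing the original clone location
        find "$HOME" -maxdepth 3 -name .git -type f \
            -exec sed -i 's|/home/Kyle/dotfiles/.git|'"$HOME"'/.dotfiles.git|' {} +

    Recent Git versions can also re-absorb submodule git dirs with "git submodule absorbgitdirs", but inspecting the gitdir: lines by hand shows exactly what broke.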

  • SonicWall routing between multiple subnets on multiple interfaces

    - by Rain
    As shown by the network diagram below, I have two completely separate networks. One is managed by a SonicWall NSA 220, the other by some other router (the brand is not important). My goal is to allow devices within the 192.168.2.0/24 network to access devices in the 192.168.3.0/24 network. Allowing the reverse (192.168.3.0/24 to 192.168.2.0/24) is not required.

    So far, I have done the following: I connected the X3 interface on the SonicWall to the 192.168.3.0/24 network switch (shown as the dashed red line in the diagram), gave it a static IP address of 192.168.3.254, and set its Zone to LAN (the same Zone as the X0 interface). Judging by various articles and KBs I've read, this should be all that's necessary, although it does not work. I can ping 192.168.3.254 from any device in the 192.168.2.0/24 network, although I cannot ping or connect to any device within the 192.168.3.0/24 network. Any help would be greatly appreciated!

    Network Diagram: (diagram image not reproduced)

    (I asked a similar, yet more complicated, question earlier; although, I realized that I cannot solve that one without first solving this, and solving this may actually solve my original question.)
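
    One plausible explanation (not confirmed by the post) is asymmetric return routing: hosts in 192.168.3.0/24 use the other router as their default gateway, so their replies to 192.168.2.x hosts never return through the SonicWall. A quick test from a Windows host in 192.168.3.0/24 is to point traffic for the other subnet at the SonicWall's X3 address:

        route add 192.168.2.0 mask 255.255.255.0 192.168.3.254

    If pings from 192.168.2.0/24 start succeeding after that, the fix belongs on the 192.168.3.0/24 router (a matching static route) or on the SonicWall (NAT on X3), not in the zone settings.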

  • Install Ubuntu on a new netbook?

    - by torbengb
    I'm planning to buy a netbook, and I am considering installing Ubuntu on it. It would mainly be used by my wife at home for web browsing and email (lightweight is important, hence a netbook!), and we'd occasionally bring it along on travels, mostly as a digital photo dropzone. I want to use Ubuntu instead of Windows because I'm sick of all the Windows hassle and updates. I'm not concerned about Windows applications; I'd switch to native alternatives as far as possible, because really only Firefox and something like Picasa are needed.

    I'm considering an ASUS Eee PC 1001P, an MSI Wind U100, or a Point of View Mobii II (click the links for specs; never mind that the rest is German). I'm not in the USA. Whatever I buy will most likely have Windows 7 on it but no optical drive. I would also buy a largish USB stick, but not an external optical drive.

    Should I (and can I) install Ubuntu alongside Windows 7, or remove Windows? If I remove Windows first, how would I be able to reinstall it if I change my mind? Can I make a backup? Is a recovery CD usually provided? Should I choose regular Ubuntu, or the Ubuntu Netbook Remix (UNR)? Does UNR allow me to install additional applications just as easily?

    Note: I'm asking about Ubuntu vs. Windows; let's skip the hardware discussion for now. Edit: I'm assuming that Windows is already installed; if it isn't, then I would only install Ubuntu and this question is irrelevant.
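
    On the "can I make a backup?" question: netbooks usually ship with a hidden recovery partition rather than a recovery CD, so a raw image of the whole disk taken before repartitioning preserves the option of going back to Windows. A minimal sketch, run from an Ubuntu live USB (assumes the internal disk is /dev/sda and an external drive is mounted at /mnt/backup; check with "sudo fdisk -l" first):

        sudo dd if=/dev/sda of=/mnt/backup/netbook-factory.img bs=4M

    Restoring is the same command with if= and of= swapped; the image is as large as the whole disk, so the external drive needs at least that much free space.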

  • Auto load balancing a two-node Hyper-V 2008 R2 Enterprise cluster?

    - by Kristofer O
    My setup is a two-node cluster with 72 GB of RAM in each node and a ~10 TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on each server, live-migrating them to the other server if I need to run updates or whatever. Either one of the servers is capable of running all the VMs if needed, but the CPU load gets pretty high. Here are my issues:

    I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up to move one at a time? If I multi-select, I don't get the option to live migrate them one at a time, and if I'm in the process of migrating one, it gives me an error that it's currently migrating a VM. Is there a button I missed that tells a node to migrate all its VMs elsewhere?

    Another question: does anyone know the best way to load balance VMs based on CPU and/or network utilization? I have some VMs that don't do much, and some that thrash the CPU or network, and I'd like to balance the load across both servers if at all possible. Is there any way to automate it?

    Last question: if I overcommit my cluster, is there a way to tell the cluster which VMs I want running, and to save-state other VMs based on available system resources? Say one node blue-screens and the other node begins starting up its VMs: I want the unimportant ones to shut down or save state so the important ones can stay running or come back online.

    Thanks just for reading all that. Any help would be great.
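
    On the queuing point: the Failover Clustering PowerShell module that ships with 2008 R2 can serialize what the GUI won't. A rough sketch (node names are hypothetical; Move-ClusterVirtualMachineRole performs a live migration, and running the moves in a loop naturally queues them one at a time):

        Import-Module FailoverClusters
        # drain every VM group currently owned by NODE1 over to NODE2, one by one
        Get-ClusterGroup |
            Where-Object { $_.OwnerNode.Name -eq "NODE1" } |
            ForEach-Object { Move-ClusterVirtualMachineRole -Name $_.Name -Node "NODE2" }

    This loop would also try to move any non-VM cluster groups NODE1 owns, so a name filter may be needed. For automatic CPU/network-based placement, the usual answer in this product generation is System Center VMM with PRO tips, which is a separate product.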

  • File recovery from Mac results in random files and extensions – how do I get my data back?

    - by Robsta
    This Mac hard drive was dying. Someone I knew did a file recovery and got as many files as he could. The program (I'm not sure how it was done, or what program it was) dished out a bunch of files with names such as DIR56.TOC, DIR55.CUR, DIR54.GPZ, DIR53.GZI, and so forth, all the way down to DIR0.LZH. Some of the file extensions I do understand, like .JPEG or .MOV, but most of them are ones I've never heard of. I've googled some of them, like .TOC, which stands for "table of contents", but I don't understand how to transfer that data back to the Mac.

    Currently, the files are on a Windows machine. They are being transferred onto an external hard drive that the Mac can read, and it can see all the files. However, the few that I tested (like .TOC and .CUR) cannot be opened. Does anyone have any idea as to what I should do? There are some important assignments on there I need to get.

    Edit: The data transfer was most likely done by Easy Recovery 6 Professional (95% sure, no guarantee).
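
    Those names look like labels invented by the recovery tool rather than real file types, so the contents may not match the extensions at all. The file utility that ships with OS X (and most Unix systems) identifies files by their contents instead of their extension, which makes for a cheap first test; from Terminal, in the folder holding the recovered files:

        file DIR56.TOC DIR55.CUR DIR54.GPZ
        # or classify everything at once
        file *

    Anything reported as "JPEG image data", "QuickTime movie", and so on can be renamed with the matching extension and should then open normally.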

  • Why does bash sometimes think my $HOME isn't the correct directory?

    - by Adam Yanalunas
    Like the title says, it seems that bash sometimes misidentifies my $HOME. This cropped up after a seemingly unique series of events that I will now replay in broad strokes:

    1. Running OS X 10.6 with a normal, local account
    2. Work binds my account to Active Directory
    3. Much time passes with no issues
    4. Set up rvm to manage Ruby installs (this becomes important later)
    5. Upgraded to OS X 10.7 a few days ago
    6. After a successful install, attempted to log in, and was presented with a "Must reset password" dialog that never allowed a password to be reset; it would simply shake the box after the new password was entered
    7. Much googling was done. Much more googling was done. Swearing was had.
    8. Logged in as root, created a new account, set it as admin, deleted /Users/[new account], renamed /Users/[old account] to /Users/[new account]
    9. Logged out of root, logged into the new account with no issues

    After OS X asked for my account password a few times to update the Keychain and other system-level stuff, it was back to business as usual. Then I opened Terminal, cd'd to a project folder, tried "rails server", and was presented with:

        /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find rails (>= 0) amongst [] (Gem::LoadError)
            from /usr/local/lib/ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
            from /usr/local/lib/ruby/1.9.1/rubygems.rb:1210:in `gem'
            from /usr/local/bin/rails:18:in `<main>'

    I ran through a few exercises and decided to rm -rf ~/.rvm and reinstall. Running the rvm installer with --trace shows it dies on this line:

        mkdir: /Users/[old account]: Permission denied

    Scrolling back through the --trace log I see many more mentions of /Users/[old account]. When I inspect the install script, the offending line is looking at "${HOME}/.rvm" as it tries to run the mkdir. To my confusion, I also see mentions of /Users/[new account] in the log. I've tried exporting a new HOME in my .bash_profile, with no luck. Can anyone guess why /Users/[old account] would still be kicking around?
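
    Because the account was renamed by shuffling home folders as root rather than through Accounts preferences, Directory Services may still record the old home path for the account, and anything that asks it (rather than the environment) for the home directory gets the stale answer. A way to check and fix it ("newaccount"/"oldaccount" are placeholders for the real short names):

        # show what Directory Services thinks the home directory is
        dscl . -read /Users/newaccount NFSHomeDirectory
        # if it still says /Users/oldaccount, rewrite it
        sudo dscl . -change /Users/newaccount NFSHomeDirectory /Users/oldaccount /Users/newaccount

    Logging out and back in afterwards lets every new process pick up the corrected value.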

  • Error when trying to deploy Windows XP SP3 with WDS

    - by Nic Young
    I have created a WDS server running Windows Server 2008 R2, and I have built custom images of Windows 7 using WAIK and MDT 2010 that are installed on the server. I used this guide to help me through the process. The Windows 7 images that I have created capture and deploy properly.

    I am now attempting to follow the same steps from the guide to capture and deploy a Windows XP SP3 image. I am able to sysprep and capture the reference machine with no errors, and I can import the custom .wim that I just captured into MDT 2010 with no issues either. However, when I try to deploy this image to a test virtual machine, I get the following error:

        Deployment Error: (error dialog screenshot not reproduced)

    I have made sure that the .iso I am importing the source files from (to create the sysprep and capture sequence) is indeed a Windows XP SP3 ISO. When I select a PE boot environment before I deploy, I pick the x86 PE boot image that I created originally for my Windows 7 deployments. Could this be the issue? If so, how do I make a boot image specific to Windows XP SP3 deployments? I have googled around for this error, and some places point to the deployment image not being able to find setup.exe and other important system files for installing the operating system. If so, how do I add these to the image? Any ideas?

  • Server configuration advice for a new site that could get lots of traffic within 6 months

    - by alchemical
    We're setting up a new web 2.0-type site with elements of e-commerce, and the budget is kind of tight. Due to the nature of the site and its promotions, we expect traffic could ramp up fairly quickly. We're looking for advice on a good configuration to start with; we plan to co-locate with CalPop in downtown LA. We've looked at Dell and ABMX.com, and got a quote from CalPop (they build their own servers, as they also do managed hosting). The price range has been anywhere from about $1200 to $3300 per server.

    We're thinking of starting with a web server and a database server, both with mirrored drives. It would be nice to stay under about $2k per server if possible; the minimum configuration for each would probably be a quad-core with 8 GB of RAM. We're planning to run Windows Server 2008 R2 (Web Edition?) and SQL Server 2008.

    We're looking for advice on the best server configurations and/or brands that fit the budget, yet will allow us to scale smoothly as traffic increases. Reliability is also pretty important. I'm also wondering whether a switch or router is necessary or useful to connect the two servers.

  • Updated to Lion: cannot boot into Boot Camp partitions, but can use them in Parallels

    - by Jon Jester
    Under Snow Leopard I had Boot Camp partitions for both XP and Windows 7, each on a separate partitioned hard drive. Both were accessible either through Parallels 7 or by booting directly via Boot Camp. After upgrading to Lion, both are still accessible through Parallels, but I have not been able to boot directly into either one. Unfortunately, it is important to me to be able to boot at least into the Windows 7 partition.

    I have tried virtually everything I can find online. I've seen similar issues, but nothing where the partitions were usable virtually but not directly. Nothing works: reFit, correcting the master boot records in Windows from the command line, wiping the Windows 7 partition clean and reinstalling Windows 7 several times (first with the Boot Camp 4 drivers, then with the Boot Camp 3 drivers), and resizing the Boot Camp partitions. When booting into a Boot Camp partition directly, it goes all the way to showing the desktop before it fails with a Windows error screen. I can see all the disks and their appropriate partitions in both the OS X Disk Utility and the Windows installer utility.
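
    Booting Windows from a drive like this depends on the disk's hybrid MBR/GPT mapping, which an OS upgrade can disturb, so comparing the two partition tables is a reasonable first check (disk identifiers are examples; confirm with the first command):

        diskutil list
        # GPT view of the Windows 7 disk
        sudo gpt -r show /dev/disk1
        # MBR view of the same disk
        sudo fdisk /dev/disk1

    If the MBR side no longer lists the Windows partition, or lists it with the wrong partition type, rebuilding the hybrid MBR (e.g. with the third-party gdisk tool) is the usual repair.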

  • cPanel DNS Only / rDNS questions

    - by Clear.Cache
    I started getting IPs directly from ARIN, instead of from the data center where I'm colocated. Now I have to apply rDNS myself for my clients upon request, instead of having the NOC at the DC do it. That much is obvious, since I am in full control of the IP delegation and therefore have nameserver authority. The question is: how do I create PTR / rDNS records for my clients? My current server uses cPanel / WHM with ns1/ns2.mycompany.com, and I also listed those as the DNS nameservers in the ARIN IP block's whois record.

    Should I install cPanel DNS Only on an entirely separate server and use that method instead? http://layer1.cpanel.net/ If so, how can I seamlessly transition the DNS records over to that new DNS server, retaining my ns1/ns2.mycompany.com names and their ns1 and ns2 IP addresses? Even more important: I have to change the ns1/ns2 IPs to the new ones I receive from ARIN. How can this be done while avoiding downtime during the DNS transition?

    On a side note, would it be easier to just install cPanel DNS Only on a dedicated server with dns1.mycompany.com and dns2.mycompany.com on their own dedicated ns1/ns2 IPs from ARIN, and utilize that DNS server for customers who request rDNS? Would this be a more viable solution than using our current ns1/ns2.mycompany.com nameservers? Is cPanel DNS Only standalone software that does not require cPanel/WHM on another server? Is it possible to set up redundant DNS servers using this software alone, ns1 on one server and ns2 on another? Thanks.
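
    Mechanically, once ARIN delegates the reverse (in-addr.arpa) zone to ns1/ns2.mycompany.com, PTR records are just an ordinary zone on those servers. A minimal BIND-style sketch, using a documentation prefix (203.0.113.0/24) and made-up client hostnames in place of the real allocation:

        $ORIGIN 113.0.203.in-addr.arpa.
        $TTL 86400
        @    IN  SOA  ns1.mycompany.com. hostmaster.mycompany.com. (
                      2024010101 3600 900 1209600 86400 )
             IN  NS   ns1.mycompany.com.
             IN  NS   ns2.mycompany.com.
        10   IN  PTR  mail.client-one.com.
        25   IN  PTR  www.client-two.com.

    The number on the left of each PTR line is the last octet of the client's IP. In WHM this amounts to adding a zone named 113.0.203.in-addr.arpa and putting PTR records in it; cPanel DNS Only, being a standalone DNS-serving build of cPanel, can serve the same zone.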

  • How to avoid an intrusion detection/anti-spoofing issue on a SonicWall TZ series firewall

    - by Ian
    We have a SonicWall TZ series firewall with two internet service providers connected. One of the providers has a wireless service which works a bit like an Ethernet switch, in that we have an IP with a /24 subnet and the gateway is .1. All other clients on the same subnet (say 195.222.99.0) have the same .1 gateway; this is important, read on. Some of our clients are also on the same subnet.

    Our config:

        X0: LAN
        X1: 89.90.91.92
        X2: 195.222.99.252/24 (GW 195.222.99.1)

    X1 and X2 are not connected, other than both being connected to the public Internet.

    Client config:

        X1: 195.222.99.123/24 (GW 195.222.99.1)

    What fails and what works:

        Traffic 195.222.99.123 (client) <- 89.90.91.92 (X1): spoof alert
        Traffic 195.222.99.123 (client) <- 195.222.99.252 (X1): OK, no spoof alert

    I have several clients with IPs in the 195.222.99.0 range, and all provoke identical alerts. This is the alert I see on the firewall:

        Alert  Intrusion Prevention  IP spoof dropped  195.222.99.252, 21475, X1  89.90.91.92, 80, X1  MAC address: 00:12:ef:41:75:88

    Anti-spoofing is switched off on my firewall (Network > MAC-IP Anti-Spoofing, configured per interface) for all ports. I can provoke the alerts by telnetting to a port on X1 from the clients. You can't argue with the logic: this is suspicious traffic, since X1 is receiving traffic with a source IP that corresponds to X2's subnet. Does anyone know how I can tell the firewall that packets with a source subnet of 195.222.99.0 can legitimately appear on X1? I know what's going wrong; I've seen the same thing before, but with higher-end firewalls you can avoid this with a few extra rules. I can't see how to do that here.

    And before you ask why we're using this service provider: they give us 3 ms (yep, 3 ms, that's not an error) delay between routers.

  • Xbox 360 formatting for MP4 H.264?

    - by Kayle
    For the life of me, I can't get the Xbox 360 to recognize anything other than a WMV. It's fully updated with Xbox Live, and I even added all the video apps (Zune, Netflix, etc.) just to see if it would force some sort of codec update. I'm using HandBrake at the moment to try to convert an MKV with subtitles. HandBrake burns the subtitles into the video for me, which is important. I can't figure out what is missing, though, as the Xbox refuses to acknowledge my MP4s.

    Here's the VLC codec info output: (screenshot not reproduced)

    As far as I can tell, this is EXACTLY what the Xbox supports. I don't want to convert every video twice by throwing it through Windows Movie Maker just to get it to WMV. None of the converters mentioned above output to WMV, and I don't like WMV since it's not as universal as MP4. I've tried changing the container extension manually to M4V, MOV, and AVI to see if I can trick the Xbox... no beans. The video displays perfectly in WMP, of course. If I convert it to WMV, suddenly it works fine. But I don't find this acceptable, as I know the Xbox is supposed to support other file types.

    If anyone knows why my Xbox won't play anything but WMVs, I would be very appreciative to know why! It's driving me nuts. The only other information I can think to include is that it's about 3 years old and of the cheapest variety (Xbox Arcade). I've added a hard drive and Xbox Live, and have thoroughly updated it. Maybe there's a specific video update I'm unaware of?
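
    For what it's worth, the 360's MP4 support is narrower than a codec panel suggests: H.264 up to High profile at level 4.1, with AAC audio limited to two channels, and encodes usually fail by tripping one of those limits (5.1 AAC is a common culprit). A hedged HandBrake CLI sketch along those lines; exact flag names vary between HandBrake versions, so check HandBrakeCLI --help against your build:

        HandBrakeCLI -i input.mkv -o output.mp4 -f mp4 \
            -e x264 -q 20 -x level=4.1 \
            -E faac -B 160 -6 stereo \
            -s 1 --subtitle-burned

    The "-s 1 --subtitle-burned" pair burns the first subtitle track into the picture, matching the burned-in subtitles requirement.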

  • Online Storage and security concerns

    - by Megge
    I plan to set up a small file server. I already own a small server at HostEurope (VirtualServer L, 250 GB space), but they don't offer enough space (there is the HostEurope Cloud, but paying for bandwidth isn't an option here, and video streaming should be possible).

    Requirements summarized: storage: 2 TB; users: ~15; file sizes: < 100 GB; should be easily reachable (mountable as a network drive, or at least with solid "communication" software).

    My first question: where can I get halfway affordable online storage, and how should I connect it to my server? Getting an additional server is a bit overkill, as I know no hoster which allows 2 TB on a small 2 GHz dual-core 2 GB RAM machine (that would be enough by far; I just need a lot of space), and connecting it via NFS or FTP over the Internet seems a bit strange and cripples performance. Do you have any advice on where I could get that storage service? (I sent HostEurope a custom request today, but they haven't answered yet. If they can provide the space, this part becomes irrelevant; the second question is the more important one anyway, so just recommend services based on experience, you don't have to crawl through hosting services for hours.) Livedrive, for example, offers 5 TB for 17 EUR/month; I'd be happy with 2 TB for 20 EUR. The caveat: it doesn't allow multiple users, which leads me to my second question.

    Where are the security problems? Which protocol is sufficient, secure and fast? I want private and "public" folders, the usual "every user has their own space plus a shared public space" setup. I'd tend toward (S)FTP. The problem with FTP is that most of those hosting services don't even allow FTP with multiple users, and single-user access leads me into "hacking" together a solution (you could map the basic folder structure on the main server and just mount every subfolder from the storage, though things get difficult with a public folder with 644 permissions). Is using something like PKI or 802.1X overkill for private use?
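
    For the multi-user half of the question, plain OpenSSH can already provide per-user SFTP spaces on whatever storage gets mounted, with no extra software. A minimal sshd_config sketch (the group name and paths are invented for the example; each chroot target must be owned by root to satisfy sshd):

        Match Group sftpusers
            ChrootDirectory /srv/storage/%u
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no

    A shared "public" directory can then be bind-mounted into each user's chroot (symlinks don't cross a chroot boundary), with the setgid bit keeping group ownership consistent. Since SFTP runs inside SSH, transport encryption comes for free; PKI or 802.1X would indeed be overkill for this use case.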

  • How do I fix libdispatch problem crashing Mac OS X apps?

    - by david-ocallaghan
    In the last day I have started having a lot of brokenness on my Mac (a MacBook Air running Mac OS X 10.6.2 with all software updates). Most noticeably, iTunes no longer syncs with my iPhone. It fails with a crash dialog reporting "AppleMobileDeviceHelper quit unexpectedly" and an error dialog "iTunes was unable to load dataclass information from SyncServices. Reconnect or try again later." I've attempted the fix at support.apple.com/kb/HT1747, but it failed.

    I've also been having problems (at first seemingly unrelated) with the horrible Cisco VPN client, which started giving me this error:

        Error 51: Unable to communicate with the VPN subsystem

    I followed the steps at www.anders.com/cms/192/CiscoVPN/Error.51:.Unable.to.communicate.with.the.VPN.subsystem which don't seem to work for me, although I can connect if I use the command line with sudo:

        sudo vpnclient connect MyProfile

    I had a look in the Console app at the diagnostic messages and I noticed a pattern: a number of apps were reporting "BUG IN CLIENT OF LIBDISPATCH". The affected programs are:

        AppleMobileBackup
        AppleMobileDeviceHelper
        Safari Webpage Preview Fetcher
        cvpnd (the Cisco VPN daemon)

    Of these, only the last is non-Apple software! The common text in the diagnostic messages is:

        Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
        Exception Codes: 0x0000000000000001, 0x0000000000000000
        Crashed Thread:  1  Dispatch queue: com.apple.libdispatch-manager

        Application Specific Information:
        BUG IN CLIENT OF LIBDISPATCH: Do not close random Unix descriptors

    I'm beginning to wonder if there's a permissions problem, or corruption of an important library... I should note that I've rebooted several times and verified the disk permissions and the disk. Any help would be great!

  • Problem booting crusty old Windows XP

    - by Carson Myers
    I have an Acer Aspire laptop running Windows XP Home. I believe I have some virus on it; I'm not sure, since I mostly just run Linux in a VM on it, so I wasn't too worried. I'm also not sure whether that virus caused this problem.

    The laptop wasn't recognizing my USB hard drive for some reason, so I decided to restart it. When it started up, it got past the memory test and the boot screen (though it paused there on a blank screen for a while), flashed the desktop once (like it does just before the login screen), and then crashed. I got a quick BSOD, and then it restarted. Then it tried to boot again, and so on, in an infinite loop of failure.

    Before trying safe mode, I disabled automatic restart on system crash so I could read the blue screen. There wasn't anything important on it; it said:

        *** STOP: 0x00000000 (0xC0000000 0x....)
        beginning physical memory dump
        physical memory dump complete

    That's not verbatim (obviously), but it didn't help me. So I booted in safe mode, and it stopped on the driver gagp30kx.sys and then restarted (infinite loop of failure again). I burned a recovery CD and tried that: it loaded, I went into repair mode, ran chkdsk, and then disabled the AGP driver. Same thing on booting into safe mode, except it stopped at mup.sys instead. I enabled the AGP driver again and ran chkdsk again from the CD. It said it found problems but didn't say it fixed them, so I ran it a second time, and it said "performing additional checking or recovery" lots of times (I can't tell how many; they scrolled off the top of the screen).

    I tried booting again, with no luck. Every time I run chkdsk after trying to boot, it says it found and fixed more errors. I think it might be whatever driver loads after the AGP driver, but I don't know what that is or how to find out. Can anyone help me fix this?
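
    A note on the "driver after the AGP driver" guess: safe mode prints each driver as it loads, so the name frozen on screen is often the last driver that loaded successfully rather than the guilty one. From the Recovery Console on the XP CD, drivers and services can be listed and toggled one at a time to narrow it down (the driver name below is a placeholder; listsvc shows the real ones):

        chkdsk /r
        listsvc
        disable <drivername>
        enable <drivername> SERVICE_SYSTEM_START

    chkdsk /r does a full surface scan, which is worth one run given that plain chkdsk keeps "finding" new errors; fresh errors on every pass can also point to a failing disk rather than a virus.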

  • Authentication required by wireless network.

    - by Roman
    I would like to use a wireless network from Ubuntu. In the network drop-down menu I select a network (a university network; I have an account there). Then I get a window with the following fields:

        Wireless Security: [WPA & WPA2 Enterprise]
        Authentication: [Tunneled TLS]
        Anonymous Identity: []
        CA Certificate: [(None)]
        Inner Authentication: [some letters]
        User Name: []
        Password: []

    I put in my user name and password, do not change the default values, and leave "Anonymous Identity" blank. As a result I get "Authentication required by wireless network". How can I solve this problem?

    I think it is important to note that our system administrator tried to find certain files (which probably need to be used as the "CA Certificate"). He said that he does not know where this file is located on Ubuntu (he supports only Windows). So that is probably the direction I need to go: I need to find this file. But maybe I am wrong; maybe something else needs to be done. Could you please help me with that?
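
    For comparison, the same settings expressed as a wpa_supplicant network block often make the moving parts clearer; the SSID and the PAP inner method below are placeholders, since the post doesn't say which inner authentication the university uses:

        network={
            ssid="UniversitySSID"
            key_mgmt=WPA-EAP
            eap=TTLS
            phase2="auth=PAP"
            identity="username"
            password="password"
            # Ubuntu ships a system CA bundle that often works when no
            # university-specific certificate file has been handed out
            ca_cert="/etc/ssl/certs/ca-certificates.crt"
        }

    If the university's RADIUS certificate is signed by a public CA, pointing the "CA Certificate" field at /etc/ssl/certs/ca-certificates.crt may be all that is missing.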

  • Our application responds differently on the client's server

    - by WtFudgE
    I have an application running on our local server, and it works perfectly. However, when I deploy the application to our client's server, there are problems. If I copy all the sources from their server to another local machine, it works fine again... It is an ASP.NET application running against Windows Server 2008.

    The client's server specs are:

        Intel Xeon 2.13 GHz
        6 GB RAM
        300 GB HDD
        Windows Server 2008 R2

    Although I think it's not that important, I will describe the problems I'm seeing to give you a better idea. They are related to updating and deleting fields in the SQL Server database: whenever more than one user is running the application and they update or delete (not even the same record!), fields seem to be updated or deleted wrongly. But as I said before, if we copy the sources to another server this does not happen, so I assume it is configuration related. Do you have any idea what the issue could be? Thanks a lot.

    Edit: my client told me he disabled the SQL-related accounts. Could this be related?

  • FreeBSD Traffic Shaping

    - by alexus
    Hi, I'm trying to do traffic shaping with FreeBSD. Here are my rules:

        su-3.2# ipfw show | grep pipe
        08380 1514852 125523804 pipe 1 tcp from any to any dst-port 80
        su-3.2# ipfw pipe 1 show
        00001:   2.000 Mbit/s    0 ms   50 sl. 1 queues (1 buckets) droptail
            mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
        BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
          0 tcp     64.237.55.83/60598     72.21.81.133/80   6520267 1204533020 0 0 1216
        su-3.2#

    First of all, why, when I run ipfw pipe 1 show, do I get the same source and destination IP? It never seems to change, yet the total packets/bytes keep increasing. And most importantly, after doing all that, I'm looking at my MRTG stats and I see I'm well over the 2 Mbit/s limit. What am I doing wrong?

    Here is the config file:

        flush
        pipe flush
        pipe 1 config bw 2Mbit/s
        add 100 allow ip from any to any via lo0
        add 200 deny ip from any to 127.0.0.0/8
        add 300 deny ip from 127.0.0.0/8 to any
        add 8380 pipe 1 tcp from any to any src-port www uid daemon
        add 8380 pipe 1 tcp from any to any dst-port www uid daemon
        add 65000 pass all from any to any
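
    Two observations, offered as a sketch rather than a confirmed diagnosis. First, with an all-zero mask the pipe keeps a single queue, so "ipfw pipe 1 show" prints one flow bucket, and the source/destination shown is just whatever flow that bucket represents, which is why it never changes while the counters grow. Second, both pipe rules carry "uid daemon", so only sockets owned by the daemon user are shaped at all; everything else on the box bypasses the pipe, and MRTG counts it anyway. To give each host its own limit instead of one shared queue, dummynet masks can split the pipe:

        # one 2 Mbit/s dynamic queue per destination address (note: this changes
        # the meaning from "2 Mbit/s total" to "2 Mbit/s per host")
        ipfw pipe 1 config bw 2Mbit/s mask dst-ip 0xffffffff

    Checking per-rule counters with "ipfw -a list" should confirm whether the over-limit traffic is matching the pipe rule at all.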

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up, and when that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB), but I'm a bit unsure how best to make the transition.

    For a start, I don't have an external monitor; if I need one, I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. The only solution I can think of would be to buy two FW800 2.5" cases, boot from the 64 GB SSD, and clone it to the 128 GB SSD. That seems a bit excessive, but it might be my best option. I do have more than one SATA port on the server, but they're all currently being used by storage drives. Those don't mount by default, so I could unplug them, have just the two SSDs attached, and do the whole thing via SSH.

    My main concern with either approach is how best to make sure everything comes across: I want a carbon copy of the first drive on the second. This is especially important because I have a ZFS volume (my storage), and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the new SSD, but that seems like extra trouble I don't need. Any advice on how best to achieve my goals would be appreciated. Thanks!

    The server is running Ubuntu Server 12.04; the iMac has 10.8.1.
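
    The ZFS pool should be the easy part: it lives on the storage drives, not the boot SSD, so it survives the swap untouched and only needs a clean export/import around the move. A sketch of the whole operation from an SSH session (device names and the pool name "tank" are placeholders; confirm with lsblk and "zpool list" before copying anything):

        # detach the pool cleanly before shutting down
        sudo zpool export tank

        # with both SSDs attached and the storage drives unplugged,
        # raw-copy the old 64 GB SSD onto the new, larger one
        sudo dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync

        # after booting from the new SSD, reattach the pool
        sudo zpool import tank

    The dd copy leaves the extra space unpartitioned; growing the root partition afterwards with parted/resize2fs (or leaving the space for a second partition) finishes the job.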

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and qmail. Our primary domain is hosted through Media Temple. With Plesk and qmail hosted on a single dedicated virtual server, qmail reads the primary server IP and domain and reports them when sending emails from the system. Our pages are written in PHP, so we are using the mail() function.

    While our email goes out to everybody, several enterprise email domains reject our email because it shows an originating IP (our primary server IP and domain) different from the domain we list in the 'from' address. This is not modifiable. Every domain we own does, of course, have its own IP underneath our primary server IP.

    I have seen several places online that provide a patch, specifically one that allows domain binding:

        "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" (Qmail Link)

    First off, I do not have netqmail installed, so I'll need to find another source. But I am also completely unfamiliar with applying patches to qmail. Will I lose email service while I patch? Is it a simple apply-and-use process? Will my existing email accounts and data be intact after the patch? I am very, very new to Unix/Linux, so this does make me a bit nervous, but I am the only person who can make the change, and it is one our company HAS to have. Any ideas?

  • ISPConfig dovecot status=bounced (user unknown)

    - by Ivan Dokov
    Before you point me to Google or a Server Fault search: I have searched a lot and applied some "fixes", and they didn't help. I have ISPConfig 3 installed on an Ubuntu 12.04 LTS server. The server hosts several domains; let's call the main domain example.com, and there is also demo.com. I have several email accounts on each domain. The status of mail sent between the accounts:

        [email protected] -> [email protected] (success)
        [email protected] -> [email protected] (failure)

    The failure comes with this error:

        postfix/pipe[31311]: 8E72ED058D: to=<[email protected]>, relay=dovecot, delay=0.1, delays=0.03/0/0/0.07, dsn=5.1.1, status=bounced (user unknown)

    I saw the fix about removing example.com from mydestination (mydestination = localhost, localhost.localdomain in /etc/postfix/main.cf). It didn't help.

    An important detail is that the example.com MX records are Google's: we are using Google Apps for this domain in order to use Gmail's servers. I think the problem is that the mail server is not looking up the domain's MX records. It knows the domain is set up on this server, so it searches for the destination mailbox on the local server instead of on Google's servers.

    I've been really lost for several days! Thanks for your help in advance!
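
    The "user unknown" bounce through relay=dovecot means Postfix still treats example.com as locally deliverable, so the MX records never get consulted. In ISPConfig-style setups the local domain list usually lives in a lookup table rather than in mydestination, so one way to check (standard Postfix commands; the table path below is the common ISPConfig 3 location but is an assumption here):

        postconf mydestination virtual_mailbox_domains virtual_mailbox_maps
        # if virtual_mailbox_domains points at a map, test whether example.com is in it
        postmap -q example.com mysql:/etc/postfix/mysql-virtual_domains.cf

    If the query returns a hit, the domain has to be removed from local mail handling (in ISPConfig, the domain's mailserver setting) so Postfix routes it by MX instead.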

  • How to Move SMS from iPhone to Mac?

    - by seda16
    SMS is a main form of communicating with others, and you have probably saved many messages on your iPhone. There are many reasons to back up your iPhone SMS to a Mac. For example, your family or friends may have sent you something important that you want to keep on your iPhone in case you delete it by accident, or you may just need to back up your SMS for other uses. So today let's talk about how to move SMS from an iPhone to a Mac. It is very easy if you use an app to help you; I always use the iPhone to Mac transfer from Amacsoft to copy SMS from iPhone to Mac. Let me tell you how to use this app.

    Step 1: Connect the iPhone to the Mac. First of all, install and launch the iPhone to Mac transfer, then connect your iPhone to the Mac. The transfer will recognize your iPhone automatically, and all the information on your iPhone will be shown in the interface.

    Step 2: Select SMS and start the export. You will see many choices on the left; find "SMS" and click it, and all the SMS on your iPhone will be listed on the right. Select and check those you want to move, then click "Export" at the top to start the transfer. Wait just a few minutes and the transfer will be done.

    Great! The transfer is finished; it's really easy, right? I believe it won't be a problem for you to transfer your SMS from iPhone to Mac. By the way, you can also use this Amacsoft iPhone to Mac transfer to move other kinds of files, like photos, songs, etc. If you're a Windows user, you can use the iPhone to PC transfer on the same site to move SMS from your iPhone to your PC with the same steps. Good luck!

  • What's the right way to create an Ubuntu user whose home directory is /var/www/SITE?

    - by Leonnears
    First off, I need to state that I'm a complete novice when it comes to server administration on Ubuntu, and I'm doing what I can. I have been trying to do this for hours with no luck. Basically, I want to create an Ubuntu user whose home directory is /var/www/SITE, preferably chroot'd to it. The chroot part is not so important right now; first I'd prefer to get anything working at all. The user should be able to upload files here, and the web server (the www-data user?) should be able to pick them up with no problem.

    I was able to create the user and give it the home directory /var/www/SITE (the user is "anders"). I gave him a password, and "anders" can connect over FTP just fine and upload files. But here's where things don't work: while my user can upload files to that /var/www/SITE directory, when I access the web page in my browser I get a Forbidden error. Note that anders is also a member of the www-data group. I can fix this by running

        sudo chmod g+s /var/www/SITE/* anders -R

    but this is of course not ideal. Ideally the files should "work" as soon as I upload them. What's the right way to fix this? If it matters (I don't think so), I'm editing my files in Coda 2, and anders is the user for it.
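
    A common way to get uploads readable by the web server automatically is group ownership plus the setgid bit on the directories; a sketch, not the only correct layout (it assumes the FTP daemon's umask leaves files group-readable, e.g. local_umask=022 in vsftpd.conf):

        # anders owns the files, www-data is the group
        sudo chown -R anders:www-data /var/www/SITE
        # setgid on directories: new files and subdirectories inherit the www-data group
        sudo find /var/www/SITE -type d -exec chmod 2775 {} +
        # existing files: readable by the group
        sudo find /var/www/SITE -type f -exec chmod 664 {} +

    With the setgid bit in place, the remaining variable is the umask of whatever daemon accepts the uploads; if that leaves group read intact, new uploads are served immediately without any chmod afterwards.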
