Search Results

Search found 20484 results on 820 pages for 'small projects'.


  • How to shrink Windows 7 boot partition with unmovable files.

    - by Alex Che
    I have just bought an HP laptop with Windows 7 (64-bit). It has a 500 GB HDD with three partitions: a small hidden system partition, a 12 GiB HP recovery partition, and a 450 GiB C: boot partition. I would like to split this large C: partition in two, leaving only 100 GiB for the system and giving the rest to a new data partition. Although the Windows built-in Disk Management utility has an option to shrink the bootable partition, it only allows me to shrink it by roughly half, even though only 20 GiB of the partition is used. As far as I understand, unmovable system files lie in the middle of the partition, preventing the Disk Management utility from doing what I want. And since new HP laptops don't come with OS installation disks (they only let you create recovery disks yourself), I can't just repartition the HDD and then reinstall the OS. So, is there any way to shrink the C: bootable partition and keep Windows 7 working? P.S.: I tried the 3rd-party GParted utility, and after shrinking the partition Windows 7 stopped booting with a BSOD. System recovery didn't work, and I had to do a factory recovery. Since this is a long process, I would like to avoid doing it again :) So please, suggest only proven solutions.
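    The unmovable files that block the shrink are typically the pagefile, the hibernation file, and System Restore shadow copies. A minimal command-line sketch of the usual workaround: temporarily disable those features (the pagefile and System Restore via Control Panel > System > Advanced system settings), reboot, shrink, and then re-enable everything. The 358400 MB figure is only an example that leaves roughly 100 GiB on C:; treat this as a sketch, not a guaranteed fix for this particular laptop.

        powercfg -h off
        diskpart
        DISKPART> select volume c
        DISKPART> shrink desired=358400
        DISKPART> exit
        powercfg -h on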

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by user65124
    Hi there. We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings we use our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue? I can see 3 possible types of solutions:
    1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use "down" servers.
    Anycasting seems to work only at the IP routing level and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcast or multicast, and the docs I could find suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolution. Am I missing an obvious solution?
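    On the /etc/resolv.conf angle, the glibc resolver does expose a few options that shorten (though don't eliminate) the penalty when the first listed server is down. A minimal sketch, with placeholder server addresses:

        cat > /etc/resolv.conf <<'EOF'
        # try each server once with a 1-second timeout, rotating through the list
        options timeout:1 attempts:1 rotate
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13
        EOF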

    Read the article

  • Couldn't upload files to Sharepoint site while passing through Squid Proxy

    - by Ecio
    Hi all, we have this issue: one of our employees is collaborating with a supplier and needs to upload documents to a SharePoint site hosted on the supplier's main site. In our environment we use a Squid proxy to let people browse the net (we have NTLM authentication, and users are transparently authenticated when using IE and FF). It seems that this specific SharePoint site uses Integrated Windows Authentication only, and according to some research on the net this can cause trouble with proxies. More specifically, we have tried two Squid versions: with Squid 3.0 we are unable to log in to the site (the browser loads an empty page); with Squid 2.7 (which supports "connection pinning") we are able to log in to the site and move between the different sections, BUT when we try to upload a file bigger than a couple of kilobytes (e.g. 10 KB) the browser loads an error page (I think it's a 401 Unauthorized, but I must verify it). We've tried changing a couple of Squid options (in 2.7); what we get is that when you try to upload the file you get an authentication box (just like the initial login) and it refuses to go on even if you enter the same credentials. What's really strange is that when you try to upload a small file (e.g. a 1 KB text or binary file) the upload succeeds. I initially thought there was something misconfigured on their SharePoint site, but I also tried this site: www.xsolive.com (a SharePoint 2007 demo site) and experienced the same problem. Has any of you seen this behaviour? Thanks! Of course we've suggested that the supplier also enable Basic+SSL, and we're waiting for their reply.
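    One way to narrow down whether Squid or the SharePoint end is at fault is to reproduce the upload outside the browser with curl through the proxy, using NTLM against both the proxy and the site. This is only a rough diagnostic sketch; the host names, credentials and library path are placeholders:

        # try a ~10 KB upload through the proxy; compare against a 1 KB file
        curl -v --proxy http://squidproxy:3128 --proxy-ntlm --proxy-user 'DOMAIN\user:pass' \
             --ntlm --user 'DOMAIN\user:pass' \
             -T test10k.bin "http://sharepoint.supplier.example/Shared%20Documents/test10k.bin"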

    Read the article

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem. My process so far:
    1. Copy the full backup from server A to server B (10+ GB).
    2. Open SQL Server Management Studio on server B.
    3. Right-click Databases, choose Restore Database.
    4. Type in the new DB name.
    5. Choose "From Device", browse to the backup file, and click OK. This restores the original "full" backup.
    6. Test the new DB with the dev application - everything works :)
    7. On the original database, right-click the DB, Tasks, Back Up... Backup type = Differential, back up to disk, add a new file and remove the old one (it needs to be a small file to transfer for the smallest amount of outage).
    8. Copy the differential backup over to the new server.
    9. Right-click the DB, Tasks, Restore Database.
    This is where I get stuck. If I add both the new differential file and the original backup to the restore process, I get an error: The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally. But if I try to restore using just the differential file I get: System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo) Any idea how to do it? Is there a better way of restoring backups with limited downtime?
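    The usual cause of the "no files are ready to rollforward" error is that the full backup was restored WITH RECOVERY, which closes the database to further restores; the full backup has to be restored WITH NORECOVERY before the differential is applied. A minimal command-line sketch (the database name and paths are placeholders):

        sqlcmd -S serverB -E -Q "RESTORE DATABASE NewDB FROM DISK = N'M:\backups\full.bak' WITH NORECOVERY, REPLACE"
        sqlcmd -S serverB -E -Q "RESTORE DATABASE NewDB FROM DISK = N'M:\backups\diff.bak' WITH RECOVERY"

    If the data and log files need to live in a different location on server B, each RESTORE would also need MOVE clauses.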

    Read the article

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory: drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2 The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first tries were: find sessions2 -type f -delete and find sessions2 -type f -print0 | xargs -0 rm -f but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the filesystem. Perhaps find was trying to read the entire index into memory? So I did this (foolishly): tune2fs -O^dir_index /dev/xxx Alright, so that should do it. I ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! Two questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage. 2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile. I re-enabled dir_index. Does that mean the system will not find the files that were written before dir_index was re-enabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I keep the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!
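    Since the goal here is bounded memory rather than speed, one approach is to stream through the directory with readdir() and unlink each entry as it is read, instead of letting find or xargs accumulate anything. A hedged sketch as a perl one-liner run from the shell (the directory path is a placeholder); nice/ionice keep the CPU and I/O priority down, and because entries are removed while the directory is being read, a second pass (or a final find) may be needed to catch stragglers:

        cd /path/to/sessions2
        nice -n19 ionice -c3 perl -e 'opendir(D, ".") or die; while (defined($f = readdir D)) { unlink $f unless $f eq "." or $f eq ".."; }'
        cd .. && rmdir sessions2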

    Read the article

  • Mounting a TrueCrypt volume over FTP

    - by Maxim Zaslavsky
    Is it possible to mount a TrueCrypt volume file over FTP? Here's how TrueCrypt works with a local file:
    1. The user inputs the path to the volume file and enters the password.
    2. TrueCrypt verifies that the password is correct (probably by decrypting the very first part of the volume file?).
    3. TrueCrypt reads the directory listing from the volume file and mounts the volume. However, in this step TrueCrypt does NOT process the whole volume file.
    4. The user browses the directory listing and opens a file. TrueCrypt reads only the part of the volume file that contains the file the user wants, and then decrypts it. Once again, TrueCrypt doesn't process the whole volume file - it only reads part of it.
    5. The user edits part of the file and saves it. TrueCrypt encrypts the change and edits the volume file.
    I'm pretty sure it should be possible to mount a volume over FTP, without undermining security and without having to transfer the whole volume file just to read one small part of it. Here's how I imagine it:
    1. The user inputs the FTP path to the volume file, enters the FTP login information, and enters the password to the volume.
    2. TrueCrypt downloads the very first part of the volume file and verifies that the password is correct.
    3. TrueCrypt downloads the part of the volume file that contains the directory listing - the data is sent encrypted over FTP and is decrypted locally.
    4. The user browses the directory listing and opens a file. TrueCrypt downloads only the part of the volume file that contains the file the user wants, and then decrypts it locally.
    5. The user edits part of the file and saves it. TrueCrypt encrypts the change and edits the volume file over FTP, transferring encrypted data only.
    Is such a feature available?
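    TrueCrypt itself has no FTP transport, and plain FTP offers no efficient random access to the middle of a remote file, which is what the scheme above needs. What does work is exposing the remote volume through a filesystem that supports random reads and writes (SFTP via sshfs, or WebDAV) and then mounting the container from that mount point. A rough sketch assuming an SSH-accessible host and the TrueCrypt command-line interface; all names and paths are placeholders:

        sshfs user@remote.example.com:/volumes /mnt/remote
        truecrypt /mnt/remote/secret.tc /mnt/secret
        # ... work with the decrypted files under /mnt/secret ...
        truecrypt -d /mnt/secret
        fusermount -u /mnt/remote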

    Read the article

  • DNS slow after losing DNS Server

    - by Tim
    We have set up a small Windows Server 2008 R2 network with a domain controller which is also acting as the DNS server for the network (we opted to install DNS when setting up the domain). This network isn't connected to the Internet in any way, so all machines have been configured to use the IP address of the domain controller as their primary DNS, and no secondary DNS server has been configured. If we shut down or unplug the network cable from the domain controller, DNS lookups become quite slow and the performance of the network suffers. For example, running a ping command using a hostname takes around 5-6 seconds to resolve the name. I presume this is because it is waiting for the DNS server to time out, then falling back to some other method of resolving names now that the DNS server is gone. All the machines have static IP addresses, so we are considering just putting all the entries in the HOSTS file of each machine. However, it would be nice to have a centralised DNS in case we one day change the IP of one of the machines. Is there a better way to speed this up?
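    If a second machine can host DNS as well (even just a secondary copy of the zone), listing it as each client's secondary DNS server avoids the long timeout-and-fallback whenever the DC is off. A hedged sketch of setting that per client with netsh; the adapter name and addresses are placeholders:

        netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.0.10
        netsh interface ip add dns name="Local Area Connection" addr=192.168.0.11 index=2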

    Read the article

  • mysql-python on Snow Leopard with MySQL 64-bit

    - by Derek Reynolds
    Can't seem to get mysql-python to work on Snow Leopard for the life of me. Currently using the default version of python that ships with Snow Leopard (python 2.6.1). Installed MySQL 5.1.45 x86_64. I download the source for mysql-python http://sourceforge.net/projects/mysql-python/ and compile with the following commands: tar xzf MySQL-python-1.2.3c1.tar.gz cd MySQL-python-1.2.3c1 ARCHFLAGS='-arch x86_64' python setup.py build ARCHFLAGS='-arch x86_64' python setup.py install And am getting the following error when I try to import: Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import MySQLdb Traceback (most recent call last): File "<stdin>", line 1, in <module> File "build/bdist.macosx-10.6-universal/egg/MySQLdb/__init__.py", line 19, in <module> File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 7, in <module> File "build/bdist.macosx-10.6-universal/egg/_mysql.py", line 6, in __bootstrap__ ImportError: dlopen(/Users/derek/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so, 2): no suitable image found. Did find: /Users/derek/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp/_mysql.so: mach-o, but wrong architecture Any ideas? Or the best route for starting over.
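    The "wrong architecture" message usually means the _mysql.so that ended up in the egg cache was built for a different architecture than the running Python or libmysqlclient. A hedged checklist for verifying and rebuilding; the build and library paths are guesses, so adjust them to whatever setup.py actually produced:

        # remove the stale cached egg named in the traceback
        rm -rf ~/.python-eggs/MySQL_python-1.2.3c1-py2.6-macosx-10.6-universal.egg-tmp
        # confirm which architectures the extension and the MySQL client library contain
        file build/lib.macosx-10.6-universal-2.6/_mysql.so
        lipo -info /usr/local/mysql/lib/libmysqlclient.dylib
        # force a clean 64-bit rebuild and reinstall
        ARCHFLAGS="-arch x86_64" python setup.py build_ext --force
        sudo ARCHFLAGS="-arch x86_64" python setup.py install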

    Read the article

  • Home Networking Questions

    - by Eddie Parker
    Hello: I'm looking to wire my home with CAT-X (where X is probably going to be CAT-6, unless someone can convince me otherwise. ;) ). I'm seeking advice on what equipment I'll need for the job and anything I should watch out for. It's a two-story half-duplex I'll be wiring, roughly 1800 sq ft. Here's what I believe I need so far:
    - Bulk CAT-6 Ethernet cabling, CM rated
    - Gigabit switch(es?)
    - Patch panel
    - Equipment for cutting and terminating wire, fishing it through walls, etc.
    - Wall outlet covers, etc.
    Questions I have:
    1. Does the MHz rating on the Ethernet cable matter? If so, why?
    2. I currently have two gigabit switches, an 8-port and a 5-port. Should I buy one big switch to cover all the connections I need, or should I just chain the two together and buy a switch for however many other connections I need?
    3. Do I really need a patch panel? I understand it keeps the cables looking cleaner than coming out of a hole in the wall, but is there some other product I can use, perhaps combining a switch with a patch panel or some such? Ideally I'll have all this running out of a relatively small closet, so the fewer (or smaller) the components the better.
    Any advice, links, or suggested products to use/avoid would be appreciated!

    Read the article

  • Windows Clients: Windows or Linux Domain Controller?

    - by Ramon Marco Navarro
    I'm planning to set up a domain controller for our small computer laboratory. I'm a little confused as to what operating system to use for our domain controller. What's in the lab:
    - 25 units running a mix of Windows 7 and Windows XP.
    - The domain controller will only have 2 GB of RAM running a C2D E7200. (Is this enough?)
    What we want:
    - The domain controller will also be running a git server.
    - The domain controller will also be used as a general development machine (mostly Java, PHP).
    - A way to centralize the updates for the Windows clients, so that they won't have to download the same patches from the remote site. The machines would just query the local domain controller and get the updates from there.
    Our head recommended that I virtualize a Windows Server 2008 system under a Linux host and use the former as a domain controller and the latter for development, or the other way around. A comparison of the advantages and disadvantages of using a Linux distribution or Windows Server 2008 in this situation would also be appreciated. As you may have noticed by now, I'm kinda new to setting up a domain, so I hope you guys will be able to help me. Thank you.

    Read the article

  • Why am I getting a warning that windows is logging on with a temporary profile to run a task scheduler task?

    - by Dan C
    I am having a strange problem with the Windows Server 2008 Task Scheduler. I have to run a small command-line application every few minutes. The application just makes a quick web service call on localhost and adds an entry to a log file, so it should not need anything special in terms of permissions. First, I created a new user account "my_scheduler" just for the task. This account is a member of the Users group (not sure what other settings I should turn on/off) and its password is set to not expire. I then created a task to run the application every few minutes. I set it to "Run whether user is logged on or not" and turned on "Do not store password. The task will only have access to local resources" (I did this since it's not hitting anything on the network). I did not turn on "Run with highest privileges" since it does not seem to need them. I set the schedule to "After triggered, repeat every 30 minutes for a duration of 1 day" and "Allow task to be run on demand" (no other settings enabled). However, I notice that the Event Log contains a bunch of these warnings whenever the task is run: "Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off." Even though I get the warning, the task is executing (I see the log entries appearing). Another (possibly related) issue is that it starts multiple copies of the task (within a few seconds of each other) even though it should only start one. This is also a big problem. Any idea how I can fix this? Thanks in advance, Dan
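    The temporary-profile warning is commonly explained by "Do not store password": the task then runs under a service-for-user (S4U) logon that has no stored credentials and no local profile, so Windows falls back to a temporary one. Two usual remedies are to log on interactively once as my_scheduler so a profile exists, or to store the password with the task. A hedged sketch of recreating the task from the command line with a stored password (task name, path, and password are placeholders):

        schtasks /Create /TN "WebServicePing" /TR "C:\Tools\pingws.exe" /SC MINUTE /MO 30 /RU MYDOMAIN\my_scheduler /RP P@ssw0rd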

    Read the article

  • Server 2008 R2 Dns Lockup

    - by Richard Maynard
    Hi, we've deployed our first 2008 R2 server on a client site, replacing their existing 2003 DC. This server provides DNS resolution to all client machines on that site for general internet usage. Since moving to the 2008 R2 DNS service we have noticed that every couple of days the DNS server starts timing out on requests for certain sites (Google is the only example I can give at the moment, although the problem seems to affect larger sites rather than small ones - a CDN compatibility issue?). Restarting the DNS Server service returns resolution to normal... but only for a day or so. Is anybody aware of any significant changes to the DNS server architecture or out-of-the-box configuration in R2 that may explain this intermittent behaviour? I have already tried the fix listed here, to no avail: http://weblogs.asp.net/owscott/archive/2009/09/15/windows-server-2008-r2-dns-issues.aspx The following PS command prompt session illustrates the issue:
        PS C:\Users\Administrator.UK> nslookup
        Default Server: s8209001.uk.kingdomfaith.com
        Address: 10.1.3.4
        > www.google.com
        Server: s8209001.uk.kingdomfaith.com
        Address: 10.1.3.4
        Non-authoritative answer:
        Name: www.l.google.com
        Addresses: 66.102.9.99 66.102.9.104 66.102.9.105 66.102.9.103 66.102.9.147
        Aliases: www.google.com
        > www.google.co.uk
        Server: s8209001.uk.kingdomfaith.com
        Address: 10.1.3.4
        * s8209001.uk.kingdomfaith.com can't find www.google.co.uk: Server failed
    Thanks in advance. Regards,
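    One frequently reported culprit for exactly this pattern on 2008 R2 is EDNS0: the R2 DNS server probes upstream servers with EDNS, and some firewalls silently drop the larger EDNS UDP responses that big CDN-backed domains return, after which lookups for those names fail until the service is restarted. A commonly suggested test (not a guaranteed fix) is to disable the EDNS probes and restart the service:

        dnscmd /Config /EnableEDnsProbes 0
        net stop dns
        net start dns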

    Read the article

  • Legacy non-dpi-aware application resolution scaling?

    - by Miles Erickson
    Our environment prominently features an outdated but absolutely mission-critical Win32 application that is not DPI-aware. It is optimized for an 800x600 display. Most of our users now have 17"-20" displays with native resolutions ranging from 1280x1024 to 1680x1050. However, they still run these displays at 800x600 because the text in this legacy application is otherwise too small. Of course, it also means that nothing quite fits on the screen in Office 2007. Most of our workstations still run Windows XP, but some are on Windows 7 and there are more to come. About one-third of our users run the app remotely via MS Terminal Services, and the remainder run it locally. Is anyone aware of any method that could be used to scale this specific application to about 170%, so that it would fill a 1280x1024 screen, without affecting other applications that work best at the display's native resolution? I know how to do this in Mac OS X, but I have never found a way to do it in Windows. Of course, this ideally would be something that we could push out via Group Policy. I suppose we could even create a custom MSI package to re-deploy the legacy application with some sort of display virtualization layer, if such a thing exists.

    Read the article

  • Exchange 2003: Fresh install, couple noob questions.

    - by Eli
    Hi all, thanks for reading! I have a small network set up for a local office here, and a fresh install of Exchange 2003 on our sole server, which is also the PDC. The network uses one domain, call it ourdomain.net, which has DNS locally but not publicly, so ourdomain.net works from within the network, but from outside it just points to some domain parking. I have a completely different domain, call it emaildomain.com, which is currently set up for our website and email, hosted with a standard hosting company. We've been using a combination of Thunderbird and Outlook (with local .pst files) for email. I've been asked to set up Exchange to work with our email, but am not familiar with it. The install seems to have gone just fine. The question is: how do I get email for a domain outside our network to work with the Exchange server? Do I need to move the email for that domain to point to our local server (I so hope not!), or can I just set Exchange up to somehow slurp mail from the existing mailboxes on our host for that domain's mail? Or are there better ideas I don't know to ask for? Any help very appreciated - thanks!

    Read the article

  • SharePoint 2010 User Profile Synchronization

    - by manemawanna
    Hello, I'm completely new to working with SharePoint and Windows Server, but last week I was given a small brief to play with SharePoint 2010 to see how I got along with it. Anyway, I've set up a SharePoint server and had a mess around creating some new sites and pages, and I'm now looking to try importing some AD groups. As part of this I've looked at these tutorials, here and here. So far I've got through the process of starting the User Profile Service, which works fine, but when I start the User Profile Synchronization service it sits on "Starting", and when I refresh the page or go to the monitoring section it shows as aborted. Now, I'm new to administering servers, as I say, and when I start the User Profile Synchronization service it tries to run as NT AUTHORITY\NETWORK SERVICE and asks for a password, so I've been providing the admin password. I'm not sure if this is part of the issue or not, as I've checked the log files and they seem to say it doesn't have permissions, which is fair enough, but I can't see how you can change the account even if I wanted to. So if anyone could help it would be appreciated; if you need any further information to help with an answer, just let me know.

    Read the article

  • SQL 2008 SP2 RsClientPrint ActiveX - "Unable to load client print control"

    - by Miles
    We recently updated our SQL 2008 server to SP2 and it's causing a few headaches. We use SSRS on this server, and when a client tries to print a report with the built-in print function, the RsClientPrint ActiveX control has to be downloaded from the server and the client gets the following error: "Unable to load client print control." We have about 700 computers that need this fixed, and I've followed the instructions found at the following URL: http://www.kodyaz.com/articles/client-side-printing-silent-deployment-of-rsclientPrint.aspx We have two issues:
    1. Most of the users who will be using this ActiveX control are not local administrators, so they will not be able to install the control themselves.
    2. Since there are so many computers, this has to be done silently behind the scenes, run by a local admin account.
    After following the information from the link above, we're able to put the files in the C:\Windows\System32 folder and register the DLL, but we still get the same problem. The only small thing I've noticed is that in the HTML for the report page everything that references a version is referencing version 2007.100.4000.00, while the version of the DLL I pulled from the report server is 2007.100.1600.22. Also, some clients that are local administrators are prompted to install the ActiveX control every time they click print. This works, but we can't have users asked to install the same control every time they need to print.
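    The version mismatch is worth chasing: after SP2 the report page asks for the 2007.100.4000.00 control, so the DLLs pushed to clients should be re-extracted from the patched server (the RSClientPrint cab under the Reporting Services directories) rather than reusing the 1600.22 copies. A hedged sketch of the silent per-machine deployment the linked article describes, with a placeholder share path and the DLL name used there:

        copy /y \\fileserver\deploy\rsclientprint.dll %SystemRoot%\System32\
        regsvr32 /s %SystemRoot%\System32\rsclientprint.dll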

    Read the article

  • Per client DNS server assignment using Pfsense

    - by Trix
    I have a network where pfSense is the gateway. I want two sets of clients: one with some restrictions on the network (for example, IM being blocked) and one with no restrictions. One easy way I thought of doing this was assigning the two sets different DNS servers: one set could use OpenDNS, the other could use Google's Public DNS. The set with OpenDNS would have the filtering options turned on (using OpenDNS's dashboard I can just check "block IM", so I don't need to manually block login.oscar.aol.com, meebo.com, Gmail chat, etc.). The problem is that the DHCP server appears to assign only a single set of DNS servers to all clients. Is there a way to set a per-client assignment? Or is there a better way to accomplish what I want? This is just a small home network; I do not need anything fancy, but I do need this functionality in one way or another.

    Read the article

  • Why Hebrew letters in the address bar break the ARR gateway (Only With Explorer 8,9,10)?

    - by Noamway
    ARR is working great in all browsers except Internet Explorer 8, 9 and 10. When I paste a Hebrew URL directly into the address bar it works fine, but when I navigate (click a simple href link) from one Hebrew-URL page to another, ARR returns this error: "502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server." I checked it a number of times, including with HTTP Analyzer, and saw that the "referer" header is what causes the problem. For example, when I go to this page: mydomain.com/somehebrewchars (mydomain.com/???? - you will need Hebrew installed) and click a link on that page to mydomain.com/somehebrewchars2 (mydomain.com/???????? - you will need Hebrew installed), I get the error above, and when you look at the referer you see the Hebrew path mangled into something like: mydomain.com/עמוד-× ×—×™×ª×” We use other proxy applications for other projects and we don't have this issue with them. For this example we used Windows Server 2008 and 2012 with ARR 2.5 and also the 3.0 beta. Any help is welcome :-) Thanks, Noam

    Read the article

  • ClearOS - how to create a site to site VPN between two ClearOS boxes?

    - by Scott Szretter
    I plan on setting up ClearOS boxes at several sites and would like to set up site-to-site VPNs between the remote sites and a main site (all running ClearOS Enterprise 5.2 SP1, the latest version). I have found references for how to set up ClearOS to VPN in to devices such as Cisco with IPsec, and others with PPTP, but none of them mention how you might configure two ClearOS boxes to talk to each other over IPsec or PPTP. I also saw documentation on installing OpenVPN and using the OpenVPN client software to VPN in to the ClearOS box. I will probably use this for individual users to VPN in, but I have some small sites (1 to 10 users) that will have their own ClearOS box and need a site-to-site VPN link back to the main site's OpenVPN box. Is this possible? Can you point me to docs or other info - basically, how? A couple of updates: I did find a thread that asks the same basic question, where the user has a VPN set up between the two ClearOS machines (after installing the IPsec VPN modules), just not transporting traffic between the LANs - and the very last post claims you have to edit some files (/etc/ipsec.conf) and set the leftnexthop and rightnexthop values to %direct. After that, it's supposed to work. Could it be that simple? I also posted to ClearFoundation, and they pointed me to some documentation for setting up an unmanaged IPsec VPN. This looks pretty good, but I will most likely need to figure out how to handle a dynamic-DNS type setup on at least one end. Also, what does it mean by multi-WAN? Finally, what happens when a VPN connection goes down exactly - does someone have to reboot the box, or?
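    For reference, ClearOS 5.x exposes an Openswan-style /etc/ipsec.conf, so a site-to-site tunnel between two boxes is normally described by one conn section (mirrored on both ends) plus a shared secret in /etc/ipsec.secrets. A hedged sketch with example addresses only; the %direct nexthop values are the ones mentioned in that thread, not something verified here:

        cat >> /etc/ipsec.conf <<'EOF'
        conn main-to-branch
            left=203.0.113.10
            leftsubnet=192.168.1.0/24
            leftnexthop=%direct
            right=198.51.100.20
            rightsubnet=192.168.2.0/24
            rightnexthop=%direct
            authby=secret
            auto=start
        EOF
        service ipsec restart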

    Read the article

  • Dell Management Packs in System Center Operations Manager 2007 R2?

    - by bwerks
    Hey all, I recently set up SCOM in a small business network environment. The root management server is a Dell Poweredge 2950, and I'd like to use SCOM to monitor it using Dell's management packs. I've imported the management packs into the SCOM deployment and followed Dell's installation instructions, but it doesn't seem to be fully working yet. Currently, the Diagram views in the Dell tree (Monitoring tab) seem to show me the server's place in the network topology, so it seems that at least part of it is working. However, none of the reports under "Performance and Power Monitoring Views" provide any information. When clicking on one of them (Power Consumption (Watts), for instance), the display area is blank and there is a tooltip visible that reads "No performance counter is selected. To select a counter, place a check mark in the Show column in legend below." However, in the legend, there's nothing there for me to check. I've installed OpenManage 6.2 on the server as per the Dell documentation, but I don't know what else I could have done that I missed. Does this sound like a familiar problem to anyone?

    Read the article

  • Destination host unreachable - Windows Server 2008

    - by Doug
    Hi there, I'm working with a Windows 2008 domain controller which is having issues connecting to internet resources. A small bit of background: this 2008 domain controller has been added into an existing Win 2k domain, with the goal of replacing the older computers. Both of the older controllers can still access internet resources, and so can all the clients. When I ping google.ca from the new server, it does resolve to an IP address, but then says "Reply from 192.168.123.20: Destination host unreachable." I'm really at a loss now. I've checked and rechecked my IP configuration: the default gateway is my router, the primary DNS server is the DC itself, and the secondary DNS is also my router. The DNS server on the domain has a forwarder added for the router as well. Everything on my local network works just fine; all my internal resources can be resolved. For the time being, I've stopped the Firewall service. I'm not 100% used to Server 2008 yet, but it might be a case of just missing something simple. Thanks for your time.
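    If 192.168.123.20 is the server's own address, then the "Destination host unreachable" reply means the box itself has decided it has no route to the destination, which usually points at the default gateway setting or the routing table on that interface rather than DNS. A few standard checks, as a sketch only (the gateway address is a guess based on the subnet):

        ipconfig /all
        route print
        ping 192.168.123.1
        tracert google.ca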

    Read the article

  • Windows Audio Issue

    - by Nikki
    This one is driving me nuts; hoping someone can shed some light. I'm running Windows 7 using onboard audio. It's been fine for over 2 years, but lately there's a problem every time I play audio: I hear a small, soft burst of static and the volume turns itself down from 50% to 23%. Once at 23%, it plays fine. No related events are logged in Event Viewer and no problems are reported for the device. Different headphones give the same problem. I've played around with audio settings for hours but the problem persists. EDIT: ok, more info: Motherboard: ECS G31T-M LGA775. System info displays this: Name High Definition Audio Device Manufacturer Microsoft Status OK PNP Device ID HDAUDIO\FUNC_01&VEN_1106&DEV_E721&SUBSYS_10192683&REV_1001\4&3D4E739&0&0001 Driver c:\windows\system32\drivers\hdaudio.sys (6.1.7600.16385, 297.00 KB (304,128 bytes), 14/07/2009 9:51 AM) I'll keep adding info as I find it. The question I want resolved is: is it faulty hardware? If so, I can buy a sound card. I can't imagine software is responsible, since I haven't installed anything new for weeks. Virus scans are clear as well. The static burst is irritating, to say the least. I've tried two different headphones and separate speakers - same problem. I know it's not an easy problem, but I was hoping someone had encountered the same thing.

    Read the article

  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, initially sized at 25 GB (too small!). After installing the OS and some applications I ran out of disk space, so I shut down the guest and used this command to resize the VHD to 80 GB:
        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd"
        UUID: 03fb26e7-d8bb-49b5-8cc2-1dc350750e64
        Accessible: yes
        Logical size: 81920 MBytes
        Current size on disk: 24954 MBytes
        Type: normal (base)
        Storage format: VHD
        Format variant: dynamic default
        In use by VMs: Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location: D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd
    So far so good. However, when I started the guest up again I got the dreaded: Fatal: No bootable medium found! system halted If I boot into GParted it shows a single 80 GB drive as "unallocated". The option to scan for and attempt to repair a filesystem doesn't find anything. I also tried cloning the VHD into a VDI file, just in case that magically fixed it:
        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
        Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b
        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
        UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b
        Accessible: yes
        Logical size: 81920 MBytes
        Current size on disk: 24798 MBytes
        Type: normal (base)
        Storage format: VDI
        Format variant: dynamic default
        In use by VMs: Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location: D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi
    Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.
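    The showhdinfo output suggests the data blocks are probably still there (the size on disk is still around 25 GB), so what was most likely lost or damaged is the partition table or boot sector. Two hedged avenues worth trying before giving up: run Startup Repair / bootrec from Windows 8 install media attached to the VM, or clone the disk to a raw image and let TestDisk search for and rewrite the lost NTFS partition. A sketch of the TestDisk route; the output paths are placeholders:

        VBoxManage clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\recover\win8.img" --format RAW
        testdisk_win.exe "D:\recover\win8.img"
        rem after TestDisk finds and writes the partition, convert the image back and attach it to the VM:
        VBoxManage convertfromraw "D:\recover\win8.img" "D:\recover\win8-fixed.vdi" --format VDI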

    Read the article

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its blazing insert performance, its document-oriented model, and its ability to let the engine drop inserts when needed for performance. But there is one big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB of RAM). So for us to have 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM. Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we have to spend $$$$$$$ on hardware to run it? We would rather buy commercial software! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!

    Read the article
