Search Results

Search found 7324 results on 293 pages for 'operations research'.


  • How can I fix the Office SharePoint Search service?

    - by unknown (yahoo)
    For some reason, the Office SharePoint Search service is missing from Operations > Services on Server! That means I can't get Shared Services working, and in turn I have never gotten usage and reporting to show who is visiting my website, with what OS/browser, and so on, in the two years I've been trying with SharePoint. So in essence I have two problems: most importantly search, and secondly usage reports. When I try to search, all I get is an unknown error, and nothing appears in my Event Viewer at all. I have tried http://www.cjvandyk.com/blog/Lists/Posts/Post.aspx?ID=96 ; in the comments on that site another person reports that search is missing entirely and is going to uninstall. I'm tired of uninstalling and reinstalling SharePoint to fix every odd issue, and I have things set up with Team Foundation Server that took forever to get working, so reinstallation is not a solution for me. As for usage reporting, this is the error Microsoft gives: "Both Windows SharePoint Services Usage logging and Office SharePoint Usage Processing must be enabled to view usage reports. Please contact your administrator to ensure that these services are enabled." I cannot complete those steps because they require the SSP to be set up, which I can't do, as described above.

    Read the article

  • Problem linking two Cisco routers with a static route

    - by Chris Kaczor
    I'm trying to link two Cisco routers with a static route and I haven't been able to get it working as expected. Here is the basic setup:
    Router 1 - WRV210 - 192.168.1.1 - connected to the cable modem
    Router 2 - RV120W - 192.168.2.1
    I already have several machines on Router 1 that are working, and I want to set up Router 2 with a few other machines on the different subnet. Here is what I've configured:
    - Connected the WAN port on Router 2 to a LAN port on Router 1
    - Configured Router 1 to give 192.168.1.2 to Router 2 via DHCP
    - Configured Router 1 with a static route (192.168.2.0 mask 255.255.255.0) to 192.168.1.2 using the LAN & Wireless interface
    - Disabled the firewall on Router 2 (since it is covered by Router 1)
    - Configured Router 2 to "Router" mode instead of "NAT" mode
    - Configured Router 2 with a static route (192.168.1.0 mask 255.255.255.0) to 192.168.1.1 using the WAN interface
    From the research I've done I think that should be enough, but things aren't working exactly as expected:
    - Router 2 can ping 192.168.1.1 and 192.168.1.101 (a machine on Router 1)
    - A machine on Router 2 can ping 192.168.1.1 and 192.168.1.101 (a machine on Router 1)
    - Router 1 can NOT ping 192.168.2.1 or 192.168.2.101 (a machine on Router 2)
    - A machine on Router 1 can NOT ping 192.168.2.1 or 192.168.2.101 (a machine on Router 2)
    - Router 1 and a machine on Router 1 can ping 192.168.1.2 (Router 2 itself)
    I'm confused as to why Router 1 cannot talk to the 192.168.2.0/255.255.255.0 subnet. Any help would be greatly appreciated.
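
    To keep track of exactly which paths fail, I've been running a rough Python sketch from each machine in turn (the names and IPs below are just my own layout; adjust to taste). It pings every target once and prints a pass/fail line per target:

      import platform
      import subprocess

      # Targets to test from whichever machine this runs on (my addresses; adjust as needed).
      TARGETS = {
          "Router 1 LAN": "192.168.1.1",
          "Machine on Router 1": "192.168.1.101",
          "Router 2 WAN": "192.168.1.2",
          "Router 2 LAN": "192.168.2.1",
          "Machine on Router 2": "192.168.2.101",
      }

      # Windows ping uses -n for the count, everything else uses -c.
      COUNT_FLAG = "-n" if platform.system() == "Windows" else "-c"

      def reachable(ip):
          """Return True if a single ping to ip succeeds."""
          result = subprocess.run(
              ["ping", COUNT_FLAG, "1", ip],
              stdout=subprocess.DEVNULL,
              stderr=subprocess.DEVNULL,
          )
          return result.returncode == 0

      for name, ip in TARGETS.items():
          status = "OK  " if reachable(ip) else "FAIL"
          print(f"{status} {name:<22} {ip}")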

    Read the article

  • Why Is Web Sharing Broken on My Mac?

    - by Sam Murray-Sutton
    Background: I use my Mac for web development, running copies of web sites locally. I recently installed the Snow Leopard update, which to all intents and purposes seems to have gone fine, except...
    What's not working? Web sharing; more specifically, I can't turn it on via System Preferences. The preference pane just hangs when I try to, so Apache doesn't start on reboot. I can start Apache by hand, but I don't know enough to either set Apache up to start with the computer or to properly fix web sharing.
    Further details: My Apache error log shows nothing from when the system boots up (as I would expect). This is the error message when I try to start web sharing from the Sharing preference pane:
      28/09/2009 10:58:05 System Preferences[834] setInetDServiceEnabled failed with 1 for org.apache.httpd
    Here are the messages given when I start Apache from the command line:
      [Mon Sep 28 10:35:53 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
      [Mon Sep 28 10:35:54 2009] [warn] mod_bonjour: Skipping user 'sams' - index file /Users/sams/Sites/index.html has zero length.
      [Mon Sep 28 10:35:54 2009] [notice] Digest: generating secret for digest authentication ...
      [Mon Sep 28 10:35:54 2009] [notice] Digest: done
      [Mon Sep 28 10:35:54 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 Phusion_Passenger/2.2.5 configured -- resuming normal operations
    Please let me know if you need any further details on this. Any help would be greatly appreciated.
    UPDATE: I have added an answer of my own below - I was able to solve it thanks to being pointed in the right direction by the comments, so thanks very much. But I'm still not totally clear as to what caused the problem or how my solution addressed it, so I'm leaving the question open for now.

    Read the article

  • How do I troubleshoot a slow hard drive?

    - by Bruce Connor
    My computer is suffering from slowdowns, and I'm not surprised (it's around 6 years old). Here's what I've verified:
    - They are not very frequent (only a couple of times a day).
    - When they happen, a single application will hang for 10-60 seconds, while the rest don't hang but also get slow.
    - Even as it is happening, the CPU usage stays low.
    - It happens to applications such as a text editor, Firefox, and Skype.
    - It never happens to some applications (such as games) which I use for hours under heavy CPU load.
    Also of note:
    - The graphics card and PSU are new (around a year old).
    - Though I have a decent amount of software installed right now, this was happening even right after I reinstalled Windows.
    - This HDD has been through many partitioning schemes and a few heavy operations (such as moving around 200GB of data).
    Because of the above, I am already 70% sure the problem is with the hard drive. Before I replace it, however, I want to rule out other, less likely possibilities (such as RAM, software, or the PSU). I don't have the money to replace the entire box right now, but I can easily replace one of the components. I've read several questions (such as this one) which give general guidance on troubleshooting an unknown issue; that is not what I'm looking for here. My main question is: what tests or benchmarks can I run to verify that I have a problematic hard drive? I don't need to solve this problem; I am content with just making sure it's the hard drive. I could borrow a newer hard drive from a friend and see if it gets better. A positive result would rule out all other components, but it wouldn't rule out a software issue (since the new hard drive won't have any of the software I use daily). Running on Windows/Linux.
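
    In the meantime I've put together a rough Python probe (the test file path is a placeholder; point it at any large file on the suspect drive) that times scattered small reads and reports the slowest ones. Repeated long stalls while the machine is otherwise idle would point at the drive rather than RAM or software:

      import os
      import random
      import time

      # Placeholder: any large existing file on the suspect drive.
      # Note: the OS file cache can mask results; prefer a file larger than RAM or a cold cache.
      TEST_FILE = "large_test_file.bin"
      BLOCK = 4096
      SAMPLES = 500

      size = os.path.getsize(TEST_FILE)
      latencies = []

      with open(TEST_FILE, "rb") as f:
          for _ in range(SAMPLES):
              offset = random.randrange(0, max(size - BLOCK, 1))
              start = time.perf_counter()
              f.seek(offset)
              f.read(BLOCK)
              latencies.append((time.perf_counter() - start) * 1000.0)

      latencies.sort()
      print(f"median read: {latencies[len(latencies) // 2]:.2f} ms")
      print(f"slowest 5 reads (ms): {[round(x, 2) for x in latencies[-5:]]}")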

    Read the article

  • WT-NMP - PHP-CGI randomly stops running with no error log

    - by alexfontaine
    We have recently installed WT-NMP and are currently running PHP-CGI with PHP 5.4.24. We are running fairly simple PHP scripts, and in testing everything runs fine. Over the weekend we wanted to keep the server running to test it over a longer period of time. The server and scripts ran fine all day on Friday, but sometime late on Saturday the php-cgi process stopped running. There are no errors in the log directory (C:\WT-NMP\log). In php.ini I have the following options set:
      error_reporting = E_ALL
      display_errors = On
      display_startup_errors = On
      log_errors = On
      html_errors = On
      error_log = "c:/wt-nmp/log/php_error.log"
    We also have the standard nginx.conf error logs:
      access_log "c:/wt-nmp/log/nginx_access.log";
      error_log "c:/wt-nmp/log/nginx_error.log" warn;
    Since the log directory is empty, I am assuming that the running PHP scripts and general nginx operations are not what is causing php-cgi to stop. So my questions are:
    - What else could cause php-cgi to stop running?
    - Are there any other logging options we could turn on that would help us track this down?
    - Are there other log locations that we should be looking at?
    Thanks!
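
    Until we find the root cause, I'm considering a small Python watchdog (the script and log path below are my own choices, not part of WT-NMP) that records a timestamp whenever php-cgi.exe disappears from the Windows process list, so at least we know exactly when it dies:

      import datetime
      import subprocess
      import time

      LOG = r"c:\wt-nmp\log\phpcgi_watchdog.log"  # my own log file, not created by WT-NMP

      def php_cgi_running():
          """Check the Windows process list for php-cgi.exe."""
          out = subprocess.run(
              ["tasklist", "/FI", "IMAGENAME eq php-cgi.exe"],
              capture_output=True, text=True,
          ).stdout
          return "php-cgi.exe" in out

      was_running = php_cgi_running()
      while True:
          running = php_cgi_running()
          if was_running and not running:
              with open(LOG, "a") as f:
                  f.write(f"{datetime.datetime.now().isoformat()} php-cgi.exe is no longer running\n")
          was_running = running
          time.sleep(30)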

    Read the article

  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine; I can test them by running 'nslookup mydomain.com xIP'. I have now taken out DNS services with DYN.com, which lets me manage the DNS zone file, so that typing mydomain.com will ask my load balancers for the IP address to resolve to.
    Step 1: the NS record for www. I set up A records (glue) for ns1 and ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:
      ns1.mydomain.com A [ip address of load balancer 1]
      ns2.mydomain.com A [ip address of load balancer 1]
      www.mydomain.com NS ns1.mydomain.com
      www.mydomain.com NS ns2.mydomain.com
    All is well - when I type www.mydomain.com, the request is delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.
    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO be delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it appears I am not allowed to set an NS record for the root, as this overrides the default NS servers. How can I delegate both the root and the www to my load balancers?
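
    While I experiment, I've been comparing what each nameserver actually hands out with a short Python sketch (the names and the registrar nameserver below are placeholders for my real ones). It asks DYN's nameserver and my load balancer for the same names via nslookup and prints the raw answers side by side:

      import subprocess

      # Placeholders for my real hostnames / nameservers.
      NAMES = ["mydomain.com", "www.mydomain.com"]
      SERVERS = {
          "registrar (DYN)": "ns1.registrar.example",   # placeholder registrar nameserver
          "load balancer 1": "ns1.mydomain.com",
      }

      def lookup(name, server):
          """Run nslookup for name against a specific server and return its raw output."""
          result = subprocess.run(["nslookup", name, server],
                                  capture_output=True, text=True)
          return result.stdout.strip()

      for name in NAMES:
          for label, server in SERVERS.items():
              print(f"--- {name} via {label} ({server}) ---")
              print(lookup(name, server))
              print()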

    Read the article

  • Installing OEM Windows Server 2008 under KVM

    - by rancidfishbreath
    Issue: I have an HP server that came with an OEM copy of Windows Server 2008. I have installed CentOS 5.4 on the hardware and am trying to install Windows Server 2008 as a KVM guest. When I attempt to install Windows Server 2008, it complains that I am trying to install on unsupported hardware. This happens because the hardware SMBIOS information is not being passed to the KVM guest.
    Background: Before I go any further I want to state that what I am trying to do is within the license. HP offers a supported solution for VMware but does not have an official solution for KVM. After much research, the platform I am going to use is CentOS and KVM, so please do not suggest other platforms. I emailed the KVM developers' mailing list and was told that this is possible, and was given the advice: "You can dump SLIC table of your host bios and provide it to guest bios using -acpitable parameter." I used dmidecode and got the parameters that need to be passed, but I do not know where to pass them.
    Update: It looks like CentOS 5.4 uses virt-install instead of qemu. Qemu is in the package manager, and I was able to install it after uninstalling qemu-img (the two conflict, and qemu includes what qemu-img provides). So now I know how to pass the -acpitable parameters, but I am having trouble mapping what came out of dmidecode onto -acpitable.
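
    For reference, here is the rough approach I'm experimenting with, sketched in Python. The paths and the qemu command line are simplified guesses on my part: newer kernels expose the raw SLIC table under /sys/firmware/acpi/tables (on older kernels acpidump can extract it instead), and the idea is to hand that file to qemu via the -acpitable parameter the KVM list pointed me at:

      import shutil

      # Newer kernels expose raw ACPI tables here; on older kernels extract SLIC with acpidump instead.
      SLIC_SYSFS = "/sys/firmware/acpi/tables/SLIC"
      SLIC_COPY = "/var/lib/libvirt/slic.bin"   # my own destination path

      # Copy the table somewhere the guest definition can reference.
      shutil.copyfile(SLIC_SYSFS, SLIC_COPY)

      # Minimal sketch of the qemu invocation (disk, network and the rest omitted);
      # -acpitable file=... passes the raw table through to the guest BIOS.
      cmd = [
          "qemu-kvm",
          "-m", "4096",
          "-smp", "2",
          "-drive", "file=/var/lib/libvirt/images/win2008.img",   # placeholder disk image
          "-acpitable", "file=" + SLIC_COPY,
      ]
      print(" ".join(cmd))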

    Read the article

  • Registering a mail server and web server publicly with a free DNS server

    - by Bruno Vieira
    I'm trying to host our company's e-mail and website on our private server. I've already followed the Gentoo Virtual Mailhosting System with Postfix Guide, and my mail server is working (it delivers mail to local users; mail sent to external users ends up in their spam folders), and I know how to set up an Apache 2 server. What I don't know (and I mean really don't) is how to make them publicly reachable. I did some research and found that I should ask my ISP to change the reverse DNS to my company domain in order to prevent my mail from being marked as spam, and they are doing that. I also know I have to configure a DNS server; it seems my registrar already provides one, but I don't know how to configure the CNAME, A, MX, TXT and all those records (is "records" the right name for them?), or whether I must do some other configuration on my server.
    My server: Linux mail 3.2.21-gentoo #1 SMP
    My /etc/hosts:
      127.0.0.1 mail.example.com.br example example.com.br
      ::1 mail.example.com.br mail example.com.br
    My /etc/conf.d/hostname:
      hostname ="mail"
    What am I missing? If there's a guide on how to configure this, I would really be grateful. Thanks in advance for the help. Cheers
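
    While I sort out the zone file, I've been sanity-checking the basics with a small Python sketch (example.com.br stands in for our real domain). It looks up the A record of the mail host, the reverse DNS of that IP (the PTR my ISP sets should match the mail host, otherwise receivers are more likely to treat the mail as spam), and the MX records of the domain:

      import socket
      import subprocess

      DOMAIN = "example.com.br"          # stand-in for the real domain
      MAIL_HOST = "mail.example.com.br"  # the hostname the MX should point at

      # Forward lookup: which public IP does the mail host resolve to?
      ip = socket.gethostbyname(MAIL_HOST)
      print(f"A record for {MAIL_HOST}: {ip}")

      # Reverse lookup: what PTR does that IP have?
      try:
          ptr = socket.gethostbyaddr(ip)[0]
      except socket.herror:
          ptr = "(no PTR record)"
      print(f"PTR record for {ip}: {ptr}")

      # MX records: easiest to shell out to nslookup for a quick check.
      mx = subprocess.run(["nslookup", "-type=MX", DOMAIN],
                          capture_output=True, text=True).stdout
      print(mx)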

    Read the article

  • Can you configure multiple KMS hosts in a primary / secondary relationship?

    - by Mark Hall
    We have two datacenters in our environment: primary and DR. I need to deploy the KMS service, and to be proactive I would like to have a host in both datacenters. From what I have read, you can have up to six hosts without calling Microsoft, and what happens is that an SRV record for each host is placed in DNS. The client queries for those SRV records, randomly chooses a host for the initial activation, and uses that same server for all renewals. The server can be changed manually through a script and will change automatically if the initial server is unavailable when activating or renewing. My question is: has anyone found a way to designate one server as the primary KMS host and the other as failover only? The reason I ask is that clients should communicate with the primary datacenter during normal operations and only talk to the DR datacenter when needed, because the bandwidth between the offices and the DR datacenter is limited compared to the primary. I am sure this has been done before, but I cannot find it in Microsoft's documentation. Thanks, Mark
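
    Before touching anything I wanted to see exactly what clients receive, so here is the tiny Python sketch I use (the domain is a placeholder for our AD DNS domain) to pull the _vlmcs._tcp SRV records that KMS clients use for auto-discovery, including whatever priority and weight values are published:

      import subprocess

      DOMAIN = "corp.example.com"  # placeholder for our AD DNS domain

      # KMS clients discover hosts via the _vlmcs._tcp SRV record.
      record = f"_vlmcs._tcp.{DOMAIN}"
      result = subprocess.run(["nslookup", "-type=SRV", record],
                              capture_output=True, text=True)
      print(result.stdout)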

    Read the article

  • Terminating multi-mode fiber

    - by murisonc
    I'm looking at the feasibility of terminating multi-mode fiber connections ourselves. We would be using LC connectors. I've done some research and found two different methods: one requires polishing the ends and using epoxy, while the other doesn't. I like the idea of not having to polish the ends, but there doesn't seem to be much information on quality or ease of use. I've found two vendors (3M and Corning) that offer kits for terminating fiber without polishing or epoxy. Does anyone have experience with both methods who can offer some advice? Copper is easy, but fiber seems to be a whole different animal.
    EDIT: After looking into the fusion splicing suggested in the answer, I've determined it's not for us. It's my understanding that it is primarily used for outside plant and is better suited to single-mode fiber. It's a good answer but doesn't address the question directly. Some more information about our situation: we will only be terminating multi-mode fiber inside a building, and only doing between 4 and 20 pairs a year. Hiring an outside person won't work due to our location. There are currently a couple of people on-site who can terminate fiber (working for another company and charging large fees), but they can only do ST and SC connectors and we only use LC. So, once again: does anyone have experience terminating with both epoxy-style connectors and the other type (similar to Corning Unicam)?

    Read the article

  • .htm pages working but .aspx pages throwing errors

    - by Mike
    Our site has thousands of visitors per day, but we've been receiving reports from some of our members that they can hit our main page, presumably because it's a .htm page, but when they click through to a .aspx page they get an error. I've done as much research as I know how, and here is what I have come up with:
    - We have not made any changes to IIS on our server in months.
    - We have a couple of customers who have been willing to work with us and provide information about their systems. One customer is running Vista, the other is running XP.
    - We had one of the customers test both Firefox and MSIE. She gets the same error in both.
    - One customer said, "We were able to post the profile and searched on available jobs and it worked w/ Firefox for a day, then it just quit working...we did not change any settings to Firefox after we posted the profile."
    - We asked the customer to clear their cache and try again. They responded with, "We just cleared the cache and got the same error; btw, I periodically clear the cache -- almost every day."
    Summary: We have thousands of customers who hit our site with no problem, and we can't reproduce these errors. The affected customers get the same error in different browsers, they only get the error on .aspx pages, and they still get the error after clearing their cache. We would appreciate any thoughts on what other questions we could ask these customers, or on how we can further troubleshoot this problem.

    Read the article

  • How to find hidden/cloaked files in Windows 2003?

    - by homemdelata
    Here is the situation: I set Windows to display all hidden files and protected operating system files, but even after that, my antivirus (Kaspersky) keeps flagging a ".dll" file in "c:\windows\system32", saying it is riskware 'Hidden.Object'. I try to find this file every time, but it's not there. So I asked one of our developers to create a service that checks the folder every 5 seconds and, if it finds the file, copies it to another place. If it copies the file to another folder with the same name and extension, I still can't see the file in the other folder, but Kaspersky now flags both. If I keep the same name but use a different extension, like ".temp123", I still can't see the file. Lastly, I created an empty text file and renamed it with the same name as the other one, and that file disappeared too. After all this testing it is clear that every file with this same name on this specific server gets cloaked, no matter the file extension. I created a file with this same name on another server and nothing happens; the file is still there without a problem. How can I find this kind of file? How can I "uncloak" it? How can I find out what this file is doing?
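
    As an extra data point, I wrote a small Python check (the file name below is made up; I substitute the one Kaspersky reports): if a direct lookup by name says the file exists but it never appears in a directory listing, something is filtering the enumeration the listing uses, which is consistent with cloaking. A kernel-level hook could of course intercept both calls, so this is only a hint, not proof:

      import os

      FOLDER = r"C:\Windows\System32"
      NAME = "suspect.dll"   # made-up name; use the one the antivirus reports
      path = os.path.join(FOLDER, NAME)

      direct_hit = os.path.exists(path)                                  # ask for the file by name
      listed = NAME.lower() in (n.lower() for n in os.listdir(FOLDER))   # scan the folder

      print(f"direct lookup finds it: {direct_hit}")
      print(f"appears in folder listing: {listed}")

      if direct_hit and not listed:
          print("The file answers by name but is hidden from listings - "
                "consistent with something filtering directory enumeration.")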

    Read the article

  • Memory upgrade to 8GB on unibody MacBook

    - by dlamblin
    I bought the 13" unibody MacBook one month prior to it's upgrade to become the new improved 13" unibody MacBook Pro (with unopenable battery compartment, extra 3 hours of battery life, more color gammut, SD slot, firewire 800 port, and a new 8Gb memory limit). My MacBook says it is limited to 4GB of DDR3 1066mhz memory, in 2 SO-DIMMs. But that was written back when you couldn't get 2 4GB SO-DIMMs. Now that you can get them, the very similar MacBook Pro is shipping with 4GB standard, and can be upgraded to 8GB. I've asked (Apple reps), and I'm repeatedly told that my model cannot be upgraded to 8. When I ask for the reason they alway say: "Because that's the published limit at the time your mac was built." I find this unconvincing. If they said: "Because the memory controller in the chipset is limited to 4GB, despite being seemingly identical to the memory controller in the same chipset newest MacBook model," then I'd just take their word for it. Has anyone tried it, or found any research as to whether two 4GB DDR3 1066mhz SO-DIMMs can be installed in the unibody MacBook without FireWire?

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between an i7 and a Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (precompiled headers, ccache, local gigabit connections between the machines, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay
    EDIT: Although any advice IS welcome, please refrain from "do this first" posts, as we're not planning on skimping on things like SSDs or maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck, even if everything else were made faster. My RAM, on the other hand, doesn't get close to maxing out. It's also worth noting that I did research this, but every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast. (Hopefully.)
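
    To get a feel for whether extra real cores (vs. hyperthreads) actually pay off on our codebase before buying anything, I've been timing clean builds at different -j levels with a throwaway Python script (it assumes a make-based build and a placeholder project path; adjust the commands for your build tool):

      import subprocess
      import time

      BUILD_DIR = "/path/to/project"      # placeholder project directory
      JOB_COUNTS = [2, 4, 8, 16]

      for jobs in JOB_COUNTS:
          # Clean first so every run does the same amount of work.
          subprocess.run(["make", "clean"], cwd=BUILD_DIR,
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
          start = time.perf_counter()
          subprocess.run(["make", f"-j{jobs}"], cwd=BUILD_DIR,
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
          elapsed = time.perf_counter() - start
          print(f"make -j{jobs}: {elapsed:.1f} s")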

    Read the article

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there are most likely a good number of duplicates, as researchers copy each other's data for different reasons and then just hang on to the copies. I know about de-duplication tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data - I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super-efficient with RAM, or can store all the intermediary data it needs in files rather than in RAM. I'm assuming that my RAM (64GB) will be exhausted if I crawl through all this data as one set. I'm experimenting with fdupes now on a 900GB tree; it's 25% of the way through, and RAM usage has been slowly creeping up the whole time - it's now at 700MB. Or, is there a way to direct a process to use disk-mapped memory so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
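
    To get a sense of whether a low-memory approach is even feasible, I've sketched the usual size-then-hash strategy in Python with all intermediary state kept in a SQLite file on disk rather than in RAM (paths are placeholders; this only reports duplicate groups and deletes nothing):

      import hashlib
      import os
      import sqlite3

      ROOTS = ["/data/fs1", "/data/fs2", "/data/fs3"]   # placeholder mount points
      DB = "/var/tmp/dedupe_index.sqlite"               # intermediary state lives on disk, not in RAM

      con = sqlite3.connect(DB)
      con.execute("CREATE TABLE IF NOT EXISTS files (path TEXT PRIMARY KEY, size INTEGER, sha1 TEXT)")

      # Pass 1: record every file's size (no file contents are read yet).
      for root in ROOTS:
          for dirpath, _, names in os.walk(root):
              for name in names:
                  p = os.path.join(dirpath, name)
                  try:
                      con.execute("INSERT OR IGNORE INTO files (path, size) VALUES (?, ?)",
                                  (p, os.path.getsize(p)))
                  except OSError:
                      pass
      con.commit()

      def sha1(path, chunk=1 << 20):
          """Hash a file in 1MB chunks so large files never sit in memory."""
          h = hashlib.sha1()
          with open(path, "rb") as f:
              while block := f.read(chunk):
                  h.update(block)
          return h.hexdigest()

      # Pass 2: hash only files whose size matches at least one other file.
      sizes = [s for (s,) in con.execute(
          "SELECT size FROM files GROUP BY size HAVING COUNT(*) > 1")]
      for size in sizes:
          paths = [p for (p,) in con.execute("SELECT path FROM files WHERE size = ?", (size,))]
          for path in paths:
              try:
                  con.execute("UPDATE files SET sha1 = ? WHERE path = ?", (sha1(path), path))
              except OSError:
                  pass
      con.commit()

      # Report groups of files with identical content hashes.
      for _, joined in con.execute(
              "SELECT sha1, GROUP_CONCAT(path, '|') FROM files "
              "WHERE sha1 IS NOT NULL GROUP BY sha1 HAVING COUNT(*) > 1"):
          print("Duplicate group:")
          for p in joined.split("|"):
              print("  " + p)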

    Read the article

  • How to prevent getting infected by rogue security applications

    - by Ieyasu Sawada
    My computer never got infected with a virus before, because I use the Web of Trust browser plugin, Sandboxie, and Avast Free antivirus. But today it got infected with a rogue security application called antivirus.net. I have already removed it using MBAM, SAS, and the Kaspersky Virus Removal Tool. By the way, I was using MSE when my laptop got infected; it seems the rogue application just killed off the MSE process, and I never even got a warning. I was using the wi-fi at our school, which I think is the cause, since most of the computers in our laboratory have rogue applications on them. My question is: how do I prevent this from happening again? It took me about 6 hours to disinfect my computer and I don't want it to happen again. Please enlighten me as to whether these rogue applications really do just pop up out of nowhere.
    Note: I'm not dumb enough to agree to installing rogue security applications. It just came out of nowhere. I'm happy with MSE - well, not after it let antivirus.net onto my computer. I've done a little research, and it says that a rogue application needs the user's permission to actually install itself on the computer:
      http://www.net-security.org/malware_news.php?id=1245
      http://en.wikipedia.org/wiki/Rogue_security_software
    Is it possible that other computers on our school network agreed to install those? Or maybe the network admin?

    Read the article

  • Huge or minimal performance hit running game servers on a Virtual Machine? [closed]

    - by Damainman
    I have two dedicated servers to choose from, depending on which one would do a better job. I plan on upgrading the hard drive space and RAM at a later date depending on how I move forward.
    Server 1: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon L5420 (quad core) @ 2.50GHz
    Server 2: 500GB hard drive, 8GB RAM, 2x 64-bit Intel Xeon E5420 (quad core) @ 2.50GHz
    I want to run a virtual machine that will host about 10 game servers, with about 16 active slots per server. It will be a mix and match from: Minecraft, Counter-Strike (1.6, Source, Global Offensive), Battlefield, and Team Fortress. I know the general consensus is that virtualization is a horrible idea if you plan on running game servers on it. The issue is, the discussions I've read do not clearly state whether they are talking about a virtual machine running inside an OS (i.e. VMware Player running on Windows with the game server in a VM) or a hypervisor such as Xen Cloud Platform. I am trying to get a definite answer on how feasible the above would be, and how much of a performance hit there might be if the VM running the game servers is on a hypervisor such as Xen Cloud Platform. My initial research led me to believe that there wouldn't be a performance hit, since that kind of virtualization is different from running a VM inside an OS.

    Read the article

  • Question for Vim search and peck typists

    - by mike
    I'm trying to write a Vim tutorial and I'd like to start by dismissing a few misconceptions, as well as giving some recommendations. I don't know whether I should dismiss touch typing as a misconception or include it as a recommended prerequisite. By the time I learned the editor, I had already been touch typing for a couple of years, so I have absolutely no idea what the experience of a two-fingered typist in Vim would be like. Are you a two-fingered Vim typist? What has your experience been like?
    EDIT: I'm not sure my question was clear enough. Maybe it's my fault, I don't know. I get mixed replies and other questions (why are you writing this? what does one have to do with the other?) instead of empirical information (I don't touch type and it's been (fine|hell)). Some programmers touch type; others search and peck. In the middle there's Vim, which requires a certain affinity with the keys to do various operations. I am a touch typist and I have no clue what my experience with the editor would have been like if I weren't; I can't honestly picture myself pecking out some of these combos. But like I said, I don't know what it is like. Before telling someone to start using Vim, I'd like to know whether I should dismiss touch typing as a misconceived requirement. So, I'll rephrase the question: have you felt that not being a touch typist has impeded your experience with Vim?

    Read the article

  • Windows XP Language, explorer.exe

    - by nmuntz
    Hi, my company gave me a laptop with Windows XP Professional in Spanish. I would like to translate it to English, since I really DISLIKE using localized versions of programs. I have read about Windows MUI packs; however, you MUST have Windows XP Pro in English in order to translate it into another language - you can't translate it TO English from another language. Since reinstalling the OS using a Windows XP CD in English is not an option (I don't have the license or the CD, and I don't have domain privileges to rejoin my computer to the domain), I was wondering which essential files contain the localized strings. I did some research, and apparently explorer.exe holds many of the Windows error messages and other strings. Will replacing my original explorer.exe with one from Windows XP in English be enough (and work) to give me a "basic" English version of Windows? I'm mainly interested in having error messages, the Start menu, and the Control Panel in English. Also, does the replacement have to come from the same Service Pack level as the one I'm running? Besides explorer.exe, are there any other essential files I should try to get and replace? Do you see any "dangers" in replacing these files with English versions? Thanks in advance for your help.

    Read the article

  • How to Refresh or Reset Windows 8 without the System Reserved partition?

    - by Karan
    The article "Refresh and reset your PC" describes exactly what happens during the Refresh and Reset operations in Windows 8:
    Refresh
    1. The PC boots into Windows RE.
    2. Windows RE scans the hard drive for your data, settings, and apps, and puts them aside (on the same drive).
    3. Windows RE installs a fresh copy of Windows.
    4. Windows RE restores the data, settings, and apps it has set aside into the newly installed copy of Windows.
    5. The PC restarts into the newly installed copy of Windows.
    Reset
    1. The PC boots into the Windows Recovery Environment (Windows RE).
    2. Windows RE erases and formats the hard drive partitions on which Windows and personal data reside.
    3. Windows RE installs a fresh copy of Windows.
    4. The PC restarts into the newly installed copy of Windows.
    It is my understanding that Windows RE (the Recovery Environment) is included as part of the System Reserved partition created by default on the first hard disk. The size of this partition has gone up to 350 MB from the 100 MB it used to be in Vista/Windows 7, no doubt as a result of adding these features. Now, we have already discussed how to skip the creation of this System Reserved partition during Setup; basically, the same techniques that worked with Windows 7 work with Windows 8 as well. What I want to know is: what exactly are the repercussions of not having the System Reserved partition in place? I assume Troubleshoot / Advanced options should still be available as before, but what about the Troubleshoot menu itself? Will the Refresh and Reset options disappear? Will they remain but be unavailable? Or will they perhaps throw an error when selected? Also, will it be possible to access and successfully execute these options if installation media is available? Is there anything else that might be affected?
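
    If it helps anyone test this, I'm planning to compare the two layouts with a quick Python wrapper around reagentc (run from an elevated prompt), which reports whether Windows RE is registered and where its image lives. With no System Reserved partition I would expect the location to move or the status to show Disabled, but that's exactly the part I want to confirm:

      import subprocess

      # reagentc /info reports Windows RE status and the partition holding its image.
      # Must be run from an elevated command prompt.
      result = subprocess.run(["reagentc", "/info"], capture_output=True, text=True)
      print(result.stdout)

      if "Disabled" in result.stdout:
          print("Windows RE is not registered on this layout.")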

    Read the article

  • How do I get "Back to My Mac" (using MobileMe) from Windows?

    - by benzado
    I have a MobileMe subscription and a Mac at home with "Back to My Mac" enabled. When I'm away from home, this service lets me use another Mac to connect to the Mac back home and access file sharing, screen sharing, etc. As far as I know, the service doesn't use any proprietary protocols, so in theory I should also be able to get "Back to My Mac" from a Windows PC. This MacWorld article explains how it works: basically, it uses Wide-Area Bonjour to give your Mac a domain name like hostname.username.members.mac.com. Remote computers can find your Mac using that address and then connect to it using a private VPN. The "Wide-Area Bonjour" part seems to make it a little more complicated than a plain dynamic DNS name, though. Note that I'm not interested in the methods described by LifeHacker, which don't use the MobileMe service at all; I don't want to use a totally different dynamic DNS service. I'd like to use the one I'm already paying for, or at least find out why that's not possible from Windows. Also, my primary problem is finding a network route back to my Mac - once I've got that, I know how to enable services so that Windows can talk to it.
    UPDATE: Based on some additional research, it appears that Apple only assigns IPv6 addresses to the hostname.username.members.mac.com names, so any solution will require enabling IPv6 support on Windows, if possible.
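
    As a first step from the Windows side, I'm checking whether the Back to My Mac hostname even resolves to an IPv6 address, with a few lines of Python (the hostname is a placeholder for my real member name):

      import socket

      HOST = "hostname.username.members.mac.com"  # placeholder for my real Back to My Mac name

      try:
          # Ask specifically for IPv6 (AAAA) results.
          results = socket.getaddrinfo(HOST, None, socket.AF_INET6)
          for *_, sockaddr in results:
              print(f"IPv6 address: {sockaddr[0]}")
      except socket.gaierror as err:
          print(f"No IPv6 answer for {HOST}: {err}")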

    Read the article

  • Security updates in CentOS - which way is it?

    - by user119720
    Recently something has been bothering me regarding my Linux CentOS box. A client has asked me to set up a CentOS machine in their environment to work as a server, and one of their requirements is that the setup be as secure as possible. Most of this is covered, except for security updates inside CentOS. So my questions are as follows:
    1. How do I apply the latest security patches and bug fixes in CentOS? While doing some research, I was told that we can get security updates by running yum install yum-security, but after installing this plugin there seems to be no output from it - it's as if the command isn't doing anything anymore.
    2. Can I apply the security patches through RPM packages? I couldn't find any site where I can download the security patches, enhancements, or bug fixes for CentOS, but I know CentOS releases these updates through their announcements here. There just isn't much documentation on how to apply these updates to my CentOS installation. For now the only way I know is to run yum update.
    I am hoping that someone can help me clarify these matters. Thanks.
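
    For what it's worth, here is the small Python wrapper I'm using to run the checks, assuming the yum security plugin (packaged as yum-plugin-security on my box, sometimes referred to as yum-security) is installed and provides the --security switch. If the configured repositories publish no updateinfo metadata, both commands print nothing, which may be exactly what I'm seeing:

      import subprocess

      # Both commands come from the yum security plugin; with no updateinfo
      # metadata in the repositories they will simply produce empty output.
      for cmd in (["yum", "--security", "check-update"],
                  ["yum", "list-security"]):
          print("$ " + " ".join(cmd))
          subprocess.call(cmd)
          print()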

    Read the article

  • Unable to access my IIS website using hostname. Works fine with localhost

    - by rajugaadu
    I am unable to access my IIS website, or even the default website, using the hostname. I did a bit of research and checked the 'Integrated Windows Authentication' option under Properties > Directory Security. From then on I could access the website using localhost, but when I use my hostname it asks for a domain username/password. Why is that? I don't understand why I am not able to access my website without enabling Integrated Windows Authentication. My goal is to access the website using both localhost and the hostname.
    More details on what I did: Nothing out of the ordinary. In IIS I went to Websites > Create new Website, created a working folder, and set a default page. I restarted the website, clicked Browse, and did not see my default page. I had to go to the Directory Security tab and select the "Integrated Windows Authentication" check box; then the default page appeared. In IE I can also see the default page when I use http://localhost, but when I use http://{hostname} it asks for a domain username and password. Why???

    Read the article
