Search Results


  • Installing Joomla on Windows Server 2008 with IIS 7.0

    - by Greg Zwaagstra
    Hi, I have spent the past while trying to install Joomla on a server running Windows Server 2008. I have successfully installed PHP (using Microsoft's web tool for installing PHP with IIS) and MySQL, and am now trying to run the browser-based installation. Everything comes up green; I fill in the appropriate information regarding the site name, MySQL details, etc., and no errors are thrown. However, when I get to the step that asks me to remove the installation directory, I am unable to do so: Windows states it is in use by another program (I cannot fathom how this is true). Also, no configuration.php file is created, so even if I managed to delete that folder, I have a feeling there would be problems. I was thinking there was some kind of permissions issue and have set permissions for IIS_IUSRS to read, write, and execute on the entire folder Joomla resides in, but this has not helped. Any help in this matter is greatly appreciated. ;) Greg

    EDIT: I decided to try installing Joomla manually by editing the configuration.php file myself. This has worked great, and now I am certain there is some kind of permissions issue, because everything that involves the MySQL database works (creating an article, editing menu items, etc.), but anything that involves writing to the Joomla installation's directory does not (installing plugins, editing configuration settings via Joomla's Global Configuration menu, etc.). I have granted IIS_IUSRS every permission except Full Control (reading the Joomla! forums suggests this should be enough for everything to work). This is confusing to me and I am quite stuck on this problem.

    EDIT 2: The bizarre thing is that in the System Info under Directory Permissions, everything turns up as Writable, but whenever I try to actually use Joomla to, for example, edit the configuration.php file through the interface, it says it is unable to edit the file.
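    If it is file-system ACLs, one way to check from an elevated command prompt is to grant the IIS identities Modify rights over the whole tree; a minimal sketch, assuming Joomla lives in C:\inetpub\wwwroot\joomla (substitute the real path):

        :: Modify (M) rights, inherited by subfolders (CI) and files (OI), recursively (/T)
        icacls "C:\inetpub\wwwroot\joomla" /grant "IIS_IUSRS:(OI)(CI)M" /T
        icacls "C:\inetpub\wwwroot\joomla" /grant "IUSR:(OI)(CI)M" /T

    Running icacls on the folder with no options lists the current ACL, which shows whether the grant actually took effect.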

  • tcp msl timeout implementation in linux

    - by iamrohitbanga
    The following is given in the book TCP/IP Illustrated by Stevens:

        Quiet Time Concept. The 2MSL wait provides protection against delayed segments from an earlier incarnation of a connection being interpreted as part of a new connection that uses the same local and foreign IP addresses and port numbers. But this works only if a host with connections in the 2MSL wait does not crash. What if a host with ports in the 2MSL wait crashes, reboots within MSL seconds, and immediately establishes new connections using the same local and foreign IP addresses and port numbers corresponding to the local ports that were in the 2MSL wait before the crash? In this scenario, delayed segments from the connections that existed before the crash can be misinterpreted as belonging to the new connections created after the reboot. This can happen regardless of how the initial sequence number is chosen after the reboot. To protect against this scenario, RFC 793 states that TCP should not create any connections for MSL seconds after rebooting. This is called the quiet time. Few implementations abide by this, since most hosts take longer than MSL seconds to reboot after a crash.

    Do operating systems still wait out this quiet time after a reboot before initiating TCP connections? Boot times are much shorter these days, and although ports and sequence numbers are randomized, is this wait actually implemented in Linux? Also, RFC 793 says the wait is not required if history is maintained. Does Linux maintain any history of used sequence numbers for connections to handle this case?
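    For what it's worth, the relevant pieces on a current Linux box can be inspected directly; a small sketch (TIME_WAIT itself is the compile-time constant TCP_TIMEWAIT_LEN, 60 seconds, in include/net/tcp.h):

        sysctl net.ipv4.tcp_timestamps   # RFC 1323 timestamps; PAWS uses these to reject old segments
        sysctl net.ipv4.tcp_tw_reuse     # allow reusing TIME_WAIT sockets for new outgoing connections
        cat /proc/sys/net/ipv4/tcp_fin_timeout   # despite the name, governs FIN-WAIT-2, not TIME_WAIT

    Timestamps are the closest thing to the "history" RFC 793 alludes to: with them enabled on both ends, delayed segments from an earlier incarnation fail the PAWS check instead of being accepted.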

  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] Hello everyone. I have a small issue here. We are trying to get our Exchange accounts (students only) ported over from an Exchange Server 2003 box to Microsoft's cloud service known as Live@EDU. The problem is that to do this we need to install two pieces of software: 1) OLSync, and 2) Microsoft Identity Lifecycle Manager. "Download the Galsync.msi here" — the "here" link takes you to a page that requires an admin login for Live@EDU. That part works. However, once logged in it redirects to https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407, which states:

        Page Not Found. The content that you requested cannot be found or you do not have permission to view it. If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c

    [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) Server 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! Likewise, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live@EDU servers, any general guidance would be helpful! Thanks in advance.

  • What is the IPv6 equivalent to IPv4 RFC1918 addresses?

    - by Kumba
    Having a hard time wrapping my head around IPv6 here. A lot of the lingo seems targeted at enterprise-level IPv6 deployments, discussing link-local, site-local, global unicast, scopes, etc. There is not a lot of solid information on really small networks, like home networks. I want to check my thinking and make sure I am getting the correct translations from IPv4-speak to IPv6-speak.

    The first question is: what's the equivalent of RFC 1918 for IPv6? Initial searches suggested there was no equivalent. Then I stumbled upon Unique Local Addresses (RFC 4193), which states that ULAs get the prefix fc00::/7, followed by a 40-bit random number in the routing prefix. This random number is to "prevent collisions when two IPv6 networks are interconnected" -- again, another reference to an enterprise-level function. If I have a small home LAN numbered 192.168.4.0/24, what's my equivalent in IPv6's ULA scope? Assuming I will never, ever tie that IPv6 network into the real internet (a router will NAT & firewall it), can I ignore the RFC to an extent and go with fc00::4:0/120? It also seems that addresses in fc00::/7 are meant to be globally routable. Does this mean I'll need extra protections so my router does not automatically start advertising these private IPv6 addresses to the world?

    Second question: what's this link-local thing? Reading suggests a default-assigned address in the fe80::/10 range whose last 64 bits are derived from the interface's MAC address. It seems to be required, too, but I'm annoyed by the constant discussion of it in relation to enterprise networks.

    Third question: what is the scope ID for? It seems to be yet another term tossed around in relation to enterprise networks, especially when interconnecting them, with almost no explanation at the smaller home-network level. Can I see a scope ID and CIDR notation used together, i.e. fc00::4:0/120%6, or are scope IDs only supposed to be applied to a single /128 IPv6 address?
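    For reference on the numbering itself: RFC 4193 places locally assigned ULAs under fd00::/8 (fc00::/7 with the L bit set to 1), with a random 40-bit global ID filling out the /48. A quick way to roll a compliant prefix, as a sketch:

        # Five random bytes (40 bits) for the global ID:
        openssl rand -hex 5
        # If this printed e.g. 4f28ab113c, the resulting prefix is
        # fd4f:28ab:113c::/48, and a subnet such as fd4f:28ab:113c:4::/64
        # could stand in for 192.168.4.0/24.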

  • Postfix won't pipe to PHP file through aliases file

    - by jfreak53
    I'm trying to pipe from Postfix to a command. According to the Postfix logs it worked, but when I check the command's output it didn't. This is a fresh Postfix install. This is my alias file:

        # See man 5 aliases for format
        postmaster: root
        support: "| /usr/bin/php -q /var/www/pipe/pipe.php"

    I run sendmail [email protected], type the message, then type . on a separate line, and it goes. I check the Postfix log /var/log/mail.log and this is what it states:

        Nov 2 15:32:33 server3 postfix/local[13284]: 42C429E0B5: to=<[email protected]>, relay=local, delay=156, delays=156/0.01/0/0.05, dsn=2.0.0, status=sent (delivered to command: /usr/bin/php -q /var/www/pipe/pipe.php)

    So according to that it worked, but it doesn't. If I run echo 'text' | /usr/bin/php -q /var/www/pipe/pipe.php it does work just fine. Any ideas what I did wrong? I know piping is working; I originally checked it by running the alias WITHOUT the quotes, i.e. just

        support: | /usr/bin/php -q /var/www/pipe/pipe.php

    What that did was append my email, headers and all, to the file pipe.php. So I know Postfix was piping it, but when I put in the quotes it says it's delivered, yet nothing reaches my script.
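    One detail worth ruling out: commands in a root-owned /etc/aliases run as Postfix's default_privs user (nobody by default), not as root, so a sketch to reproduce the pipe under that account:

        # run the pipe as 'nobody'; permission failures inside pipe.php
        # often appear only under this identity
        echo test | su -s /bin/sh nobody -c '/usr/bin/php -q /var/www/pipe/pipe.php'

    If pipe.php writes a file, that file (and its directory) must be writable by that account.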

  • RHEL 5.2 installing on ProLiant BL460c - hangs at 'now booting the kernel'

    - by Dr Rocket Mr Socket
    As the title states, I have a problem. The server and the installation disc are also on the other side of the world from me, so... So far I have tried to start the install with the parameters:

        linux text noapic noacpi no=apic no=acpi

    which results in the same hang. I have also disabled a PCI ethernet adapter, but I am uneasy about disabling the onboard ethernet adapter, as I do not know whether iLO uses it. Anyone have any advice? Much appreciated.

    EDIT: full output after trying to begin the installation:

        boot: linux text
        Loading initrd.img..................
        Loading vmlinuz.......
        Uncompressing Linux...done.
        Now booting the kernel

    It stays on this for hours.

    EDIT 2: adding the mem=40960M parameter (the server has 40 GB of RAM) allows it to proceed, but the following output appears directly after "Now booting the kernel":

        Memory: sized by int13 0e801h
        initrd extends beyond end of memory (0x00ef2090 > 0x00000000)
        disabling initrd
        Console: 16 point font, 400 scans
        Console: colour VGA+ 80x25, 1 virtual console (max 63)
        pcibios_init : BIOS32 Service Directory Structure at 0x000ffee0
        pcibios_init : BIOS32 Service Directory entry at 0xf0000
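    As an aside, no=apic and no=acpi are not recognized kernel parameters; the conventional spellings are shown in this sketch of the boot line, together with the mem= workaround from EDIT 2:

        boot: linux text noapic nolapic acpi=off mem=40960M

    Note acpi=off disables ACPI wholesale, which on some blades also hides devices, so treat it as a diagnostic step rather than a permanent fix.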

  • How to get automatic upgrades to work on Ubuntu Server?

    - by J. Pablo Fernández
    I followed the documentation for enabling automatic upgrades on Ubuntu servers, but it's not really updating anything at all. My /etc/apt/apt.conf.d/50unattended-upgrades looks almost like the default:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
                "Ubuntu karmic-security";
                "Ubuntu karmic-updates";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //      "vim";
        //      "libc6";
        //      "libc6-dev";
        //      "libc6-i686";
        };

        // Send email to this address for problems or packages upgrades
        // If empty or unset then no email is sent, make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed or anything that provides /usr/bin/mail.
        Unattended-Upgrade::Mail "[email protected]";

        // Automatically reboot *WITHOUT CONFIRMATION* if the file
        // /var/run/reboot-required is found after the upgrade
        //Unattended-Upgrade::Automatic-Reboot "false";

    The directory /var/log/unattended-upgrades/ is empty. Running /etc/init.d/unattended-upgrades start is not very illuminating:

        root@mozart:~# /etc/init.d/unattended-upgrades start
        Checking for running unattended-upgrades:
        root@mozart:~#

    Something seems to be broken, but I'm not sure what. I have pending updates and they are not being applied:

        root@mozart:~# aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded: linux-libc-dev
        1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 0B/743kB of archives. After unpacking 4096B will be used.
        Do you want to continue? [Y/n/?]

    On all the servers I have, unattended upgrades seem to have been disabled:

        root@mozart:~# apt-config shell UnattendedUpgradeInterval APT::Periodic::Unattended-Upgrade
        root@mozart:~#

    Any ideas what I am missing?
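    That empty apt-config output is the usual smoking gun: the unattended-upgrade script only runs when the APT::Periodic keys are set. A minimal sketch of /etc/apt/apt.conf.d/20auto-upgrades (the file dpkg-reconfigure unattended-upgrades would create):

        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    With those in place, the daily apt cron job invokes unattended-upgrade and /var/log/unattended-upgrades/ should start filling up.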

  • Does a mini PCIe SSD fit into a Acer Aspire One?

    - by Narcolapser
    Question: What, if any, mini PCIe SSDs fit into the mini PCIe slot of the Acer Aspire One AOD250?

    Info: I have an Aspire One and I've been considering loading it with an SSD. The mini PCIe drives fascinate me, so I want to try that approach. They also tend to be cheaper and not much slower (at least not on read time, which matters more for a netbook). But I've heard that sometimes computers don't support certain mini PCIe cards, and I was wondering if anyone knew about the Aspire One. I tried asking Acer tech support, but they didn't know jack and spent the whole time informing me that I would have to support my Ubuntu install on my own, which I already was. Anyway, rant aside, I'm looking at this drive: http://www.newegg.com/Product/Product.aspx?Item=N82E16820183252 It states it is exclusively for the Eee PC. Does that mean it was designed for the Eee PC but will work in my netbook, or is something going to go wrong? (Right now my main concern is it physically not fitting.) Any information would be appreciated. o7

  • Printing on Windows 8 with PDF viewer (Adobe Reader) from network

    - by Bongo
    I have a problem with Adobe Reader 8, and the problem seems to be equally bad with other PDF viewers. Here is the configuration: my PDF viewer is located on network drive Z:, which maps to the network address \\dgs-main\progs. I start Adobe Reader from there — \\dgs-main\progs\Adobe\Reader 8.0\Reader\AcroRd32.exe — and open the PDF from here: C:\Users\ServiceDesk\AppData\Local\Temp\GeneratedPDF.pdf

    The problem is as follows: if I open the PDF with a locally installed PDF viewer, everything works fine and I can print the document. If I open the PDF with the network-installed PDF viewer, it opens, but printing is impossible. The error message states: "Unable to start print job. Is printer available?" As mentioned above, it works with a local PDF viewer. In both cases I use the same printer. The printer is a network printer, but it fails even with a local printer. The error occurs only on Windows 8 machines; on Windows 7 it works fine. I hope somebody can tell me what the problem is. Thanks in advance and have a fine day.

  • My linux server "Number of processes created" and "Context switches" are growing incredibly fast

    - by Jorge Fuentes González
    I have a strange behaviour on my server :-/. It is an OpenVZ VPS (I think it is OpenVZ, because /proc/user_beancounters exists and df -h returns a /dev/simfs drive; also ifconfig returns venet0). When I do cat /proc/stat, I can see that each second about 50-100 processes are created and about 800k-1200k context switches happen! All of that with the server completely idle: no traffic, no programs running. top shows 0 load average and 100% idle CPU.

    I've stopped all non-essential services (httpd, mysqld, sendmail, nagios, named...) and the problem still happens. I ran ps -ALf each second too and I don't see any changes; only a new ps process is created each time, and its PID is just the previous one + 1, so new processes are not really appearing. I therefore thought that the process count growing in /proc/stat must be threads (yes, it seems the processes counter in /proc/stat includes thread creation too, as this states: http://webcache.googleusercontent.com/search?q=cache:8NLgzKEzHQQJ:www.linuxhowtos.org/System/procstat.htm&hl=es&tbo=d&gl=es&strip=1).

    I've changed to the /proc dir and done cat [PID]/status for every PID listed with ls (including kernel ones), and in no process are voluntary_ctxt_switches or nonvoluntary_ctxt_switches growing at anywhere near the rate /proc/stat shows (just a few tens per second); the Threads counts stay the same as well. I've run strace -p PID on every process too, so I could see whether anything was creating threads or the like, but the only process with any movement is ssh, and that movement is just the read/write traffic for my own terminal. After that, I ran vmstat -s and saw that "forks" grows at the same speed as the processes counter in /proc/stat. As http://linux.die.net/man/2/fork says, each fork() creates a new PID, but my server's PIDs are not growing!

    The last thing I can think of is that the counters /proc/stat and vmstat -s show are shared with all the other VPSes hosted on the same machine, but I don't know if that is correct... If someone can throw some light on this I would be really grateful.
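    To attribute the switches to individual tasks, threads included, pidstat from the sysstat package is handy; a sketch:

        # -w: report voluntary/involuntary context switches per task
        # -t: include threads; 1: refresh every second
        pidstat -wt 1

    If nothing in that output comes close to the rate /proc/stat reports, that supports the suspicion that the counters reflect the whole hardware node rather than just this container.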

  • Implications of disabling the AMD Phenom's TLB patch?

    - by DMA57361
    I'm currently running an AMD Phenom X4 9600 processor (yeah, it's aging a bit, but other recent problems mean it's not getting upgraded in the immediate future), which happens to be one of the chips that suffer from the TLB errata. I recall that the first time I played with disabling the TLB patch (probably over a year ago, while playing a game that had a performance problem so severe it was almost unplayable unless the patch was disabled) I had at least one BSOD, but I can't remember them being particularly frequent. Still, because disabling the patch decreased stability, I re-enabled it once I was done with the game.

    Now, after some recent hardware changes, I was experiencing much worse performance than expected from the new hardware under some circumstances, and the TLB patch jumped to mind — after testing, I found that disabling it would improve performance to expected levels. I'm now wondering whether it's worthwhile always having the patch disabled to avoid potential slowdowns cropping up in the future, or whether that is too dangerous. Everything I read states that the bug, when not patched, can cause a system lock-up in "rare circumstances". So, with the TLB patch disabled:

    How frequently should system lock-ups be expected? Do we know what circumstances trigger the lock-ups? (Don't worry too much about being highly technical; essentially I wonder whether the chip is more vulnerable under heavy load, heavy memory usage, etc.) Are there any secondary problems I should be aware of? (Don't include things that are characteristic of all lock-ups, please.)

  • illegitimate traffic from user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729)

    - by user114293
    Since the beginning of the year, I have been getting a lot of traffic with the user agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10 (.NET CLR 3.5.30729). My access logs show 40%-60% of traffic coming from that user agent. That's strange, because the user agent states a Firefox 3.0.10 browser — is anybody using that browser in 2012? Definitely not 40%-60% of visitors on a normal website. Also, the logs show that this user agent only requests the HTML document, and none of the referenced assets like images, CSS, or JS files. I checked the IPs of those requests (with that UA); they come from all over the world. I noticed that those IPs sometimes present a mobile user agent instead. So my suspicion is a mobile app that is doing a lot of spider-like requests — but if that were the case, other web sites should have the same problem. That's actually my question: does anybody experience the same or similar problems?
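    To quantify it, a quick log sketch, assuming an Apache combined-format log at /var/log/apache2/access.log (adjust the path):

        # count hits from the suspect UA and list the top source IPs
        grep -F 'Gecko/2009042316 Firefox/3.0.10' /var/log/apache2/access.log \
            | awk '{print $1}' | sort | uniq -c | sort -rn | head

    Comparing the top IPs' other requests (and their reverse DNS) usually tells a bot apart from real browsers quickly.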

  • Network latency and speed of light

    - by James
    This was kind of covered by the following: Is minimum latency fixed by the speed of light?, but I would like to follow up on it a bit. The scenario is as follows: we have two opposing sites, one on the West Coast of the US and one in Ireland. The customer is in central Europe and has requested a latency test. Ireland gives responses of ~65-70ms. However, the West Coast guys claim to be faster, with a response of 60ms. Now, a quick check says that light in fiber would take about 42ms to make the trip to the States and 8.5ms to Ireland. And obviously that is a single hop and does not include routers, switches, firewalls, protocol overhead, etc. Would I be right to call BS on their figures? As a final note, I tested a ping to a Google IP address that was allegedly on the West Coast, from a site that covered a similar distance, and was amazed to get a response time of 20ms — suggesting ICMP packets that travel at twice the speed of light. So: a) what am I missing, and b) am I right to suspect shenanigans?

    UPDATE: Guys, thanks so far for your help; I have been reading various previous questions on this. About 5 years ago I had an issue where the hop from the UK to Ireland added 10ms of latency no matter what we did. In the end I moved the servers. So imagine my surprise when I have guys claiming they are 5ms faster with a transatlantic trip. Again, should I call BS? Oh, and assume both sites are normal mortals that don't have access to Google's magical routing, warp drives or flux capacitors. :)
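    For reference, the back-of-the-envelope numbers (distances are rough estimates): light in fiber travels at roughly two thirds of c, about 200,000 km/s, and a ping is a round trip:

        West Coast US -> central Europe: ~8,500 km one way
            RTT floor = 2 * 8,500 / 200,000 s = 85 ms
        Ireland -> central Europe: ~1,700 km one way
            RTT floor = 2 * 1,700 / 200,000 s = 17 ms

    So a 60 ms round trip from the West Coast sits below the physical floor, while 65-70 ms from Ireland leaves sensible headroom; a 20 ms ping to a nominally West Coast Google address is most simply explained by anycast answering from a much nearer node.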

  • How is it ensured that magnetic or electric fields from devices like transformers or fans close nearby do not affect hard drives?

    - by matnagel
    Fans and transformers inside the server case create magnetic and electric fields. Electric fields can be easily shielded, but what about magnetic fields? They can only be shielded with high-cost materials like mu-metal: http://en.wikipedia.org/wiki/Mu-metal If a hard drive is installed too close to an intense transformer field, how is the magnetically stored information on the ferromagnetic surfaces of the disk kept safe? Even if drives are shielded, where are the limits? Is there some technical investigation or recommendation from manufacturers about this? (I have never heard of anything and never had a problem, but I am interested in facts. Those are much preferred over what you believe or a habit you developed; please try to give some solid information.) I have built and repaired many servers, and sometimes I put the hard drive on top of the power supply.

    Edit: This question is not about frequencies that could affect the drive via its power or data connectors; those are electronically decoupled, and that's another question.

    Edit 2: The Wikipedia page states that the motor inside the drive is shielded with mu-metal, so manufacturers obviously have to take care of this. This question is about such influences from outside the drive.

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented Expires headers; however, for several days I have been stumped by one thing. This article, http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533, states:

        Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently.

    Other sites say the same in my reading. But this doesn't seem to be true: I have updated an image, keeping the same name, several times, and each time I update and refresh my browser, the new image (with the same name) displays. My understanding from the article is that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has Expires headers set on it:

    Request headers:

        Host domain.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://domain.com/index.php
        Cookie __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight activate
        If-Modified-Since Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control max-age=0

    Response headers:

        Date Mon, 26 Mar 2012 22:06:50 GMT
        Server Apache/2.2.3 (CentOS)
        Connection close
        Expires Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control max-age=2592000
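    One way to watch what is actually happening is to fetch the image with curl; a sketch, with domain.com and the image path standing in for the real ones:

        # inspect the caching headers the server actually sends
        curl -sI http://domain.com/images/example.png | grep -iE 'expires|cache-control'

    The request headers above also hint at the answer: a browser reload (F5) sends Cache-Control: max-age=0 plus If-Modified-Since, i.e. it revalidates instead of trusting the cached copy, which is why the updated image appears; plain navigation within the expiry window would serve it from cache without asking the server.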

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$Extend\$UsnJrnl:$J:$DATA — here is a picture, finally. The large strip in the center of the top band is the largest chunk; in the other bands, the grey areas are the various clusters belonging to it. On the right, the big long grey line is $LogFile (not paging), and it is 63 MB. Paging (500 MB) is the dark cyan chunk, next to the yellow MFT reserve in the inner rings. The disk was defragmented so these could be seen more easily. Not all clusters of this file type are tagged, but the idea is there. The disk uses 4k clusters and is now about 12 GB in size. Each little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile — also interesting, given that they keep telling us it doesn't take much disk space.

    Wikipedia suggests that in previous NT systems USN journaling would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk under $Jrnl$-type cluster tags, even if it is not actual USN journaling? Is it possible in a Windows 7 system to completely disable the journaling, and what would be the ramifications of doing so?

    On a Windows XP NTFS system I do not recall seeing this quantity of disk clusters used by $Jrnl$ names, so I do not recall this being necessary in this quantity for the NTFS file system itself. I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is, is fine, if that information helps track down which parts of the system create and use it. The Change Journals documentation states: "Change journals are also needed to recover file system indexing". Hmm, that might explain some of them, or why it was left on the disk — a crash while background indexing?
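    For what it's worth, the journal can be queried, and deleted, with fsutil from an elevated prompt; a sketch:

        :: show the journal's ID, allocation delta and maximum size
        fsutil usn queryjournal C:
        :: remove the journal (indexing, backup and sync software may simply recreate it)
        fsutil usn deletejournal /d C:

    Deleting it forces services that rely on it, such as Windows Search, to fall back to full rescans, which is the main ramification to weigh.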

  • Why does my DSL modem now need a reboot each time for my laptop to connect?

    - by msorens
    I have a rather peculiar home networking issue. For some time my home network was purring along fine: I could turn on either of my laptops and they would quickly find and connect to my DSL modem (and thence the internet). Several days ago I unplugged my DSL modem for the first time in months. Upon turning it back on and waiting for the boot to finish, the lights on the panel indicated the DSL modem was fully operational, just as before. But that's not what happened — not at all.

    Now when I turn on my Win7 laptop, the network icon in my system tray shows a small starburst; hovering over it, the tooltip states "Not connected; connections are available". Clicking it lists several nearby networks, including my own network showing a strong signal. If I try to connect, it attempts a connection but then I get a dialog stating "Windows was unable to connect to MyNet." Turning wireless off and back on on the laptop makes no difference. Running the network troubleshooter (which includes doing a repair on the network connection) makes no difference. The only remedy is to reboot the DSL modem (i.e. unplug it, wait a few seconds, then plug it back in). As soon as it comes online, my laptop finds it and connects properly.

    To add one more twist to the story: this happened to me once before, several months ago. After a couple of weeks the situation resolved itself(!) — everything started working properly again, due to nothing I did. Final note: this problem only affects the wireless connection to the DSL modem. My desktop computer, connected to the DSL modem by a wired line, connects fine when I turn it on. Any thoughts on why this is happening or how to fix it?

  • What is the peak theoretical WiFi G user density? [closed]

    - by Bigbio2002
    I've seen a few WiFi capacity-planning questions, and this one is related, but hopefully different enough not to be closed. Also, this is specifically about 802.11g, but a similar question could be asked for N.

    In order to squeeze more WiFi users into a space, the transmit power on the APs needs to be reduced and the APs placed closer together. My question is: how far can you practically take this before the network becomes unusable? At some point the transmit power will be so weak that nobody can actually hold a connection, or clients will constantly roam between APs spaced a few feet apart as they walk around. There are also only 3 non-overlapping channels to use, which is a factor to consider. After determining the peak AP density, multiply by users-per-AP, which should be easier to find out. After factoring all of this in and running some back-of-the-envelope calculations (see the sketch below), I'd like to get a figure of "XX users per 10ft^2" or similar. This can be considered the physical limit of WiFi, and would keep people from asking about getting 3,000 people in a ballroom onto WiFi. Can anyone with WiFi experience chime in, or better yet, provide some calculations for a more accurate figure?

    Assumptions: Let's assume an ideal environment with no reflection (think of a big, square, open room, with the APs spaced out on a plane), APs placed on the ceiling so humans won't absorb the waves, and the only interference coming from the APs themselves and the devices. As for which devices specifically, that's irrelevant for the first part of the question (AP density, where only channel and transmit power should matter).

    User experience: Wikipedia states that Wireless G has about 22 Mbps maximum effective throughput, or about 2.75 MB/s. For the purposes of this question, anything below 100 KB/s per user can be deemed a poor user experience. As for roaming, I'll assume the user is standing in the same place, so hopefully that will be a non-issue.
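    Working the stated numbers gives a first-order ceiling: 22 Mbps effective is roughly 2,750 KB/s shared per AP, and a cell can stack at most 3 co-located APs on the non-overlapping channels:

        users per AP   = 2,750 KB/s / 100 KB/s  ~ 27
        users per cell = 27 * 3 channels        ~ 82

    Beyond that, any extra AP in the same cell only adds co-channel interference, so roughly 80 users per coverage cell is the limit before cells (and transmit power) must shrink further.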

  • Which events specifically cause Windows 2008 to mark a SAN volume offline?

    - by Jeremy
    I am searching for specific criteria/events that will cause Windows 2008 to mark a SAN volume as offline in disk management, even though it is connected to that SAN volume via FC or iSCSI. Microsoft states that "A dynamic disk may become Offline if it is corrupted or intermittently unavailable. A dynamic disk may also become Offline if you attempt to import a foreign (dynamic) disk and the import fails. An error icon appears on the Offline disk. Only dynamic disks display the Missing or Offline status." I am specifically wondering if, on the SAN, changing the path to the disk (such as the disk being presented to the host via a different iSCSI target IQN or a different LUN #) would cause a volume to be offlined in disk management. Thanks! Edit: I have already found two reasons why a disk might be set offline, disk signature collisions and the SAN disk policy. Bounty would be awarded to someone who can find further documented reasons related to changes in the volume's path. Disk signature collisions: http://blogs.technet.com/b/markrussinovich/archive/2011/11/08/3463572.aspx SAN disk policy: http://jeffwouters.nl/index.php/2011/06/disk-offline-with-error-the-disk-is-offline-because-of-a-policy-set-by-an-administrator/
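    For the SAN disk policy specifically, it can be inspected and changed from an elevated diskpart session; a quick sketch (the value shown is one possible state; Offline Shared is the common default on Server 2008 Enterprise):

        C:\> diskpart
        DISKPART> san
        SAN Policy  : Offline Shared
        DISKPART> san policy=OnlineAll

    With OnlineAll, newly discovered disks are brought online automatically, which also affects whether a re-presented LUN shows up as Offline.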

  • Is VGA port hot-pluggable?

    - by Martin Bøgelund
    In meetings, I often see people detaching the VGA connector from one running laptop and connecting it to another while the projector is still on. Is this 100% risk free, and OK by design of the VGA standard? If there is a risk involved in hot-plugging VGA, can it be removed by turning off or suspending either laptop, display, or both? I see this being done all the time without causing disaster, so clearly I'm not interested in answers stating "we do it all the time, so it should be OK!". I want to know if there is a risk — real or theoretical — that something breaks when doing this.

    EDIT: I did an internet search on the topic and never found a clear statement as to why it is safe or unsafe to hot-swap VGA devices. The typical find is a forum question asking basically the same thing as mine, followed by these types of statements:

    - Yes it's hot swappable! I do it all the time!
    - It involves some kind of risk, so don't do it!
    - You're some kind of moron if you think there's a risk, so just do it!

    But no explanation as to why it is safe or not... Joe Taylor's answer below contains a link to a forum post with answers that basically give me the same statements as above, but again no explanation why. So I looked for an actual manual for a projector and found the "Lenovo C500 Projector User's Guide". It states on page 3-1:

        Connecting devices. Computers and video devices can be connected to the projector at the same time. Check the user's manual of the connecting device to confirm that it has the appropriate output connector. Attention: As a safety precaution, disconnect all power to the projector and devices before making connections.

    But again, no real explanation.

  • System State Backup Retention Policies

    - by isoscelestriangle
    I was wondering if there was a general consensus on how long to keep system state backups. I am trying to reevaluate our current backup process, and trying to get a good handle on our current storage requirements. Our current setup involves tapes and sending backups offsite with Barracuda Networks. We have been doing our system state backups with Barracuda now, which does full backups daily, leaving our storage requirements growing quite quickly. My boss is a little too gung-ho with backups and wants our system states saved for quite a while. We currently have 5 days of nightlies, 5 weeklies, 3 monthlies, and so on. I think this is quite overkill for system state backups. My boss wants the ability to go back in time to find when an issue appeared, but I don't think that is practical. Many things change in the course of several months. I also think it would be hard not to notice problems with our DCs and other servers for several months. I would think that a previous week's snapshot and the current week's dailies would suffice. Any advice or reading you can point me to? Thanks!

  • Redirect 301 fails with a path as destination

    - by Martijn Heemels
    I'm using a large number of Redirect 301s which are suddenly failing on a new webserver. We're in pre-production tests on the new webserver, prior to migrating the sites, but some sites are failing with 500 Internal Server Error. The content, both databases and files, is mirrored from the old to the new server so we can test whether all sites work properly. I traced this problem to mod_alias' Redirect statement, which is used from .htaccess to redirect visitors and search engines from old content to new pages. Apparently the Apache server requires the destination to be a full URL, including protocol and hostname:

        Redirect 301 /directory/ /target/                          # Not valid
        Redirect 301 /main.html /                                  # Not valid
        Redirect 301 /directory/ http://www.example.com/target/    # Valid
        Redirect 301 /main.html http://www.example.com/            # Valid

    This contradicts the Apache 2.2 documentation, which states:

        The new URL should be an absolute URL beginning with a scheme and hostname, but a URL-path beginning with a slash may also be used, in which case the scheme and hostname of the current server will be added.

    Of course I verified that we're using Apache 2.2 on both the old and the new server. The old server is a Gentoo box with Apache 2.2.11, while the new one is a RHEL 5 box with Apache 2.2.3. The workaround would be to change all paths to full URLs, or to convert the statements to mod_rewrite rules, but I'd prefer the documented behaviour. What are your experiences?
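    For what it's worth, if the relative form keeps failing on the RHEL build, mod_rewrite accepts path destinations; a sketch of the equivalent rules in .htaccess (note that RewriteRule patterns in .htaccess have no leading slash):

        RewriteEngine On
        RewriteRule ^directory/(.*)$ /target/$1 [R=301,L]
        RewriteRule ^main\.html$ / [R=301,L]

    Unlike Redirect, which matches path prefixes, RewriteRule is a regular expression, hence the explicit capture to preserve the rest of the path.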

  • Preventing h/w RAID cards from dropping slow JBOD disks

    - by Kevin
    I'm considering buying a used SAS h/w RAID card for externally attaching HDDs to an HP ProLiant I'm setting up. However, I only require RAID functionality on some of the drives. Theoretically it should be simple to JBOD the other drives, but some of them are inexpensive SATA disks that probably cannot have TLER enabled. I'd like to know, prior to actually ordering a RAID card, whether RAID cards typically still enforce dropping of disks that do not respond within a few seconds even when the disk is in a JBOD, and whether there is any way to disable this behaviour. Ideally it would be nice to be able to select certain SAS ports to be pass-through, bypassing the RAID engine entirely and just acting as an HBA for those ports. I know I could buy a separate SAS HBA, but that seems like a waste of money and is also impractical, as it's a 1U server and space is extremely limited. My question, then, is whether the functionality I'm looking for (pass-through on certain ports, or at least JBOD drives not getting dropped due to slow responses) is typical of proper h/w RAID cards such as the PERC 5/E etc. I've browsed through the latter's manual but unfortunately, as with most user manuals, it states the obvious and doesn't state the unobvious. Thanks for any info, Kevin

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well.

    The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well, due to NTFS permissions etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me.

    Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that is in reality a folder on a NAS, but so far I haven't found anything that states how and whether this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

  • How do I uncompress vmlinuz to vmlinux?

    - by Lord Loh.
    I have already tried uncompress, gzip, and all the other solutions that come up as Google results, and they have not worked for me. To get just the image, search for the gzip signature, 1f 8b 08 00:

        > od -A d -t x1 vmlinuz | grep '1f 8b 08 00'
        0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45

    So the image begins at 24576 + 8 => 24584. Then copy from that point and decompress:

        > dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux
        1450414+0 records in
        1450414+0 records out
        1450414 bytes (1.5 MB) copied, 6.78127 s, 214 kB/s

    I got these instructions verbatim from a forum post: http://www.codeguru.com/forum/showthread.php?t=415186 This process does not work for me; it ends up giving "file not found" errors for 0024576 and all the subsequent numbers. How do I proceed with extracting vmlinux from vmlinuz? Thank you.

    EDITED: This is a reverse-engineering question. I have no access to the distro to install any RPM or recompile. I start with nothing but vmlinuz.
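    As a hedge against hand-computed offsets, newer kernel source trees ship scripts/extract-vmlinux, which automates this signature hunt for gzip, bzip2, lzma and friends; a sketch using the copy in the mainline tree:

        wget https://raw.githubusercontent.com/torvalds/linux/master/scripts/extract-vmlinux
        sh extract-vmlinux vmlinuz > vmlinux

    It needs binutils and the matching decompressors installed, but no kernel sources or RPMs on the target distro.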
