Search Results

Search found 2074 results on 83 pages for 'stick'.

  • Keyboards for kiosk/outdoor/abusive environments?

    - by Justin Scott
    We have a bunch of kiosks deployed into, let's just say... abusive environments. The enclosures we had built are tough as nails, and the HP thin client computers are working great. The keyboards that were purchased for the project have been nothing but problems. They're a generic brand direct from a Chinese manufacturer. They're stainless steel with keys mounted from the inside and a trackball, but they've been deployed for only a month and nearly 20% of them are already out of service due to keys sticking, keys not working, trackball problems, water damage, and a variety of other issues. Are there any kiosk keyboards that can take a beating without breaking so easily? Ideally they should be tamper-proof (keys can't be removed), waterproof, lettering should be engraved into the keys, trackball, option for a single mouse button would be nice, and some protection to keep debris out of the keys so they don't stick (sticky cleaners, food debris, etc.). Does such a beast exist? Everything we've looked at is susceptible to easy damage. We need the M1 Abrams Tank of keyboards. Any suggestions?

    Read the article

  • Dell PowerEdge 1600SC Server won't boot from Fedora 12 DVD because of CD-only drive.

    - by studiohack23
    Dell PowerEdge 1600SC Server won't boot from the Fedora 12 DVD in the drive because it only supports CDs, as I found out after the fact. I'm a complete novice at servers, so if you need more detail, let me know, and I'll try to provide it. This server is around 4-6 years old. It has "PXE" boot, but I'm not sure what that means. This particular server has 3 RAID hard drives. As far as I know, they have all been wiped. I looked up the service tag on Dell, and it has: Compact Disk Drive, 650M, I, Internal, Half Height, 48X, Black, Hitachi LG Data Storage as its CD drive. Thus, the CD drive does not support DVDs, so installation will have to be via a live CD. However, I'm trying to put Amahi Home Server (http://www.amahi.org/) on it, and Live CD/USB stick installs are not recommended unless one is an expert Linux user. Any suggestions as to how to get around this? PROBLEM SOLVED! THANKS for all the help!
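
    One route that fits a CD-only drive (a sketch, not something confirmed in the question): Fedora 12 also shipped a small network-install image that fits on a CD, so Fedora can be installed over the network first and Amahi layered on top afterwards. The image filename and burner device below are assumptions; burn from any machine that has a writer:

      # burn the small Fedora 12 network-install image (filename assumed) to a blank CD
      wodim -v dev=/dev/sr0 Fedora-12-x86_64-netinst.iso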

    Read the article

  • Advice needed: warm backup solution for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a warm backup server for a SQL Server Express instance running a single database? Sitting beside my production SQL Server 2008 Express box I have a second physical box currently doing nothing. I want to use this second box as a warm backup server by somehow replicating my production database in near real time (a little bit of data loss is acceptable). The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping natively, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs, copy them across the network and apply them to the second server at 5-minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think it is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.
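
    A rough sketch of the batch-file approach described above, assuming a database named MyDb, a LogShip share on the standby box, and that the standby copy was first seeded from a full backup restored WITH STANDBY; each half would be scheduled every 5 minutes with Task Scheduler, since Express has no SQL Agent:

      rem -- on the production box: back up the log and push it to the standby
      sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP LOG [MyDb] TO DISK = N'C:\LogShip\MyDb_log.trn' WITH INIT"
      robocopy C:\LogShip \\standby\LogShip MyDb_log.trn

      rem -- on the standby box: apply the copied log, leaving the database in standby (read-only) mode
      sqlcmd -S .\SQLEXPRESS -E -Q "RESTORE LOG [MyDb] FROM DISK = N'C:\LogShip\MyDb_log.trn' WITH STANDBY = N'C:\LogShip\undo.dat'"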

    Read the article

  • Http-Only cookies in WebLogic: what versions support them/how and why are they supported?

    - by John
    We want to make all cookies set by our webapp http-only. I only have a basic understanding of the benefits of doing this, but I'm told by security people that it's a Good Thing (tm). Our app is running under JDK 1.6.05 and WebLogic 10.3.0. After way too much digging around Oracle's website for documentation, I've found good evidence that the first version of WebLogic to support http-only cookies is 10.3.1. By "support," I mean the cookie-http-only deployment-descriptor element. Before we go about upgrading, it'd be nice to have these questions answered: 1a) Is it accurate that WL 10.3.1 is the first version to support http-only cookies and that we're out of luck with 10.3.0? 1b) If we do indeed need to upgrade, is there an easy way to do so under Windows? I've heard people mention an "upgrade jar" that you just stick in the classpath, but I can't find any mention of this by Oracle. Does an easy way exist, or do we need to do a full install of the new version? 2) What does the cookie-http-only deployment-descriptor element do when enabled? Will it ensure all cookies set by the application have an http-only=true attribute? Will it do more or less? Is there anything I'll have to do programmatically? 3) Is there anything in general I should know about http-only cookies, getting my web app to take advantage of them, or other security concerns?
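
    For reference, a sketch of what the descriptor looks like in WEB-INF/weblogic.xml once on 10.3.1 or later (the element name is taken from the question; the namespace shown is the usual Oracle one and should be checked against your exact version):

      <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
          <session-descriptor>
              <!-- the cookie-http-only element discussed above -->
              <cookie-http-only>true</cookie-http-only>
          </session-descriptor>
      </weblogic-web-app>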

    Read the article

  • Is this memory compatible with this motherboard?

    - by ClarkeyBoy
    Hi, I have a Foxconn P35AP-S as seen here. I need to get some more RAM since I only have one 2GB stick. The current one is 1066MHz. I would like to get the memory situated here: www.scan.co.uk/Products/6GB-(3x2GB)-Corsair-XMS3-Classic-DDR3-PC3-10666-(1333)-Non-ECC-Unbuffered-CAS-7-7-7-20-165V memory. It is 6GB of Corsair 1333MHz memory. According to the motherboard website it is able to take 1333MHz, but it says oc** next to it (which means achieved when overclocked). So my question is: are they still compatible without overclocking, or does the motherboard require overclocking to be compatible? If it requires overclocking (which I have no idea how to do) can anyone recommend any other memory (in the region of 6GB) which the motherboard is compatible with? I'd rather it were from Scan, but to be honest it doesn't need to be. Many thanks in advance. Regards, Richard Edit: I just realised that the motherboard has a maximum capacity of 4GB of RAM. Scrap the RAM given above, I'd like to go for something like that but only 4GB. Edit: Scrap that last edit - it's only if I go for DDR3 that I need to take this into account. DDR2 is a maximum of 8GB.

    Read the article

  • How to configure VirtualBox server for performance at home

    - by BluJai
    I currently have two physical Ubuntu Server 10.10 servers at home: one serves as our firewall/router/DHCP/VPN server and the other performs double duty as a file server and a VirtualBox host for an Ubuntu Desktop 10.10 machine which I use from remote connections (via NoMachine) for many thin-client purposes which are irrelevant to my question. What I'd like to accomplish is to consolidate the two physical machines into one dedicated VirtualBox host (most likely running Ubuntu Server 10.10). Note that I'd like to stick with VirtualBox (if possible) because I'm most comfortable with it and use it on a daily basis at both home and work. Specifically, I plan to have one VM set up as the file server, another as the firewall/router/DHCP/VPN (or possibly split those a bit) and a third, which is the only current VM (already VirtualBox), which is the thin-client host. My question comes down to performance and/or recommendations about the file server VM. The file server hosts about 6 terabytes of data across 4 drives. What I'd like to do is use raw disk access from the VM directly to the existing disks. However, I'm curious what performance advantage/disadvantage that would have as compared to using shared folders from the VM host and basically just having the whole drive served as a shared folder to the VM, which would then serve it to the other machines on the network. I don't know if virtual disks would even work in this scenario, and I certainly wouldn't want a drive to be filled with just a single 1.5 TB file (disk image). To add some context, but not to solicit additional advice: I want to virtualize these machines because I intend to regularly use the snapshot capabilities of VirtualBox for the system disks (which will be virtual drives) of the VMs, and I have some physical space/power needs to address (as I mentioned, this is at home).
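
    For the raw-disk option, a minimal sketch of how VirtualBox exposes a physical disk to a VM via a raw VMDK wrapper (disk, path and VM/controller names here are illustrative; the user running the VM needs read/write access to the device node):

      # create a small .vmdk descriptor that points at the whole physical disk
      VBoxManage internalcommands createrawvmdk \
          -filename /vmstorage/datadisk1.vmdk -rawdisk /dev/sdb
      # attach the wrapper to the file-server VM like any other disk
      VBoxManage storageattach "fileserver" --storagectl "SATA" \
          --port 1 --device 0 --type hdd --medium /vmstorage/datadisk1.vmdk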

    Read the article

  • Two hosted servers, one public - VPN?

    - by Aquitaine
    Hello there, Web developer here who has to occasionally wear a system & network admin hat (small company). We currently have a single hosted server running Windows Server 2003 that runs both our web server (IIS/ColdFusion) and our database server (SQL Server 2008). We lock down the SQL server by allowing only specific IPs to connect to it. Not ideal, but it's worked thus far. We're moving up to two distinct servers and I want to take the opportunity to 'get things right' and make only the web server face the public. What I need to be able to do is to allow only a handful of people to connect to the database server. Rather than using an IP allow list, I'd prefer to use a VPN to let people through so that access is based on the user and not simply the user's location. I'm leaning toward something like OpenVPN, just so I can stick with Server 2008 Web edition. Do I: Use the web server as a VPN server and set up the database server to only accept connections from the web server? Is there an extra step required to make connections to, say, db.mycompany.com route through the VPN rather than through a different connection? I'm ignorant of this part of network infrastructure stuff. Or, set up a VPN server on the database server as the only public-facing server connection so that there aren't any routing issues to deal with? I know this is Network 101 stuff but I thought I'd ask before just blundering through it, since it could affect the company a bit. Thanks very much!

    Read the article

  • Creating basic, redundant gigE or IB storage network for Xen?

    - by StaringSkyward
    With only a modest budget, I want to move my 4 Xen servers over to network storage - either NFS or iSCSI, which will be determined based on how well it performs when we test it (we need good throughput and it must continue to work through link and switch failure tests). We may add another couple of Xen servers at some point when this is done. I don't know much about the design and operation of storage networks, so would really appreciate some hints from those with experience. The budget is around $3,800 excluding the storage appliance. I am currently thinking these are my options to remain on budget: 1) Go for used InfiniBand hardware and aim for 10Gb performance. 2) Stick with gigabit Ethernet and buy some new switches (Cisco or ProCurve) to create a storage-only Ethernet LAN. Upgrade to 10GigE later, but try to use hardware capable of it where possible to reduce upgrade costs. I have seen used, warrantied InfiniBand switches at reasonable prices (presumably because big companies are converging on 10Gbit Ethernet?) and the promise of cheap 10Gb is attractive. I know nothing about IB, so here come the questions: Can I buy 2 x switches and have multiple HBAs in my Xen and storage nodes to get redundancy and increased performance without complexity or expensive management software costs? If so, can you point me to some examples? Do NFS and iSCSI work just the same regardless? Is IB a sensible choice, or could/should I use Ethernet or FC on the same budget - I'm keen not to get boxed into a corner for future upgrades, however. For the storage I am likely to build a storage server using NexentaStor, with the intention that I can later add more disks and SSDs, and add another server to provide a failover option at the storage level. An HP LeftHand starter SAN is under consideration, too. Thanks in advance.

    Read the article

  • PC won't boot after hanging during Windows 8 automatic repair [closed]

    - by Mun
    I've got a custom-built PC using an ASUS P5E motherboard and Intel Q6600 CPU. I plugged my mp3 player into the USB port yesterday, and when I came back to the machine after about an hour or so, the Windows 8 automatic repair message was on the screen. It seemed to stick there for an hour, after which I decided to just hit reset and try and figure out what was going on. However, the machine rebooted to a black screen before even getting to the BIOS, with the monitor lights just blinking, indicating there was no signal. I tried powering down completely, waiting a few minutes and then powering back up again, with no difference: black screen with monitor lights blinking. I tried leaving it on for a while and then pinging it from another machine or accessing it via something like LogMeIn, but everything showed the machine as being offline. There were also no error beeps or anything like that. I also tried unplugging all of the memory and rebooting, and that also caused no error beeps. I removed one of the display cards and left the other one in there, and still only a black screen. I'm inclined to think that the motherboard or CPU is fried, but there is no indication of damage on any components, and the CPU fan seems to be working fine as it always has, so overheating seems unlikely. It's also plugged into a surge protector. The motherboard also has a green light which still lights up. As everything was still working fine before hitting the reset button during the Windows 8 automatic repair screen, at which point everything stopped working, it seems unlikely that this problem is down to component failure. Has anyone else experienced anything like this, or have any ideas on what could be causing this behavior?

    Read the article

  • Value of Itanium over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently running on SUSE (potentially migrating to RedHat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value from each license by choosing the most appropriate CPU to run the software on. The questions are: Are there substantial benefits to looking at Itanium hardware, and are there any drawbacks? Is there a point where Itanium starts to scale out better? What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long term to stick with x86? On average, what would be the performance benefit of implementing an Oracle database on Itanium over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first? If anyone can point me towards some solid documentation on comparisons between the two platforms that provides good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Read the article

  • Where to get grub files without using grub-install

    - by Jacky
    I am in a particular situation. I have a MacBook Pro with no internal CD drive, and both Mac OS X (minimal setup) and Linux (my main system) are installed. During a cross-upgrade to Ubuntu 12.04 I messed up grub, so that my /boot/grub directory is basically empty. This means I can't boot Linux on the laptop anymore, but only get into grub rescue. Normally this is no issue, as you'd just boot from a rescue CD or USB stick, but unfortunately with a MacBook Pro this is not possible (I have rEFIt installed and it attempts to boot, but it fails, and the manual says that Apple's EFI firmware is not able to handle this situation). From Mac OS X, however, I still have write access to the Linux partition. I've now been trying to figure out how to populate the /boot/grub folder with the necessary files, to no avail so far. The ISO image of Ubuntu 12.04 contains an EFI folder, which is not what I am looking for; instead I need the normal.mod files for the grub version of Ubuntu 12.04. I do not have any other machine to set up a virtual machine of Ubuntu 12.04 to extract this from after a grub-install, so I am asking for ideas here on how to solve this mess. P.S.: I installed the Linux previously when I still had a working internal CD drive. This is gone now.
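
    One possible way to repopulate /boot/grub without running grub-install (a sketch, assuming access to some Linux system or live environment that can loop-mount the ISO, and that the grub packages sit in the usual pool path on the 12.04 image): the *.mod files ship inside the grub-pc-bin package rather than as loose files on the CD, so they can be pulled out of the .deb and copied onto the Linux partition:

      mkdir -p /mnt/iso /tmp/grubpkg
      mount -o loop ubuntu-12.04-desktop-amd64.iso /mnt/iso
      # package path and exact .deb name are assumptions - check pool/main/g/grub2/ on the image
      dpkg-deb -x /mnt/iso/pool/main/g/grub2/grub-pc-bin_*_amd64.deb /tmp/grubpkg
      # copy the i386-pc modules (normal.mod and friends) onto the Linux partition's /boot/grub
      cp /tmp/grubpkg/usr/lib/grub/i386-pc/*.mod /path/to/linux-root/boot/grub/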

    Read the article

  • Volume licensed copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full Professional edition downloaded from the MS Volume Licensing site, and installed one of our volume licence keys, yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried: deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register; and uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD, and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff). Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations, and so the sales person is rightly concerned about the image this portrays. I presume there may be something floating around the registry or in a file somewhere, but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case. Any thoughts? Thanks.

    Read the article

  • Netgear GS724Tv3 and link aggregation Mac OS X Server 10.6.8

    - by Manca Weeks
    I need to link-aggregate 2 sets of ports on the Netgear GS724T with my Apple server tower (latest generation). I have 2 built-in ports and 2 ports on a PCIe Ethernet card. It is not obvious to me how to properly configure the Netgear end. I have access to the Netgear box through its web interface; I just don't know how to properly set the settings. I tried going to Netgear for help, but they said my software support has expired. I bought this unit on their recommendation - they say it is compatible with the 802.3ad protocol. I cannot locate any references to this protocol in the manual, and I noticed some people in forums say that this device is actually not compatible with 802.3ad and that Netgear is misleading potential customers by saying it is. Any help will be appreciated. Thanks, M My own answer - posted as an edit because of restrictions on my user: OK folks, turns out one must use a Windows machine on this one or nothing makes sense. I was unable to get much farther than viewing the default inactive LAGs, because in Firefox and Safari on Mac things don't make much sense - i.e. the Apply buttons (supposedly JavaScript) don't work. You can view the configurations, but none of the modifications you make stick. Then, in Switching - LAGs, choose the ports to include and make sure you switch the LAG type from Static to LACP, and all is well. Haven't tested the performance of the config yet, but both sides appear to be happy with the configuration. The Apple server says the link is active and so does the Netgear. Will report any other discoveries. Thanks to all who read and to user84104 for responding. M

    Read the article

  • Small Business HP Virtualisation and iSCSI SAN Options

    - by Robin Day
    We are a small business that hosts our core product on a number of HP servers. Our core production setup is:
    - 1x HP DL380, high powered, for a SQL Server database
    - 1x HP DL360, mid powered, for our core application server
    - 6x HP DL320, low powered, for our front ends
    We run our training / testing / support systems on a similar setup; the servers are just older and less powerful. Unfortunately this is now causing us issues, as the system has grown beyond the capabilities of these older servers. Upgrading these servers would be expensive and we believe that virtualisation is probably the way to go for the future. Locally we run a number of test / dev environments on ESXi using direct storage on a couple of high powered DL360s, and these are performing fairly well. We're thinking that instead of replacing all of our test servers we can implement an iSCSI SAN and one or two high powered hosts. Hopefully, when it comes to replacing our live servers as well, we can just expand the virtual environment to cope. So my question is... Can anyone offer any advice on some suitable options? We have generally always been extremely happy with HP servers, all of our kit is currently HP, therefore our preference would be to stick with HP; however, I'm always happy to hear about other options. I'm hoping that initially a budget of around 15-25k (GBP) would be suitable; this could potentially be increased if I had confidence that the system would pave the way for a cost-effective upgrade of our live systems in the future as well. I am new to SANs and my only real experience is playing with OpenFiler on some old desktops. I think iSCSI should be suitable, but I've not done any research into how SQL Server may perform. I've had a browse through HP's sites and see plenty of information about EVA, MSA, LeftHand, etc. However, from looking at all that, I don't see which options would be best, and more importantly I don't know exactly what I would need to buy. Any help, links, opinions would be much appreciated. Thanks

    Read the article

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load-balanced environment with Windows Server 2008. What are some best practices for setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never, ever set up replication, but I have worked with a web farm that used it before. Basically, I only know the basics about how it works, and am looking for any advice, guides, warnings, etc. on setting this up. If you'd like to offer any advice, I'll let you know how our environment is for now. We have 1 prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VMs. I am in the process of setting up the domain controller now (as I need to have one for DFS). Any ideas on the best way to go about setting up replication? Should we just stick the prod server in from the start, or set up using a test VM and our second server and then switch it up later? I do not want to risk overwriting our prod server. Thanks!

    Read the article

  • Ubuntu 9.10 installer doesn't recognize the hard drive

    - by dan
    I downloaded Ubuntu 9.10 x86_64 and am trying to install it on a fairly modern system with a Gigabyte GA-MA770-UD3 motherboard. Ubuntu 9.04 installed fine and still will when I stick that disc in, but 9.10 doesn't see my hard drive (Western Digital 250GB). If I boot from the disc, I can install gparted and it does recognize the drive, but when I try to start the install process from the live disc, Ubuntu again doesn't recognize the hard drive. I checked /var/log/messages and see this:

      Nov 12 17:28:08 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
      Nov 12 17:28:08 ubuntu activate-dmraid: Enabling dmraid support
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
      Nov 12 17:28:08 ubuntu activate-dmraid: no raid sets and with names: "nvidia_ciiajheb-0"
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.

    I checked my BIOS: SATA is enabled and is set to IDE mode, so there shouldn't be software RAID, but nonetheless I added nodmraid to the boot line and tried again. It still doesn't recognize the drive. I checked /var/log/messages again and now see this:

      Nov 12 17:49:38 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was boad, boot with 'nodmraid'.
      Nov 12 17:49:38 ubuntu activate-dmraid: Enabling dmraid support
      Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
      Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option

    Any ideas on things to try? I've tried all of the various BIOS settings for SATA: IDE, RAID, etc. Nothing seems to work.
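
    One angle worth checking, purely as an assumption based on the "nvidia_ciiajheb-0" line: the drive may be carrying stale NVIDIA fakeRAID metadata from a previous setup, which the 9.10 installer's dmraid picks up even with SATA in IDE mode. From the live session the metadata can be listed and, if nothing on the disk genuinely belongs to a RAID set, erased:

      # list any RAID metadata dmraid can see on the attached disks
      sudo dmraid -r
      # erase the stale metadata from the drive (irreversible; device name assumed)
      sudo dmraid -r -E /dev/sda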

    Read the article

  • XP SP2 Event log not logging events

    - by Weedfreer
    I have a problem whereby a terminal appears not to be logging events correctly and occasionally appears to have problems communicating across the network. The terminal has previously been infected with a virus which appears to have 'played' with the default group policy in the standard user profile. Although, outwardly, the terminal appears to be working normally, I still have a nagging feeling that it isn't quite back to the way it was. It was infected by a user plugging in a USB stick while the company was using the older version of the AV software - typically a week or so before it was updated. I have configured the Event logs to Overwrite as required and to be 5056 KB in maximum size. I have also attempted:
    - Disabling the Event Log service & restarting
    - Renewing the EVT files in the Windows\system32\config directory
    - Restarting the Event Log service and restarting
    - Clearing the event log in the Services MMC
    - Resetting the Filters to Default in the Services MMC
    - Using the EVENTCREATE command remotely from a CMD window on the server to force an event creation event
    So far the only operation to have any sort of success is the remote EVENTCREATE command from a CMD window on the server. As it stands, the only other time that the computer has managed to create events is while it is being restarted. Has anyone got any ideas on how to proceed? I'm thinking possibly a refresh of the 'Windows\system32\config\SystemProfile' folder. I'm also thinking about running a tool such as Malwarebytes, but this could be slightly controversial as the system needs to be running on 'up-time' for as long as possible. I'm also wondering whether anyone knows of any Windows admin tools that allow me to control the event logging options or default security options, so that I could get it back to some sort of standard. What I'm trying to avoid is a complete re-imaging of the terminal. Although this is an option, I don't really want to have to take it if I don't need to. Many thanks in advance for any suggestions anyone may be able to provide.

    Read the article

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

      Warning: Could not boot.
      Warning: /dev/root does not exist

    To try and determine which part of the process is failing, I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pendrive and see if that will install.

      dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful and it installed without issue.

    Step 2: Extract the ISO, repackage it and burn it to a pendrive and see if that will install. PLEASE NOTE: the final command in this section is shown across multiple lines for ease of reading; in fact it was run as a single command on one line.

      mkdir -p /mnt/linux
      mount -o loop /tmp/linux-install.iso /mnt/linux
      cd /mnt/
      tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
      cd /var/tmp/linux
      xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
          -boot-load-size 4 -boot-info-table \
          -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to the pendrive as before:

      dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pendrive with no problem and will boot. I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19" I then receive the errors highlighted above. This means the kickstart file is not to blame, but the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: If it is of any help, I attempted Step 2 with an Ubuntu Server ISO and the process was successful.
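
    One detail worth checking (an assumption, not verified against this exact image): Fedora's isolinux.cfg locates the installer's stage-2 image by volume label (inst.stage2=hd:LABEL=...), so rebuilding with -V "NewFedoraImage" can leave the kernel unable to find its root filesystem, which would match the /dev/root error. A sketch of checking and matching the label before the xorriso step:

      # see which label the boot entries expect (spaces show up escaped as \x20)
      grep -o 'inst.stage2=[^ ]*' isolinux/isolinux.cfg
      # then pass the same label to xorriso, e.g. if the original DVD label was "Fedora 19 x86_64"
      xorriso -as mkisofs -R -J -V "Fedora 19 x86_64" -o ouput/file.iso \
          -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
          -boot-load-size 4 -boot-info-table \
          -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .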

    Read the article

  • PC can't detect second RAM installed

    - by kulwinder
    I have a PC with 512 MB RAM installed (motherboard manufacturer MICRO STAR, chipset P4M800). The PC was running very slow, so I decided to upgrade the RAM. I installed CPU-Z and checked the RAM installed on the machine, and also had a look at the stick itself: 512 MB PC3200 400 MHz DDR, but my motherboard supports 200 MHz and it was working OK. So I bought 2 GB, having checked in the manual that it supports up to 2 GB of RAM. I installed a 2 GB PC3200 400 MHz stick, same as the old one. I plugged in both, even though the motherboard only supports up to 2 GB, but the system spec only shows 512 (it deducts 64 MB of shared VGA memory). I checked in CPU-Z: it detects both, slot 1 512 MB, slot 2 2048 MB. Comparing the screens for both slots, they are both the same: volt 2.5, frequency 166 MHz and 200 MHz. The only difference is that the 2 GB stick shows 133 MHz, 166 MHz and 200 MHz under the timings table, but the 512 MB stick shows only 166 MHz and 200 MHz. I checked on Google and can't seem to figure out what's wrong with it. If I plug in only the 2 GB stick, the PC doesn't boot up, as if the RAM isn't working. With only the 512 MB stick plugged in it seems OK. Please help.

    Read the article

  • I am looking for a tool to measure or detect "unresponsiveness" of a desktop PC

    - by Tom H
    I have a client that provides some server systems to a hospital, and a support ticket was raised that the desktop application was hanging waiting for the server. We did some extensive testing and it's pretty clear that the server is responsive, the network is fine, and the problem is on the client end (no requests are received during the hang, etc.). We took a look at the desktop machines and they should be fine, so we raised tickets with the software vendor, who says that it must be the hardware; the hardware company says that it is the software, etc. etc. Anyway, talking to the nurses, they say that these machines often "hang" for 30 seconds at a time, and sometimes during important moments where they need to get data for a patient who is unwell, such as charts and status. So I want to stick a client on these machines that would be able to detect arbitrary "unresponsiveness" of the keyboard/mouse and log that for analysis later. Obviously I am wary to suggest some application that takes resources and makes the problem even worse, so I would be interested to see any tools that would detect these scenarios (is it correct to say that the keyboard interrupts are being discarded?) by looking for the OS discarding the interrupts, or whatever is appropriate here. So go on then, Server Fault, here is your chance to save a life.... ;-) Edit: I am starting to think that some of the tools associated with real-time systems might be appropriate, at least as a diagnostic.

    Read the article

  • What is the replacement for the floppy

    - by alexanderpas
    While CD (and to a lesser extent DVD) discs have reached the price point of the floppy, they have one significant downside: they are WORM (Write-Once Read-Many) media, allowing them to be used only a single time, and you need to be explicit in writing the data to the actual media (you need to burn it). While CD-RW solves the "use only once" problem, it is still EWORM (Erasable Write-Once Read-Many) media, which still means you need to be explicit in writing the data to the actual media (you still need to burn it), and also you need to be very explicit in erasing it (a simple delete is not possible). Okay, we can use a CD-RW in packet-writing mode; however, the downside to that is that this mode is not very universal, and also not the native mode of the media. Now, while USB sticks and SD cards may not have the problems of the CD, they have a whole other kind of problem: their PRICE! USB sticks and SD cards are generally 10 to 100 times as expensive as diskettes per piece. SD cards, in addition, have an added problem: they need a reader to operate. While it is a very standard thing, it is not default equipment on the computer like the CD drive or USB port (or, historically, the diskette drive). You wouldn't give out a USB stick or SD card with a 100 kB text file, not caring whether you would get it back or not. So, to recap: CDs and DVDs are basically WORM media. SD cards and USB sticks are relatively expensive. SD cards also need special readers. Diskettes have a very low data rate. Diskettes have a very low storage capacity. Now, is there a medium out there that solves all these problems, or is there a way to get (very) small USB sticks or SD cards for a very low price (as they're the closest thing to the diskette)?

    Read the article

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine, and Apache somehow gets started on Windows startup for some reason I don't recall now, and it can't be changed. I want to prepare my portable server for situations like this, so closing httpd.exe from the process list and starting my portable server is not an option. Anyway, because of the already active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81 - this is a problem, as the WP site is very dependent on the URL and I don't want to include the URL with the port in the WP database. Here is what I want to do through .htaccess: on any path except for the error.php file, check whether the port is not 80; if it is not port 80, redirect to /error.php?code=port. Is it possible for this to have priority over WP redirection or URL handling? In error.php I provide info on how to manually close httpd.exe and such, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help? I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but the following doesn't work:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      <If "%{SERVER_PORT} = 80">
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </If>
      <Else>
      RewriteEngine On
      RewriteRule ^(error.php)($|/) - [L]
      RewriteRule ^(.*)$ /error.php?code=port [L]
      </Else>
      </IfModule>
      # END WordPress

    By the way, the portable server Server2Go automatically generates vhosts based on the hostname set in its config file and changes ports if the port (e.g. 80) is already open.
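
    A hedged sketch of an alternative (untested against this setup): <If>/<Else> blocks only exist in Apache 2.4, and portable Apache bundles are often 2.2, which by itself would make the block above fail. A port check built from RewriteCond works on both and can simply sit above the standard WordPress rules in the same .htaccess:

      RewriteEngine On
      # anything not arriving on port 80 goes to the error page,
      # except requests for error.php itself (avoids a redirect loop)
      RewriteCond %{SERVER_PORT} !^80$
      RewriteCond %{REQUEST_URI} !^/error\.php
      RewriteRule ^ /error.php?code=port [L]

      # ... the usual # BEGIN WordPress ... # END WordPress block follows here ...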

    Read the article

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact. The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user. In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow, but being new to Nginx, I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is, /foo/bar should look for /foo/bar.php or /foo/bar/index.php, and /foo/ should look for /foo/index.php. Is there a simple way to achieve this with Nginx, or should I stick with proxying to Apache?
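
    A minimal sketch of the pattern commonly used for this in Nginx: try_files serves the file or directory if it exists, and a named-location fallback re-adds the .php extension. The index list and FastCGI socket are placeholders, not a drop-in config:

      index index.php index.html;

      location / {
          # serve the file or a directory index if present, otherwise try adding .php
          try_files $uri $uri/ @extensionless;
      }

      location @extensionless {
          rewrite ^(.*)$ $1.php last;
      }

      location ~ \.php$ {
          try_files $uri =404;    # don't hand nonexistent scripts to FastCGI
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass unix:/var/run/php-fpm.sock;    # placeholder socket
      }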

    Read the article

  • Only half of RAM is recognized by BIOS

    - by Rick Crawford
    I have a Gigabyte GA-P35-DS3 mainboard. Some time ago I noticed that Windows only showed 2GB instead of 4GB. I don't know exactly what caused it. I tried putting in each of the 4 x 1GB RAM modules one by one, and tried every slot one by one, until every stick and slot worked. However, when I then tried adding one more at a time, it kept showing 1GB, until I put in all 4, where it only showed 2GB instead of 4 (in the BIOS and Windows 7 64-bit). I tried replacing the BIOS battery, since I've read that a low battery could cause it. It didn't help though. I also bought 4GB of new RAM (yes, it's supported, I checked), and it's still the same: it only shows 2GB (or 3GB, when I put in 4 of the new and 2 of the old). I also did the latest BIOS update and used default BIOS settings, but none of that helped. When my PC boots it shows "RAM modules used 2 and 3" when 4 sticks are in, or "0 and 1" when only 2 are in.

    Read the article
