Search Results

Search found 7533 results on 302 pages for 'knowledge transfer'.

Page 36/302 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Slow local file transfer (copy) on ESX VMware server?

    - by Sorin Sbarnea
    I have an 8-CPU VMware ESX server (3.5) with 4 HDD drives in RAID that is not loaded at all. I enabled SSH and installed mc (Midnight Commander) in order to copy (clone) virtual machines, but I observed that it copies files very slowly - around 3.5 MB/s on the local drive. Why is this happening, and how should I solve the issue?
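    A hedged sketch of one common workaround (datastore and VM names below are hypothetical): file copies run from the ESX 3.x service console are known to be slow, so cloning the disk with vmkfstools, which goes through the VMkernel storage stack instead of the console, is usually much faster than cp or mc:

        # Clone a virtual disk from the ESX service console instead of copying
        # the flat files by hand; adjust datastore and VM paths to your setup.
        vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
                   /vmfs/volumes/datastore1/vm1-clone/vm1-clone.vmdk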

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that: allows my users to share large files with the general public, securely, with one or more people (notification via email, optionally with a token that gives them x period of time to download); allows anyone in the general public to share files with my users, perhaps by invitation; and is user-friendly enough that my users can use it without having to bug me as the admin. It needs to be a system that we can install on our own server (we don't want shared data sitting on anyone else's server), and a web-based solution. Using some kind of secure comms channel would be good too, e.g. SSH. Files to share could be over 1 GB. I found the question below, but WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen

    Read the article

  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing (sometimes referred to as business system testing). The basic functional testing types referenced by OUM are: Unit Testing, Integration Testing, System Testing, and Systems Integration Testing. See if you can match each of the following definitions with the appropriate type above. A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected. B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed. C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality. D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) function as designed. Check your answers below: Unit Testing - D, Integration Testing - B, System Testing - C, Systems Integration Testing - A. If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.

    Read the article

  • Domain transfer from Yahoo to GoDaddy. Google Apps downtime

    - by Kedar
    I am moving my domain from Yahoo to GoDaddy (because Yahoo charges ridiculously higher amounts than others). My problem is that I use this domain for Google Apps, and one of those apps is my custom email. So here are a few questions I have: 1) GoDaddy told me there is going to be 48 hours of downtime. Is there anything I can do to minimize the downtime? 2) Will I lose all the email that I get during this downtime, or will it be stored in the cloud and delivered in bulk once my domain is up with GoDaddy? If it is lost, is there any workaround to forward it to my Gmail during the downtime (I know it sounds stupid, but I have to ask)? Any help is much appreciated. Thanks in advance.
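    A hedged note and sketch: registrar-transfer downtime is mostly DNS propagation, so keeping the MX records pointing at Google Apps throughout (and lowering their TTL a few days beforehand) minimizes the gap, and sending mail servers normally retry for hours to days, so mail tends to be queued rather than lost. The checks below use dig, with example.com as a placeholder for the real domain:

        # Check the current MX records and their TTLs before and after the move.
        dig MX example.com +noall +answer

        # Confirm which nameservers the world currently sees for the domain.
        dig NS example.com +short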

    Read the article

  • Is it possible to rate-limit an scp/sftp/rsync/etc transfer from the command-line? ie, manual QoS on

    - by warren
    Specifically, I am looking to rate-limit an scp or sftp session (or other arbitrary network call) in the call itself. For example, let's say I want to copy 100MB to one server, and 1GB to another. I'd like to be able to run both of these at the same time, but maintain a QoS for "normal" computer usage - somewhat similar to how you can rate-limit bittorrent. Is there a way to do this without touching the networking hardware? I'm envisioning something akin to: magic-qos-tool 'scp file user@host:/path/to/file' Or.. scp -rate 40kbps file user@host:/path/to/file
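    A minimal sketch of what is available today, assuming standard OpenSSH/rsync and, for the last example, that the trickle userspace shaper is installed (host and paths are placeholders):

        # scp: -l takes a limit in Kbit/s, so ~40 KB/s is roughly 320 Kbit/s.
        scp -l 320 file user@host:/path/to/file

        # rsync: --bwlimit takes KB/s directly.
        rsync --bwlimit=40 file user@host:/path/to/file

        # trickle can wrap other network commands, e.g. an sftp session.
        trickle -u 40 -d 40 sftp user@host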

    Read the article

  • Switching from GoDaddy Hosting to Bluehost Hosting with/without transferring domain names?

    - by leeand00
    I currently have my WordPress blog hosted with GoDaddy. I want to transfer my hosting to another provider, Bluehost. I also have the domain name for that blog registered with GoDaddy. How can I either transfer both the domain name and the hosting to Bluehost, or (so as not to lose that domain name) transfer just the hosting to Bluehost and keep the domain registered with GoDaddy?
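    A hedged sketch: the registration and the hosting are independent, so one common route is to leave the domain registered at GoDaddy and simply point its nameservers (or just the A and MX records) at the new host. A quick way to confirm the change has propagated, with example.com as a placeholder:

        # See which nameservers and web server IP the domain currently resolves to.
        dig NS example.com +short
        dig A www.example.com +short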

    Read the article

  • Organizations & Architecture UNISA Studies – Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of Programmed I/O
    - Describe the principles of Interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution characteristics of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices
    An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into 3 categories: human readable (e.g. video display), machine readable (e.g. magnetic disk), and communications (e.g. Wi-Fi card).

    I/O Modules
    An I/O module has two major functions: interfacing to the processor and memory via the system bus or central switch, and interfacing to one or more peripheral devices by tailored data links.

    Module Functions
    The major functions or requirements for an I/O module fall into the following categories: control and timing, processor communication, device communication, data buffering, and error detection. The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves command decoding, data, status reporting, and address recognition. The I/O module must also be able to perform device communication; this communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. An I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure
    An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations: programmed I/O, interrupt-driven I/O, and direct memory access (DMA).

    Programmed I/O
    When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module performs the requested action and then sets the appropriate bits in the I/O status register. The I/O module takes no further action to alert the processor.

    I/O Commands
    To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command. There are four types of I/O commands that an I/O module may receive when it is addressed by a processor: control (used to activate a peripheral and tell it what to do), test (used to test various status conditions associated with an I/O module and its peripherals), read (causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer), and write (causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral). The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions
    With programmed I/O there is a close correspondence between the I/O-related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system; each device is given a unique identifier or address. When the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory, and I/O share a common bus, two modes of addressing are possible: memory-mapped I/O and isolated I/O (for a detailed explanation read page 245 of the book). The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O
    Interrupt-driven I/O works as follows: the processor issues an I/O command to a module and then goes on to do some other useful work; the I/O module then interrupts the processor to request service when it is ready to exchange data with the processor; the processor then executes the data transfer and resumes its former processing.

    Interrupt Processing
    The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    - The device issues an interrupt signal to the processor
    - The processor finishes execution of the current instruction before responding to the interrupt
    - The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt; the acknowledgement allows the device to remove its interrupt signal
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed
    - The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify these
    - The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused the interrupt
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers
    - Finally, the PSW and program counter values from the stack are restored

    Design Issues
    Two design issues arise in implementing interrupt I/O: because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? And if multiple interrupts have occurred, how does the processor decide which one to process? For device recognition, 4 general categories of techniques are in common use: multiple interrupt lines, software poll, daisy chain, and bus arbitration. For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks: the I/O transfer rate is limited by the speed with which the processor can test and service a device, and the processor is tied up in managing an I/O transfer (a number of instructions must be executed for each I/O transfer).

    Direct Memory Access
    When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function
    DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor. It needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending it the following information: whether a read or write is requested, using the read or write control line between the processor and the DMA module; the address of the I/O device involved, communicated on the data lines; the starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register; and the number of words to be read or written, communicated via the data lines and stored in the data count register. The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor, so the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors
    Characteristics of I/O Channels: as one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channel are common: a selector channel controls multiple high-speed devices, while a multiplexor channel handles I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to multiple devices.

    The External Interface: FireWire and InfiniBand
    Types of Interfaces: one major characteristic of the interface is whether it is serial or parallel. With a parallel interface there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously; with a serial interface there is only one line used to transmit data, and bits must be transmitted one at a time. With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue looks as follows: the I/O module sends a control signal requesting permission to send data; the peripheral acknowledges the request; the I/O module transfers data; the peripheral acknowledges receipt of the data. For a detailed explanation of FireWire and InfiniBand technology read pages 264-270 of the textbook.

    Read the article

  • Can I set up a test server and then transfer everything to a diff. production server?

    - by Justin
    Hello, I am going to be setting up a "real" server, but it's not being shipped for another week. I was planning on setting up most of the server's functionality using an extra workstation I have. I wanted to set up Windows Server 2003 or 2008, IIS, Terminal Services, a firewall, and antivirus on this regular machine. I'd also be installing software like WinZip and VMware that'll be used on the server. I can't ghost the machine, as I've done in the past, because the motherboard/CPU/etc. will all be different. Is there any way to export all of the "server settings" or something like that so I can move everything from test to production? Is there any software out there that does something similar to this? Some things I'm going to have to wait on, such as setting up the file server completely in its RAID configuration, but I'd like to get the simple server stuff and network setup out of the way. Has anyone done this before? Do I need software, open-source or not, to do this? Or maybe there's a way to export all the server settings in some way? Thanks in advance! Justin
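    A hedged sketch, assuming Server 2008: the supported way to move a Windows installation onto different hardware is to generalize it with Sysprep before imaging, and roles such as IIS 7 can also be exported and re-imported on their own:

        REM Generalize the test install so the image can boot on different
        REM hardware (run from an elevated prompt; the machine shuts down after).
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

        REM IIS 7 configuration can be snapshotted and restored with appcmd.
        %windir%\system32\inetsrv\appcmd.exe add backup "pre-migration"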

    Read the article

  • Can a file change size when the transfer protocol changes?

    - by djechelon
    I am very curious about what I have just found happening on my computers. I set up SyncBackPro to synchronize a music folder from my home desktop to my laptop using a Windows network share (SMB). Files get synchronized regularly. Now I tried to switch to FTP and I noticed that NO FILE matches its counterpart, even though they have never been modified (I make sure the read-only flag is set and no application is allowed to re-tag MP3s and whatever...), so SyncBack asks me which side should overwrite the other. The FTP files are a little larger than the local files. I run synchronization from the laptop. How can such a thing happen? The files are the same, so the bytes should be the same... If I run an SMB sync again, it matches all the files again.
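    A hedged sketch of one way to check what is going on (paths are hypothetical): one classic culprit is FTP's ASCII transfer type, where line-ending translation can change byte counts even for binary files, whereas SMB copies bytes verbatim. Hashing the same file on both machines tells you whether the contents really differ or only the size reported over FTP does:

        REM Windows' built-in certutil can hash a file on each PC; if the MD5
        REM values match, the bytes are identical and only the reported size differs.
        certutil -hashfile "C:\Music\Album\track.mp3" MD5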

    Read the article

  • For large files, compress first then transfer, or rsync -z? Which would be fastest?

    I have a ton of relatively small data files, but they take up about 50 GB and I need them transferred to a different machine. I was trying to think of the most efficient way to do this. The options I thought of were to gzip the whole thing, rsync it, and decompress it; to rely on rsync -z for compression; or to gzip first and then also use rsync -z. I am not sure which would be most efficient, since I am not sure exactly how rsync -z is implemented. Any ideas on which option would be the fastest?
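    Two hedged sketches of the usual approaches (host and paths are placeholders): with many small files the per-file overhead often matters more than the compression strategy, so a single compressed stream avoids that overhead entirely, while rsync -z avoids writing a temporary archive and is restartable:

        # Option 1: one compressed tar stream over ssh - no temporary archive
        # and no per-file negotiation, but no resume if it dies part-way.
        tar -czf - /data/files | ssh user@otherhost 'tar -xzf - -C /data'

        # Option 2: rsync with on-the-wire compression - restartable, and later
        # runs only transfer the files that changed.
        rsync -az /data/files/ user@otherhost:/data/files/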

    Read the article

  • How can I save my operating system and transfer it to a new SSD?

    - by Dave Duhrkoop
    I recently purchased a Mushkin SSD to replace the failing hard drive in my HP dv6-12465dx laptop. Physical installation of the SSD should be easy. My existing hard drive is divided into five partitions, one of which contains the Windows 7 operating system. There were no backup disks when I purchased the machine originally. How do I go about saving the operating system and transferring it to the new SSD?

    Read the article

  • Is it possible to rate-limit an scp/sftp/rsync transfer from the command-line? ie, manual QoS on a s

    - by warren
    Specifically, I am looking to rate-limit an scp or sftp session in the call itself. For example, let's say I want to copy 100MB to one server, and 1GB to another. I'd like to be able to run both of these at the same time, but maintain a QoS for "normal" computer usage - somewhat similar to how you can rate-limit bittorrent. Is there a way to do this without touching the networking hardware? I'm envisioning something akin to: magic-qos-tool 'scp file user@host:/path/to/file' Or.. scp -rate 40kbps file user@host:/path/to/file

    Read the article

  • What's the fastest automatic way to transfer 2 GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually pop it into PC #2 to copy the files over. Dropbox is too slow at uploading and then downloading (syncing) 2 GB; it could take hours. Copying 2 GB over the network is also slow because we're dealing with 10,000 little files that total 2 GB, not just one giant 2 GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer. Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: These files change every single night.
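    A hedged sketch (paths and share names are placeholders): the slowdown with many small files is per-file overhead (open/close and SMB round trips), so a tool that copies only what changed and uses multiple threads usually helps. Windows 7's built-in robocopy can do both, and the command can be scheduled nightly with Task Scheduler:

        REM Mirror only changed files, with 16 copy threads (/MT needs Windows 7
        REM or later); put this in a .bat and schedule it in Task Scheduler.
        robocopy "D:\Data" "\\PC2\Data" /MIR /MT:16 /R:1 /W:1 /NP /LOG:C:\sync.log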

    Read the article

  • Mikrotik queues and limiting total upstream bandwidth

    - by g18c
    With a Mikrotik router (a form of embedded Linux) I have created simple queues per machine, matched by source IP address. Each of the 4 machine queues has an unlimited burst of 3Mbps/3Mbps for Tx/Rx. When running speedtest.net on all 4 machines at the same time, each machine shows 3Mbps (and is limited correctly there), however the total bandwidth on the uplink goes to 12Mbps (I need to cap this at 10Mbps for the upstream). I want to restrict the actual traffic passing across the uplink port to 10Mbps regardless of what the other queues are doing (I need this catch-all queue to have the final say on the uplink speed). For example I need: Scenario A: machines A, B and C each transferring @ 3Mbps, machine D @ 0Mbps, uplink speed = 9Mbps. Scenario B: machines A, B, C and D all trying to transfer @ 3Mbps, uplink speed = 10Mbps, actual transfer speed of machines A, B, C and D = 2.5Mbps. This is to allow slight oversubscription of the bandwidth queues, as not all machines will be transmitting at 3Mbps all the time. Is this possible, and if so how would one go about doing it?
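    A hedged sketch of the usual approach: make one parent simple queue capped at 10M for the whole subnet and attach the per-machine queues to it as children, so the parent has the final say on the uplink. Exact parameter names vary between RouterOS versions, and the addresses below are examples:

        # RouterOS sketch - parent caps the total, children keep their 3M limits.
        /queue simple add name=uplink-total target=192.168.1.0/24 max-limit=10M/10M
        /queue simple add name=pcA parent=uplink-total target=192.168.1.11/32 max-limit=3M/3M
        /queue simple add name=pcB parent=uplink-total target=192.168.1.12/32 max-limit=3M/3M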

    Read the article

  • How can I transfer a logged-in user's login data from one server to another?

    - by Martin
    I have one server "A" where users can login. Login is verified by an LDAP server "L". I have a different server "B" were users can log in, too. Login is verified by the same LDAP server as before. Both servers are standard web servers with PHP. My goal is: If a user is logged in to server "A", and if he clicks a link to log in to server "B", the user should automatically be logged in without re-entering username and password. What is a good and secure way to achieve this? I can't submit username and crypted password to server "B". I can't use the PHP session of server "A", because it does not exit on "B". Cookies won't work either. I think that there is a way, but I just can't see it. Any help is very much appreciated.

    Read the article

  • Can an IP address transfer from one person to another after he disconnects from the ISP, or in any other way?

    - by learner
    I have been checking a website that sells a (health-related) product and trying to find out if it is a scam site. The site is something.blogspot.in (and not something.blogspot.com, which happens to be a different site altogether). So is it an Indian site? It has a CBox chat box where the owner communicates with customers (or potential ones) for information. The owner shows that his product has worked for people by providing links to a forum (created by him at network54.com) where people have posted positively. One doesn't have to be registered to post there, but the IP address of the poster gets shown along with the post. According to the owner, the IP address is the basis of authenticity. I found that many people had different IP addresses on their different posts. The owner has declared the nationalities of the people who posted. When I traced their IP addresses with this site, I found that the nationalities provided by the owner were wrong. Is it possible that when a person disconnects from an ISP, another person from another country gets his old IP address?
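    A hedged note and sketch: most consumer connections use dynamic addresses, so the same IP can later be handed to a different subscriber of the same ISP, and GeoIP lookups are often wrong at the city or even country level; the registry data behind an address is easy to check yourself:

        # whois shows which ISP and regional registry an address belongs to
        # (203.0.113.45 is a documentation address used here as a placeholder).
        whois 203.0.113.45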

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task and how that task works.

    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car, take them to the local home store and instruct them to buy a replacement. Confronted with a 40-foot display of light bulbs, how will they decide which of the hundreds of bulbs - of different types, fittings, shapes, colors, power and efficiency ratings - is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight, so they don't have to ask for help the next time.

    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40-foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Without sufficient depth, we end up resorting to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.

    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but it is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.

    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint to make sure the values stay within the desired range, and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • Do I need to transfer Server license CALs to a new Domain Controller during AD transition?

    - by drpcken
    I have an old Server 2003 domain controller I'm ready to decommission. I notice in Server 2003 there is a Licensing module under Administrative Tools that seems to manage and track user CALs for the domain controller. I don't see this on my newly promoted Server 2008 domain controller, nor do I see any role to add it. Does this need to be transferred to my new Server 2008 domain controller, or will it all happen when the old server is decommissioned? I've already transferred all my Terminal Server licenses to the new server. Thank you!

    Read the article

  • How do I transfer videos from a DV camera to DivX?

    - by Ward
    I have a video camera that uses mini-DV tapes. In the past, I've transferred the files and made DVDs, but that was time- and disk-space-consuming. I wanted to find new tools and figure out how to convert the videos to something smaller like DivX, but I didn't know enough about all the different formats to answer a previous question. Well, now I've done a bunch of research and I understand some of the details of video encoding, and in the process I wrote up some notes on the different formats involved in going from a DV video camera to DivX or H.264. They're a bit rambling, but in case they're of any use, I'm going to post them as an answer. I'd be very interested in anyone else's answer as well.
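    A hedged sketch of the conversion step with FFmpeg, assuming the footage has already been captured from the camera into a .dv or DV-AVI file (filenames are placeholders):

        # H.264 in an MP4 container - far smaller than DV at comparable quality;
        # yadif deinterlaces the DV footage, and lower -crf means higher quality.
        ffmpeg -i capture.dv -vf yadif -c:v libx264 -crf 20 -preset slow \
               -c:a aac -b:a 160k out.mp4

        # Classic DivX/Xvid-style MPEG-4 in an AVI, if older players are the target.
        ffmpeg -i capture.dv -c:v mpeg4 -vtag xvid -qscale:v 4 \
               -c:a libmp3lame -q:a 4 out.avi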

    Read the article

  • Transfer hard drive with Windows XP to another computer. On booting, asks to activate XP

    - by Jesse
    I had an old computer sitting around that I have not been able to boot successfully. I removed the hard drive and placed it in my newer computer. If I boot Linux, I can mount the XP hard drive and access the files. If I try to boot from the XP hard drive, it will boot, but it asks me to activate Windows before proceeding. If I continue, I get the "activation window" with two images/icons(?) which fail to load. Nothing else happens. The version of Windows came with the original computer the hard drive came from, so I'm not sure if I'm married to the broken computer (I hope not!). Is there anything I can do in order to boot into XP from the new computer?

    Read the article
