Search Results

Search found 8048 results on 322 pages for 'partial upgrade'.

  • Trying to run a codeigniter app on custom php

    - by hamstar
    I have a CodeIgniter app that I deployed to a server with PHP 5.2, while my dev box has 5.3, and some stuff doesn't work anymore. I didn't want to upgrade the server's PHP and risk the other app on it having issues. Anyway, I compiled a custom PHP and added the following to a single .conf file in /etc/httpd/conf.d/zcid.conf with all the other conf files:

        <VirtualHost *:80>
            DocumentRoot /var/www/cid/app
            ServerName sub.example.co.nz
        </VirtualHost>

        <Directory "/var/www/cid/app">
            AuthType Basic
            AuthName "oh dear how did this get here i am no good with computer"
            AuthUserFile /path/to/auth
            Require valid-user
            RewriteEngine on
            RewriteCond $1 !^(index\.php|robots\.txt|createEvent\.php|/cgi-bin)
            RewriteRule ^(.*)$ /index.php/$1 [L]
            AddHandler custom-php .php
            Action custom-php /cgi-bin/php53.cgi
        </Directory>

    In /var/www/cid/app I have the cgi-bin folder and the php53.cgi that I copied from /usr/local/php53/bin/php-cgi. But now when I navigate to the subdomain it says:

        The requested URL /cgi-bin/php53.cgi/index.php/ was not found on this server.

    And if I try to browse to /cgi-bin it says (as it is supposed to?):

        You don't have permission to access /cgi-bin/ on this server.

    Quite confused now. Anyone know what to do here? Thanks :)
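    The 404 suggests that /cgi-bin/ exists only on disk, not as a URL: Action points the handler at /cgi-bin/php53.cgi, but nothing in the config maps that URL path or allows execution there. A minimal sketch of the missing piece, assuming the binary really is at /var/www/cid/app/cgi-bin/php53.cgi and Apache 2.2-era syntax (verify both against your setup):

        ScriptAlias /cgi-bin/ "/var/www/cid/app/cgi-bin/"

        <Directory "/var/www/cid/app/cgi-bin">
            Options +ExecCGI
            Order allow,deny
            Allow from all
        </Directory>

    With the alias in place, /cgi-bin/php53.cgi resolves as a URL; the 403 on /cgi-bin/ itself is then expected, since listing the directory stays forbidden while the named binary can still execute.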

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    Hello - I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade, hoping it would help with my performance issues. The computers in my house are connected as follows:

    - Comcast Business Class commercial 25mbps/10mbps (verified with SpeakEasy and Speedtest.net)
    - D-Link DGL-4500 Wireless N router
    - Windows 7 x64 - D-Link DWA-552 Wireless-N
    - Windows 7 x64 - D-Link DWA-552 Wireless-N
    - Mac Mini 10.6.2 - AirPort Extreme N
    - Playstation 3, hard wired
    - Xbox 360, hard wired

    Essentially the problem is very specific. Web browsing and uploading/downloading files from the internet is fine - more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser, or try to browse my homegroup, the response time is horrible. Both of my Windows computers show strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far:

    - Enabled single-mode N-only and N/G-only on the router
    - WPA2 with AES encryption
    - Disabled "Remote Differential Compression" in Windows 7
    - Disabled TCP "Auto-Tuning" (see the netsh sketch below)
    - Used other software for file copies, such as "Teracopy"

    I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
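    For reference, a sketch of the commands behind the auto-tuning item and a quick link-rate check, from an elevated prompt (netsh syntax as it exists on Windows 7 - verify on your build):

        rem Show the current TCP global settings, then disable auto-tuning
        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled

        rem Show the actual negotiated wireless rate (often far below 300Mbps)
        netsh wlan show interfaces

    If "netsh wlan show interfaces" reports receive/transmit rates way under 300Mbps, the adapters are falling back to 802.11g-class rates, which points at RF interference (plaster walls, channel overlap) rather than TCP settings.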

  • SQL Server replication and load balance

    - by Ahmed Galal
    I'm running a web service that serves a mobile app, on IIS 8 and SQL Server 2014. The service is under massive load and I'm trying to improve performance; most of the load lands on SQL. I don't think I have an obvious bottleneck - my processor and RAM are up to the max, and I think my code is reasonable; I'm already using memcached and other measures to avoid hitting SQL too much. I know I can always upgrade the server hardware, but I already have a spare server that I would like to use, so I was thinking of splitting the SQL load across the two servers. My idea was to set up replication on the other server and do some load balancing, but I'm not sure how to do the load balancing itself. I know I can adjust my code to hit the other server for some queries, but I was hoping to find a solution that avoids changing my code. So my question is: what are the ways of doing load balancing between two SQL Servers? I would appreciate suggestions, best practices, or some direction. Thanks.
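    One route that avoids touching code for reads - sketched purely as an assumption, since it needs an Availability Group (Enterprise Edition on SQL Server 2014) and uses hypothetical server/AG names - is read-only routing, where the primary forwards read-intent connections to the secondary:

        -- Let the secondary serve read-intent connections and give it a routing URL
        ALTER AVAILABILITY GROUP [AppAG]
        MODIFY REPLICA ON N'SQL2' WITH
        (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                         READ_ONLY_ROUTING_URL = N'TCP://sql2.example.local:1433'));

        -- Tell the primary where to send read-intent sessions
        ALTER AVAILABILITY GROUP [AppAG]
        MODIFY REPLICA ON N'SQL1' WITH
        (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = ('SQL2')));

    Clients then opt in per connection string (adding ApplicationIntent=ReadOnly to the read-heavy components) rather than per query, which is a config change, not a code change. Plain transactional replication plus manual query routing remains the cheaper-license alternative, but that is the path that does require touching code.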

  • Reconnecting OST with Exchange

    - by syrenity
    Hi. I have quite a big problem with a customer's MS Exchange. The server got its disk filled about 2 weeks ago, so it's currently offline. They plan to upgrade it, but are in no hurry, as they use it mainly for OWA and backup - the mail exchange itself is done via SMTP and POP3. While trying to diagnose a problem today, one of the users (following the ISP's instructions) removed the Exchange account from Outlook, which essentially left the OST orphaned. The user naturally hadn't moved the emails or any other data to an Archive/PST beforehand, so these emails live in the OST only. So currently I'm trying to figure out how to restore them. The options I see:

    1) Have the user buy some tool to convert the OST to a PST, and import it as an archive / main Outlook file.
    2) Reconnect Outlook to Exchange (once it's up), let it sync the old server content, then shut down Outlook, replace the new OST with the old one, start Outlook again in offline mode and move the emails to an archive.
    3) Any other method?

    Can someone advise what would be the best approach here? The versions used are Outlook 2007 and Exchange 2003. Thanks!

  • Amazon EC2 instance missing Network Interface

    - by Sergiks
    I am running Linux on a t1.micro instance at Amazon EC2. I noticed brute-force SSH login attempts from a certain IP, and after a little Googling I issued the two following commands (IP changed):

        iptables -A INPUT -s 202.54.20.22 -j DROP
        iptables -A OUTPUT -d 202.54.20.22 -j DROP

    Either this, or maybe some other action like a yum upgrade, caused the following fiasco: after rebooting, the server came up without its network interface! I can only connect to it through the AWS Management Console's Java SSH client - via the local 10.x.x.x address. The console's "Attach Network Interface" and "Detach..." actions are greyed out for this instance, and the Network Interfaces item on the left does not offer any subnets to choose from when creating a new interface. Please advise: how can I recreate a network interface for the instance?

    Upd. The instance is not accessible from outside: it cannot be pinged, SSH'ed, or reached over HTTP on port 80. Here's the ifconfig output:

        eth0      Link encap:Ethernet  HWaddr 12:31:39:0A:5E:06
                  inet addr:10.211.93.240  Bcast:10.211.93.255  Mask:255.255.255.0
                  inet6 addr: fe80::1031:39ff:fe0a:5e06/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:1426 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1371 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:152085 (148.5 KiB)  TX bytes:208852 (203.9 KiB)
                  Interrupt:25

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    What is also unusual: a new micro instance I created from scratch, with no relation to the troubled one, was not pingable either.
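    One note: eth0 holding only a 10.x address is normal on EC2 - the public IP is NATed and the guest never sees it - so the interface itself looks healthy, and suspicion falls back on filtering. A sketch of ruling the firewall out from the working console session (assumes root; the save step is for RH-style distros):

        # List all rules with numbers; look for anything beyond the two DROPs
        iptables -L -n -v --line-numbers

        # Temporarily accept everything and retest from outside
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F

        # If rules are restored at boot, persist the now-empty set
        service iptables save

    If the instance is still unreachable with an empty ruleset - and given that a brand-new micro instance is not pingable either - check the security group instead: a group without ICMP/SSH/HTTP inbound rules produces exactly these symptoms with nothing wrong inside the OS.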

  • FTP Server with advanced features

    - by Nikolas Sakic
    Hi, we supply zone files to our customers. Some zone files are big - about 300MB - and some are quite small, maybe 1MB. We had an issue where someone set up a script to continually download a file. Imagine downloading a 300MB file a few hundred times a day. Since we don't have a packet shaper to throttle the traffic, we need to upgrade the FTP server and use add-on modules to limit the downloads somehow. We currently use ProFTPD. Also note that there are different users for different domains - say, if you want to download the zone file for the .INFO domain, you use a particular user, and that user can't download any other zone's file. This is what we are looking for:

    - A maximum of 400MB downloaded per user per day, or even different download limits for different users per day.
    - One connection per user at any time.
    - A maximum of 5 (non-simultaneous) connections per user per day; anyone trying to exceed that gets banned for 24 hours.

    Has anyone used an FTP server with restrictions like the above? Does anyone have any ideas where I can start? Any help would be appreciated. Thanks. -N
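    A starting point, sketched on the assumption that ProFTPD is built with mod_ban - the per-day byte cap is the one piece ProFTPD does not do natively, since quotas are per-session/cumulative rather than per-day, so plan on a nightly cron job resetting the tally:

        # proftpd.conf fragment - directive names are real, values illustrative
        MaxClientsPerUser 1          # one live session per user
        TransferRate RETR 512        # throttle downloads to ~512 KB/s

        <IfModule mod_ban.c>
          BanEngine on
          BanTable /var/proftpd/ban.tab
          # 5 connections within 24h earns a 24h ban (check the exact
          # time format against your mod_ban version)
          BanOnEvent ClientConnectRate 5/24:00:00 24:00:00
        </IfModule>

    Different limits per user can then hang off <IfUser> sections (mod_ifsession) around the same directives.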

  • MySQL extension of PHP not working

    - by Víctor
    On a Debian server, after installation and removal of SquirrelMail (with some downgrades and upgrades of php5, mysql...), the MySQL extension of PHP has stopped working. I have php5-mysql installed, and when I try to connect to a database through php-cli I connect successfully, but when I try to connect from a page served by Apache I cannot. This script, run by php5-cli:

        phpinfo();
        $link = mysql_connect('localhost', 'user', 'password');
        if (!$link) {
            die('Could not connect: ' . mysql_error());
        }
        echo 'Connected successfully';
        mysql_close($link);

    prints the phpinfo output, which includes "/etc/php5/cli/conf.d/mysql.ini" and a MySQL section with all the configuration (SOCKET, LIBS...), and then prints "Connected successfully". But when run by Apache and accessed in a web browser, it displays the phpinfo output, which includes "/etc/php5/apache2/conf.d/mysql.ini" but is missing the MySQL section, and the script dies with "Fatal error: Call to undefined function mysql_connect()". Note that "/etc/php5/cli/conf.d/mysql.ini" and "/etc/php5/apache2/conf.d/mysql.ini" are in fact the same configuration, because Debian's layout is:

        /etc/php5/apache2
        /etc/php5/cgi
        /etc/php5/cli
        /etc/php5/conf.d

    and both conf.d directories point at the same place:

        /etc/php5/apache2/conf.d -> ../conf.d
        /etc/php5/cli/conf.d -> ../conf.d

    where /etc/php5/conf.d/mysql.ini consists of one line:

        extension=mysql.so

    So my question is: why is the MySQL extension for PHP not working under Apache if the configuration is included just as it is for php-cli, which works? Thanks a lot!
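    A few checks worth running, sketched on the assumption that the downgrade/upgrade churn left Apache's mod_php at a different version than the CLI binary, so Apache looks for mysql.so in an extension_dir where it does not exist:

        # Mismatched libapache2-mod-php5 / php5-mysql versions are a common culprit
        dpkg -l | grep php5

        # What the CLI build loads, and where it looks for extensions
        php5 -m | grep -i mysql
        php5 -i | grep extension_dir

    Compare that extension_dir with the one reported by the browser phpinfo(); if they differ, the fix is usually to realign the packages (apt-get install --reinstall libapache2-mod-php5 php5-mysql) and restart Apache so both SAPIs pick up the same mysql.so.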

  • Mirroring the Global Address List on Blackberries

    - by Wyatt Barnett
    In times immemorial, back in the day when men were men and BlackBerries still took AA batteries, we rolled them out to our users in our 100-person operation. At the time there was no such thing as address list lookup, so we were forced to hack a bit. The ingenious hack we came up with was to mirror the GAL into a public folder and then sync the BlackBerries to that. While there have been a few downsides here and there, they have been mere annoyances, and our users, having grown fat and prosperous in the intervening years, are used to seeing every single employee and department here listed on their handhelds automatically. Alas, it appears that Outlook 2010 breaks this functionality, as BlackBerry Desktop Manager is completely incompatible with it. Moreover, this presents an opportunity to change things for the better, given that public folders are going away the next time we upgrade Exchange. So we are in search of a tool or technique that will allow us to mimic the current functionality - that is, to:

    - Push an essentially arbitrary list of ~100 contacts to BlackBerry address books
    - Have said list be centrally updated
    - Without requiring Desktop Manager or Exchange public folders

    Any suggestions, crowd?

  • Mystery 0xc0000142 error on starting java from a service, as a different user.

    - by cpf
    This is a very convoluted setup, but effectively this is what goes down: a manager service (which I don't have control over), running as admin user X, starts my executable, which then starts Java as user Y using the standard C# StartInfo.Username/Password controls. Now, from a basic command prompt (not elevated or anything, just admin) I can run that executable, and Java pops up and works fine, running under the user it should. When the service runs the same executable, however, Java silently fails. The only hint I see is this series of events in the event viewer:

    1. Service starts.
    2. "Application popup: java.exe - Application Error : The application was unable to start correctly (0xc0000142). Click OK to close the application." (Googling this reveals a lot of scam sites telling me to use their "free antivirus to fix 0xc0000142 errors easy!"... sigh.)
    3. Service stops (the Java shutdown propagated, which is supposed to happen).

    Process Explorer, meanwhile, shows everything as a success. Now, I think this might have something to do with permissions (the user java.exe runs under has traverse permission for the entire drive and full permissions to Directory A, where the .jar is), but I just can't fathom how something that works fine from the command line (and this is an upgrade - the previous system, without the user-switching aspect, works fine from the service) can fail with such a cryptic message and so little in the logs.
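    For what it's worth, 0xc0000142 is STATUS_DLL_INIT_FAILED - some DLL in java.exe's startup chain could not initialize - and when a service launches a process under another account, the classic missing ingredients are the target user's profile and working directory. A hedged sketch of the launch code with those pieces made explicit (the properties are standard System.Diagnostics members; paths and account names are placeholders):

        using System.Diagnostics;
        using System.Security;

        class LaunchAsUser
        {
            static void Main()
            {
                // Build the password as a SecureString (placeholder value)
                var password = new SecureString();
                foreach (var c in "placeholder") password.AppendChar(c);

                var psi = new ProcessStartInfo
                {
                    FileName = @"C:\java\bin\java.exe", // placeholder path
                    Arguments = "-jar app.jar",
                    UserName = "userY",                 // placeholder account
                    Password = password,
                    UseShellExecute = false,            // required when UserName is set
                    LoadUserProfile = true,             // load HKCU for userY; a common 0xc0000142 fix
                    WorkingDirectory = @"C:\DirectoryA" // otherwise inherits the service's cwd
                };
                Process.Start(psi);
            }
        }

    Also documented: CreateProcessWithLogonW, which Process.Start uses when UserName is set, cannot be called from a process running as LocalSystem - so if the manager service actually runs as LocalSystem rather than "admin user X", that alone would reproduce a service-only failure.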

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant and required. I need to be able to perform periodic maintenance without a maintenance window - say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET). I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:

    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
    4. The spare server is now the production server.

    The remaining problem is that the new production server is now out of date by however long the maintenance took. Is there some way the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
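    What steps 1-4 describe is close to what database mirroring already provides on SQL Server 2008: the principal keeps streaming changes to the mirror while you patch the mirror's host, and the role swap is one command instead of a manual restore cycle. A sketch, with hypothetical server/database names:

        -- Run on the principal once the mirror is in the SYNCHRONIZED state;
        -- roles swap and no committed changes are lost
        ALTER DATABASE AppDb SET PARTNER FAILOVER;

    Clients configured with "Data Source=ServerA;Failover Partner=ServerB;Initial Catalog=AppDb;Integrated Security=True" reconnect to whichever side is principal, so step 3 needs no reconfiguration. The caveat: this covers OS patching and hardware swaps, but logical maintenance such as index rebuilds still runs on the principal and flows through the mirror; online rebuilds (an Enterprise Edition feature) are the usual answer there.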

  • ASP.NET Session State SQL Server 2008 R2 Freezes with High CPU Usage

    - by jtseng
    Our ASP.NET website uses SQL Server as the session state provider. We currently host the database on SQL Server 2005, since it does not play well on 2008 R2. We would like to know why, and how to fix it.

    Hardware setup: our current session state server runs SQL Server 2005 with the files hosted on a single local disk. It is one of our oldest servers; it has served us well and we never felt the need to upgrade it. The database is about 2GB, holding 6000 sessions. (The sessions are a little big, but we need it.) We have another server with SQL Server 2008 R2, with a much faster CPU, much more RAM, and a much faster hard disk.

    The situation: one day we had a huge surge in traffic. Transaction log growth on SQL Server froze the server for tens of seconds at a time, letting only a few requests through per minute. So we loaded up the new server with ASPState, with very large data and log files, and pointed all of our applications at it. It chugged along fine for about 5 minutes, then CPU usage jumped to 50% of the 16 cores that Standard Edition can use, and it froze for tens of seconds at a time. The files do not record any autogrowth events. The disk queue is nice and low. RAM usage is low. CPU usage on our old server has never been higher than 5%. What happened on the new server? Alternatively, I would like to hear success stories with ASP.NET session state running on SQL Server 2008 R2 with an average write load of 30MB/sec, with bursts up to 200MB/sec.
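    When a freeze is CPU-bound while disk queues stay low, the first useful capture is what sessions are waiting on. A minimal diagnostic sketch against standard DMVs (present on both 2005 and 2008 R2):

        -- Top waits since the last service restart
        SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;

        -- What is running (and who blocks whom) during a freeze
        SELECT session_id, status, command, wait_type, blocking_session_id, cpu_time
        FROM sys.dm_exec_requests
        WHERE session_id > 50;

    A session-state workload is heavy insert/update/delete churn on a few small tables, so PAGELATCH_* contention showing up at the top would point at schema/concurrency rather than the new hardware, and would at least narrow the 2008 R2 mystery.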

  • Trying to limit IMAP folders/mailboxes my iPhone/iPad sees

    - by QuantumMechanic
    (Note: I am using Dovecot 1.0.10 on Ubuntu 8.04.4 LTS. Yes, I know I need to upgrade before next year.) (Note: the SMTP/IMAP server in question only serves my family, so there are only a very few users. Certainly what I propose below, even if it works, would be a logistical nightmare with any significant number of users.)

    I have noticed (and confirmed via Google) that the iOS Mail app is terrible in its handling of IMAP subscriptions, namespaces, etc. For example, my iPhone and iPad see EVERYTHING (all mailboxes, folders, etc.), whereas clients like Thunderbird, alpine, etc. only see what I tell them to see. This makes it an incredible pain to move mail between mailboxes, because I have to scroll through a gazillion things. The mail_location in dovecot.conf is:

        mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u

    To get around this, I've been considering doing the following for user foo:

    1. Create a Dovecot userdb with a foo-ios virtual user in it, whose UID is identical to that of the real (in /etc/passwd) foo user and with a homedir of /home/foo-ios.
    2. ln -s /var/mail/foo /var/mail/foo-ios
    3. mkdir -p /home/foo-ios/Mail
    4. cd /home/foo-ios/Mail
    5. ln -s /home/foo/Mail/mailbox-i-want-visible mailbox-i-want-visible (and make symlinks for the rest of the limited set of mailboxes/folders I want visible to the iOS Mail app)
    6. chown -R foo:foo /home/foo-ios
    7. Change the iOS Mail app settings to log in as user foo-ios instead of user foo.

    Will this work, or will there be some index/file corruption hell because there will be two sets of indexes (one set living in /home/foo/Mail/.imap and the other set living in /home/foo-ios/Mail/.imap) indexing the same underlying mbox files? And I'd be more than happy to hear of a better way to do this with Dovecot! (Or to hear that Dovecot 2.x works better with iOS devices.)
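    On the index-collision worry specifically: mail_location accepts an INDEX parameter, so each login can be given an explicitly separate index tree instead of the defaults landing next to the mbox files. A sketch (illustrative path; per-user values can also come from userdb extra fields):

        mail_location = mbox:%h/Mail/:INBOX=/var/mail/%u:INDEX=/var/lib/dovecot/index/%u

    Indexes are rebuildable caches and mbox access is lock-mediated, so two index trees over the same spool should be survivable; the bigger risk in the scheme above is both logins writing the same symlinked mailbox at once.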

  • outlook security alert after adding a second wireless access point to the network

    - by Mark
    We just added a Netgear WG103 wireless access point in our conference room, to allow visitors to access the internet through our internal network. When it's switched on, visitors can connect to the internet and everything works fine - except that normal users of the network get a security alert when they try to start Outlook 2007. The alert is the same as the one shown in question 148526, asked by desiny back in June 2010 (http://serverfault.com/questions/148526/outlook-security-alert-following-exchange-2007-upgrade-to-sp2), except that rather than "autodiscover.ad.unc.edu" our security alert references our "Remote.server.org.uk". If I view the certificate, it relates to "Netgear HTTPS:...", but the only Netgear equipment we have is the new access point in the conference room. If the access point is not switched on, we do not get the security alert. At first I thought it was because we had selected "WPA-PSK & WPA2-PSK" network authentication, but it continues to occur even if we opt for "Shared Key" WEP data encryption. I do not understand why adding a Netgear wireless access point would cause Outlook to issue a security alert when users try to read their email. Does anyone know what I have to do to get rid of it? Thanks in advance for reading this and helping me out.

  • Docking Station Sound Doesn't Work on Dell D830 with Windows 7

    - by cisellis
    I have a Dell Latitude D830 laptop running Windows 7 Enterprise x64. During the day I connect to a docking station with multiple monitors, a keyboard and a mouse. Everything runs with no problems, including most of the docking station's ports (USB, monitors, etc.). However, the sound port on the docking station has not worked since the upgrade to Windows 7: even with the laptop docked, sound always comes out of the laptop, not the headphones plugged into the docking station. Here's what I've tried:

    - I've seen other reports of this issue via Google that seem to be mostly unanswered. I found one or two that referenced using the Vista x64 drivers, especially the Nvidia ones; I do not have an Nvidia chipset, but I have reinstalled the sound drivers, and that has not helped.
    - I don't have a support contract, and considering the cost is usually high to call Dell, that's not an option. Dell's forums are pretty much a wasteland, and I've found no help there.
    - Since this is a docking station, I thought I might need to try the SATA or Intel chipset drivers from the Dell site instead; however, I'm not really sure, and I need to work on this laptop in the meantime. I can't really afford the downtime to experiment with random drivers all day in case they turn out to be incompatible (Dell still hasn't added Windows 7 to their support site, as far as I can tell).

    Does anyone have any other ideas? Has anyone had this issue and solved it? If so, how? Thanks in advance for your help.

  • BackupExec 2012 File System Archiving - Access is denied to Remote Agent

    - by AllisZero
    Gentlemen, I've been struggling with a trial version of Symantec Backup Exec 2012 for about a week now. It was installed as an upgrade to our 12.5 license, and the setup completed with no issues. The reason I upgraded is solely the File System Archiving option, as I'm working to reduce the amount of live data on my servers. Backups work A-OK, and I have followed the instructions in the admin manual to make sure I meet all the requirements: the account BE runs under is a member of the local Administrators group, as required, and has been added to the test share I'm using to evaluate the archiving function. Testing the credentials in the job setup window always works fine, and I am able to add both regular and Admin$ shares to my archive selection. However, every time I run the archive job, I get the following message: https://dl.dropbox.com/u/59540229/BEXec.png I've already tried to troubleshoot DNS resolution issues as suggested in the Symantec KB, to no avail. The only thing I can think of at this point is that a trial license doesn't allow me to use the archiving function, although that would seem silly on their part. I'd appreciate any assistance or information. Thanks.

  • Microsoft Access: computer freezes when user tries to update record

    - by CarlF
    A colleague and I have developed an Access 2003 database which is used throughout our department. Currently about four dozen people do data entry using one of two very similar forms. For 47 of us, they work perfectly. If Mr. 48 clicks the "Save" button, Windows XP freezes and a hard reset is needed. The problem has to be on his specific computer (a Dell Latitude D630) and not in the code, because it only affects him. Complicating the matter: I don't work for IS, and this project is not supported by IS. If I'm going to get our tech support to fix the problem, I had better be able to explain exactly what to do and how to do it, because they aren't going to invest any resources. I don't even have admin rights on the computer (and neither does its regular user). I've asked him to bring his laptop the next time he visits my building. (Just to make matters worse, he doesn't usually work in the same location as me or the other developer.) Any suggestions on debugging the problem? My first try will be to uninstall and reinstall Office, which I can do using corporate utilities without being admin. Note: yes, those are old versions of Office and Windows; we expect to upgrade later this year.

  • Redeploy using Active Directory

    - by Noam Gal
    I am trying to use Group Policy to deploy our MSI through AD. For some strange reason, when I overwrite the MSI with a newer version and then go to the policy and click "Redeploy Application", the application gets uninstalled on the users' machines, and all registry keys, binaries and shortcuts are gone. The "Add/Remove Programs" entry for the application remains, though. To isolate the problem, I created a minimal vdproj that does nothing but write its Product Version to a registry key, and built two versions of it (1.0.0 and 1.1.0). I still see the same behavior when using this MSI in my AD environment. I did check that the Package Codes and Product Codes differ between the two versions, that the Upgrade Codes are identical, and that RemovePreviousVersions is set to true. Checking with another MSI (Firefox 3.0.0 and 3.6.3) that I downloaded from a site that packages builds specifically for AD deployment, it worked just as expected: I first installed 3.0.0, then overwrote the MSI, clicked "Redeploy", and users got 3.6.3 after the next log-off/log-on. What am I missing here?
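    To confirm what Windows Installer actually ended up with on a client, a quick inventory check helps separate "upgrade failed" from "remove-only ran" (the "MyApp" filter is a hypothetical name; Win32_Product is slow but accurate):

        # Run on an affected client: list name, version and ProductCode
        Get-WmiObject -Class Win32_Product |
            Where-Object { $_.Name -like "*MyApp*" } |
            Select-Object Name, Version, IdentifyingNumber

    If the 1.1.0 ProductCode never shows up, the redeploy really did run as an uninstall, which points back at overwriting the file in place; the usual fix is to add the new MSI to the GPO as a separate package configured to upgrade the old one, rather than overwriting and redeploying.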

  • Server freezes while installing Redhat Enterprise Linux Server 6

    - by eisaacson
    We've tried both of the first options:

    1. Install or upgrade an existing system
    2. Install system with basic video driver

    When trying option #1, it gets to a screen that has a solid cursor about halfway down, then freezes. When trying option #2, it freezes at the point where it says:

        Waiting for hardware to initialize...

    Of course, we bought the unsupported version and haven't found anything to help us so far. Here are the specs of the server in the original post:

    - ASUS P8Z68-M Pro LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel motherboard with UEFI BIOS
    - RAIDMAX Reiter ATX-305WBP black steel/plastic ATX mid tower case, 450W power supply
    - Intel Core i7-2600 Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W quad-core desktop processor, Intel HD Graphics 2000 (BX80623I72600)
    - 16GB RAM
    - OCZ Agility 3 SSD 120GB

    From some of the posts out there, could the UEFI BIOS or the Sandy Bridge processor be a culprit here? We just tried the DVD on a different computer, and it got past that point with ease - it's a standard Dell build, compared to our custom machine. Could the installer be having difficulty recognizing drivers? How do we get past that?
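    If the hang is the installer's kernel mode-setting choking on the HD 2000 graphics, extra boot parameters are the cheap experiment before anything else. At the installer boot prompt, try in order (generic kernel/anaconda options, not guaranteed fixes): nomodeset to skip kernel mode-setting, xdriver=vesa to force the fallback video driver, and noapic acpi=off as a last resort for the "Waiting for hardware to initialize..." hang:

        linux nomodeset
        linux xdriver=vesa nomodeset
        linux noapic acpi=off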

  • Virtualization in Ubuntu 9.10

    - by Jeff Dege
    I have an existing CentOS 5 installation, and I would like to upgrade to Ubuntu. Thing is, I don't want to be down for as long as it will take to get my entire environment moved over - software installed, connectivity configured, etc. I'd like to take it one step at a time, but I don't really want to keep rebooting back and forth from the new OS to the old OS. That's what I did the last time I moved to a new OS, and it got old real fast. So, since my new motherboard is virtualization-ready (AMD Phenom II 945 quad-core), I figured I could create a virtual machine, under the new OS installation, that runs the old OS installation. The problem is that the documentation I've been able to find is pretty sparse. I've found a lot of possibilities, and little information on which of them would be capable of doing what I want. I have a new Ubuntu 9.10 installation, and a second disk containing the CentOS 5 installation, and I don't know where to go next. Any help would be appreciated.
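    Since the Phenom II has AMD-V, KVM can boot the existing CentOS disk directly as a guest. A minimal sketch, assuming the CentOS installation lives on /dev/sdb - double-check the device name first, and never mount that disk in the host while the VM is using it:

        # Install KVM on the Ubuntu 9.10 host
        sudo apt-get install qemu-kvm

        # Confirm the CPU exposes hardware virtualization (expect a nonzero count)
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # Boot the physical CentOS disk as a VM with 2GB RAM and user-mode networking
        sudo kvm -m 2048 -drive file=/dev/sdb -net nic -net user

    CentOS will see different (emulated) hardware than the physical box, so expect to adjust things like the X configuration and NIC naming inside the guest, but a console boot via the default IDE emulation generally works.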

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years, enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:

    - Gigabyte GA-X58A-UD3R Intel X58 (socket 1366) DDR3 motherboard
    - Intel Core i7 960 3.20GHz (Bloomfield) (socket LGA1366)
    - 24GB triple-channel RAM

    The host OS runs on an OCZ SSD, and all the VMs run on a 2TB Marvell SATA3 RAID 0 array consisting of two Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB array and appear to be getting less than 3Mbs, but it can adequately run a 4-VM farm including a DC, a (SQL) database server and IIS application servers. I recently upgraded the SSD the host runs on to a 256GB OCZ Vertex 4, and took the opportunity to move to Windows Server 2012 and install the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (converting it to .vhdx), and I have also tried creating a brand new Windows Server 2008 R2 VM, but both run extremely slowly, and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size (not expanding) HD, and uses an external virtual network running on a separate NIC from the host's. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check or configure, and my next option will be to reinstall the host OS, but before I do I would very much appreciate any advice from the experts out there. Thanks.
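    Before reinstalling, one 2012-specific suspect worth ruling out is Virtual Machine Queues (VMQ) on the NIC backing the external virtual switch - buggy VMQ support on some gigabit adapters is a known cause of dire guest performance. A sketch of the check, using cmdlets that ship with Server 2012 (the adapter name is a placeholder):

        # List physical adapters and whether VMQ is enabled on each
        Get-NetAdapterVmq

        # Temporarily disable VMQ on the adapter bound to the external vSwitch
        Disable-NetAdapterVmq -Name "Ethernet 2"

    If guest performance recovers, chase a NIC firmware/driver update and re-enable VMQ afterwards.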

  • Windows 7 on a 64-bit computer

    - by GetFree
    I read on Wikipedia that Windows 7 on a 64-bit PC needs twice as much RAM as on a 32-bit PC. I understand the claimed reason: pointers stored in memory take 8 bytes rather than just 4. That, in simple terms, would mean your amount of useful RAM is cut in half when you use 64-bit Windows 7. Now, I have an Intel Core 2 Duo laptop currently running Windows Vista (2GB of RAM). My question is: since Core 2 is a 64-bit architecture, if I upgrade to Windows 7 will my laptop work as if it had just 1GB of RAM? Or, to say it in other words: on a 64-bit PC with Windows 7, do you need twice as much RAM as you would on a 32-bit PC to get the same performance? If I am right, then I'd say it's a terrible business to run Windows 7 on a 64-bit computer (I hope I am mistaken, though).

    Follow-up: after some answers, I'm realizing it's not the same thing to have a 32-bit OS on a 64-bit PC as a 64-bit OS on a 64-bit PC. Apparently the claimed doubling of RAM usage applies when both the OS and the PC support 64 bits. I'd like new answers to address this issue. Also, is it possible to have more than 4GB of RAM on a 64-bit PC using a 32-bit version of Windows?

  • Linux: can't downgrade ATI graphic driver

    - by java.is.for.desktop
    I had the old 10.9 ATI driver, and it worked fine. Sadly, I decided to upgrade to the new 10.12, and after that HDMI stopped working. So I want to downgrade back to 10.9. I did everything by the book, but it fails. As root I do:

        cd /usr/share/ati
        sh ./fglrx-uninstall.sh

    The uninstall process tells me that everything was uninstalled fine, and after a reboot I have the default open-source ATI driver that comes with X11 (or the kernel?). The ATI control panel shortcut points to nowhere, so the driver seems uninstalled. Now I install the older (10.9) proprietary ATI driver, and that also finishes successfully. But after a reboot, the ATI control panel tells me that I still have 10.12 - and it seems to be true, because HDMI doesn't work. So, how can I completely purge the new non-working driver and downgrade to the old one?
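    Between the uninstall and the 10.9 install, a more aggressive cleanup can help; these are generic commands, and the paths are the usual fglrx leftovers rather than anything verified on this box, so check each one before deleting:

        # Make sure no fglrx kernel module is loaded or still present on disk
        lsmod | grep fglrx
        find /lib/modules -name "fglrx*"

        # Remove Catalyst's configuration remnants so the 10.9 installer starts clean
        rm -rf /etc/ati

        # Rebuild module dependencies before reinstalling 10.9 and rebooting
        depmod -a

    If find still turns up an fglrx.ko under the running kernel's directory after the uninstall, delete it manually; a stale module that outranks the freshly installed one is a classic way to "keep" 10.12 after a downgrade.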

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years - from ISA capture cards to parallel-port-controlled shutter glasses for 3D games - I've seen many, many accessories and computer peripherals come and go, and the nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was computers themselves. I'm more interested in the mice, the scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the '80s and '90s... hardware you might find in a geek's basement that would be completely useless now but was the coolest thing around when it was new. Examples would be a drawing tablet I had for my TI-99 computer, the audio tape player accessory for a C64 that let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember the IBM PCjr upgrade kit that added a floppy drive, more memory and the AT switch in the back? I'd love to find either a wiki or an already-assembled list containing many of these weird (or common) accessories. I've had so many over the years that I suppose I could start a wiki here if such a list doesn't already exist.

  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way; if so, let me know. I have a server which is reachable over SSH from both the public internet and the local LAN, and I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Among the things I'd like to differ: allow password or public-key authentication on the LAN, but public-key only from the internet; and let all (real) users log in from the LAN side, while only certain authorized users are individually whitelisted to log in from the internet. As far as I can tell, this requires two SSH daemons running on different ports with different sshd_configs. I am fine with the different-ports part; I can easily forward port 22 to any port I want through my firewall. So my question is what the best way is to actually START the second sshd under Ubuntu 10.04 LTS. Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I suppose I could manually hack the second sshd into /etc/init/ssh.conf, but I'm not sure whether that would get overwritten by the package. However I do it, it's important that both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
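    Rather than hacking /etc/init/ssh.conf (which belongs to the package and may be replaced on upgrade), a second upstart job under its own name survives package upgrades untouched, because openssh-server only ships ssh.conf. A sketch of /etc/init/ssh-internet.conf, modeled loosely on lucid's stock job - the job name and config path are made up for illustration:

        # /etc/init/ssh-internet.conf - second sshd with its own policy/port
        description "OpenSSH server (internet-facing policy)"

        start on filesystem
        stop on runlevel [!2345]
        respawn

        # -D keeps sshd in the foreground so upstart can supervise it; the
        # alternate config carries Port, PasswordAuthentication no, AllowUsers ...
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config_internet

    The packaged job's pre-start script creates /var/run/sshd; if the second job could start first, copy that stanza across too. One gap remains: a package upgrade restarts only the stock job, so bounce the second instance yourself (restart ssh-internet) as part of any upgrade routine.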

  • Updated my WAMP Server and MySQL is eating up 580mB of memory

    - by Jon
    I updated my dev box's WampServer, and along with updating PHP and Apache, MySQL updated to 5.6.12. After doing that, I copied the data folder from my old (5.1.36) install to the new one, and now MySQL takes up 580MB, which is way too much, since I'm the only person using it (locally) and there are only 20 or so databases on it, none of which have MEMORY tables. How can I get this down to a decent amount? My my.ini:

        # For advice on how to change settings please see
        # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
        # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
        # *** default location during install, and will be replaced if you
        # *** upgrade to a newer version of MySQL.

        [mysqld]

        # Remove leading # and set to the amount of RAM for the most important data
        # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
        # innodb_buffer_pool_size = 128M

        # Remove leading # to turn on a very important data integrity option: logging
        # changes to the binary log between backups.
        # log_bin

        # These are commonly set, remove the # and set as required.
        # basedir = .....
        # datadir = .....
        # port = .....
        # server_id = .....

        # Remove leading # to set options mainly useful for reporting servers.
        # The server defaults are faster for transactions and fast SELECTs.
        # Adjust sizes as needed, experiment to find the optimal values.
        # join_buffer_size = 128M
        # sort_buffer_size = 2M
        # read_rnd_buffer_size = 2M

        sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

    Database info:

        Storage Engine    Data Size    Index Size    Total Size
        InnoDB            48.00 KB     0.00 B        48.00 KB
        MEMORY            0.00 B       0.00 B        0.00 B
        MyISAM            163.64 MB    122.49 MB     286.13 MB
        Total             163.69 MB    122.49 MB     286.18 MB
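    Worth knowing: MySQL 5.6 turned the Performance Schema on by default, and its pre-allocated buffers alone commonly account for several hundred MB on an otherwise idle server - 5.1 had no such thing, which fits the jump you're seeing. A sketch of my.ini additions for a small single-user dev box (values illustrative, not tuned):

        [mysqld]
        # The big one on 5.6: the Performance Schema pre-allocates a lot of memory
        performance_schema = OFF

        # Modest caches for mostly-MyISAM data (~290MB total) on a dev box
        key_buffer_size = 64M
        innodb_buffer_pool_size = 64M
        table_open_cache = 400

    Restart MySQL after the change and compare the working set; if it's still high, the remaining usual suspects on 5.6 are the per-connection buffers multiplied by max_connections.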
