Search Results


  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a lot of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and we are looking to migrate to SEO-friendly URLs. The problem we're facing now: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects; that is not the issue here. The problems I am running into are listed below.

    1. Is there a limit to the number of redirects in IIS?
    2. Would having even a few thousand redirects affect IIS performance?
    3. My understanding is that we would not be passing along PageRank to the new URLs. Is that true? (Not a major question; I can ask on an SEO forum if nobody here is sure.)
    4. Would something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?

    Our server is currently running Server 2003, but as part of the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (e.g. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with approaches that sound problematic (such as defining 6000 individual redirects).
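
    One pattern worth noting for question 4: URL Rewrite can drive a single redirect rule from a rewrite map, so the 6000 old-to-new pairs live in one lookup table instead of 6000 rules. A minimal sketch (the map name, keys, and target URLs below are placeholders, not your real mapping):

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="LegacyAspMap">
              <!-- one entry per old page; keys/values here are examples -->
              <add key="/123.asp" value="/products/blue-widget" />
              <add key="/124.asp" value="/products/red-widget" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Legacy ASP redirect" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{LegacyAspMap:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>

    The map lookup is a hash lookup rather than a per-rule scan, which is the usual answer to the "thousands of redirects" performance worry.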


  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment, and I could use some tips on the best way to proceed.

    Background: The SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating there correctly. This was mainly done to provide fault tolerance for the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1: P2V the primary DC
    1. Power off the primary DC
    2. Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
    3. P2V (cold clone) the primary DC
    4. Boot the new PDC VM
    5. Allow the new hardware to be detected
    6. Remove the old NIC hardware from Device Manager
    7. Assign the old IPs to the new virtual NICs
    8. Reboot the PDC VM; confirm connectivity and no major issues
    9. Power on the secondary DC; confirm replication

    Option 2: Create a new VM, transfer roles, remove the original DC from the domain
    1. Create a new VM and install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions them.)
    2. Add the VM to the domain and promote it to the DC role (does this start the 7-day timer during which two SBS servers can coexist in the same domain?)
    3. Set up RRAS on the new VM
    4. Set up IIS/FTP on the new VM
    5. Move the file shares to the new VM
    6. Transfer the FSMO roles to the new VM
    7. dcpromo the original primary DC out of the domain
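
    If it helps with step 6 of option 2: FSMO transfers are commonly done with ntdsutil. A rough sketch of the interactive session follows (the server name NEWSBSVM is a placeholder; each transfer command prompts for confirmation):

        ntdsutil
        roles
        connections
        connect to server NEWSBSVM
        quit
        transfer schema master
        transfer domain naming master
        transfer RID master
        transfer PDC
        transfer infrastructure master
        quit
        quit

    Note that SBS expects to hold all five FSMO roles, which is one reason the role transfer has to complete before the old box is demoted.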


  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC, which happens to be a pivot monitor. I've read lots of forum threads related to my problem but haven't come across a solution; I have the same symptoms as dozens of posts, but no matter what I try, it just doesn't work. I've already changed the xorg.conf file and added the following in the device section, just under Driver "nvidia", for my second monitor:

        Option "RandRRotation" "on"

    After saving and rebooting, I try to rotate the screen with the nvidia X server settings by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia-settings window and does nothing. I also tried from the terminal:

        xrandr -o right

    and get the following error:

        X Error of failed request:  BadMatch (invalid parameter attributes)
          Major opcode of failed request:  154 (RANDR)
          Minor opcode of failed request:  2 (RRSetScreenConfig)
          Serial number of failed request:  14
          Current serial number in output stream:  14

    I did manage to rotate the monitor with Option "Rotate" "CCW" instead of "RandRRotation". The problem with that solution is that the second monitor ends up in the right orientation, but any window you open on that screen is practically unchangeable: you can't resize or move it, which makes it useless for reading PDFs, the main reason I bought this second screen to help me write my thesis. Any help is really appreciated.

    Output of sudo lshw -c video:

        hiram@hiram-linux:~$ sudo lshw -c video
          *-display
               description: VGA compatible controller
               product: nVidia Corporation
               vendor: nVidia Corporation
               physical id: 0
               bus info: pci@0000:01:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nvidia latency=0
               resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff
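
    For anyone hitting the same BadMatch: on drivers that expose RandR 1.2 or later, rotation is set per output rather than for the whole screen, so the per-output form is worth trying before editing xorg.conf (the output name DVI-I-1 below is an example; take the real name from the query):

        xrandr --query                           # list output names
        xrandr --output DVI-I-1 --rotate right   # rotate only that output
        xrandr --output DVI-I-1 --rotate normal  # undo

    Whether this works with a given nvidia driver version depends on its RandR support, which is exactly what the legacy whole-screen "xrandr -o" path lacks here.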


  • Recent DDE / file open issue with Office 2007 affecting only a few machines, is a Windows Update to blame?

    - by kafka
    All our workstations run Windows 7 Professional 64-bit. It started with one machine, then another, then another couple of machines having problems accessing Word files both locally and on the network. It doesn't happen on my machine, though. Affected users get the error message "There was a problem sending the command to the program". I've Googled for solutions, but none of the answers worked. They suggested deleting certain registry keys, unregistering and re-registering the program for DDE, resetting the way the shell opens .docx files, and so on, all to no avail. As it affects local files and network shares alike, I believe the problem lies with the clients and not the server, and I'm starting to suspect that a recent Windows Update caused this. I've tried comparing the updates on my working machine against an affected machine, but I can't immediately see any major differences. Has anyone else recently encountered this problem? What are the best steps to take to further isolate the cause?
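
    A quick sketch for making the update comparison systematic rather than eyeballed: dump the hotfix list on a working and an affected machine, then diff the two files (filenames are examples):

        rem run on each machine
        wmic qfe get HotFixID,InstalledOn /format:table > %COMPUTERNAME%-updates.txt
        rem then compare the two dumps
        fc working-updates.txt affected-updates.txt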


  • Tomcat vs. full J2EE solutions

    - by jrhickey
    We are getting ready to make a major revision to our web application architecture, which currently runs on JBoss 4.2. At first we were looking at moving from 4.2 to JBoss 6, but after some research Tomcat may be a better solution for us. My first question is: is there anything JBoss can do that Tomcat cannot, assuming you are using the correct add-ons? We do not really use EJBs in our solution, and there appear to be simple add-ons for web services, JMX, and other features. Tomcat appears to have much better support, faster upgrade cycles, and many, many books. Since there is less to the system, it also seems much easier to support from an admin point of view. What am I missing? The main features we want to enable are better clustering support and session replication/persistence. We will consider other application servers as well, such as Glassfish/Geronimo.

    Quote from a web article: "Apache Tomcat is the world's most widely used web application server, with over one million downloads per month and over 70% penetration in the enterprise datacenter. Tomcat is used to power everything from simple one-server sites to large enterprise networks."
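
    On the clustering/session-replication point specifically: in Tomcat this is switched on in server.xml rather than via an add-on. A minimal all-defaults sketch (multicast membership and tuning omitted):

        <!-- conf/server.xml, inside the <Engine> (or <Host>) element:
             enables in-memory session replication with default settings -->
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

    Each webapp that should replicate its sessions also needs a <distributable/> element in its web.xml, and session attributes must be serializable.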



  • Recommended drive encryption solution

    - by Chris Driver
    Hello, I will soon be purchasing a number of laptops running Windows 7 for our mobile staff. Due to the nature of our business I need drive encryption. Windows BitLocker seems the obvious choice, but it looks like I need to purchase either the Enterprise or Ultimate edition of Windows 7 to get it. Can anyone offer suggestions on the best course of action:

    a) Use BitLocker, bite the bullet and pay to upgrade to Enterprise/Ultimate
    b) Pay for a cheaper third-party drive encryption product (suggestions appreciated)
    c) Use a free drive encryption product such as TrueCrypt

    Ideally I am also interested in real-world experience from people who are using drive encryption software, and any pitfalls to look out for. Many thanks in advance...

    UPDATE: I decided to go with TrueCrypt for the following reasons:

    a) The product has a good track record
    b) I am not managing a large number of laptops, so integration with Active Directory, management consoles, etc. is not a huge benefit
    c) Although eks did make a good point about Evil Maid (EM) attacks, our data is not desirable enough to consider that a major factor
    d) The cost (free) is a big plus, but not the primary motivator

    The next problem I face is imaging: Acronis/Ghost/etc. will not image encrypted drives unless I perform sector-by-sector imaging, which means an 80 GB encrypted partition creates an 80 GB image file :(


  • EC2 AMI won't boot after edit

    - by Eric Lars0n
    I did something stupid: I got a new laptop, copied everything over to the new one, then wiped the old one clean. Then I realized that I had forgotten to copy the private key out of .ssh that I use to connect to my EBS-backed AWS instance, so I can't log in to my custom AMI. So I created a new volume from the snapshot of the AMI, started up a public instance, attached the volume to it, and edited sshd_config to allow password logins. I then unmounted the volume, detached it, made a snapshot of it, and made a new AMI from the snapshot. The new AMI launches, but it never passes the status checks and is not reachable. What am I doing wrong? Or, alternatively, how can I fix my problem?

    Edit: adding some of the console output:

        Linux version 2.6.16-xenU ([email protected]) (gcc version 4.0.1 20050727 (Red Hat 4.0.1-5)) #1 SMP Mon May 28 03:41:49 SAST 2007
        BIOS-provided physical RAM map:
         Xen: 0000000000000000 - 000000006a400000 (usable)
        980MB HIGHMEM available.
        727MB LOWMEM available.
        NX (Execute Disable) protection: active
        IRQ lockup detection disabled
        RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
        NET: Registered protocol family 2
        Registering block device major 8
        XENBUS: Timeout connecting to devices!
        Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(8,0)
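
    For context: the XENBUS timeout followed by the unknown-block(8,0) panic is what a paravirtual instance typically prints when the registered kernel and root-device pairing is wrong, which can happen when an AMI is re-registered from a snapshot with default settings. One hedged thing to try is re-registering the snapshot with the same kernel (AKI) and root device name as the original AMI; a sketch with the modern CLI (the aki- and snap- IDs are placeholders):

        # re-register using the original AMI's kernel and root device name
        aws ec2 register-image --name fixed-ami \
            --virtualization-type paravirtual \
            --kernel-id aki-xxxxxxxx \
            --root-device-name /dev/sda1 \
            --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"SnapshotId":"snap-xxxxxxxx"}}]'

    The original AMI's kernel ID is visible in its details in the console or via describe-images.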


  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both partitioned similarly. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with:

        mdadm: set device faulty failed for /dev/sdb2:  No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address

    I then hot-swapped the failed disk, partitioned the new disk, and added the partitions to the respective arrays. All arrays rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare, and the array's status remains degraded. Here's what mdadm --detail /dev/md2 shows:

        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

          Intent Bitmap : Internal

            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

                   Name : ldmohanr.net:2  (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed

               2       8       18        -      spare   /dev/sdb2

    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help removing device number 1, which is in 'removed' state, and making /dev/sdb2 active.
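
    A sketch of one thing worth trying, assuming the kernel really has dropped the old device node: mdadm accepts the keywords "failed" and "detached" in place of a device name, which covers exactly the case where the original name no longer resolves:

        # free the stale 'removed' slot; the spare should then be
        # pulled in for resync automatically
        mdadm /dev/md2 --remove detached
        cat /proc/mdstat    # watch the rebuild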


  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago, and it has always seemed to work for me. However, I am not completely sure that I configured everything right, and I don't have the experience to check, so I would be very grateful for some advice, or just verification that everything looks right so far. I am using Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1 TB SATA hard drives. This is the output of mdadm --detail /dev/md0:

        /dev/md0:
                Version : 1.2
          Creation Time : Sun Jan 29 11:25:18 2012
             Raid Level : raid1
             Array Size : 976761424 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976761424 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Sat Jun  7 10:38:09 2014
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : argo:0  (local to host argo)
                   UUID : 1596d0a1:5806e590:c56d0b27:765e3220
                 Events : 996387

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8        0        1      active sync   /dev/sda

    The RAID is mounted successfully:

        friedrich@argo:~ ? sudo mount -l | grep md0
        /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered)

    Basically my questions are: Why do I only have one active device? What does the 'removed' state in the device list mean? I have also noticed some strange error messages on the console at system start and shutdown, repeating in the background when I switch with Ctrl+Alt+F2:

        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: COMRESET failed (errno=-32)
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen

    Are these errors related to the RAID? Something seems wrong with the SATA devices. All together the system works (I can read and write to the mounted RAID), but I have always had these strange errors at startup and shutdown (and probably continuously in the background). Thanks for your help.
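
    For what it's worth: 'removed' in that slot means the array has lost its second member and is running on a single disk, and the repeating ata2 resets suggest the second drive or its link is the culprit. Once the cable/port/disk issue is sorted out, re-adding would go roughly like this (a sketch; /dev/sdb as the second disk is an assumption, verify the name first):

        # check the disk's health first (smartctl is from smartmontools)
        smartctl -a /dev/sdb
        # re-add it to the mirror and watch the resync
        mdadm /dev/md0 --add /dev/sdb
        watch cat /proc/mdstat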


  • How to display/define mirror/striping pairs with mdadm

    - by Chris
    I want to make a standard Linux software RAID 10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want each mirror to span two different vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating RAID 1 + RAID 0, but mdadm supports RAID level 10 directly. I just can't figure out how that RAID 10 is laid out and how the data is distributed. The output of mdadm --detail /dev/md10:

        /dev/md10:
                Version : 1.2
          Creation Time : Wed May 28 11:06:23 2014
             Raid Level : raid10
             Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
          Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
           Raid Devices : 4
          Total Devices : 4
            Persistence : Superblock is persistent

            Update Time : Wed May 28 11:06:23 2014
                  State : clean, resyncing (PENDING)
         Active Devices : 4
        Working Devices : 4
         Failed Devices : 0
          Spare Devices : 0

                 Layout : near=2
             Chunk Size : 512K

                   Name : pdwhost:10  (local to host pdwhost)
                   UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
                 Events : 0

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       81        1      active sync   /dev/sdf1
               2       8       97        2      active sync   /dev/sdg1
               3       8      113        3      active sync   /dev/sdh1

    does not really give any information about that. How it should be:

    - RAID 1 / mirror over /dev/sda1 + /dev/sdf1 and over /dev/sdg1 + /dev/sdh1
    - RAID 0 over the two RAID 1 pairs

    Is it possible to do that with the built-in "level=10", and how can I see which pairs are mirrored? Thanks a lot for your help.
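
    As I understand the near=2 layout: each chunk and its replica land on adjacent devices in the order given at creation time, so consecutive devices in the --create command form the mirror pairs. Ordering the vendors alternately therefore gives each mirror one disk from each vendor; a sketch:

        # near=2: sda1+sdf1 form one mirror pair, sdg1+sdh1 the other
        mdadm --create /dev/md10 --level=10 --layout=n2 \
              --raid-devices=4 /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

    With that ordering, the device list in mdadm --detail (RaidDevice 0,1 then 2,3) reads off the pairs directly.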


  • On clients use generic driver for printer shared by CUPS

    - by Daniel
    I know this used to work really easily with the CUPS versions of a few years ago. Unfortunately, I'm failing to do the following with the current CUPS: how do I share a printer without using printer-specific drivers on the clients? The printer is a Samsung ML-2010. The important fact is that it needs a quite specific library for printing, called Splix. That is installed on the server and prints well. What causes trouble is using the printer over the network. I found out how to use Avahi under Linux to make use of DNS-SD to advertise and discover printers. But as far as I understand, the new CUPS exposes the internal driver interface on the network. This has two major issues:

    - anybody can fiddle with the driver, and I don't trust any uncommon printer library to be "network secure"
    - anyone who wants to use my printer, including guests, needs to install the specific drivers first

    I remember the old days, when I could enable "Share this printer" on the CUPS server and all clients would magically detect the printer, just send their job data to the server, and have it do the driver work. From everything I've read, I guess this is related to the changes Apple introduced to CUPS, including dropping the integrated network discovery protocol. If it helps: the server is Ubuntu LTS 12.04 with CUPS 1.5.3; the client is Arch Linux with the current CUPS 1.6.1. On another box with Ubuntu, the printer was at least set up automatically, but the mechanism used the Splix library locally.
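
    One workaround sketch for the client side (queue and host names below are examples): define a raw queue pointing at the server's queue, so jobs pass through the client unfiltered and the server's Splix driver does all the rendering:

        # on each client; no local driver, the server filters with Splix
        lpadmin -p ML2010 -E -v ipp://printserver:631/printers/ML2010 -m raw

    This restores the old "clients just send job data to the server" behaviour at the cost of doing the one-line setup per client instead of automatic discovery.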


  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what should be a pretty easy problem. I've extracted a working disk from a RAID 1 and I'm looking to copy only the data (the FS and RAID configuration don't matter) to another location (another FS). My problem is that I'm not able to mount this disk properly in another Linux system. I first looked at the partition table:

        # fdisk -l /dev/sdc

        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  1249535699   624767818+  fd  Linux raid autodetect
        /dev/sdc2      1249535700  1250017649      240975   fd  Linux raid autodetect
        /dev/sdc3      1250017650  1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the Linux RAID (md) tools. Once installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks

        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun  2 03:39:41 2008
             Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
             Array Size : 624767744 (595.82 GiB 639.76 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0

            Update Time : Tue Feb  7 22:34:59 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : a505b324 - correct
                 Events : 15148

              Number   Major   Minor   RaidDevice State
        this     1       8        1        1      active sync   /dev/sda1

           0     0       8       17        0      active sync   /dev/sdb1
           1     1       8        1        1      active sync   /dev/sda1

    From here I tried to mount, but I'm not comfortable with the md tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it... Anyone have a clue?
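
    A sketch of the usual way out of this state: the member was auto-grabbed as an inactive md0 (the (S) spare flag), which is why both mounts fail. Stop that array, then assemble it read-only from the single member and mount the md device instead of the raw partition (device names as above):

        mdadm --stop /dev/md0
        # --run starts the array even though it is degraded (one of two disks)
        mdadm --assemble --run --readonly /dev/md0 /dev/sdc1
        mount -o ro /dev/md0 /mnt/sdc1
        # then copy the data off, e.g. with rsync or cp -a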


  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than directories directly after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't rank very high by itself, and it doesn't get the boost it ought to be getting from the better-known la.truxmap.com.

    So a mistake I made a year ago is now haunting my page rank. I'd like to know the most painless way of resolving my dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the satisfactory page rank I'm currently getting from la.truxmap.com?

    EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!
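
    For the 301 itself, one hedged sketch: since the same App Engine app already answers on every hostname, the redirect can live in the application rather than in Google Apps, by inspecting the Host header. This uses the old webapp framework, and the subdomain-to-path mapping is an assumption for illustration:

        # minimal sketch for App Engine's webapp framework
        from google.appengine.ext import webapp

        class CityRedirect(webapp.RequestHandler):
            def get(self):
                host = self.request.host            # e.g. la.truxmap.com
                city = host.split('.')[0]           # assumed city prefix
                self.redirect('http://truxmap.com/%s%s' % (city, self.request.path),
                              permanent=True)       # issues a 301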


  • Viewing Postscript (or PDF) on OS X: Aliasing issues

    - by mankoff
    I am generating PostScript graphics and am trying to find a balance between no anti-aliasing and over-aliasing. If I use the raw Ghostscript viewer, gs, on the PostScript, it looks good: the text appears anti-aliased, but the image remains nice and blocky. Unfortunately, gs has no real user interface and loses all of the nice things that Preview.app has. I could install gv, but the dependency bloat is huge: it requires all of GNOME, and even then it isn't a great viewer compared to Preview.app or Skim.app. Here is an image viewed with gs: [screenshot]

    From a user-interaction and Mac-ish perspective, Preview.app (or Skim.app) is a much nicer program to use. Both have the option to turn anti-aliasing on or off, but neither setting looks very good. With anti-aliasing on, the image is blurry. With it off, the graphic matches what is seen in gs, but there are two issues:

    - Minor issue: the font is ugly, uglier than in gs.
    - Major issue: every PDF is un-anti-aliased, making it hard to read regular PDFs full of text.

    So, in summary:

    - Is there a way to manually generate the PDF from the PS that overcomes these issues?
    - Is there a way to find a middle ground of aliased/anti-aliased with Preview.app?
    - Is there another app that displays with the quality of gs but has a decent UI like Skim.app or Preview.app?
    - Is there a way to have Preview.app turn off anti-aliasing for only one file (containing graphics) but leave it enabled in general, so that text PDFs remain readable?
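
    On the first question, a sketch of one knob that sometimes matters when converting with Ghostscript's ps2pdf: keep images losslessly Flate-encoded instead of letting the auto-filter pick JPEG, so the blocky pixels survive into the PDF exactly. Whether the viewer then smooths them on screen is a separate, viewer-side setting:

        # force lossless image encoding in the generated PDF
        ps2pdf -dAutoFilterColorImages=false -dAutoFilterGrayImages=false \
               -dColorImageFilter=/FlateEncode -dGrayImageFilter=/FlateEncode \
               in.ps out.pdf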


  • Disable touchpad for mouse button region on new HP Pavilion models?

    - by John
    I bought a new HP Pavilion dv6 series laptop. The laptop itself is fine, but it has the new HP touchpad mouse, which I absolutely hate. It's such a stupid problem to have with a computer. The left and right mouse buttons are themselves part of the touchpad, meaning that if I tap a button without actually pressing it down, it registers the same way as the touchpad (the cursor moves, tap-to-click activates, etc.).

    This is a major annoyance because it prevents you from operating the touchpad with anything more than a single finger. If, for example, I use my right index finger to move the cursor on the touchpad and rest my left index finger on the left mouse button for more efficient mousing, the mouse reacts as if I'm trying to use two fingers to move it, and it will either just sit there or spaz out. The only way this works is if I keep the finger resting on the mouse button absolutely still, which is very difficult and therefore very annoying.

    Also, even if I do abide by the arbitrary new decree of single-finger touchpad operation, I still have a problem: when I press down on the left or right click button, the cursor moves slightly, what with the buttons also being part of the touchpad. This would not be that hard to avoid, except that they also decided to make the buttons much harder to press down. Now, whenever I go to click something, I press hard on the mouse button, causing my finger to slightly move or roll or flatten out a bit, causing the cursor to move slightly, and causing me to click on something different.

    What I would like to know is whether there is any way to disable the touchpad function over the button region. I have gone through all of the settings in the Synaptics menus, but I cannot find anything about this. Did I miss something in one of the menus? If not, are there any updated drivers that allow toggling of this function?


  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the memory overcommitment capabilities of VMware ESXi 5.0. I've read this paper from VMware, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning, and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure.

    I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines, with Windows XP, Windows 7, and Ubuntu for the workstation hosts, as well as CentOS and Windows Server 2008 instances for the servers. The problem is that the host machine only has 12 GB of RAM, and I don't have the option to stuff in more. I would like to know the best way to configure the hosts to achieve reasonable performance within these constraints. I have these two options:

    1. Allocate as little RAM as possible to each virtual machine.
    2. Allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest.

    Something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.


  • System x3550 M2 stuck on system initializing after firmware upgrades

    - by itmanager223
    Hey guys, I have been having a major issue for the last few hours. I have a System x3550 M2 server, and I ran the UEFI and IMM firmware upgrades from within Windows Server 2008 R2 x64. All the upgrades went fine and I was told to reboot. I performed the reboot cleanly using the OS restart option. Upon restarting, it showed the "system initializing" screen and stayed there for a good hour. After that hour I figured it had frozen, so I powered down and powered up again. Now I cannot get far enough into the boot to even see the IBM splash screen or the OS... I have tried switching the jumpers to the backup UEFI and backup IMM, with no luck. I have also reset the CMOS power, and I have pressed the reset button on the light path three times, with no luck. There are no lights lit on the light path to indicate that anything is broken; the only thing I see on the light path is the number 0.5. Any ideas, guys? I am quite stumped by this problem. Thanks a lot, Dani Cela


  • In which order does Excel process its formulae?

    - by dwwilson66
    I've got a fairly large spreadsheet with major calculations going on, and it's starting to slow down every time a value that feeds a calculated field is modified. I'm in the process of optimizing the file, adding arrays where I can, and seeing where I can shave off a few milliseconds here and there.

    Let's say there's data in columns A-H. Column H is set based on relationships between values in columns A, B, and C, which change dynamically from an outside program. Users enter data in column F. Formulas in D and E calculate relationships between F and H, and between H and D, respectively.

    How does Excel manage formulas in a case like this, where they depend on data further into the sheet? Will my value in H be available the first time the formulas in D and E calculate? Or will D and E calculate based on an old value of H, because H's update hasn't happened yet? Are there any efficiencies to be gained by positioning dependencies in particular rows or columns of the spreadsheet? Do positions above and to the left of the current position get processed sooner than those below and to the right?
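
    For what it's worth, my understanding is that Excel recalculates along the dependency graph rather than by sheet position, so sheet layout carries no ordering penalty. A sketch (cell addresses are only for illustration) that would recalculate in the order H2, then D2, then E2, regardless of left-to-right position:

        H2: =A2+B2*C2     ' recalculated first: depends only on raw inputs
        D2: =F2-H2        ' waits for H2, even though D sits left of H
        E2: =H2/D2        ' waits for both H2 and D2

    On the first full calculation Excel builds this dependency tree; afterwards it recalculates only cells downstream of a change, which is why volatile functions and overly broad references tend to hurt more than cell placement.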


  • 7-Zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7-Zip archive using 7za.exe. This should be simple, but it has turned out to be a major pain. I created a file that contains the paths (7za a out.7z @list.txt), but once there are too many (~100) files, it fails. Apparently the content of the argument file is pushed onto the command-line buffer [Edit: this was likely misinformation on my part; either way, it was not the reason], which is far too small (the number of files to add is more than one million). Splitting the process up by adding the files one by one is not feasible, due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy, and finally replaces the original. This is terribly slow once the archive grows to a couple hundred MB in size. So far I am using a combination of the two approaches, adding a dozen files each time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7-Zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably, and it was repeatedly suggested that I just use 7za instead.
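
    One hedged thing to rule out, given the edit above (7za does read list files itself rather than expanding them onto the command line): the list-file character encoding. 7-Zip has a -scs switch for exactly this, and a path with non-ANSI characters somewhere around the ~100th entry would produce this kind of "fails past N files" symptom:

        rem list.txt: one path per line, saved as UTF-8
        7za a out.7z @list.txt -scsUTF-8

    If that is the cause, a single 7za invocation over the full million-entry list avoids the slow copy-and-replace loop entirely.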


  • RAID degraded on Ubuntu server

    - by reano
    We're having a very weird issue at work. Our Ubuntu server has 6 drives, set up with RAID 1 as follows:

    - /dev/md0, consisting of /dev/sda1 and /dev/sdb1
    - /dev/md1, consisting of /dev/sda2 and /dev/sdb2
    - /dev/md2, consisting of /dev/sda3 and /dev/sdb3
    - /dev/md3, consisting of /dev/sdc1 and /dev/sdd1
    - /dev/md4, consisting of /dev/sde1 and /dev/sdf1

    As you can see, md0, md1, and md2 all use the same 2 drives (split into 3 partitions). I should also note that this is Ubuntu software RAID, not hardware RAID. Today, the md0 RAID 1 array shows as degraded: it is missing the /dev/sdb1 drive. But since /dev/sdb1 is only a partition (and /dev/sdb2 and /dev/sdb3 are working fine), it's obviously not the whole drive that has gone AWOL; it seems the partition itself is missing. How is that even possible? And what can we do to fix it?

    My output of cat /proc/mdstat:

        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid1 sda2[0] sdb2[1]
              24006528 blocks super 1.2 [2/2] [UU]

        md2 : active raid1 sda3[0] sdb3[1]
              1441268544 blocks super 1.2 [2/2] [UU]

        md0 : active raid1 sda1[0]
              1464710976 blocks super 1.2 [2/1] [U_]

        md3 : active raid1 sdd1[1] sdc1[0]
              2930133824 blocks super 1.2 [2/2] [UU]

        md4 : active raid1 sdf2[1] sde2[0]
              2929939264 blocks super 1.2 [2/2] [UU]

        unused devices: <none>

    FYI, I tried the following:

        mdadm /dev/md0 --add /dev/sdb1

    but got this error:

        mdadm: add new device failed for /dev/sdb1 as 2: Invalid argument

    Output of mdadm --detail /dev/md0:

        /dev/md0:
                Version : 1.2
          Creation Time : Sat Dec 29 17:09:45 2012
             Raid Level : raid1
             Array Size : 1464710976 (1396.86 GiB 1499.86 GB)
          Used Dev Size : 1464710976 (1396.86 GiB 1499.86 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov  7 15:55:07 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : lia:0  (local to host lia)
                   UUID : eb302d19:ff70c7bf:401d63af:ed042d59
                 Events : 26216

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       0        0        1      removed
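
    A sketch of one common way past that "Invalid argument", assuming /dev/sdb1 still exists and is the right size: stale md metadata on the partition can make the add fail, so clear its superblock and try again. Double-check the device name first, since zeroing is destructive to that partition's RAID metadata:

        mdadm --examine /dev/sdb1           # confirm what is actually on it
        mdadm --zero-superblock /dev/sdb1   # wipe the stale md metadata
        mdadm /dev/md0 --add /dev/sdb1
        cat /proc/mdstat                    # watch the resync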


  • How can I make the Firefox Password Manager more intelligent?

    - by Philip
    I have two major gripes about the Firefox password manager:

    1. If I restore a session with multiple tabs containing sites with saved passwords, the master password prompt pops up once for each of them, even if I enter the password correctly the first time.
    2. Sometimes I want Firefox not to use my saved passwords at all (e.g. because I want to let someone else use it without getting access to my accounts), but hitting cancel results in erratic behavior: sometimes the box just pops up again and again, and sometimes it stops and behaves as I wish (continuing to browse without my passwords) until it encounters another site that wants my password. So even when hitting cancel does leave me free to browse passwordless, it doesn't get Firefox to leave me alone for the whole session.

    Thus: do you know of any tweak or add-on that could (1) make Firefox smart enough to ask for my master password once and then leave me alone, and/or (2) add an option (checkbox, toggle button, etc.) to browse "for now" (until I toggle the option) or even "for this session" (until I restart) without using any of my saved passwords? I'm running Firefox 3.5.6 on Mac OS X 10.5; thanks.
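
    Not a full answer, but one built-in knob worth knowing about (a sketch; set it in about:config or a user.js file in the profile directory): with form autofill switched off, saved logins are only used when explicitly picked from the field's dropdown, which covers part of gripe 2:

        // user.js: don't fill saved credentials into login forms automatically
        user_pref("signon.autofillForms", false);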


  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on its own subnet, all connecting back to the datacenter through a 3 Mbps WAN. The ERP server runs in the datacenter and is accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients; it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an auto-update feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not patches but a full version of the client, even for minor upgrades (bad design). Since the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients.

    Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have each local DNS server return a different IP for ERPUPDATE than the other sites? Example: client 1 is at site A, client 2 is at site B. Each runs the software, and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client:

    1. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
    2. DNS at site A returns 192.1.1.5
    3. Client 1 pulls the update from 192.1.1.5
    4. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
    5. DNS at site B returns 192.1.2.5
    6. Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation; I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
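
    A sketch of one way to get that per-site answer on Windows DNS (zone name and IPs below are examples): give each site's DC a standard primary zone for the single ERPUPDATE host, stored in a local file rather than AD-integrated so it does not replicate to the other sites:

        rem run on each site's DC, substituting that site's server IP
        dnscmd . /ZoneAdd erpupdate.example.local /Primary /file erpupdate.dns
        dnscmd . /RecordAdd erpupdate.example.local @ A 192.1.1.5

    Clients resolving erpupdate.example.local then get whichever address their local DC holds, which is the split-horizon behaviour described in the example above.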


  • How can I get Windows to apply its settings?

    - by Jouke van der Maas
    I have a computer with a major problem: it blue-screens when the login screen loads. I've been using this guide to troubleshoot the issue, and now I've run into a problem. I have determined that the issue is not bad memory or a bad hard drive; according to the guide, this means the problem is in the OS.

    I've tried to follow the steps, but Windows (Vista SP1) somehow doesn't remember any changes. On every reboot, the computer is in exactly the same state it was in before; no changes to system settings or files are recorded. Since this means I can't check what is causing the problem, I can't fix the PC. Is there a way to find out what's causing this? Is it a mode Windows goes into to protect itself, or is it some other problem? Anything to help troubleshoot would be of great help here.

    PS. I'm kind of new to this site. If I messed up, please tell me in the comments.


  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512 MB. Yes, I know that's low, but it's what we have for the time being. Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables it recommends adjusting, I don't even see most of them in the mysql.cnf file:

        [client]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket          = /var/run/mysqld/mysqld.sock
        nice            = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user            = mysql
        socket          = /var/run/mysqld/mysqld.sock
        port            = 3306
        basedir         = /usr
        datadir         = /var/lib/mysql
        tmpdir          = /tmp
        skip-external-locking
        bind-address    = localhost
        key_buffer      = 16M
        max_allowed_packet = 16M
        thread_stack    = 192K
        thread_cache_size = 4
        myisam-recover  = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error       = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer      = 16M

        !includedir /etc/mysql/conf.d/
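
    Not an authoritative tuning, but a sketch of how the missing variables could be added to the [mysqld] section, sized conservatively for a 512 MB box. In particular, the tuner's innodb_buffer_pool_size >= 326M suggestion looks unsafe at this RAM level alongside the existing 220M setting; the values below are starting points to measure against, not gospel:

        [mysqld]
        tmp_table_size        = 32M
        max_heap_table_size   = 32M    # keep equal to tmp_table_size
        table_cache           = 128    # raise gradually, watch file descriptor limits
        query_cache_limit     = 2M
        slow_query_log        = 1      # log_slow_queries on older 5.0 servers

    Note also that skip-locking, innodb_file_per_table, and big-tables each appear twice in the current [mysqld] section, which is harmless but worth cleaning up while editing.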

