Search Results

Search found 3315 results on 133 pages for 'magic packet'.


  • Excel equivalent of Java's String.contains(String otherString)

    - by corsiKa
    I have a cell that holds a fairly archaic string: the mana cost of a Magic: the Gathering spell. Examples are 3g, 2gg, 3ur, and bg. There are 5 possible letters (g w u b r). I have 5 columns and would like to count at the bottom how many of each letter a cost contains. So my spreadsheet might look like this:

             A                    B          C  D  E  F  G
        1 |  Name                 Cost       G  W  U  B  R
        2 |  Centaur Healer       1gw        1  1  0  0  0
        3 |  Sunspire Griffin     1ww        0  1  0  0  0   // just 1, even though 1ww
        4 |  Rakdos Shred-Freak   {br}{br}   0  0  0  1  1

    Basically, I want something that looks like =if(contains($A2,C$1),1,0) that I can drag across all 5 columns and down all 270-some cards. (Those are actual data, by the way; it's not mocked :-).) In Java I would do this:

        String[] colors = { "B", "G", "R", "W", "U" };
        for (String color : colors) {
            System.out.print(cost.toUpperCase().contains(color) ? 1 : 0);
            System.out.print("\t");
        }

    Is there something like this in Excel 2010? I tried using find() and search(), and they work great if the color exists. But if the color doesn't exist, they return #VALUE - so I get 1 1 #VALUE #VALUE #VALUE instead of 1 1 0 0 0 for, say, Centaur Healer (row 2). The formula used was if(find($A2,C$1) > 0, 1, 0).
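
    A standard Excel idiom for a boolean "contains" is to wrap SEARCH (case-insensitive, unlike FIND) in ISNUMBER, which turns the #VALUE! error into FALSE instead of propagating it. A minimal sketch, assuming the cost sits in column B and the letters head columns C through G:

        =IF(ISNUMBER(SEARCH(C$1,$B2)),1,0)

    Dragged across C:G and down the card rows, this yields 1 when the letter appears anywhere in the cost and 0 otherwise.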


  • Unable to resize EC2 EBS root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, basically:

        1. Stop the instance
        2. Detach the volume
        3. Create a snapshot of the volume
        4. Create a bigger volume from the snapshot
        5. Attach the new volume to the instance
        6. Start the instance back up
        7. Run resize2fs /dev/xxx

    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks big and does nothing, even with -f passed. So I continue with the tutorials, which again all say the same thing:

        1. Delete all partitions
        2. Recreate them as they were, except with bigger sizes
        3. Reboot the instance and run resize2fs

    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors. (It does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions. Yes, the boot flag was toggled on the partition, with no effect.) The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?

    Edit: On my new 20GB volume with the 6GB image, df -h says:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvde1            5.8G  877M  4.7G  16% /
        tmpfs                 836M     0  836M   0% /dev/shm

    And fdisk -l /dev/xvde says:

        Disk /dev/xvde: 21.5 GB, 21474836480 bytes
        255 heads, 63 sectors/track, 2610 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x7d833f39

           Device Boot      Start         End      Blocks   Id  System
        /dev/xvde1              1         766     6144000   83  Linux
        Partition 1 does not end on cylinder boundary.
        /dev/xvde2            766         784      146432   82  Linux swap / Solaris
        Partition 2 does not end on cylinder boundary.

    Also, sudo resize2fs /dev/xvde1 says:

        resize2fs 1.41.12 (17-May-2010)
        The filesystem is already 1536000 blocks long.  Nothing to do!
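
    The fdisk output explains the "nothing to do": resize2fs can only grow a filesystem to the end of its partition, and /dev/xvde1 still ends at cylinder 766, so the extra space is unpartitioned. A hedged sketch of the in-place fix using growpart from the cloud-utils package - noting that the swap partition /dev/xvde2 sits immediately after xvde1 and would first have to be deleted and recreated at the end of the disk, or partition 1 has no contiguous room to grow:

        sudo growpart /dev/xvde 1      # rewrite the table so partition 1 fills the free space
        sudo e2fsck -f /dev/xvde1      # check the (unmounted) filesystem before resizing
        sudo resize2fs /dev/xvde1      # now grows into the enlarged partition

    Because growpart keeps the partition's start sector unchanged, GRUB and the filesystem stay where the instance expects them, which is exactly what hand-recreating partitions in fdisk tends to get wrong.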


  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home, I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office, we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be based on the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:

        - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
        - Any connections going to, say, the US (we are in Europe, but our backups currently live in the US, which would be something important to speed up) would be "tunnelled" through the optimizer.
        - When downloading or uploading large files, open multiple connections between "the cloud" and the optimizer - this is where a lot of speed could be gained.
        - Finally, items not already compressed would be compressed on the cloud side, and items already held by the optimizer would not be sent again - kind of like rsync or proxy servers.

    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux, and duct tape), or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
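
    Much of the duct-tape version can be assembled from exactly those components. A minimal sketch of the tunnelling-plus-compression piece, assuming a Linux VPS in the US reachable as us-box.example.com (a hypothetical host):

        # compressed SOCKS tunnel; point browsers/backup clients at localhost:1080
        ssh -C -N -D 1080 user@us-box.example.com

    Squid running on the US box would add the "don't resend items already seen" caching layer. The one wishlist item this sketch does not cover is opening multiple parallel connections for a single transfer; that is largely what the commercial appliances charge for.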


  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: 6 major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on its own subnet, all connecting back to the datacenter through a 3 Mbps WAN. An ERP server runs in the datacenter and is accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients: it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an autoupdate feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not patches but a full version of the client, even for minor upgrades (bad design). Since the most recent patch, you can configure the clients to pull from a different server - unfortunately, the same server for all clients.

    Is there some kind of DNS magic I can use to make them pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites do? Example: client 1 is at site A, client 2 is at site B. Each runs the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client:

        1. Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
        2. DNS at site A returns 192.1.1.5
        3. Client 1 pulls the update from 192.1.1.5
        4. Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
        5. DNS at site B returns 192.1.2.5
        6. Client 2 pulls the update from 192.1.2.5

    Excuse the poor explanation - I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
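
    What this describes is split-horizon DNS, and on Windows DNS the usual trick is a standard (file-backed, non-replicating) primary zone named after the update host, created separately on each site's DC with that site's IP at the zone root. A hedged sketch with dnscmd, assuming an AD domain of corp.local and site A's DC named SITEA-DC (both hypothetical names):

        :: on site A's DC: a one-record zone that only this server answers for
        dnscmd SITEA-DC /zoneadd erpupdate.corp.local /primary /file erpupdate.corp.local.dns
        dnscmd SITEA-DC /recordadd erpupdate.corp.local @ A 192.1.1.5

    Repeat per site with the local IP. The zone must stay a standard primary rather than AD-integrated, or it would replicate and every site would see the same record.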


  • Apache, error logs, and logrotate: what is the best method?

    - by OlivierDofus
    Here's a vhost example from my sites:

        <VirtualHost *:80>
            DocumentRoot /datas/web/woog
            ServerName woog.com
            ServerAlias www.woog.com
            ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
            CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
            DirectoryIndex index.php index.htm
            <Location />
                Allow from All
            </Location>
            <Directory /*>
                Options FollowSymLinks
                AllowOverride Limit AuthConfig
            </Directory>
        </VirtualHost>

    I've got 12 sites running now. This gives something like:

        [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400
        [snip - as many error_log processes as virtual hosts]
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400
        [snip - as many access_log processes as virtual hosts]
        grep rotate

    I've been looking everywhere, but I've only found mod_log_rotate. The "little" problem is that the author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them."

    So my question is: what would be the best way to handle multiple logs? If I just write classic (unpiped) logs and use the system's logrotate program, wouldn't that be a good deal? How would/do you deal with that? Thank you!
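
    The classic-logs-plus-logrotate route works fine and gets rid of the two dozen rotatelogs processes. A minimal sketch of an /etc/logrotate.d/apache entry, assuming the vhosts are switched to plain ErrorLog/CustomLog files under /logs/<site>/:

        /logs/*/access_log /logs/*/error_log {
            daily
            rotate 14
            compress
            delaycompress
            missingok
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1
            endscript
        }

    The graceful restart is needed because Apache holds its log file descriptors open across a rename; sharedscripts makes it run once per rotation instead of once per file.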


  • Can't mount hard drive (Ubuntu 12.04)

    - by Sam
    I am trying to recover some pictures from my 320 GB hard disk, so I put in a live Ubuntu CD, which I am running right now. The devices list shows my USB drive, but not the 320 GB hard disk. I can see the disk in Disk Utility (it says it's on /dev/sda), but it's not mounted, and it reports a few bad sectors but says the disk is OK. Disk Usage Analyzer says my maximum capacity is 13.4 GB, so it's definitely not using the 320 GB disk. I tried the following:

        sudo mkdir /media/newhd            # worked
        sudo mount /dev/sda /media/newhd   # didn't work: says I must specify the filesystem type

    I then tried fsck.ext4 -f /dev/sda, which didn't work either. It said:

        Superblock invalid, trying backup blocks...
        Bad magic number in super-block while trying to open /dev/sda
        The superblock could not be read or does not describe a correct ext2 filesystem.
        If the device is valid and it contains an ext2 filesystem (and not swap or ufs
        or something else), then the superblock is corrupt, and you might try running
        e2fsck with an alternate superblock.

    Does anyone have any ideas? The whole problem started when my Windows Vista machine said "Can't find operating system". Any ideas on how I can get onto my hard drive at /dev/sda?
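
    Two things stand out: /dev/sda is the whole disk rather than a partition, and a Vista disk will be NTFS, not ext4, so fsck.ext4 was always going to report a bad superblock. A hedged sketch of the usual next steps from the live CD (the partition number is an assumption - read it off the fdisk output):

        sudo fdisk -l /dev/sda                         # list partitions; expect an NTFS (id 7) entry
        sudo mount -t ntfs-3g /dev/sda1 /media/newhd   # mount the data partition
        sudo ntfsfix /dev/sda1                         # if the mount fails, repair common NTFS faults first

    Given the bad sectors and the dead Windows install, copying the pictures off immediately after a successful mount would be prudent.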


  • Looking for advice on using dd to back up a dual-boot laptop

    - by AvatarOfChronos
    My question boils down to this: if I do dd if=/dev/sda of=usbdrive, can anybody confirm that this will get everything - MBR, partition information, all four partitions - and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again.

    Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive, and can you share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation?

    And lastly, if the USB drive is larger than the failing internal drive, will I still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640GB HDD into a 500GB HDD.

    Also, if anybody has a better solution for creating a complete clone that gets everything, I'm all for hearing about it.

    PostScript: I had been making periodic backups; however, whatever miasma killed the laptop also got the NAS :(

    Post PostScript: both devices were on a UPS system.
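
    Two cautions on the plan as stated: a dd of a mounted, in-use filesystem yields an inconsistent copy (boot a live USB instead), and plain dd aborts or silently zero-pads on read errors. For a half-dead drive, GNU ddrescue is the usual substitute because it skips bad areas, retries them later, and records progress in a map file so it can resume. A hedged sketch, assuming the failing disk is /dev/sda and the USB disk is /dev/sdb - verify both first, since this overwrites sdb:

        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.map   # pass 1: grab the easy data, skip bad areas
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.map  # pass 2: retry the bad areas up to 3 times

    The closest dd equivalent is dd if=/dev/sda of=/dev/sdb conv=noerror,sync, but it never retries and pads unreadable blocks with zeros without telling you. Since the copy includes the MBR and partition table, the partitions can be expanded into the extra space afterwards.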


  • EMERGENCY! Update statement on critical MySQL production database has been running for 18 hours - need help

    - by Tim
    We have a table with 500 million rows. Unfortunately, one of the columns was int(11) - a signed int - holding an incrementing value that just rolled over the 2.1 billion magic number. This immediately caused downtime for about 10,000 users. We discussed many solutions and decided we could safely roll the value back by, say, a billion. But we had to roll it back for every row. Here is what we did:

        update Table1
        set MessageId = case
            when MessageId < 1073741824 then 0
            else MessageId - 1073741824
        end;

    I tested this on a table with 10 million rows and it took 11 minutes, so I assumed the larger table would take 550 minutes, or about 9 hours. This was going to be our biggest downtime in 3 years. (We're a startup.) It's now going on 18 hours. What should we do? Please don't say what we should have done - I know we should have updated a few million rows at a time. Is there a way we can see progress? Could MySQL have hung? Using MySQL 5.0.22. Thanks!
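
    For visibility: SHOW PROCESSLIST will confirm the statement is still in an Updating state, and if the table is InnoDB, SHOW ENGINE INNODB STATUS reports the running transaction's "undo log entries" count, which climbs roughly one per modified row and serves as a crude progress bar. Be aware that killing the statement triggers a rollback on InnoDB that can take as long as the work already done (on MyISAM, a kill instead leaves rows partially updated). If it comes to restarting, a batched rewrite keeps each transaction short; a hedged sketch, assuming an indexed integer primary key named id (hypothetical - substitute the real key), advancing the bounds each pass and running each range exactly once:

        UPDATE Table1
           SET MessageId = CASE WHEN MessageId < 1073741824 THEN 0
                                ELSE MessageId - 1073741824 END
         WHERE id >= 0 AND id < 1000000;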


  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What do you got, serverfault?

    EDIT 1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is that the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace; what I want is a way of re-creating that reference.

    EDIT 2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazily inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.
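
    For reference, the debugfs route being dismissed above looks roughly like this - a sketch only, with the same caveat: writing metadata on a mounted filesystem this way risks corruption, which is the zero-link-count inconsistency described in EDIT 2 (the inode number is hypothetical, taken from lsof):

        debugfs -w -R 'link <533737> /recovered/file' /dev/sda1    # graft the orphaned inode back into the namespace
        debugfs -w -R 'sif <533737> links_count 1' /dev/sda1       # then fix the inode's link count by hand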


  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs, and it's mostly working, except for some weird redirects: if you have two sites where one name is a substring of the other, you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching. With a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, it will always match the correct site and the links will always be rewritten correctly - but that breaks down as soon as a user logs in, because the query string is appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the shortest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order; the system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix it, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it with an .htaccess file in each site to handle the clean URL rewriting, but that seems suboptimal too, since it would generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get this to work? Would another solution be better? Am I going about this the wrong way to begin with?
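
    The substring collision can be fixed in the pattern itself: require the prefix to be followed by either a slash or the end of the string, so /drupal matches /drupal and /drupal/node/1 but never /drupaltest. A sketch of one block rewritten this way (rule order then no longer matters):

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

    Note that %{REQUEST_URI} in Apache never includes the query string, so the logged-in case doesn't affect this match the way the trailing-$ approach suggested.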


  • Windows mounted network drives slow after upgrading switch

    - by Kver
    On our small business network, our old 10/100 consumer-grade switch gave up the ghost, and we replaced it with a proper business-grade gigabit switch. After wiring it in, our Linux and Mac users immediately got back to working off the network drives, but 2 of our 3 Windows 7 PCs have suddenly experienced a tremendous slowdown with mapped network drives: Windows gets stuck "discovering" a folder, causing applications to freeze when trying to open files. It will instantly display and browse files, but the moment you try to open one, the bug hits. As a workaround we have our users copying files to the desktop, but it can take a few minutes while Windows is stuck "calculating" how long the copy will take. These aren't big files - mostly Excel sheets under 500KB - and the same operations are instant on Linux and Mac. (The third Windows machine is having no issues.) I've tried remapping the drives, mapping to different drive letters, rebooting, etc. I'm at a loss, because switches are mostly transparent, and it's only after the switch was replaced that the Windows PCs started acting up. What black-magic voodoo am I missing to make Windows work? Thank you.
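
    Two Windows-side TCP features are worth testing first, since both are known to interact badly with some gigabit gear while leaving other operating systems unaffected: window auto-tuning and receive-side scaling. A hedged sketch from an elevated command prompt (a diagnostic, not a guaranteed fix):

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global rss=disabled

    If the drives come back to life, re-enable one at a time (autotuninglevel=normal, rss=enabled) to isolate the culprit; after that, NIC speed/duplex and flow-control settings on the two affected PCs are the next suspects.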


  • Help recovering a partition

    - by goshopedero
    Okay, so I had an NTFS partition and I wanted to resize it, but while resizing it with Partition Magic some error occurred, and now I cannot get into that partition anymore. I also have Slackware 13 and tried mounting the partition from there, but that didn't succeed. A friend of mine came to my house with a live CD OS called BackTrack 3, and when he booted from the CD he was able to mount the damaged partition and read/write anywhere on it. I saw my files - they are all there, so nothing is erased; the partition is just somehow damaged. The strange thing is that from BackTrack we weren't able to mount some of the working partitions of my computer, yet we could mount the damaged one. So I am asking for some help here. My files are all there, and I saw them from BackTrack. What can I do to fix the partition so it is usable from Windows/Slackware again? Please tell me anything you've got, because I have some important data on it. Thank you.
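
    Since the data is readable from the live CD, the safe order of operations is to copy the important files off first, then let the NTFS tooling repair the metadata. A hedged sketch, assuming the damaged partition is /dev/sda1 (substitute the real device name):

        sudo ntfsfix /dev/sda1     # ntfs-3g tool: fixes common NTFS errors and schedules a chkdsk
        sudo testdisk /dev/sda     # interactive: Analyse -> Quick Search can rebuild a damaged partition table or boot sector

    After ntfsfix, booting Windows and letting chkdsk run completes the repair; TestDisk is the usual tool if Partition Magic left the partition table itself half-rewritten.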


  • Adding a transaction ID to Ruby on Rails logs

    - by Blue Warrior NFB
    We have a RoR app (Rails 3.2.15 right now). As it has been getting busier, the log files it produces are becoming less and less useful for troubleshooting. When requests come in like this, there's no problem:

        Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000
        Processing by KittenController#rendered_png as HTML
          Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"}
        Rendered text template (0.0ms)
        Sent data  (1.8ms)
        Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms)

    A short request, quickly assembled, and all the relevant log lines are in one block. However, not all of our code renders in 1037ms. A few calls can exceed several seconds, and during that time several of the quicker ones can come in. When that happens, it's very, very hard to identify which log lines belong to which GET:

        Sent data  (4.1ms)
        Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms)
        Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

    Ooookaaaay... which goes with what? Is it possible to add something like a transaction ID to these log lines? The log spam would still be interleaved, but at least grep magic would give me the unified entries that I need.
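
    Rails 3.2 ships this feature as tagged logging: config.log_tags prefixes every log line written during a request with, for example, the per-request UUID that Rack assigns. A minimal sketch (MyApp is a placeholder for the application's module name):

        # config/environments/production.rb
        MyApp::Application.configure do
          # prefix each log line with the request UUID (and, here, the client IP)
          config.log_tags = [:uuid, :remote_ip]
        end

    Lines then come out as [2f4b1a9e-...] [172.16.202.30] Completed 200 OK ..., so grepping a single UUID reassembles one request's block even when slow and fast requests interleave.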


  • Backing up an 80G hard drive 1G per day

    - by barrycarter
    I want to securely back up my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to back up just 1G per day. Details:

        - First hurdle: on the first day, I want to back up the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
        - After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creations/changes as they come along.
        - The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
        - The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
        - Once the backup has backed up everything (even changed files), it can re-back up the first 1G on my hard drive. Even though this backup is redundant, that's OK, because I always want to be backing up something (e.g., if I'm backing up to optical media, the older media might start going corrupt).

    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but I think solving the above will solve the general case.
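
    One way to read "1G per day" is as a bandwidth budget rather than a file-selection problem, which rsync can enforce directly: 1 GB spread over 24 hours is roughly 12 KB/s (12 x 86400 s is about 1.0 GB). A hedged sketch, assuming an SSH-reachable backup host (backuphost is a placeholder); it covers the incremental and consistency items but not encrypted names, for which a tool like duplicity is the usual pointer:

        # incremental, resumable, throttled to ~1 GB/day; run daily from cron
        rsync -a --partial --bwlimit=12 /home/ user@backuphost:/backups/$(hostname)/

    The throttle, rather than a day counter, is what caps the daily volume, and every run naturally picks up whatever changed since the last one.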


  • Is it possible to add a WiFi HotSpot to an already established LAN, keep the two separate, and not modify the primary router?

    - by user12844
    I have a setup where my Cisco ASA sits in one facility, providing Internet access for two buildings. The two buildings are geographically separated by a wireless bridge spanning about 10 miles. All computers and equipment inside the LAN are on the same subnet (it's pretty small), and we have WiFi APs in both locations providing wired and wireless access to the LAN. Given all the BYOD devices (iPods, smartphones, etc.) coming into the office, as well as visiting reps, we would like to also provide a non-secure hotspot that is device-independent (the devices cannot see or communicate with each other) and LAN-independent (the devices cannot see or use anything on the LAN), which anyone could use to give their devices access to the Internet ONLY, without needing a password. I get that this could be possible at the facility with my Cisco if I messed with it and created VLANs etc., but then I would need to get it across my bridge as well, and I don't think that would be possible without serious reconfiguration of everything. I would really like some kind of magic drop-in solution that can piggyback on my LAN without needing many (if any) changes to the current setup.


  • Follow-through - How to set up an equivalent of USVIDEO.ORG's DNS proxy on Linux

    - by DNSDC
    I'm quite keen to set up a similar service (but FREE), and it seems you know how to do this. Quoting:

        "you need to run your own private dns with artificial records for example pandora.com
        you also need a real dns to fall back on. now that all requests for these sites are
        going to your US located box you can open up port 80 on squid and listen for the
        traffic. your cache_peer settings should allow you to map each domain to their real
        ip. the traffic now flows initially from your US located box to the service but then
        the server responds directly to the host. no magic here. I won't share the fine
        details as it probably best serves all to not over exploit this."

    Did you mean we need to:

        1. Set up a forward-only DNS server on a US-based server/IP?
        2. Set up cache_peer and cache_peer_domain in Squid - I've got this.
        3. Add any iptables rules (prerouting, postrouting) needed to accomplish this?

    Appreciate your expert advice. Cheers, Don
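
    For step 2, the Squid side of such a setup typically amounts to an accelerator port plus one origin-server peer per hijacked domain. A hedged sketch (203.0.113.10 is a documentation placeholder, not the service's real address):

        # squid.conf - accept the DNS-redirected traffic on port 80 ...
        http_port 80 accel vhost
        # ... and relay requests for the target domain to its real origin IP
        cache_peer 203.0.113.10 parent 80 0 no-query originserver name=pandora
        cache_peer_domain pandora .pandora.com

    With the US box answering the spoofed DNS name itself, no iptables prerouting is strictly required; the DNS record does the redirecting.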


  • All USB ports on my laptop are dead, any options via Ethernet, SD/MMC or HDMI?

    - by carbontracking
    My son's laptop has taken a lot of pain at his school over the last few months, and he and his buddies have succeeded in breaking both USB ports. I opened the box, unsoldered the USB ports, and replaced them with new components, but no joy - the ports seem dead. If I assume that the insertion of LEGO pieces, etc. into the USB ports has rendered them unsalvageable, do I have any other options for restoring USB access to the laptop? The laptop has an Ethernet port, an HDMI port, and an SD/MMC port. I've trawled the web for a magic adapter - Ethernet to USB, HDMI to USB, or SD/MMC to USB - but to no avail; there are lots of options for going the other way, though. Does anyone have any ideas on the feasibility of an Ethernet-to-USB cable? Ethernet doesn't seem to carry +5V or GND, but I could run a cable from the motherboard to provide those. It's amazing how many functions of a laptop just disappear when you have no USB ports.


  • Checking for valid document files

    - by sweb
    I need a simple way to check whether my files are valid documents (pdf, doc, docx, ppt, pptx, xls, xlsx, odt, ods, odp, etc.). I can't use file, because its magic detection does not work well at all here. For example, for PDF files this is my output:

        sweb@sweb-laptop:/media/files/ebooks/PDF and CHM$ file --mime *.pdf
        PHP 5 for Dummies.pdf: application/pdf; charset=binary
        PHP 6 and MySQL 5 for Dynamic Web Sites.pdf: application/octet-stream; charset=binary
        PHP6 and MySQL Bible.pdf: application/pdf; charset=binary
        PHP6.pdf: application/octet-stream; charset=binary
        PHP and MySQL for Dummies SE.pdf: application/pdf; charset=binary

    I use AbiWord, for example - which is a good tool - but it converts any input; it doesn't check for valid documents:

        abiword --to=txt --to-name=output.txt audio.mp3

    Is there any command available to check for valid documents, then?
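
    Two cheap structural checks go further than magic-byte sniffing: the OOXML and OpenDocument formats (docx/xlsx/pptx/odt/ods/odp) are all zip containers, so a zip integrity test catches truncated or corrupt files, and poppler's pdfinfo actually parses a PDF's structure rather than just reading the header. A hedged sketch:

        # docx/xlsx/pptx/odt/ods/odp are zip archives; -t tests every member's CRC
        unzip -t report.docx > /dev/null 2>&1 && echo "container OK" || echo "corrupt"

        # pdfinfo (poppler-utils) exits non-zero when the PDF cannot be parsed
        pdfinfo book.pdf > /dev/null 2>&1 && echo "parses as PDF" || echo "broken PDF"

    Legacy .doc/.ppt/.xls (OLE2) files have no such easy test; opening them headlessly with LibreOffice is the usual fallback.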


  • Toshiba Satellite C665 Rebooting from Standby

    - by Coodu
    I am currently working on a C665 with a strange issue. When the panel is closed, the notebook puts itself to sleep in the usual way, and the power LED changes to a pulse to indicate that the device is asleep. However, when the panel is opened to resume using the notebook, the system restarts itself instead, beginning from the Toshiba logo and booting back into Windows 7. I should also note that each time this occurs, the "Windows Startup Recovery" option appears, indicating that the system was not shut down correctly. Some things I have tried:

        - Updated to the latest Toshiba BIOS
        - Returned BIOS settings to their defaults
        - Swapped the memory for a known-good module; tested the known-good module in both memory slots
        - Confirmed that the power settings are set to sleep/wake when the power button is pressed
        - Ran a quick HDD fitness test using a Parted Magic USB stick
        - Checked for BSOD logs using BlueScreenView; none found
        - Ran sfc; no violations found

    Any ideas? I have a good feeling the system is restarting itself, but the Event Viewer shows a "Kernel-Power" error that simply says "The system was not shut down correctly." Perhaps a bad driver? I'm not sure. Any advice would be great.
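
    Two built-in diagnostics can narrow this down before hardware is blamed. From an elevated command prompt (a sketch of standard powercfg usage):

        powercfg /a                          :: list the sleep states the firmware actually supports
        powercfg /lastwake                   :: report what triggered the last wake
        powercfg /devicequery wake_armed     :: list devices currently allowed to wake the system

    Also, if the restart is really a hidden crash, clearing "Automatically restart" under System Properties -> Advanced -> Startup and Recovery will show the blue screen (and its stop code) instead of the silent reboot on wake.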


  • Multi-petabyte scale-out storage solution [closed]

    - by Alex Yuriev
    Let's say I need a single-namespace object store that scales to multiple petabytes, with a filesystem-like wrapper. What is currently out there that supports the following?

        - A single namespace that can take 1B files
        - Support for multiple entry points using NFS
        - At least node-level replication (preferably node- and file-level replication)
        - Online software upgrades
        - No "magic sauce" on the storage layer

    The following have been evaluated:

        - Gluster & Lustre: just ick - a fundamental lack of understanding of why online upgrades are mandatory.
        - OneFS: we have it. It is smelling more and more like it hides a dead body under the hood.

    Other than MapR and ZFS, am I missing anything?

    P.S. Oh yes, I keep forgetting that the forums are for people to discuss whether a 2TB drive actually stores 2TB of info. My bad. Seriously, though - how the heck can "meets the following requirements" be considered a "debate"?

    P.P.S. I did not throw an idiotic insult - I pointed out that this is actually an interesting question compared to a conversation about the storage capacity of a 2TB hard drive. It is not a question of what works better; it asks whether I missed any currently existing products that fit the criteria, where the criteria are clearly outlined. I got one answer below that included something I have not looked at in a long time, and it looks quite a bit more grown-up than when I briefly looked at it before.


  • Virtual host Alias not routing properly

    - by Jacob
    I apologize if this question has been asked many times in the past; I am not 100% sure of the exact cause of my issue and am out of Google magic right now. Basically, I have a virtual host file with an Alias record that points to a directory other than the document root. It looks like this:

        <VirtualHost *:80>
            ServerName iBusinessCentral.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/marketingsites/
            ServerAlias iBusinessCentral.com *.iBusinessCentral.com
            Alias /unsub "/var/www/unsub/site_index/"
        </VirtualHost>

    When I navigate to ibusinesscentral.com/unsub/?randomquerystring, I am directed to the correct folder. If I remove the query string and navigate to ibusinesscentral.com/unsub/, I am taken to the directory in the document root. The unsub directory is a Zend application, and I need to be able to navigate to URL paths like ibusinesscentral.com/unsub/unenroll?querystring. I have tried using AliasMatch instead of Alias, and I have tried adding a slash after the unsub portion of the Alias record, but have not had any luck so far. Thanks in advance for any assistance.
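
    For a Zend Framework app living under an Alias, the usual missing piece is a per-directory rewrite that funnels every request that isn't a real file to the front controller, scoped with RewriteBase to the aliased path. A hedged sketch using the paths from the vhost above (untested against this exact layout; requires mod_rewrite):

        Alias /unsub "/var/www/unsub/site_index"
        <Directory "/var/www/unsub/site_index">
            RewriteEngine On
            RewriteBase /unsub
            # anything that isn't an actual file goes to the ZF front controller
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^.*$ index.php [L,QSA]
        </Directory>

    With this in place, /unsub/unenroll?querystring reaches index.php with the path intact for the router, instead of falling through to the DocumentRoot.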


  • Fixing bent pins on a CPU

    - by Pekka
    While replacing a mainboard in a desktop machine (see related question), I did something stupid: I inserted the CPU into the new mainboard without checking its orientation. When it didn't immediately lock in, I pressed slightly before realizing what was wrong. The result was a number of bent pins. I have tried every tutorial that comes up when Googling "CPU bent pins" - using credit cards, sewing needles, and a hunting knife to get the pins back into position - but to no avail: for every pin I straighten, two others get bent. I have no problem straightening individual pins, but my many attempts have left many pins slightly askew - enough that the CPU no longer fits into the socket (an AMD X3). Maybe I just lack the fine motor control. What I would need is some sort of grid that aligns all the pins at once. It's a €50 processor, so the loss is not catastrophic, but before I buy a new one I thought I'd check here whether anybody knows a magic trick, or a cheap, generally available tool, to fix this.


  • Merging two separate DNS zones

    - by cube
    This is a hypothetical question. Let's suppose I have two networks, each with its own DNS server. Network A has names a1.local, a2.local, ... and network B has b1.local, b2.local, .... The zone file for each network looks something like this - for A:

        $ORIGIN local
        @    IN SOA   ... blah blah blah
        a1      A     1.2.3.4
        a2      A     2.3.4.5
        ...

    and for B:

        $ORIGIN local
        @    IN SOA   ... blah blah blah
        b1      A     3.4.5.6
        b2      A     4.5.6.7
        ...

    Now I also have a regular internet domain, example.com, and I want to access the machines as a1.A.example.com, b1.B.example.com, .... How will I have to change the configuration of the name servers in networks A and B? (In fact, I am writing a super-magic DNS server that currently serves A and B separately, but there is a chance I will have to add the ability to merge the networks, so I'm interested in knowing what problems lie ahead of me and how to prepare for them.)
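
    The standard shape of the answer is: rename each network's zone so its origin is the delegated subdomain, then delegate both subdomains from the example.com zone. A hedged sketch for network A (the name-server address 1.2.3.1 is assumed; B is symmetric):

        ; network A's zone, origin renamed from "local" to A.example.com
        $ORIGIN A.example.com.
        @       IN SOA  ns1.A.example.com. hostmaster.example.com. ( ... )
                IN NS   ns1.A.example.com.
        ns1     IN A    1.2.3.1
        a1      IN A    1.2.3.4
        a2      IN A    2.3.4.5

        ; delegation records added to the example.com zone
        A       IN NS   ns1.A.example.com.
        ns1.A   IN A    1.2.3.1          ; glue

    Hosts inside A that want to keep resolving the bare name a1 would add A.example.com to their DNS search suffix; the main problems ahead are exactly this suffix change and anything that still hard-codes *.local.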


  • Windows scroll without focus

    - by DanielCardin
    I have a Windows 8 laptop at home and a Windows 7 laptop at work; both have Synaptics touchpads. On the work laptop I can scroll any window regardless of which one currently has focus, and that is the behavior I want on both computers. It does not currently happen on the Windows 8 machine. I know about (and have tried!) WizMouse, AlwaysMouseWheel, KatMouse, etc., but none of them work 100% like the work computer: KatMouse sometimes stops working, AlwaysMouseWheel has scrolled on its own, WizMouse sometimes makes the mouse lag, and others simply have not worked. Before I got the work computer I had resigned myself to this, but now I see it working out of the box, with no external programs, on an older operating system, and I wonder why I can't get the same behavior on my own computer. All my searches just turn up people suggesting the external programs I've already tried, so answers suggesting those aren't really what I'm looking for (unless it's some magic I can do with the Synaptics driver - which, by the way, is more up to date on the Windows 8 computer where this doesn't work).


  • SQL Server - "Physical connection is not usable" (Error Code 19)

    - by Harry
    Having trouble working out this SqlException:

        A transport-level error has occurred when sending the request to the server.
        (provider: Session Provider, error: 19 - Physical connection is not usable)

    Any ideas what this is about? I couldn't get any hits on Google or MSDN for this one. The connection is MARS and asynchronous, in my own SQL pool - but don't get all excited about that; it's old code that has been stable for a long time. Exception details pasted below:

        System.Data.SqlClient.SqlException occurred
        Message=A transport-level error has occurred when sending the request to the server.
                (provider: Session Provider, error: 19 - Physical connection is not usable)
        Source=.Net SqlClient Data Provider
        ErrorCode=-2146232060
        Class=20
        LineNumber=0
        Number=-1
        Server=SERVER
        State=0
        StackTrace:
          at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
          at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
          at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
          at System.Data.SqlClient.TdsParserStateObject.SNIWriteAsync(SNIHandle handle, SNIPacket packet, DbAsyncResult asyncResult)
          at System.Data.SqlClient.TdsParserStateObject.WriteSni()
          at System.Data.SqlClient.TdsParserStateObject.WritePacket(Byte flushMode)
          at System.Data.SqlClient.TdsParserStateObject.ExecuteFlush()
          at System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc)
          at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
          at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
          at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
          at System.Data.SqlClient.SqlCommand.BeginExecuteNonQuery(AsyncCallback callback, Object stateObject)
          at CrawlAbout.Library.SQL.SQLAdapter.<ExecuteNonQueryTask>d__12.MoveNext() in D:\Work\Projects\CrawlAbout.com 2.0\CrawlAbout.Library.SQL\SQLAdapter.cs:line 149
        InnerException:

