Search Results

Search found 11427 results on 458 pages for 'live cd'.


  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped. I have 16 GB of RAM installed in an MSI 970A-G46, which is spec'd for up to 32 GB of RAM. The BIOS recognizes that I have 16 GB installed, and Resource Monitor also shows the whole 16 GB, only it shows 8 GB as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) which I'm running on my boot drive is the same one that could actually access the full 16 GB on my previous motherboard (MSI 870A-G54). I've updated my BIOS using the MSI Live Update tool and restarted the machine with no effect, and I cannot seem to locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've unchecked the Maximum Memory box in the msconfig Boot tab's advanced options, also to no effect. These are my system's basic specifications:
      OS: Windows 7 Home Premium (64-bit)
      Motherboard: MSI 970A-G46
      CPU: AMD FX-8150
      Graphics Card: XFX Radeon HD 6870
      Boot Drive: OCZ Agility 3
      Storage Drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
      PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3

    Read the article

  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running a VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on that computer. Is there a program that can connect to the VNC server port and, instead of displaying the screen, save it to a video file (e.g. AVI)? Background: one of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watches what he's doing during that procedure to find out whether he's doing something wrong or there's a bug in the software. Currently this is done live via VNC. That has a few disadvantages:
      - The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night.
      - If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again.
      - One service technician can only watch a single computer at a time.
    I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU and HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but I can only record one remote computer that way, and I often have to watch up to 8 screens in parallel. (I don't think that screen-grabbing VNC would improve image quality, either.)
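
    One approach that avoids touching the monitored computer at all is to run a VNC client against a virtual X display on a separate Linux box and record that display with ffmpeg. A rough sketch, assuming Xvfb, a VNC viewer and ffmpeg are available; the host name, display number and geometry are placeholders:

        # headless X display roughly matching the remote desktop geometry
        Xvfb :99 -screen 0 1280x1024x24 &
        # connect a viewer to the embedded system's VNC server inside that display
        DISPLAY=:99 vncviewer embedded-host.example.com:0 &
        # record the virtual display to a video file
        ffmpeg -f x11grab -video_size 1280x1024 -framerate 10 -i :99 \
               -c:v libx264 -preset veryfast capture-$(date +%F-%H%M).mp4

    One Xvfb display plus viewer/ffmpeg pair per remote machine would also cover the "up to 8 screens in parallel" case without installing anything on the embedded systems.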

    Read the article

  • Windows 8 Windows Store (Metro/ModernUI) applications not working (just show 'busy' animation or white screen)

    - by davidm_uk
    I have a Dell XPS 15 which shipped with Windows 7 x64, which I recently upgraded to Windows 8. The process went surprisingly smoothly (given that this was an upgrade, not a complete re-install), and the system generally seems very stable. However, today I noticed that several of the Windows Store apps don't work: they all behave in the same way, launching but then showing a spinning 'wait' animation indefinitely. This is affecting the standard Microsoft Mail, Store, Weather, News, Travel, Finance, Sport, Games and Music apps. The Bing app just shows a Bing logo on a coloured background (but no wait animation). The Calendar, Photos and SkyDrive apps open but then show a white screen. The Maps and Camera apps work without problems. The live tiles on the Start screen are updating correctly, for example the Mail app's tile shows a summary of new mail despite the Mail app's problems. All of these applications were working correctly a few days ago. I'm sure I've used several without problems since the last Windows update occurred on 7th November. Any suggestions on what might have happened and/or how to fix it would be very welcome. I don't need these Windows Store applications, but the fact that they're not working is irritating me.
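
    For reference, the usual first-line checks on Windows 8 when Store apps hang at the splash screen are clearing the Store cache and re-registering the built-in apps. A sketch of those general recovery steps (not a confirmed fix for this particular upgrade), run from an elevated PowerShell prompt:

        # clear the Windows Store cache
        wsreset.exe

        # re-register the built-in Store apps for the current user
        Get-AppxPackage | ForEach-Object {
            Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"
        }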

    Read the article

  • WAN Optimization for Small Office/Home Office

    - by TiernanO
    I have been reading up on WAN optimization for a while now, mostly out of interest in speeding up my own internet connections, but also to speed up the office internet connection. At home I have 2 cable modems plugged into a RouterBoard RB750, which load-balances the connections. In the office we have a single connection into a NetGear router. Most of the WAN optimization products I have seen seem to be prohibitively expensive, and also seem to be built around the idea of having multiple branches around the world. What I am looking for, ideally, is as follows:
      - Software install: I am "guessing" I need to install it in 2 places: one in the office or house, and one in "the cloud".
      - Any connections going to, say, the US (we are in Europe, but our backups live in the US currently, which would be something important to speed up) would be "tunnelled" through the optimizer.
      - If downloading or uploading large files, open multiple connections between "the cloud" and the optimizer... this is where a lot of speed could be gained.
      - Finally, items not already compressed would be compressed on the cloud side, and items that are already on the optimizer would not be sent again, kind of like rsync or proxy servers.
    So, is there something that can be done? Is it available using off-the-shelf components (some magic script with SSH, Squid, Linux and duct tape) or is it something that needs to be purchased? Or even an open source project that does 90% of what I am asking?
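
    For the "SSH, Squid, Linux and duct tape" end of the spectrum, the compression and tunnelling part on its own is a one-liner: a compressed SSH tunnel to a small VPS near the US backup target, exposed locally as a SOCKS proxy for the traffic worth optimising. A minimal sketch; the host name us-relay.example.com is hypothetical:

        # -C  compress everything going through the tunnel
        # -N  no remote command, tunnel only
        # -D  expose a local SOCKS5 proxy on port 1080
        ssh -C -N -D 1080 user@us-relay.example.com

    Pointing the backup client (or a local Squid parent) at localhost:1080 sends that traffic through the compressed tunnel; it does not give multi-connection acceleration or deduplication, which is where the commercial WAN optimisers earn their price tags.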

    Read the article

  • Unable to resize ec2 ebs root volume

    - by nathanjosiah
    I have followed many of the tutorials, which pretty much all say the same thing, which is basically:
      1. Stop the instance
      2. Detach the volume
      3. Create a snapshot of the volume
      4. Create a bigger volume from the snapshot
      5. Attach the new volume to the instance
      6. Start the instance back up
      7. Run resize2fs /dev/xxx
    However, step 7 is where the problems start. Running resize2fs always tells me that the filesystem is already xxxxx blocks long and does nothing, even with -f passed. So I continued with the tutorials, which again all say basically the same thing:
      1. Delete all partitions
      2. Recreate them as they were, except with bigger sizes
      3. Reboot the instance and run resize2fs
    (I have tried these steps both from the live instance and by attaching the volume to another instance and running the commands there.) The main problem is that the instance won't start back up again, and the system error log provided in the AWS console doesn't show any errors (it does, however, stop at the GRUB bootloader, which to me indicates that it doesn't like the partitions; yes, the boot flag was toggled on the partition, with no effect). The other thing that happens, regardless of what changes I make to the partitions, is that the instance the volume is attached to says that the partition has an invalid magic number and the super-block is corrupt. However, if I make no changes and reattach the volume, the instance runs without a problem. Can anybody shed some light on what I could be doing wrong?
    Edit: On my new volume of 20 GB built from the 6 GB image, df -h says:
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/xvde1            5.8G  877M  4.7G  16% /
      tmpfs                 836M     0  836M   0% /dev/shm
    And fdisk -l /dev/xvde says:
      Disk /dev/xvde: 21.5 GB, 21474836480 bytes
      255 heads, 63 sectors/track, 2610 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x7d833f39

         Device Boot      Start         End      Blocks   Id  System
      /dev/xvde1              1         766     6144000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/xvde2            766         784      146432   82  Linux swap / Solaris
      Partition 2 does not end on cylinder boundary.
    Also, sudo resize2fs /dev/xvde1 says:
      resize2fs 1.41.12 (17-May-2010)
      The filesystem is already 1536000 blocks long.  Nothing to do!
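
    For reference, the sequence that normally works when growing a partitioned EBS root volume looks like the sketch below. Device names are examples; the critical detail is that the recreated partition must start on exactly the same cylinder/sector as the old one, otherwise the superblock is no longer where GRUB and the kernel expect it, which fits the "invalid magic number" symptom described above:

        # with the volume attached to a helper instance as /dev/xvdf
        fdisk /dev/xvdf        # delete partition 1, recreate it with the SAME
                               # start (1) and a larger end, keep type 83,
                               # toggle the boot flag back on, then write with 'w'
        e2fsck -f /dev/xvdf1   # the filesystem must be clean before resizing
        resize2fs /dev/xvdf1   # grow ext3/ext4 to fill the enlarged partition

    resize2fs only says "Nothing to do!" while the partition itself is still the old size, so the repartitioning step has to succeed before it can do anything.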

    Read the article

  • Website hosted on my virtualbox web server not displaying images or applying css when viewed through phone

    - by WebweaverD
    I would really appreciate it if someone could help me. Please let me know if you need more info in the comments.
    My set up: I have a Windows 7 PC. On it I run a VirtualBox VM with an Ubuntu 12 guest OS and a LAMP setup. I share files between the two machines using Samba from Linux to Windows and using Windows file sharing (workgroup) the other way round. The VM is set up with a bridged network adapter and can happily serve web pages to my host machine. I use DHCP reservations on my home wireless router/modem to reserve an IP for the VM and give it a sitename.dev entry in my Windows hosts file so I can access it at sitename.dev through the browser.
    The problem: so far so good, but I have a dev project which needs a lot of mobile template development. Now obviously I can use a browser plugin to simulate a mobile device, but I would like to be able to see the real thing easily on my phone during development. So ideally I would like a similar setup on my iPhone to my Windows setup. Now I'm not great on networking and don't have much experience with web server setup, so when I typed the IP of my VirtualBox VM into my iPhone I wasn't expecting to see anything. I was pleasantly surprised when my site loaded up. The JavaScript even seems to be running, but the images and CSS are not being applied.
    My question: 1) What is happening here? Is it something to do with the bridged setup on the VM network? 2) How do I make the sites load properly through my phone?
    Notes: I've also tried another phone. The same sites viewed on live servers work fine.

    Read the article

  • Windows 7 Media Center: sometimes no sound on a TV channel

    - by torbengb
    Problem: Sometimes, recorded TV shows have no sound. This is of course very annoying because the recorded show is worthless. Also, when switching channels while watching live TV, sometimes a channel has no sound. This is always solved by switching channel again and then back (most often, once will do the trick). Media Center doesn't do this trick by itself when recording of course, so that's the bigger issue - but the cause is certainly the same.
    Details: The computer is running a newly installed Windows 7 and Windows Media Center. It has 2 different tuner cards installed. Both are installed with signed and up-to-date Win7 drivers and appear OK in Device Manager. Both tuners get the same antenna signal, from a split cable from the wall. The cable delivers analog cable TV (40+ channels) and digital cable TV (4 channels) at the same time. Both tuners have been configured to receive both analog and digital channels. This only happens with analog channels. How can I fix the no-sound problem?
    Update: I've now spent some time with the computer to try and pinpoint the problem, but I've had little success so far. I flipped through the channels until one didn't have sound, then I disconnected the antenna cable from one of the tuners. It was the right one because then the video also went away. I flipped lots more channels to see if the other card also would come up mute once but I never had a channel without sound. It might still be possible, I don't know. Then I disconnected the "good" tuner and connected the "bad" tuner and again flipped lots of channels but again I never had a channel without sound. It seems to me that the problem is erratic. It happens on any channel, and I haven't ruled out yet that it only happens on one tuner.

    Read the article

  • Changing the modified date of a message in Exchange 2010

    - by jgoldschrafe
    My organization is in the middle of a process to move their Exchange 2010 messaging system from one archiving platform to another. As part of this process, we need to restore all archived messages back into users' email accounts, and then let the new system import them again. The problem is that when the messages are dumped back, the modified date on the message is set to the date it was restored, which trips up message archiving and basically means nobody will have anything archived for six months. So you don't have to ask: no, our archiving platform only uses the modified timestamp on the message and cannot be altered to temporarily use the sent or received timestamp instead to determine whether to archive it. We and others have asked for the feature, but it doesn't exist right now. What we're looking for is a method to go through the user's mailbox and alter the modified timestamp of each message (or preferably only those received more than X months ago) to the received date of the message. We also don't want to spend more on this tool per user than we're spending on the archiving solution in the first place. We've run across a few tools that cost something ridiculous like $25 per user. I don't think we're even paying close to that for Exchange and the archiving solution put together. Whatever we settle on should function on a live mailbox with no downtime. Playing around with PST imports and hacky little things like that isn't going to work. We're fine with programming/scripting, if anyone knows the best way through PowerShell, COM automation or some other way to best handle this.

    Read the article

  • Hyper-V vss-writer not making current copies [migrated]

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location and the third unexposes the shadow copies again. The first script, the one that runs correctly but fails to do what it's supposed to:
      # DiskShadow script file to backup VM from a Hyper-V host
      # First, delete any shadow copies of the drives. System drives need to be included.
      Delete Shadows volume C:
      Delete Shadows volume D:
      Delete Shadows volume E:
      # Ensure that shadow copies will persist after DiskShadow has run
      set context persistent
      # make sure the path already exists
      set verbose on
      begin backup
      add volume D: alias VirtualDisk
      add volume C: alias SystemDrive
      # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
      # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
      writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      create
      end backup
      # Backup is exposed as drive X: make sure your drive letter X is not in use
      Expose %VirtualDisk% X:
      Exit
    The next is just a robocopy and then an unexpose. Now, when I run the above script I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay because I really only need the Hyper-V writer. Also, I double-checked the GUID for the writer; it's correct. During the time when the Hyper-V writer becomes active, 2 things will happen on the guest machines: the Debian/Linux machine will go to a saved state and restore when done, all fine; the Windows guests will show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I get seem to be old copies; they are missing files, users and updates that are on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
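
    One thing worth checking between runs (a diagnostic sketch, not a fix): the state of the Hyper-V writer itself, since a writer stuck in a failed state can quietly hand out stale data on the next snapshot.

        REM list all VSS writers with their state and last error
        vssadmin list writers

        REM the "Microsoft Hyper-V VSS Writer" entry should report
        REM State: [1] Stable and Last error: No error before each backup run

    If it is not stable, restarting the Hyper-V VSS integration service inside the Windows guests (and the VSS service on the host) before the diskshadow run is the usual first thing to try.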

    Read the article

  • Logging Remote Server Access via Remote Desktop

    - by Nate Bross
    The objective here is to start a simple .NET application I've written which captures some environment variables (time, username, computername, etc.) upon login. This .NET application subscribes to the Windows "user logout" event. Upon launch, the application captures the above variables and creates a record in my database; upon logout (which I'm capturing) I update another field in the same record with the logout time. The above is working exactly as I would like: when I launch the binary, it makes its initial log entry, then waits for the logout event and updates the same record. Restrictions: the .NET binary should be able to live on a network share (\\server\share\myapp\v1) so I can update the application to (\\server\share\myapp\v2) and simply update the GPO/logon script. My initial thought was to use the \\domaincontroller\sysvol\ directory to store the binary and then update all user accounts to include a call to my application. Can you see any flaws in this approach? My question is this: first, is there anything wrong with my idea above? Second, if so, what is the best way (through group policy or otherwise) to ensure this application launches whenever a session is started on a server?

    Read the article

  • Something like Dropbox for local use

    - by Casper
    I am looking for a solution to sync folder pairs between a NAS and multiple local Macs. Each of the Macs could edit files, and the other Macs should then get synced automatically. Basically my own local version of Dropbox without using "cloud storage". I have looked into solutions using rsync. As I understand it, rsync is not really capable of doing a bi-directional sync. I also do not want to have to invoke the sync process manually; I would prefer a daemon running in the background, waiting and checking for changes and then syncing them "live". The program should also be flexible enough to recognize that it sometimes (in the case of laptops) cannot reach the NAS. It should then just wait for the connection to come back, without bugging me every few minutes. I have looked into synk, folderwatch, rsync and a few others, but I haven't really found a solution. Isn't there something like "offline folders" from Microsoft for the Mac? Thanks. PS: just for clarification, I don't want to sync for backup purposes; instead I want to sync so that all Macs have a local copy of the most recent changes to files.

    Read the article

  • How to perform this Windows 7 permissions change on many files via GUI or command line

    - by hippietrail
    After using my external hard drive on another Windows 7 computer to tweak photos with Windows Live Photo Gallery and then upload them to Facebook, I found the modified images were now not visible on the original Windows 7 computer. I'm not sure if the things I tried to get it working subsequently changed anything, but I do know this is the sequence of actions that makes the permissions of the modified files match those of the unmodified files:
      1. Right-click on the broken image file, select "Properties"
      2. On the "Security" tab press the "Advanced" button
      3. In the "Permissions" tab press the "Continue" button with the shield icon on it
      4. Tick the box marked "Include inheritable permissions from this object's parent"
      5. Click the "Remove" button to remove the only current entry "Type: Allow, Name: Administrators (XYZ\Administrators), Permission: Full control, Inherited From:
      6. OK on the "Permissions" tab
      7. OK on the "Security" tab
    Now this same procedure does not work at the folder level; it results in "access denied" dialogs. I'm looking for some way to perform this exact modification on all the images I edited on the other computer. I'm happy to use the Windows GUI in Explorer or any other included tools. I'm happy to use the Windows command line. I'd prefer not to use a third-party tool since I'd have to be satisfied it's not doing anything else. I'm not looking for a different way to change permissions to other settings to make an external drive full of photos editable on multiple computers. At least not in this question.
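
    The GUI sequence above (re-enable inheritance, drop the explicit entry) is essentially what icacls /reset does, so a command-line equivalent might look like the sketch below. The drive letter and folder are placeholders, and it is worth testing on a copy of a few files first:

        REM replace the ACLs on every file under the folder with inherited ACLs
        REM /T = recurse into subfolders, /C = continue past errors
        icacls "E:\Photos\Edited" /reset /T /C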

    Read the article

  • Is it possible to create a simple frontend indexer for openbittorrent torrents?

    - by SimonK
    I run a website which distributes a few files every now and again: live music performances by a rock band. I create a torrent file, set the trackers as openbittorrent, publicbt and other similar open trackers. I upload the torrent file to my forum, my users download it and the files are shared. No problems there. What I would like to do is index those torrents properly on my website though, so I can follow seeders/leechers and other stats online. I know the open torrent trackers don't have an index, but I am aware of many, many indexing sites that do exactly that. I don't know how, though. So what I'm asking is: what do I need to do to do that myself? I simply want to create a page that lists the torrents I and other users on my site create, the seeder/leecher ratio and a link to the torrent file, etc. What data do I need to be able to do that? I'm proficient in general web design, but I don't know what I would need, data-wise, to pull the required info on the torrents. Thanks
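
    The numbers those index sites show come from the tracker "scrape" convention: for an HTTP tracker whose announce URL ends in /announce, replacing announce with scrape and passing the URL-encoded 20-byte info hash returns a bencoded dictionary containing complete (seeders), incomplete (leechers) and downloaded counts. A rough sketch; the tracker URL and hash are placeholders, and the open trackers mentioned above also run UDP endpoints, which need the UDP scrape protocol instead:

        # percent-encoded binary SHA-1 info hash of one of your torrents
        HASH='%12%34%56%78%9a%bc%de%f0%12%34%56%78%9a%bc%de%f0%12%34%56%78'
        curl -s "http://tracker.example.com/scrape?info_hash=${HASH}"
        # reply is bencoded, roughly:
        # d5:filesd20:<binary hash>d8:completei12e10:downloadedi300e10:incompletei4eeee

    A small cron job that scrapes each hash and writes the counts into your site's database is essentially all the "index" needs, plus a page that renders the table with a link to each .torrent file.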

    Read the article

  • Puppet: variable overriding best practices

    - by rvs
    I'm wondering what the best practices are for overriding variables in Puppet. I'd like to have all my nodes (which are located in different places; some of them are QA, some live) use the same classes. Now I have something like this:
      class base_linux {
        <...> # something which requires $env and $relayhost variables
      }
      class linux_live {
        $relayhost = "1.1.1.1"
        $env       = "prod"
        include base_linux
      }
      class linux_qa {
        $relayhost = "2.2.2.2"   # override relayhost
        include base_linux
      }
      class linux_trunk {
        $env = "trunk"           # override env
        include linux_qa
      }
      node "trunk01" {
        include linux_trunk
        include <something else>
      }
      node "trunk02" {
        $relayhost = "3.3.3.3"   # override relayhost
        include linux_trunk
        include <something else>
      }
      node "prod01" {
        include linux_prod
      }
    So, I'd like to have some defaults in base_linux which can be overridden by linux_qa/linux_live, which can in turn be overridden in higher-level classes and node definitions. Of course, it does not work (and is not expected to work). It does not work with class inheritance either. I could probably achieve this using global scope variables, but that does not seem like a good idea to me. What is the best way to solve this problem?
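
    For what it's worth, one direction that sidesteps dynamic scoping entirely is parameterised classes: the defaults live in the class definition and each wrapper class or node overrides only what it needs. A sketch reusing the names above (not necessarily best practice for every setup):

        class base_linux ($env = 'prod', $relayhost = '1.1.1.1') {
          # resources that use $env and $relayhost
        }

        class linux_qa {
          class { 'base_linux': relayhost => '2.2.2.2' }
        }

        node 'trunk02' {
          class { 'base_linux': env => 'trunk', relayhost => '3.3.3.3' }
        }

    The usual caveat applies: a parameterised class can only be declared once per catalog, so the wrapper that sets its parameters has to be the only place that declares it for a given node.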

    Read the article

  • Fedora 19 no longer bootable

    - by Parisa
    I had Fedora dual-booted with Windows on my laptop for a while, but with a Windows refresh GRUB was gone and my system booted directly into Windows. I booted Fedora with my system's boot options and, following this tutorial: https://fedoraproject.org/wiki/GRUB_2 , I reinstalled GRUB 2, but then my system booted into an empty grub prompt:
      grub
    So I found the drive containing vmlinuz and initramfs (completely sure about their location and versions) and tried to boot it manually, but after the boot command it said:
      no suitable video mode found
      booting in blind mode
    and nothing happened. Such a tragedy... I have already tried to use the live disks' rescue system. Funny, but the troubleshooting options don't appear on my laptop while they do on my desktop PC. I can't even get to a boot prompt on my Lenovo IdeaPad Z400 laptop. I also tried EasyBCD so maybe I could boot it from Windows, but it comes up with this error:
      missing AutoNeoGrub().mbr
    Now I have removed the grub prompt (don't know why) and it's really hard for me to reinstall my dearly customized Fedora. If anyone knows a way to help boot it again, or reinstall it keeping my files and installations, I really need it. Thanks.
    PS: I have already tried Boot-Repair Disk, but it asks me to enable the repo containing grub-efi on my Fedora to reinstall GRUB 2 and fix the boot for me (how could I?).
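
    For the record, the usual way to put GRUB 2 back from Fedora live media on a BIOS-booted machine is to chroot into the installed system and reinstall from there. A sketch with placeholder device names; a UEFI install needs the grub2-efi and shim packages instead, which is presumably what Boot-Repair was asking about:

        # from the live session, as root
        mount /dev/sda5 /mnt              # the Fedora root partition
        mount /dev/sda1 /mnt/boot         # separate /boot, if there is one
        for d in /dev /proc /sys; do mount --bind $d /mnt$d; done
        chroot /mnt
        grub2-install /dev/sda            # write GRUB to the disk's MBR
        grub2-mkconfig -o /boot/grub2/grub.cfg
        exit

    Files and installed packages are untouched by this; it only rewrites the bootloader and its configuration.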

    Read the article

  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in Gmail will appear in a folder related to that tag, the "all mail" folder, and possibly the "inbox" and "sent mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This can make searching difficult, and is obviously wasteful of disk space. The best solution I have come up with is as follows. Operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates an extra copy in the inbox. Secondly, configure Thunderbird not to sync the "Sent Mail" folder - this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd only like one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail by some sort of hashing algorithm to prevent precisely this problem - but it wouldn't be compatible with the way the folders are mirrored in a local directory structure, I suppose. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally efficiently?

    Read the article

  • virtual host settings fail on multiple sites

    - by Ricalsin
    Wow. I'm puzzled. On my Ubuntu system I've set up an Apache2 server and configured three virtual hosts in the /etc/apache2/sites-available directory, then used a2ensite to symlink them into sites-enabled. The first two work great: a simple URL of localhost.mysitenames.com works for each of them, both finding their DocumentRoot and Directory paths. The third always generates a Bad Request (Invalid Hostname) response, with nothing in the server error.log as it never hits it. I've copied/pasted the working vhost files and made the minor changes to the ServerName, DocumentRoot and Directory, and the same problem persists. I always "sudo /etc/init.d/apache2 restart" whenever I make a change. I've cleared the browser cache as well. No love. There's not a limit to the number of sites you can host, right? My goal is a localhost development environment with the expectation that I can run any number of websites locally before pushing them to a live server. Any thoughts on how to debug this? Or just a simple solution I am missing?
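
    Two things usually narrow this kind of problem down: dumping the virtual-host table Apache actually parsed, and comparing the broken vhost against a minimal known-good skeleton. A sketch below; the site name and paths are placeholders, and the Directory block assumes Apache 2.2 (2.4 uses "Require all granted" instead of the Order/Allow lines):

        # show every parsed ServerName and which file it came from
        apache2ctl -S

        # /etc/apache2/sites-available/sitethree  (minimal working shape)
        <VirtualHost *:80>
            ServerName   localhost.sitethree.com
            DocumentRoot /var/www/sitethree
            <Directory /var/www/sitethree>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Since nothing ever reaches error.log, it is also worth confirming that the third ServerName shows up in the apache2ctl -S output and that the hosts-file entry for the third name resolves to the same address as the two that work.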

    Read the article

  • Is there a man in the middle attacking my server machine?

    - by GongT
    My server had worked well for about half a year, but a strange thing happened a few hours ago. This server has two IP addresses: 58.17.85.19 and 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a "redirect loop". I did some tests ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"), step by step:
      - run $cmd on my PC and on my friend's (we live on two sides of China, far apart): got 302
      - run $cmd on this server: got 200 OK (content is the correct result of index.php)
      - run $cmd on another server in the same computer room: got 200 OK
      - telnet from my PC and build an HTTP request by hand: got 200 OK
      - shut down php-fpm, run $cmd on my PC: got 302; run $cmd on the server: 502 Bad Gateway
      - shut down nginx, run $cmd on both the server and my PC: Connection refused
      - create an iptables rule refusing any connection to 58.17.85.19:80, run nc -l 80 -k -vvv on the server and run $cmd on my PC. NC shows that the server accepts the connection (Connection from [my ip]) and then my connection is closed (Remove fd xx from list), yet wget still dumps out a response: got 302
    I know that normally NC will accept the connection, then dump the HTTP request from the client, and the client will wait for a response; that connection would stay open forever (in fact the client will eventually close it because of a timeout), because NC can't give any response. So... where did my request go? Who sent a response to the client? Some virus on my server system? If so, why doesn't 58.17.85.19 have this error? Or... am I being attacked by a man in the middle?

    Read the article

  • Nginx Redirect when URL includes variable p=1

    - by ChrisD
    I need to write a small nginx rewrite rule (a line or two) to alter/301-redirect some URLs within our existing website. For example:
      www.example.com.au/pageone.html?p=1
        to www.example.com.au/pageone.html
      www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price&p=1
        to www.example.com.au/pagetwo.html?dir=asc&limit=200&order=price
      www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price&p=1
        to www.example.com.au/pagethree.html?dir=dsc&limit=100&order=price
    As you can see, p=1 has been stripped from the URLs (it is superfluous, but has been live on the site and needs to be redirected now) - all http and https links. Basically, if and only if p=1 is used anywhere within the URL, it should redirect to the same URL without the p=1. This should also let p=11, p=12 and so on through as normal (and not redirect them), as they are not specifically p=1. If that is not possible, then I'd like to know how to redirect this kind of URL as a standalone one-off:
      www.example.com.au/pageone.html?p=1
        to www.example.com.au/pageone.html
    I tried several redirects but could not get any of them to work. To be honest I do not really know where to start with this - I am new to nginx.
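
    A sketch of one way to do this with anchored regexes, covering the two URL shapes shown above (p=1 as the only parameter, and &p=1 at the end of a longer query string). The $ anchor is what keeps p=11 and p=12 from matching; these lines would sit inside the relevant server block:

        # ?p=1 and nothing else: drop the whole query string
        if ($args ~ "^p=1$") {
            rewrite ^(.*)$ $1? permanent;
        }

        # ...&p=1 at the end: keep everything before it
        if ($args ~ "^(.+)&p=1$") {
            set $args $1;
            rewrite ^(.*)$ $1 permanent;
        }

    A p=1 that turns up in the middle of the query string would need a third rule along the same lines.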

    Read the article

  • Does Google sometimes ignore "special" characters, possibly depending on your location or font type settings? [closed]

    - by RLH
    TLDR Google tends to ignore special characters in my search strings. Is there anything that I can do about it and is it, possibly, happening because Google makes certain assumptions based off of my default text-encoding settings and my location? I just posted this question over at StackOverflow. I had found a C preprocessor that I'd never seen before. As I should have done, I Googled it and tried to find out further information. I attempted various search terms which were all variations of "C Operator ##" (some times with and some times without the double-quotes.) Google didn't bring back anything of use so I posted my question on SO. As you can see from the comments, someone mentioned a search string (ironically one which I did try to search) and stated that I could have even hit the "I'm feeling lucky" button and have gotten my answer. The problem is I did search that, and the results that I received were far more basic and even after following the top results and searching the resulting pages, I could find nothing referencing the string "##". I'm not posting this question to complain but it does provide an empirical example of something I've seen before that really bugs me-- Google often ignores special characters in my search strings and the results are often useless. As a developer I often need to search for string values containing non-alphanumeric characters. Some characters (like the underscore or hyphen) can be used without trouble. However, other characters (such as the ampersand, carat, tilde and pound sign) are often ignored in my query strings. Is there a way to prevent this from happening so that I can get meaningful results from Google? NOTE I stay logged into Google and I live in the US. I wonder if Google detects some form of text-encoding setting or derives my results based off of certain, localized text-based assumptions. Regardless, I would like to for Google to search for what I give it. Is there anything that I can do to improve my results?

    Read the article

  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic to understand whether there is any chance of downtime during an upcoming domain transfer for 15 live and very critical domains. In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB. From what I understand (please correct where wrong!):
      - There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the TLD registry for the domain (e.g. Verisign for .com).
      - CompanyA currently has the ability to change the SOA record because they're the registrar.
      - There exist NS records for oneofourdomains.com, which are also related to the link from domain name to nameserver, are similarly hosted by the TLD registry, and which CompanyA are also able to change while acting as registrar.
      - Neither CompanyB nor CompanyC currently have any control over the SOA or NS records.
      - CompanyA are unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records.
      - When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC.
      - As long as we advise CompanyC that the SOA and NS records must not change (as regards pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime.
    Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support dept hasn't given me much confidence in their understanding of the topic.
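
    Whatever the registrars do, the delegation itself can be watched from the outside before, during and after the transfer. A quick sketch with dig; the domain and nameserver names follow the examples above:

        # delegation as published by the .com registry's servers
        # (the NS records appear in the AUTHORITY section of the reply)
        dig NS oneofourdomains.com @a.gtld-servers.net

        # what the new DNS host itself answers
        dig +short NS  oneofourdomains.com @ns.companyb.com
        dig +short SOA oneofourdomains.com @ns.companyb.com

    As long as the registry keeps returning CompanyB's nameservers throughout the transfer, resolution (and therefore the live services) should not notice the registrar change.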

    Read the article

  • How to backup virtual machines on a standalone ESXi host?

    - by Massimo
    Standalone ESXi (4.1) host without any vCenter Server. How to backup virtual machines as quickly and storage-friendly as possible? I know I can access the ESXi console and use the standard Unix cp command, but this has the drawback of copying the whole VMDK files, not only their actually used space; so, for a 30-GB VMDK of which only 1 GB is used, the backup would take 30 full GBs of space, and time accordingly. And yes, I know about thin-provisioned virtual disks, but they tend to behave very badly when physically copied, and/or to blow up to their full provisioned size; also, they are not recommended for actual VM performance. It is ok for me to shut down the VMs before backing them up (i.e. I don't need "live" backups); but I need a way to copy them around efficiently; and yes, a way to automate shutdown/startup when taking a backup would also help. I only have ESXi; no Service Console, no vCenter Server... what's the best way to handle this task? Also, what about restores?
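
    For what it's worth, the tools already present in that ESXi console go a fair way on a standalone host: vmkfstools can clone a VMDK into thin format so the copy only holds the blocks actually in use, and vim-cmd can script the shutdown/startup around the copy. A sketch; the datastore paths and the VM id (42) are placeholders:

        # find the VM id, then shut the guest down cleanly (needs VMware Tools)
        vim-cmd vmsvc/getallvms
        vim-cmd vmsvc/power.shutdown 42

        # thin-provisioned clone of the disk onto the backup datastore
        vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk \
                   -d thin /vmfs/volumes/backup/vm1/vm1-backup.vmdk

        # copy the small .vmx/.nvram files alongside it, then power back on
        vim-cmd vmsvc/power.on 42

    Restores would be the reverse: clone the backup VMDK back (optionally with -d zeroedthick), put the .vmx next to it and re-register the VM with vim-cmd solo/registervm.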

    Read the article

  • postfix email gateway

    - by k-h
    I am setting up a Postfix email gateway. It will not hold any mail, but will accept email for my domain and forward it to another internal mail server, and relay mail out from the internal server. One of the main problems is that I am working on a live, running system and this will be an upgrade, so I am using a test domain which I will change to the real domain at some point. I tried various methods, but found the simplest way (that worked) was to use a script to create an aliases file (from LDAP entries). There are various problems with this method. The main one is that the entries can't be of the simple form [email protected], because the gateway doesn't know where to send them; they have to be of the form [email protected]. What I would like doesn't seem hard, but I can't get my head around the Postfix documentation. There seem to be various ways, but none of them seem to work. Most of the examples I have found on the web assume the mail is going to end up on the server. I want a list of users somewhere, preferably of the form user1, user2, etc. rather than [email protected] (I can easily generate this list), and I would like Postfix to forward all email for example.com to a particular server, i.e. realmailserver.example.com. Can anyone suggest clues as to how I might do this?
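
    For reference, the pieces Postfix itself provides for this gateway pattern are relay_domains, transport_maps and relay_recipient_maps. A sketch of the shape, using the domain and host names mentioned above; the LDAP-generated user list slots in as the relay_recipients file:

        # /etc/postfix/main.cf
        relay_domains        = example.com
        transport_maps       = hash:/etc/postfix/transport
        relay_recipient_maps = hash:/etc/postfix/relay_recipients

        # /etc/postfix/transport
        example.com    smtp:[realmailserver.example.com]

        # /etc/postfix/relay_recipients  (one line per user, value is arbitrary)
        user1@example.com    OK
        user2@example.com    OK

    After running postmap on the two hash files and reloading Postfix, mail for example.com is accepted only for listed recipients and handed straight to the internal server, so no alias rewriting to a gateway-local address is needed.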

    Read the article

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want to have Puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for: I want the command to run only if something is about to change, not on every Puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want Puppet to update the web site files. But in order to prevent issues where some files have been updated but others haven't, and the mixed versions cause problems, I want to take the server out of the load balancer pool first. I could write a script which, when run, tells the load balancer to remove the box from the pool. Then Puppet can make the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box: one a staging copy, and when Puppet updates that, use a notify action to trigger the removal script and then copy from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).
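
    The staging-copy idea described above can be sketched in Puppet itself: the staging file resource notifies a "drain" exec, which is ordered before the exec that flips the live copy, and everything except the file resource is refreshonly so it only fires when something actually changed. The paths, scripts and module name below are hypothetical:

        file { '/srv/staging/site':
          ensure  => directory,
          recurse => true,
          source  => 'puppet:///modules/site/files',
          notify  => Exec['drain-lb'],
        }

        exec { 'drain-lb':
          command     => '/usr/local/bin/remove_from_pool.sh',
          refreshonly => true,      # only when the staging copy changed
          before      => Exec['deploy-live'],
        }

        exec { 'deploy-live':
          command     => 'rsync -a --delete /srv/staging/site/ /var/www/site/',
          path        => ['/usr/bin', '/bin'],
          subscribe   => File['/srv/staging/site'],
          refreshonly => true,
          notify      => Exec['rejoin-lb'],
        }

        exec { 'rejoin-lb':
          command     => '/usr/local/bin/add_to_pool.sh',
          refreshonly => true,
        }

    It is still per-resource plumbing rather than the generic "run before any change" hook being asked about, which, as far as I know, plain prerun_command/postrun_command does not provide.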

    Read the article
