Search Results

Search found 1666 results on 67 pages for 'andrew cetinick'.

Page 18/67

  • Does /NOCANDY avoid any adware-related activities with OpenCandy?

    - by Andrew Grimm
    OpenCandy claims that using the /NOCANDY switch with an OpenCandy-affiliated installer allows you to avoid OpenCandy. Should I take their word for it? If not, can anyone independent of OpenCandy and their affiliates verify that /NOCANDY works? Background: I'm about to install WinSCP onto a fresh Windows installation, and found out that new versions have OpenCandy associated with their installer. For the sake of balance, here's a link to WinSCP's FAQ on OpenCandy. The claim about /NOCANDY working appears on WinSCP's web site, but the same boilerplate appears on other OpenCandy web sites. If the OpenCandy people are offended by the tag "spyware": sorry, but it's the main tag here, rather than "adware".

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far:

        sudo apt-get install imagemagick libmagickcore-dev

    This did not throw any errors, so I think that ImageMagick is installed fine. Then I tried installing the gem:

        sudo gem install rmagick

    This resulted in the following error:

        ERROR: Error installing rmagick:
        ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... yes
        checking for ImageMagick version >= 6.4.9... yes
        checking for HDRI disabled version of ImageMagick... yes
        checking for stdint.h... yes
        checking for sys/types.h... yes
        checking for wand/MagickWand.h... no
        Can't install RMagick 2.13.1. Can't find MagickWand.h.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
        Check the mkmf.log file for more details. You may need configuration options.
        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.8
        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out

    What do I need to do to install rmagick on Ubuntu 10.04?
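
    A likely fix, sketched under the assumption that the failure is only the missing MagickWand development header: on Ubuntu that header ships in the libmagickwand-dev package, so installing it before rebuilding the gem is usually enough.

        # Hedged sketch: install the MagickWand headers, then retry the gem build.
        sudo apt-get install libmagickwand-dev   # provides wand/MagickWand.h
        sudo gem install rmagick                 # extconf.rb should now find the header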

  • Replacing files in a folder structure with files from an unsorted folder

    - by Andrew
    I have over 50,000 PDFs organized into folders in a directory called PDFACT. I needed to compress these files, so I ran them through Adobe to batch compress them, and this worked, except that Adobe could only output the files without their folder structure. So basically I had 50,000 PDFs set up in a folder with hundreds of subfolders, everything organized, and I ended up with one folder containing 50,000 compressed PDFs, just in alphabetical order. Somehow I need to replace all the original PDFs with their compressed copies. Let me give an example. In the folder PDFACT we have the following file:

        C:\PDFACT\BIG DINNER\BILL\NEWESTBILL.PDF

    … and in the output folder that Adobe created we have just:

        C:\COMPRESSED_PDF_FOLDER\NEWESTBILL.PDF

    This copy is smaller than the one in PDFACT and has the same name, but it is just lumped in with every other PDF. The folder structure and subfolders are gone. Is there any way to replace all the larger uncompressed PDFs inside the original folder structure with their now compressed counterparts?
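
    A hedged sketch of one way to script the copy-back, assuming a Unix-style shell is available (for example Git Bash or Cygwin on the Windows machine, or both folders copied to a Linux box); the paths are placeholders, and a file is only overwritten when a same-named compressed copy exists:

        SRC="/c/COMPRESSED_PDF_FOLDER"   # flat folder of compressed PDFs (assumed path)
        DST="/c/PDFACT"                  # original folder structure (assumed path)
        # Note: if two files in different subfolders share a name, they will both
        # receive the same compressed copy, so check for duplicate basenames first.
        find "$DST" -type f -iname '*.pdf' | while IFS= read -r f; do
            name=$(basename "$f")
            if [ -f "$SRC/$name" ]; then
                cp -f "$SRC/$name" "$f"
            else
                echo "no compressed copy for: $f" >&2
            fi
        done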

  • Exim: How to turn off DKIM for forwarded mail?

    - by Andrew
    I have DKIM configured in Exim for outgoing mail, as per the documentation, and Exim signs all outgoing mail. But some of that outgoing mail is forwarded, thanks to a user's .forward file. This is a problem for me, because some of those messages are spam (my Exim configuration does not do any verification) and I don't want to take responsibility for them. But I can't figure out how to configure Exim not to sign these messages. My configuration is basically the Debian Squeeze default, with a few DKIM_* macros set. I can post more details, but I think seeing any example of conditional DKIM signing would set me right.
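
    One direction to try, sketched as an untested Exim configuration excerpt rather than shell: signing is driven by the dkim_* options on the smtp transport (fed by the DKIM_* macros in the Debian split config), and if dkim_private_key expands to an empty string the message should go out unsigned. Keying the expansion off the envelope sender's domain would then leave .forward-relayed mail, whose envelope sender is remote, without a signature. The domain, selector and key path below are placeholders; adapt them to wherever your DKIM_* macros are defined.

        # Hedged excerpt, placeholder values:
        dkim_domain      = example.org
        dkim_selector    = mail
        dkim_private_key = ${if match_domain{$sender_address_domain}{+local_domains}{/etc/exim4/dkim/example.org.key}{}}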

  • What is a good usage scenario for Rackspace Cloud Files CDN (powered by Akamai)? [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai. The DNS chain looks like this:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP headers:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest way to serve the website. Checking it through http://www.just-ping.com/, the ping from many places is very high, and a quick investigation suggests they resolve addresses with GeoIP based on WHOIS data, which is not accurate; because of that the ping is above 300ms from many locations (for example, an ISP in Bangalore can have its requests routed so that the ping sits at 300ms for a month at a time). By contrast, with just Amazon Web Services, Route 53 anycast DNS and only 4 EC2 instances, India for example is always below 100ms, while with Akamai it goes above 300ms in some cases, and this is because Route 53 uses BGP anycast. Quickly checking Akamai, they do not seem to take feedback from actual traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what their website says. They state that they optimize performance using feedback from requests, while it looks like they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 anycast DNS seems much more reliable, as does EdgeCast, which uses BGP, though I don't know what it costs to deploy a static website there. I'm also not sure EdgeCast's claims hold up, because from isolated places there are many errors, so their performance comes at the cost of delivery quality, with BGP switching routes during transfers of large files. So I am wondering what Akamai is actually good for; from what I understand so far they don't stand out in any area except the software-based WAF they advertise, and what I really care about is the core distribution. So the question is: is Akamai really good for video? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.
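
    A hedged sketch of a way to compare edges yourself instead of relying on just-ping; the URL is a placeholder, and the idea is simply to run it from vantage points (cheap VPSes or remote machines) in the regions you care about:

        # DNS, connect and first-byte timings against the site through each CDN.
        curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://www.example.co.uk/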

  • Easy way to restrict permissions in an elementary school computer lab?

    - by Andrew
    I'm putting together an elementary school computer lab. I have nine WinXP Pro machines that are not networked and do not have internet access (no money to do either). I've created separate student and admin accounts, and have the students set as limited users. However, I'm interested in further restricting their permissions. I want to make it so that they cannot:

        - delete any files, even just from their own profile
        - rename any files
        - move around the icons on the desktop
        - change any display settings
        - access a USB device without a password (they bring in their own from home which are chock-full of viruses)

    Oh, one last thing: they still have to be able to save Word documents. Is this even possible? I can download software, but, like I said: no internet, no server.

  • Hosting online with XAMPP?

    - by Andrew
    I'm not quite sure what I'm doing wrong, because from what I've read, this should all be working. What I've done:

        - Forwarded ports 80, 8080, and 443.
        - Changed the ServerName localhost:80 line in \apache\conf\httpd.conf to ServerName myip:80.
        - Registered at dyndns.com, and have been using their update client to link my IP to the DNS thingy.
        - Made sure XAMPP was using port 80, and started Apache and MySQL.

    And... nothing. What did I miss? =/
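
    A hedged check that usually narrows this down; the hostnames and addresses are placeholders, and the most common culprits are the ISP blocking inbound port 80 or the router forwarding to the wrong LAN address:

        # From a machine outside your network: does anything answer on port 80?
        curl -I http://your-name.dyndns.org/
        # From inside the LAN, against the machine running XAMPP:
        curl -I http://192.168.1.10/
        # If the inside check works but the outside one times out, suspect the port
        # forward or an ISP block on port 80 rather than the Apache/XAMPP config.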

  • Stop Windows Automatic Update in the boot menu or start-up sequence

    - by Andrew
    My girlfriend has a corrupted hard drive running Windows Vista. She is getting a new hard drive and has also purchased an external hard drive to back up her data. However, Windows downloaded an automatic update, and the machine keeps getting held up and restarting when it tries to apply the update. Is there a way she can disable this from the boot menu or start-up sequence?

  • CPQLOCFG XML "Syntax error"

    - by Andrew J. Brehm
    HP Lights-Out Configuration Utility, CPQLOCFG v. 4.00, dated 04/04/2012. Whatever input I use, whether a sample file from the HP web site, an empty file, or a file containing just

        <RIBCL VERSION="2.0">
        </RIBCL>

    the HP utility always returns:

        CHECKING XML SYNTAX...
        1 - Syntax error: Line #0: syntax error near "" in the line: ""

    What's wrong with me or the utility? Did anyone ever get this to work?

  • Windows 7 Wireless Network Adapter Stopped Working

    - by Andrew B Schultz
    I have a Windows 7 Ultimate machine where the wireless adapter all of a sudden started having trouble connecting to wireless networks. Whenever I go to a new place and try to connect to a wireless network, it says that the DNS server is not responding, and tells me to go unplug the router and try again. After several locations in a row telling me this, I began to realize something was wrong with my adapter, not the routers. I am no longer asked to identify the security level for any new networks (Work, Home, or Public) like I used to be (it defaults to Public now - with the park bench icon). Often, resetting the router doesn't even work. Running the Windows 7 troubleshooter doesn't give me anything better than the advice to reset the router. However, the adapter will still connect to the wireless network at my main office without any problems. Does anyone know why a wireless network adapter can get so finicky so suddenly? Thanks!

  • Celery - minimize memory consumption

    - by Andrew
    We have ~300 celeryd processes running under Ubuntu 10.4 64-bit. In idle, every process takes ~19MB RES and ~174MB VIRT, so it's around 6GB of RAM in idle for all processes. In the active state, a process takes up to 100MB RES and ~300MB VIRT. Every process uses minidom (the XML files are < 500KB, simple structure) and urllib. The question is: how can we decrease RAM consumption, at least for idle workers? Perhaps some Celery or Python options may help? How do we determine which part takes most of the memory?
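
    A hedged sketch of the knobs that usually help here; the flag names are from the Celery 2.x era and should be checked against the installed version, and the numbers are placeholders:

        # Run far fewer resident worker processes than ~300, and recycle each child
        # after N tasks so any memory it has grown is returned to the OS.
        celeryd --concurrency=8 --maxtasksperchild=100
        # To see what is actually holding memory inside one worker, attach a heap
        # profiler such as guppy/heapy or meliae to a single celeryd process
        # (assumption: the growth is in Python objects, e.g. parsed minidom trees).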

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note, the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule, or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sorta rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:

        - Site resolving in browser, both IP and domain name. No problems here.
        - Site not being crawled by Google (gets a 302 it doesn't seem to follow); it is not due to robots.txt or meta tags.
        - Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
        - Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code, or some combination of the three.
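
    A hedged sketch of how to see exactly what Googlebot gets; the hostname is a placeholder, and blocked ICMP (which would explain ping timing out while HTTP works) is an assumption worth confirming with the host:

        # Show the redirect chain and final status for a plain client.
        curl -sIL http://www.example.com/ | grep -E 'HTTP/|Location:'
        # Repeat with Googlebot's User-Agent in case the 302 depends on it.
        curl -sIL -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' http://www.example.com/ | grep -E 'HTTP/|Location:'
        # ping uses ICMP, which many firewalls drop while still serving HTTP,
        # so test TCP port 80 directly instead.
        nc -vz www.example.com 80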

  • Cannot load viper-mode in Emacs24

    - by Andrew-Dufresne
    Every time I try to load viper-mode in Emacs 24 using M-x viper-mode, I get the error

        Wrong type argument: symbolp, (quote 3)

    I have observed that it happens when I try to load viper-mode for the first time after opening Emacs, or after I have issued M-x viper-go-away. Once I get this error and run the same command again, viper-mode loads successfully. When I use toggle-viper-mode to turn it off, viper-mode goes away successfully. But when I try to turn it back on using toggle-viper-mode again, I get the above error. I have to use viper-go-away and then M-x viper-mode twice to get it working again. How can I get rid of this error?

  • How to define a template for org-mode HTML export?

    - by Andrew-Dufresne
    I am using org-mode to generate HTML pages from my notes. I followed "Publishing Org-mode files to HTML" to set up a blog system and have defined an export template, but to use it I have to add the following line at the top of every org file inside my notes project:

        #+SETUPFILE: ~/.emacs.d/org-templates/level-0.org

    Is there a way to set this up in .emacs, or to customize an org-mode variable, so that I do not have to place this line in every file? According to the org-mode manual, #+SETUPFILE is an in-buffer setting. Does this mean I cannot define it globally for all org files? These two answers on SU explain how to customize the style for HTML export, but my template file contains other settings besides CSS, so only customizing the style won't do it for me.

  • Converting a multi-sheet-per-page PDF to single-sheet-per-page

    - by Andrew Aylett
    My father-in-law usually creates his newsletters pre-'booked' -- that is, two columns with the pages in the right place such that you can print and staple the newsletter. Unfortunately, this month we're using a printer that wants an un-booked PDF -- with one page per page, in the right order. I can re-order pages easily enough, but is there any way to take a PDF which is essentially 2-up and split the pages?
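
    A hedged sketch of one way to do the split with mutool from the mupdf-tools package; whether the halves come out exactly as intended for this layout is an assumption, so check the result before re-ordering and sending it to the printer:

        # Split every physical page vertically into two pages (left half, then right half).
        mutool poster -x 2 booked.pdf unbooked-pages.pdf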

  • What does the @ symbol mean in an ls -l directory listing?

    - by Andrew Arrow
    When I run ls -l on my Mac I see two .yml files:

        -rw-r--r--  1 aa  staff    6 Apr 15 05:50 s1.yml
        -rw-r--r--@ 1 aa  staff  362 Apr 15 05:49 s3.yml

    Same owner, same permissions, but one has a @ at the end of the permissions. The one with the @ shows up in my editor, the one without does not, so there must be some significance. How can I turn on the @ for the file without it? I selected the files in the Finder and did Get Info, and everything looks identical between the two files.
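
    The @ means the file carries extended attributes. A hedged sketch of how to inspect and copy them on a Mac; the attribute name and value below are purely illustrative, so use whatever xattr -l actually reports:

        ls -l@ s1.yml s3.yml        # -@ lists each file's attribute names and sizes
        xattr -l s3.yml             # show attribute names and values on the flagged file
        # Write the same attribute onto the other file (name/value are examples only):
        xattr -w com.apple.TextEncoding "utf-8;134217984" s1.yml
        # Or delete it from the flagged file instead:
        xattr -d com.apple.TextEncoding s3.yml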

  • How important is dual-gigabit LAN for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.

  • How do you manage large web farms?

    - by Andrew Katz
    I have a quickly growing web farm running IIS 7 (30+ servers). All servers are identical copies of each other, and all servers are physical. We update the software about once a month, and the current process follows these steps:

        1. Disable the server in the pool on the F5 load balancer.
        2. Disable HTTP keep-alives in IIS so connections drop quickly.
        3. Change the default directory of the website to the new folder containing the new binaries.
        4. Test the server.
        5. Enable HTTP keep-alives.
        6. Enable the server in the F5 pool.
        7. Move on to server 2.

    Microsoft used to have Application Center, which was abandoned a while ago. They made a second attempt with the Web Farm Framework, but this adds as much QA time testing the release package as it saves in the deployment. Has anyone seen a commercial off-the-shelf application that is tailored for managing and deploying to large web farms? Thanks!

  • Bare-minimum Chef provisioning and deployment?

    - by Andrew McCloud
    I've read the documentation on Chef twice over. I still can't wrap my head around its concept, because it skips the fundamentals and jumps to complex deployments with chef-server. Using chef-solo and possibly knife, is there a simple way to provision a server and deploy? I may be wrong, but it seems like with my cookbooks prepped, this should be very simple.

        knife rackspace server create --flavor 1 --image 112

    That provisions my server. I can optionally pass --run-list "recipe[mything]", but how do my cookbooks in ~/my_cookbooks actually get onto the server? Do I have to manually transfer them? That seems counterproductive.
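
    A hedged sketch of the usual chef-solo pattern when there is no Chef server: you do transfer the cookbooks yourself, just scripted, and then run chef-solo against them. Host, paths and run list are placeholders, and it assumes Chef itself is already installed on the node (for example via the Omnibus installer):

        HOST=root@203.0.113.10                              # freshly provisioned server (placeholder)
        ssh "$HOST" 'mkdir -p /var/chef/cookbooks'
        rsync -az ~/my_cookbooks/ "$HOST:/var/chef/cookbooks/"
        printf 'cookbook_path "/var/chef/cookbooks"\n' | ssh "$HOST" 'cat > /var/chef/solo.rb'
        printf '{ "run_list": ["recipe[mything]"] }\n'  | ssh "$HOST" 'cat > /var/chef/node.json'
        ssh "$HOST" 'chef-solo -c /var/chef/solo.rb -j /var/chef/node.json'

    The third-party knife-solo plugin automates essentially this rsync-and-run cycle, if scripting it by hand feels too manual.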

  • OAS log files filling up hard drive

    - by Andrew Hampton
    We've had issues with log files for Oracle Application Server filling up the hard drive on our server. The files are in the /network/admin folder and are named server.log_XXXXX.trc and client.log_XXXXX.trc where XXXXX are 5 digits. The files are typically anywhere from 1-2MB in size but can be up to 100MB and thousands of them are created at a rate of about 5-10 per minute. Does anyone know how to disable these logs? Thanks!
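
    A hedged note: trace files like these are usually governed by Oracle Net tracing parameters in sqlnet.ora (for example TRACE_LEVEL_SERVER and TRACE_LEVEL_CLIENT set to something other than OFF), so that is the first place to look. As a stopgap while the tracing is still on, a cron job can keep the directory from filling the disk; the path, age and schedule below are placeholders:

        # Hourly cron entry (root):  0 * * * * /usr/local/sbin/clean_net_traces.sh
        # Delete Oracle Net trace files older than one day from the admin directory.
        find /path/to/network/admin -maxdepth 1 -type f \
             \( -name 'server.log_*.trc' -o -name 'client.log_*.trc' \) \
             -mtime +1 -delete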

  • Cannot install GRUB to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS system, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded"

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shutdown computer
        shutdown now
        # physically replace old disk by new
        # start system again

        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded, recovering"
        # to see status you can use
        cat /proc/mdstat

    This is my mdadm output after the sync:

        /dev/md0:
                Version : 0.90
          Creation Time : Wed Feb 17 16:18:25 2010
             Raid Level : raid1
             Array Size : 470455360 (448.66 GiB 481.75 GB)
          Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Thu Nov 1 15:19:31 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
                 Events : 0.11049560

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       8       17        1      active sync   /dev/sdb1

    After the rebuild completed, "fdisk -l" says that I have no valid partition table on /dev/md0. This is my fdisk -l output:

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057d19

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sda2       940910985   976768064    17928540    5  Extended
        /dev/sda5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    This is my grub-install output:

        root@answe:~# grub-install /dev/sda
        /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
        /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.
        root@answe:~# grub-install /dev/sdb
        Installation finished. No error reported.

    So:

        1. "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0.
        2. "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0".

    I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... Can anybody resolve this issue? I have a big headache with this.
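
    A hedged sketch of the direction to investigate, not a verified fix. The grub-setup warning usually means the raw /dev/sda device carries a stray whole-disk filesystem signature alongside the MBR, and since the md 0.90 superblock sits at the end of the members, GRUB belongs on the member disks rather than on /dev/md0 (the "doesn't contain a valid partition table" message for /dev/md0 is normal when the array holds a filesystem directly). The offset below is illustrative; check what wipefs reports first and do not erase the "dos" partition-table signature:

        sudo wipefs /dev/sda                 # list magic signatures on the raw disk (non-destructive)
        # If a filesystem signature shows up on the whole device in addition to the
        # "dos" partition table, erase only that one, at the offset wipefs reported:
        # sudo wipefs -o 0x438 /dev/sda      # offset is illustrative only
        sudo grub-install /dev/sda           # then put GRUB on both RAID members
        sudo grub-install /dev/sdb
        sudo update-grub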

  • Firefox only shows a black window over RDP

    - by Andrew Aylett
    I have a Windows 7 desktop running Firefox 4. If I remote-desktop (RDP) from another PC into my already-running session, the Firefox window turns black. Flash applications are still visible, if I'm on a tab with one, and I can apparently interact with Firefox normally, I just can't see anything in the window. I'm after an explanation, if possible, and (even better!) a workaround or fix that doesn't involve a different web browser :).

  • Multiple problems: Windows 7 MBR, others

    - by Andrew
    After buying a new laptop (XPS L501X) with Windows 7 preinstalled, I decided to make it dual-boot with Windows Vista. I now know about the issue with installing previous versions of Windows and the deletion of the MBR. Normally this wouldn't be a problem, but Microsoft no longer ships install disks with their computers, so there is no way to just re-install Windows 7. If anyone can help, it would be much appreciated. Data preservation is not an issue, as the laptop is new.

  • Programs keep waiting for external disk to spin up - how to ignore disk?

    - by Andrew J. Brehm
    Like many Mac users I have an external Firewire disk hooked up to my Mac to be used by Time Machine. This works very well, backup-wise. The problem is that very often when I use a Mac application and try to open a file, the file selection dialogue window hangs until the external disk has spun up. I never ever want to open a file on the external disk. Sometimes this happens even when I just want to save a file I already saved (i.e. type something and press meta-s). Is there anything I can do about this?
