Search Results

Search found 1666 results on 67 pages for 'andrew kalashnikov'.


  • HAproxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAproxy (1.4.4) does not distribute evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing goes www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this by having the script that runs on each server cat /etc/HOSTNAME (Slackware). I need to have it switch back and forth each time to test some session stuff (stored in shared memcached), but am having trouble getting it to switch between my two web servers on each request.

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api hdr_dom(host) -i api.site.com
            acl is_dev hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com if is_api
            use_backend dev.site.com if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have some different back-ends (I've verified the ACLs are working), with the default option "roundrobin" selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), tried removing the ACLs, etc. I've been testing on dev.site.com BTW. Anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!

    Read the article
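
    A minimal way to watch the rotation from the proxy box itself, assuming curl is available and that a test page simply echoes the contents of /etc/HOSTNAME as described above (the page name hostname.php and the loopback address are placeholders, not part of the original setup):

        # fire ten separate requests at the dev backend and print which web server answered each one
        for i in $(seq 1 10); do
            curl -s -H 'Host: dev.site.com' http://127.0.0.1/hostname.php
        done

    Since every curl invocation opens a fresh connection, a round-robin backend with equal weights would be expected to alternate www1, www2, www1, www2 in this output.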

  • Install Windows 8 64-bit over the top of Windows 8 32-bit

    - by Andrew Gee
    I currently have Windows 8 32-bit installed from MSDN (I didn't realise at the time that my processor supports 64-bit). I understand that you can't upgrade from 32-bit to 64-bit directly from the ISO within Windows. I have burned the ISO to a DVD and have attempted booting from that drive. The problem I am encountering:

        The operating system couldn't be loaded because a required file is missing or contains errors.
        File: CI.dll
        Error code: 0xc0000221
        You'll need to use the recovery tools on your installation media. If you don't have any installation media (like a disc or USB device), contact your system administrator or PC manufacturer.

    Additional info:

        Computer: HP Pavilion m9280.uk-a
        Processor: AMD Phenom 9600 Quad-Core
        RAM: 3 x 1 GB sticks

    Thanks in advance!

    Read the article

  • Win 7 Home Premium 64 bit running Cobian Backup 11 (Gravity)

    - by Andrew
    I'm really enjoying Cobian 11, but am fairly new to it. My question is this. I back up a pretty large folder on a regular basis. I started off by doing a Full backup, and have followed that monthly using differential backups. I was told that, to restore my computer after a crash, I need to copy back the original full backup AND copy back the latest differential over the full. That's fine. However, over the months there are quite a few large differential backups dated between the original Full one and the latest differential one. To free space on my backup HD, can I every now and then delete the differential backups that lie between the original Full and the latest differential, and just leave the original Full and the latest differential backup on the HD?

    Read the article

  • htaccess rewrite different folder url, two index files

    - by Andrew
    I've been searching for a while now and haven't found anything that comes close to what I'm trying to accomplish. Right now my URLs look like this: www.website.com/something, which is handled by the root folder's /index.php. Now I have created plugins within folders: /plugins/PLUGINNAME/index.php. I want to be able to have URLs like www.website.com/plugins/PLUGINNAME/anything/iwant/here, which are all handled by /plugins/PLUGINNAME/index.php and not by the root directory's index.php. Currently www.website.com/plugins/PLUGINNAME/ works, but anything after /PLUGINNAME/xxx defaults to the root /index.php.

    Read the article
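
    A sketch of one way to send everything under the plugin folder to that folder's own index.php, assuming mod_rewrite is enabled, AllowOverride permits .htaccess files, and the plugins live under the web root (the filesystem path below is illustrative, not taken from the question):

        # write a per-plugin rewrite file so any URL below this folder is handled by its local index.php
        {
          echo 'RewriteEngine On'
          echo 'RewriteBase /plugins/PLUGINNAME/'
          echo 'RewriteCond %{REQUEST_FILENAME} !-f'
          echo 'RewriteCond %{REQUEST_FILENAME} !-d'
          echo 'RewriteRule ^ index.php [L]'
        } > /var/www/plugins/PLUGINNAME/.htaccess

    Real files and directories are left alone by the two RewriteCond lines; everything else under the plugin folder falls through to its index.php.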

  • Coffeeshop limits Internet connection to 30 minutes -- how does it recognize me if I delete my cookies?

    - by Andrew
    I was connected to the Internet in a coffeeshop earlier today, but I was only allowed 30 minutes of access. I tried deleting my cookies after my time was up (though admittedly I didn't delete my Flash cookies -- would that have solved the problem?), but the connection still recognized that I'd already used 30 minutes, so I couldn't connect again. How did the connection recognize me still? The wireless was unprotected (no code or password), it just had a portal you had to pass through upon the initial connection. I'm not terribly familiar with web development or computer networks, so just trying to get a better idea of what's happening (and possibly to know what to do next time I use up my minutes =)).

    Read the article
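
    For what it's worth, captive portals of this kind usually key on the wireless card's MAC address rather than on anything stored in the browser, which would explain why clearing cookies changed nothing. A quick way to see the address the access point records (the interface names en1 and wlan0 are typical guesses, not known values):

        # macOS: show the Wi-Fi interface's hardware (MAC) address
        ifconfig en1 | grep ether
        # Linux equivalent
        ip link show wlan0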

  • Configure IIS Web Site for alternate Port and receive Access Permission error

    - by Andrew J. Brehm
    When I configure IIS to run a Web site on port 1414, I get the following error:

        ---------------------------
        Internet Information Services (IIS) Manager
        ---------------------------
        The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020)

    However, according to netstat the port is not in use. Completely aside from IIS, I wrote a test program (just to open the port and test it):

        TcpListener tcpListener;
        tcpListener = new TcpListener(IPAddress.Any, port);
        try
        {
            tcpListener.Start();
            Console.WriteLine("Press \"q\" key to quit.");
            ConsoleKeyInfo key;
            do
            {
                key = Console.ReadKey();
            } while (key.KeyChar != 'q');
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        tcpListener.Stop();

    The result was an exception with the following ex.Message:

        An attempt was made to access a socket in a way forbidden by its access permissions

    The port was available, but its "access permissions" are not allowing me access. This remains after several restarts. The port is not reserved or in use as far as I know; while IIS says it is in use, netstat and my test program say it is not, and my test program receives the error that I am not allowed to access the port. The test program ran elevated. The IIS site is running MQSeries, but the MQ listener also cannot start on port 1414 because of this issue. A quick search of my registry found nothing interesting for port 1414. What are socket access permissions and how can I correct mine to allow access?

    Read the article

  • How to create a password-less service account in AD?

    - by Andrew White
    Is it possible to create domain accounts that can only be accessed via a domain administrator or similar access? The goal is to create domain users that have certain network access based on their task, but these users are only meant for automated jobs. As such, they don't need passwords, and a domain admin can always do a run-as to drop down to the correct user to run the job. No password means no chance of someone guessing it or it being written down or lost. This may belong on SuperUser or ServerFault, but I am going to try here first since it's on the fuzzy border to me. I am also open to constructive alternatives.

    Read the article

  • Running a bash script from an HTML link or button

    - by Andrew
    I have a webserver that's hosting lots of images. I want the client to be able to press a button or a link, which will run a bash script, which will create a video based on all these pictures. The script I'm trying to run is this:

        #!/bin/bash
        # cd to the directory
        cd /var/www/gallery
        # use ffmpeg to make video
        ffmpeg -pattern_type glob -i 'img-*jpg' -r 1 video.mp4
        # Take the first file in the directory and name it video.mp4.jpg (for thumbnail)
        cp `ls | sort -n | head -1` video.mp4.jpg

    The script is located on the server. So when the client clicks the link or button, the script will run, and the video is created. I've tried both solutions listed here but I can't seem to get it to work. I have php installed on my server.

    Read the article
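
    One frequent stumbling block with this kind of setup is that the web server user cannot execute the script or write into the gallery directory. A sketch of the permissions side, assuming the script is saved as /var/www/gallery/make_video.sh and Apache/PHP runs as www-data (both names are assumptions, not taken from the question):

        # let the web server user run the script and write the output files
        chmod +x /var/www/gallery/make_video.sh
        chown www-data:www-data /var/www/gallery
        # from PHP the script could then be invoked with something like:
        #   shell_exec('/var/www/gallery/make_video.sh 2>&1');

    Using the full path to ffmpeg inside the script (e.g. /usr/bin/ffmpeg) also avoids PATH differences between an interactive shell and the web server environment.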

  • Can I list file names (or their parent directories) that were recently deleted using rm in OS X?

    - by Andrew Grimm
    Is it possible to find out which files and directories have recently been deleted by rm in OS X? Or failing that, is it possible to find which parent directories have had files or directories within it deleted? The OS version is Snow Leopard. Background: Last night, rvm (ruby version manager) did rm -rf of the ~/ruby directory from the home directory. (This bug has since been fixed) Ideally, I'd like to know what files within the ~/ruby directory were deleted, but failing that, I'd like to know if rvm deleted anything outside of ~/ruby . In case anyone's wondering about backups...: Just about everything within ~/ruby is a git project that has a remote repo, and I have a fairly recent Time Machine backup (only 20 days old).

    Read the article
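
    Short of filesystem forensics, the deleted names can really only be recovered by comparing against the Time Machine backup. A rough sketch, assuming the backup volume is mounted at /Volumes/Time Machine Backups and the machine appears there as MyMac (both paths are placeholders):

        # list entries that exist in the latest snapshot but are gone from the live home directory
        diff -rq "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/$USER" \
                 "$HOME" | grep '^Only in /Volumes'

    Anything reported as only in the backup was deleted (or moved) at some point in the 20 days since that snapshot, which at least bounds what rvm could have removed.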

  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u+MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure.

    I've been trying to import SQL files into the database using phpMyAdmin to read them from a directory on my server. phpMyAdmin lists the files fine in the drop down, but returns a "File could not be read" error when actually trying to import. Furthermore, when trying to execute file_get_contents(); on the file, it also returns a "failed to open stream: Permission denied" error. In fact, when my brother was attempting to import the SQL files using MySQL "SOURCE" as an authenticated MySQL user with ALL PRIVILEGES, he was getting an error reading the file. It seems that we are unable to read/import these files with ANY method other than root under SSH (although I can't say I've tried every possible method). I have never had this issue under regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration.

    Some things I've tried after searching Google and StackExchange:

    - CHMOD 0777 on both the directory and the files
    - CHOWN to root and to apache (the only two users I can think of that PHP would use)
    - Importing SQL files with total size under both upload_max_filesize and post_max_size
    - PHP open_basedir commented out, or = "/var/www" (my sites are using Apache VirtualHosts within that directory, and all the SQL files are deep within that directory)
    - PHP safe mode is OFF (it was never ON)

    At the moment I have solved this issue with the smaller files by using the FILE UPLOAD method directly to phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!

    Read the article
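
    Given that ownership, mode, open_basedir and safe mode have already been ruled out, one plausible culprit on a RedHat-family image is SELinux labelling, which can block httpd from reading files even when classic permissions allow it, while an unconfined root shell over SSH is unaffected. A sketch of how that could be checked (the dump directory path is illustrative):

        # see whether SELinux is enforcing and how the SQL dumps are labelled
        getenforce
        ls -Z /var/www/sqldumps
        # if it is enforcing, relabel the directory so the web server is allowed to read it
        chcon -R -t httpd_sys_content_t /var/www/sqldumps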

  • Can't ping through default gateway

    - by Andrew G.H.
    I have the following configuration:

    Routing table on M3 is:

        Destination     Gateway         Genmask          Flags  MSS  Window  irtt  Iface
        0.0.0.0         192.168.2.1     0.0.0.0          UG     0    0       0     eth1
        192.168.2.0     0.0.0.0         255.255.255.0    U      0    0       0     eth1
        192.168.3.0     0.0.0.0         255.255.255.192  U      0    0       0     eth0

    Routing table on M1 is:

        Destination     Gateway         Genmask          Flags  MSS  Window  irtt  Iface
        0.0.0.0         192.168.0.1     0.0.0.0          UG     0    0       0     eth0
        169.254.0.0     0.0.0.0         255.255.0.0      U      0    0       0     eth1
        192.168.0.0     0.0.0.0         255.255.255.0    U      0    0       0     eth0
        192.168.2.0     0.0.0.0         255.255.255.0    U      0    0       0     eth1

    So basically M3's gateway is M1, and M1's gateway is M2's wireless internet interface. If I ping 8.8.8.8 from M1, everything is ok, replies are received. Pinging from M1 to M3 and vice versa is also possible. I have configured M1 as a gateway traffic forwarder using the firestarter package and stopped the firewall with it. iptables policies are ACCEPT for everything.

    Problem: I have tried pinging IP 8.8.8.8 from M3 but without success. What could be the source of this problem?

    Read the article
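
    Since M1 reaches 8.8.8.8 but M3 does not, the usual suspects are packet forwarding on M1 and the fact that the upstream router has no route back to the 192.168.2.0/24 and 192.168.3.0/26 networks. A sketch of what could be checked or tried on M1 (masquerading is one possible remedy, not necessarily what the original setup intends):

        # confirm the kernel is allowed to forward between eth1 and eth0 (1 means enabled)
        sysctl net.ipv4.ip_forward
        sysctl -w net.ipv4.ip_forward=1
        # hide the downstream networks behind M1's own address so replies can find their way back
        iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE
        iptables -t nat -A POSTROUTING -s 192.168.3.0/26 -o eth0 -j MASQUERADE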

  • Cannot load from raid with grub

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS box, and my /dev/sda HDD was replaced several days ago. I used these commands to replace it:

        # go to superuser
        sudo bash
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded"

        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shutdown computer
        shutdown now

        # physically replace old disk by new
        # start system again

        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state
        mdadm -Q -D /dev/md0
        # State should be "clean, degraded, recovering"
        # to see status you can use
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. As a result:

    1) "update-grub" finds only the /sda and /sdb Linux installs, not /md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0"

    I cannot boot my system except from /sdb1 and /sda1, and even then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? It is giving me a big headache.

    Read the article
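
    For what it's worth, with GRUB on a mirrored pair the boot loader is normally written to the MBR of each physical member rather than to the md device, since /dev/md0 itself carries no partition table. A sketch of the commonly suggested steps, run from the working (even degraded) system:

        # install GRUB onto both underlying disks so either one can boot the array
        grub-install /dev/sda
        grub-install /dev/sdb
        update-grub
        # alternatively: dpkg-reconfigure grub-pc and tick sda and sdb instead of md0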

  • Ubuntu Pound Reverse Proxy Load Balancing Based off active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work okay, except that it randomly assigns the backend server to forward each request to. I've put one backend machine under so much load that it went into using swap, and I can't even ssh into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine. However, it doesn't. I've read the man page and it seems like the directive "DynScale 1" is what would monitor this, but it still redirects to the overloaded server. I've also put "HAport 22" on the backend, figuring that since I can't ssh in, neither could the load balancer, and it would consider the backend server dead until it gets rid of the load and responds, but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

        ######################################################################
        ## global options:

        User        "www-data"
        Group       "www-data"
        #RootJail   "/chroot/pound"

        ## Logging: (goes to syslog by default)
        ##  0   no logging
        ##  1   normal
        ##  2   extended
        ##  3   Apache-style (common log format)
        LogLevel 3

        ## check backend every X secs:
        Alive 5
        DynScale 1
        Client 1200
        TimeOut 1500

        # poundctl control socket
        Control "/var/run/pound/poundctl.socket"

        ######################################################################
        ## listen, redirect and ... to:

        ## redirect all requests on port 80 to SSL
        ListenHTTP
            Address 192.168.1.XX
            Port 80
            Service
                Redirect "https://xxx.com/"
            End
        End

        ListenHTTPS
            Address 192.168.1.XX
            Port 443
            Cert "/files/www.xxx.com.pem"
            Service
                BackEnd
                    Address 192.168.1.1
                    Port 80
                    HAport 22
                End
                BackEnd
                    Address 192.168.1.2
                    Port 80
                    HAport 22
                End
            End
        End

    Read the article

  • SUSE Linux and Xen on Mac Pro - How best to prepare and configure?

    - by Andrew J. Brehm
    This is a long-winded question, so please bear with me. I have a 2009 Mac Pro with two CPUs and 8 GB of memory, which is totally overpowered for Mac OS X. I am also in the process of slowly moving away from Mac OS X as my main platform. Since the Mac Pro is really new and nice, I have finally decided to use it for another platform. I am familiar with Linux and SUSE Linux. Ultimately I want to run some version of SUSE Linux (recommend one; it doesn't have to be free as in no money) and Xen. Here are the individual questions:

    - Which version of SUSE Linux should I use, and how do I install it on a Mac Pro? Note that the distribution must come with usable Xen. I am willing to pay.
    - I assume Xen will work on my computer (it has VT support etc.). Is my assumption correct? I want to run Windows 7 and another instance of SUSE Linux under Xen.
    - Is it possible to run Mac OS X Server under Xen (on a Mac Pro)?
    - Which email client under Linux supports IMAP and is best-suited for integrating with MobileMe?
    - Does SUSE Linux support the ATI Radeon HD 4870 and the Apple Cinema Display's 1920 x 1200 resolution?
    - What else should I take into account?

    Read the article
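
    One of the sub-questions, whether the CPUs expose hardware virtualisation, can be checked from any Linux live system before committing to a distribution. A quick sketch:

        # a non-zero count means the CPUs advertise Intel VT-x (vmx) or AMD-V (svm),
        # which Xen needs for fully virtualised guests such as Windows 7
        egrep -c '(vmx|svm)' /proc/cpuinfo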

  • How to set the subversion repository root in Debian?

    - by Andrew Whitehouse
    I have just switched from an old Fedora Core server to Debian Linux v5.0.4. Having migrated the old repository and configured access through svn+ssh, I now want to be able to access the repository with the same path on the client as before. On Fedora you could specify the repository root with "svnserve -r <root>", but having checked the config files and svnadmin options I'm stuck as to how I can do this on Debian. Is there a way to set the repository root in Debian?

    Read the article
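
    With svn+ssh the svnserve process is spawned by sshd for each connection, so options meant for a standalone daemon never apply. One commonly used workaround is a small wrapper earlier in the PATH that injects -r; the repository path below is a placeholder:

        # (as root) wrap the real svnserve so every svn+ssh session gets the same repository root
        {
          echo '#!/bin/sh'
          echo 'exec /usr/bin/svnserve -r /var/lib/svn "$@"'
        } > /usr/local/bin/svnserve
        chmod +x /usr/local/bin/svnserve

    This relies on /usr/local/bin preceding /usr/bin in the PATH that ssh commands receive; forcing a command in the user's authorized_keys file is another route.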

  • Generate a limited amount of random network traffic between 2 hosts

    - by Andrew S
    I'm trying to find a utility that will allow me to generate a constant flow of random network traffic at a specified rate between 2 hosts. The utility needs to run on Windows and OSX. I've tried iperf but it seems to be more oriented toward short-term testing/statistics and it really taxes the CPU even at slower rates. I want something that will generate traffic for a few weeks at say 10Mbps while I use other tools to monitor the impact of that level of traffic on the network.

    Read the article
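
    For what it's worth, iperf can be pinned to a fixed rate and a long duration, which may make it usable here after all. A sketch assuming iperf 2.x on both ends (builds exist for Windows and OS X):

        # on the receiving host
        iperf -s -u
        # on the sending host: 10 Mbit/s of UDP traffic for 14 days (duration is in seconds)
        iperf -c receiver.example.com -u -b 10M -t 1209600

    In UDP mode the sender paces itself to the requested bandwidth, so the CPU cost is far lower than a TCP test that runs as fast as it can.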

  • How to install rmagick on Ubuntu 10.04?

    - by Andrew
    Here's what I've done so far:

        sudo apt-get install imagemagick libmagickcore-dev

    This did not throw any errors, so I think that ImageMagick is installed fine. Then I tried installing the gem:

        sudo gem install rmagick

    This resulted in the following error:

        ERROR:  Error installing rmagick:
            ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for Ruby version >= 1.8.5... yes
        checking for gcc... yes
        checking for Magick-config... yes
        checking for ImageMagick version >= 6.4.9... yes
        checking for HDRI disabled version of ImageMagick... yes
        checking for stdint.h... yes
        checking for sys/types.h... yes
        checking for wand/MagickWand.h... no
        Can't install RMagick 2.13.1. Can't find MagickWand.h.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.8

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.13.1/ext/RMagick/gem_make.out

    What do I need to do to install rmagick on Ubuntu 10.04?

    Read the article
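
    The check that fails is for wand/MagickWand.h, which on Ubuntu is shipped in the MagickWand development package rather than in libmagickcore-dev. A sketch of the usual fix (package name as it appears in the 10.04 repositories, to the best of my knowledge):

        # install the header extconf.rb is looking for, then retry the gem build
        sudo apt-get install libmagickwand-dev
        sudo gem install rmagick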

  • What is a good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai.

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP header:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering if this is really the fastest solution to power the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high, and during a quick investigation I found that they use GeoIP to resolve addresses based on WHOIS, which is not accurate; because of that, from many places the ping is above 300ms (for example, if the ISP is in balgladore and the request is routed to bangladore, even if it's 300ms, for a period of 1 month). By contrast, just using Amazon Web Services with Route 53 Anycast DNS servers and only 4 EC2 instances, it seems that India, for example, is always below 100ms, while with Akamai it goes above 300ms in some cases; this is because Route 53 is using BGP. By quickly checking Akamai, it seems that they are not getting feedback from the traffic - the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from the requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / Anycast DNS seems to be much more reliable, as does EdgeCast, which is using BGP, but I don't know how much it costs to deploy a static website that way. Actually, I don't know whether EdgeCast's claims hold up either, because from isolated places there are many errors - so their performance comes at the cost of delivery quality, because of BGP switching routes during transfers of large files. So I was wondering what Akamai is really good for, because they don't seem to show strength in any field as far as I understand it now, except that they offer some software-based WAF on their website; but what I really care about is the core distribution. So the question is: is Akamai really good for videos? For static websites? I have found AWS the most usable so far, with the most consistent ping and stable transfers.

    Read the article

  • Exim: How to turn off DKIM for forwarded mail?

    - by Andrew
    I have DKIM configured in Exim for outgoing mail, as per the documentation. Exim signs all outgoing mail. But some of that outgoing mail is forwarded, thanks to a user's .forward file. This is a problem for me, because some of those messages are spam (my Exim configuration does not do any verification) and I don't want to take responsibility for them. But I can't figure out how to configure Exim not to sign these messages. My configuration is basically the Debian Squeeze default, with a few DKIM_* macros set. I can post more details, but I think seeing any example of conditional DKIM signing would set me right.

    Read the article

  • Does /NOCANDY avoid any adware-related activities with OpenCandy?

    - by Andrew Grimm
    OpenCandy claims that using the /NOCANDY switch with an OpenCandy-affiliated installer allows you to avoid OpenCandy. Should I take their word for it? If not, can anyone independent of OpenCandy and their affiliates verify that /NOCANDY works? Background: I was about to install WinSCP onto a fresh Windows installation and found out that new versions have OpenCandy associated with their installer. For the sake of balance, here's a link to WinSCP's FAQ on OpenCandy. The claim about /NOCANDY working appears on WinSCP's web site, but the same boilerplate appears on other OpenCandy web sites. If the OpenCandy people are offended by the tag "spyware": sorry, but it's the main tag here, rather than "adware".

    Read the article

  • Replacing files in a folder structure with files from an unsorted folder

    - by Andrew
    I have over 50,000 PDFs organized into folders in a directory called PDFACT. I needed to compress these files, so I ran them through Adobe to batch-compress them and this worked—except Adobe could only output the files without their folder structure. So basically I had 50,000 PDFs set up in a folder with hundreds of subfolders, and everything was organized. I ended up with one folder with 50,000 compressed PDFs in it, just in alphabetical order. Somehow I need to replace all the original PDFs with their compressed copies. Let me give an example. In the folder PDFACT we have the following file:

        C:\PDFACT\BIG DINNER\BILL\NEWESTBILL.PDF

    … and in the output folder that Adobe created we have just:

        C:\COMPRESSED_PDF_FOLDER\NEWESTBILL.PDF

    This copy is smaller than the one in PDFACT and has the same name, but it is just lumped in with every other PDF. The folder structure and subfolders are gone. Is there any way to replace all the larger uncompressed PDFs inside the original folder structure with their now-compressed counterparts?

    Read the article
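
    A sketch of one way to push the compressed copies back into place by matching file names, assuming a Unix-style shell such as Cygwin or Git Bash is available on the machine and that file names are unique across the whole tree (the /c/... paths mirror the example above and depend on how the shell mounts the C: drive):

        # for every PDF in the original tree, overwrite it with the same-named compressed copy if one exists
        find /c/PDFACT -type f -iname '*.pdf' | while IFS= read -r original; do
            compressed="/c/COMPRESSED_PDF_FOLDER/$(basename "$original")"
            [ -f "$compressed" ] && cp -f "$compressed" "$original"
        done

    If the same file name occurs in more than one subfolder, the single compressed copy would overwrite all of them, so that case is worth checking for before running anything destructive.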

  • Why can't Win7 remember my tether? New Connection Setup every time

    - by Andrew Heath
    In Win7, when you connect to a new network you're prompted to set it as a Home, Work, or Public connection. Presumably this influences default security settings. That's all well and good, except that every time I USB-tether my smartphone, Win7 comes up with a new dialogue. I think I'm up to generic connection 25 or something? It's getting ridiculous. Is there any way to get Win7 to remember the phone tether as ONE connection only?

    Read the article

  • Hosting online with xampp?

    - by Andrew
    I'm not quite sure what I'm doing wrong, because from what I've read, this should all be working. What I've done:

    - Forwarded ports 80, 8080, and 443.
    - Changed the ServerName localhost:80 line in \apache\conf\httpd.conf to ServerName myip:80.
    - Registered at dyndns.com, and have been using their update client to link my IP to the DNS thingy.
    - Made sure xampp was using port 80, and started apache and MySql.

    And...nothing. What did I miss? =/

    Read the article

  • Easy way to restrict permissions in an elementary school computer lab?

    - by Andrew
    I'm putting together an elementary school computer lab. I have nine WinXP Pro machines that are not networked and do not have internet access (no money to do either). I've created separate student and admin accounts, and have the students set as limited users. However, I'm interested in further restricting their permissions. I want to make it such that they cannot:

    - delete any files, even just from their own profile
    - rename any files
    - move around the icons on the desktop
    - change any display settings
    - access a USB device without a password (they bring in their own from home which are chock-full of viruses)

    Oh, one last thing: they still have to be able to save Word documents. Is this even possible? I can download software, but, like I said: no internet, no server.

    Read the article

  • Stop Windows Automatic Update in boot menu or start up-sequence

    - by Andrew
    My girlfriend has a corrupted hard drive running Windows Vista. She is getting a new hard drive and has also purchased an external hard drive to back up her data. However, Windows downloaded an automatic update and keeps getting held up and restarting when it tries to apply the update. Is there a way she can disable this from the boot menu or start-up sequence?

    Read the article
