Search Results

Search found 3797 results on 152 pages for 'will love'.

Page 111/152

  • New Static Website with Hosted DNS alternating 502, 503 and Page Does Not Exist Errors

    - by Dave
    This has become an increasingly frustrating ordeal. I'm mostly a web developer, so forgive me if I am using improper terminology here. I have a client that had purchased a domain at JustHost. We built him a website and have it on our own server space. Now, I'm mostly used to dealing with GoDaddy, where it is simple enough to manage DNS records and point the A record to our server IP, and Apache on our end deals with the domains via name-based virtual hosts. But for some reason, in setting this up with JustHost, when attempting to go to the domain name, I either get a 502 or 503 error or "webpage does not exist". Now, I know that the basic functionality of the webpage must be working because I can access the index etc. straight through my server's www data (i.e. [server-ip]/website_folder). I was on the phone with JustHost technical support for over three hours yesterday, and the best I could get was "That's really weird..." I've checked my logs and there doesn't seem to be anything coming through to my end. Does anybody have an idea of what's going on here? I would love for it to be a problem on my end, because JustHost doesn't seem capable of helping further. Any help is greatly appreciated, thanks. I forgot to mention that we have several other sites up and running and completely accessible.
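
    Since the symptoms point at where the name resolves rather than at Apache, a quick diagnostic sketch is to ask a public resolver directly (the domain below is a placeholder for the client's actual domain):

        # What does the A record actually resolve to, and which name servers are authoritative?
        dig +short clientdomain.com A @8.8.8.8
        dig +short clientdomain.com NS

    If the A record does not come back as the web server's IP, the problem is still on the JustHost DNS side; if it does, the 502/503 responses are being generated by whatever is answering on that IP.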

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions out there that seem exactly like this. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: what do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed with the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers indicate Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx, so I've exhausted everything I can think of or have read. Thanks.
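
    Two hedged checks that can narrow this down from the shell (assuming the site answers on localhost; adjust the URL as needed):

        # gzip support is compiled in unless nginx was built with --without-http_gzip_module;
        # no output here means the module is present
        nginx -V 2>&1 | grep -o 'without-http_gzip_module'

        # Fetch a page with gzip accepted and dump only the response headers
        curl -s -H 'Accept-Encoding: gzip' -D - -o /dev/null http://localhost/

    One thing worth knowing: out of the box nginx only compresses text/html, so CSS, JavaScript and JSON responses stay uncompressed until something like gzip_types text/css application/javascript application/json; is added to the http block. That alone is often why PageSpeed and YSlow keep complaining.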

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
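
    Before committing to dual NICs, one hedged way to ground the decision is to measure what a single gigabit link actually sustains with several clients hitting it at once (the hostname is a placeholder; iperf must be installed on both ends):

        # On the NAS
        iperf -s

        # On a client: 4 parallel streams for 30 seconds, roughly simulating concurrent readers
        iperf -c nas.local -P 4 -t 30

    If the aggregate sits near wire speed while the RAID 10 array still has headroom, bonding (or a second NIC) buys something; if the drives top out first, the single port is not the bottleneck.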

  • Tools to manage sql 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and management of those 20 DBs onto this new mirrored environment easily. There are many steps in setting each DB up and I want to automate as much as possible. Edit: Here are the steps I've been doing manually:

        1. Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server. Then sync those users/passwords onto the other SQL 2008 server with the same SIDs, so when we do the DB backup and restore they match up.
        2. Take a backup of each SQL 2000 DB and copy it to server A.
        3. Restore the backup to server A.
        4. Back up from server A, copy to server B, restore there.
        5. Run the mirror "configure security" wizard.
        6. Start mirroring.

    I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
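
    A minimal sketch of how steps 2-4 could be scripted with sqlcmd, assuming it is on the PATH and can reach all three instances; the server names, database name, and paths are placeholders, a WITH MOVE clause may be needed if data/log paths differ between the old and new servers, and the copy of the mirror must be left non-recovered before the mirroring wizard will accept it:

        # Steps 2-3: back up each SQL 2000 database, copy the .bak to server A, restore it there
        sqlcmd -S OLDSQL2000 -Q "BACKUP DATABASE [LegacyDb] TO DISK='C:\backups\LegacyDb.bak' WITH INIT"
        # ...copy C:\backups\LegacyDb.bak from the old server to server A by your usual means...
        sqlcmd -S SERVERA -Q "RESTORE DATABASE [LegacyDb] FROM DISK='C:\backups\LegacyDb.bak' WITH RECOVERY"

        # Step 4: back up from server A, copy to server B, restore there WITH NORECOVERY (ready for mirroring)
        sqlcmd -S SERVERA -Q "BACKUP DATABASE [LegacyDb] TO DISK='C:\backups\LegacyDb_A.bak' WITH INIT"
        sqlcmd -S SERVERB -Q "RESTORE DATABASE [LegacyDb] FROM DISK='C:\backups\LegacyDb_A.bak' WITH NORECOVERY"

    Wrapping those lines in a loop over the 20 database names covers the bulk of the repetition; the login/SID sync and the mirroring wizard steps still need their own scripting.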

  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that we ended up not using after the first reinstall of the machines, simply by not joining them back into the domain. Last summer, before the annual reinstall for shipping machines to the summer school, I toyed with the idea of installing Windows 7 over the network instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected Windows to be more friendly to PXE installation out of the box. What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX-based system as well. I've had some success by preparing an ISO using the Windows Deployment Kit and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe, nor could I get Samba to work properly; studying the material ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it outside. WDK didn't make things easier by mounting the disk image into RAM and writing it back in its entirety when done with it, making me a very sad boy. I've recently found a different approach, too, that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs. So my question boils down to the following. What exact approach do you use for netbooting the Windows 7 setup? How can the Windows 7 setup best be customized to be completely unattended, including installation on a specific system partition without destroying the data partition, creation of a passworded admin and a default user, choice of MAC-address-based hostname, and joining a domain? As much detail as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
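
    For reference, a rough sketch of the memdisk-style menu entry described above, written as a shell heredoc against an assumed tftp root (paths and labels are placeholders; newer WinPE/wimboot-based approaches avoid loading the whole ISO into RAM):

        # Assumes a PXELINUX tftp root at /srv/tftp with menu.c32, memdisk, and the installer ISO in place
        cat > /srv/tftp/pxelinux.cfg/default <<'EOF'
        UI menu.c32
        LABEL win7
          MENU LABEL Install Windows 7
          KERNEL memdisk
          APPEND iso raw
          INITRD images/win7-setup.iso
        EOF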

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content - no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content. That apart, the confusion begins when it comes to choosing a web server for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux stuff here). So, which to choose?

        1. I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
        2. Lighttpd's development has evidently gotten slower than it was a good time ago.
        3. The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should either go with Nginx or Cherokee. I would love to see someone clarify these:

        - Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime/restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
        - If the answer to the question above is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface + really good documentation. Yes?

    I could be wrong, I could be right. I put down what I know so that you can offer the most relevant advice. Pardon if anything I've said is offensive.
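
    Rather than settling the speed question on reputation alone, a hedged sketch of a like-for-like test with ApacheBench against the same static file served by each candidate (the host and path are placeholders):

        # 10,000 requests, 100 concurrent, against one representative static file
        ab -n 10000 -c 100 http://test-box.local/static/sample.jpg

    Running the identical command against Nginx and Cherokee on the same hardware and comparing the requests/sec, transfer rate, and latency percentiles that ab prints is usually more convincing than any benchmark published by either project.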

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that. There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

        1. Transport security via SSL to the bucket
        2. Encryption of bucket contents
        3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't state it, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one. Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!
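
    A hedged sketch of what requirements 1 and 2 look like when stitched together by hand with gpg and an S3 command-line client (the bucket, file names, and shared passphrase file are placeholders; requirement 3 still needs a sync layer on top):

        # Encrypt with a shared symmetric passphrase that every client holds locally
        gpg --batch --yes --symmetric --cipher-algo AES256 \
            --passphrase-file ~/.s3share-passphrase -o report.xlsx.gpg report.xlsx

        # Upload over TLS (the AWS CLI talks to the HTTPS endpoints by default)
        aws s3 cp report.xlsx.gpg s3://example-team-bucket/shared/report.xlsx.gpg

    Pulling changes back down is the reverse; running "aws s3 sync" in both directions gets close to requirement 3, but without conflict handling it is not a real DropBox replacement on its own.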

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc.). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info:

        - OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost)
        - The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere)
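
    A quick hedged way to confirm the mismatch in one place (the access log path is a placeholder for whatever DreamHost exposes):

        # Status code the client actually receives
        curl -s -o /dev/null -w '%{http_code}\n' http://www.borngeek.com/nothere/

        # Status code Apache recorded for that same request
        tail -n 1 /path/to/access.log

    If the first command prints 404 while the fresh log line says 200, the discrepancy is confirmed to be in how Apache logs the internally rewritten request rather than in what WordPress sends back.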

  • Internet Sharing on Lion breaks my routing table

    - by seaders
    When in the office, I'm connected to a 192.168.1.0/24 network. When Internet Sharing is off and I run netstat -nr, the first entry shows:

        default            192.168.1.254      UGSc       10       62     en0

    If I turn Internet Sharing on, it shows:

        default            link#5             UCS         2        0     en1

    This is obviously incorrect and breaks all connectivity on my machine. en1 is my wireless, whereas en0 is my ethernet. If I then disable Internet Sharing, it even deletes that incorrect route, so I'm left with no default route at all. Currently I have one script that I run when I share, or after I disable sharing, that does:

        route delete default
        route add default 192.168.1.254

    That fixes everything, but I'd love to know what's actually making this happen and how to properly fix it. And just to say that at some point a few months ago, this was working absolutely perfectly, with no hitches; then one day when I brought the laptop home, I couldn't disable the Internet Sharing, so I couldn't connect to my home wifi. I eventually had to restart the machine, and since then this problem has been happening. Any help would be greatly appreciated, thank you.
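
    A small hedged sketch for inspecting and, if needed, restoring the route in one step (the gateway address comes from the question; adjust for other networks):

        # Show what macOS currently considers the default route
        route -n get default

        # If the office gateway is no longer the default, put it back
        netstat -nr | grep -q '^default[[:space:]]*192\.168\.1\.254' || {
            sudo route delete default
            sudo route add default 192.168.1.254
        }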

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:

        1. Host the .crx on a server and click "Specify a Custom App" in the management console.
        2. Create a Domain App by uploading a zip to the Chrome Web Store.
        3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.

    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them! Option (2), having the school create a domain app, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would also require a similar process, so this doesn't seem ideal. I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity. Has anyone had success with hosting the extension and using the Specify a Custom App feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!
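
    For option (1), one hedged check is to look at exactly what the server hands back for the .crx, since a redirect or an unexpected Content-Type on the hosted file is a common silent failure mode (the URL is a placeholder):

        curl -sIL https://extensions.example-school.org/extension-a.crx | \
            grep -iE '^(HTTP/|Content-Type|Location)'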

  • Is there a historical computer peripherals or accessories museum or even just a current list?

    - by zimmer62
    Thinking about all the unique and different peripherals I've owned over the years, from ISA capture cards to parallel-port-controlled shutter glasses for 3D games, I've seen many, many accessories and computer peripherals come and go. The nostalgia of these things is a lot of fun. I tried to find some sort of historical timeline or list, but what mostly turned up was the computers themselves. I'm more interested in the mice, the scanners, the weird adapters that shouldn't exist, short-run very rare products, strange devices from computer shows in the 80's and 90's... hardware you might find in a geek's basement that would be completely useless now, but was the coolest thing around when it was new. An example would be a drawing tablet I had for my TI-99 computer, or the audio tape player accessory for a C64 which let you save files to audio tapes, or an ISA card that did the same for PCs hooked up to a VCR. Remember that IBM PCjr upgrade kit that added a floppy drive, more memory and the AT switch in the back? I'd love to find either a wiki or a list that has already been assembled which contains many of these weird (or common) accessories. I've had so many over the years, I suppose I could start a wiki here if such a list doesn't already exist.

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

        - Most large databases require a team of DBAs in order to scale. They constantly struggle with limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.).
        - The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
        - Finally, we got several companies like Intel, Samsung, FusionIO, etc. that just started selling extremely fast yet affordable SSD hard drives based on SLC Flash technology. These drives are 100 times faster in random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB).

    So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
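
    For sharing comparable numbers, a hedged sketch of measuring the random-write claim on a candidate drive with fio (the mount point is a placeholder, fio must be installed, and the test file is scratch data):

        # 4k random writes, direct I/O, queue depth 32, 60 seconds, against a scratch file on the SSD
        fio --name=randwrite --filename=/mnt/ssd/fio-test --size=4G \
            --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
            --direct=1 --runtime=60 --time_based --group_reporting

    The IOPS figure fio reports maps directly onto the "random writes per second" numbers quoted by the vendors, which makes it easy to see how much of the marketing claim survives a real filesystem.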

  • organizing my music and my itunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes Library, at least 5k with ratings and play counts, apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... Well, a big mess with no tags! Probably there's no single software capable of just organizing everything, though I'd love one. Hopefully some time in the near future we all will be able to just sync the cloud of our automagically selected music to the newly created offline copy. But meanwhile... Please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of the iPhone and smart playlists. Not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and ending up going song by song.

  • ACER ASPIRE V3-571G-9435 Fan not kicking in, leading to overheating and throttling

    - by brythespy
    This laptop has always had this problem. The temperatures kick up to the thermal ceiling of 99C for the CPU (i7-3610QM) and 94C for the GPU (GT 640M). Problem is, the fan doesn't give a damn. It's actually QUIETER when the temperatures are that high than when it's at 60C or so. I figured it was a problem with the BIOS, so I updated that; no change. So maybe it was a problem with Windows? Nope, same result when gaming on Ubuntu. The major problem of this is that after gaming for ten minutes the CPU throttles itself to 1197MHz (as opposed to 3193MHz), and the GPU goes down to 135MHz (as opposed to 843MHz). The problem is that the fan won't kick in like I know it can, because when the laptop is in POST, like at BIOS setup, the fan is so loud it's like a vacuum cleaner! I don't really care about noise, so I'd love to have the fan like that all the time as long as the temperatures don't fly through the roof... So, things I've tried so far, to avoid possible duplicate answers:

        - Checked for dust: It's been this way since the laptop was new, and I've since taken it apart. No dust buildup.
        - Background stuff running? No, the problem persists across OSes, and it happens while gaming anyway.
        - Manually underclocking both CPU/GPU: Using Windows, I can force the CPU to stay at 1.1GHz, but the temperature STILL easily hits 99C after 5 minutes of gaming.
        - Contacted Acer support? No help at all. They told me to update and reset the BIOS, which I have done multiple times. There are only about 6 changeable things anyway, none of which should affect the fan control.
        - Third-party fan control program? None detect the fan.

    So, I'm screwed until I can afford to replace this laptop, but I am very satisfied with performance in games... whenever the CPU/GPU aren't being throttled. Anyone that can offer advice to solve this problem would be greatly appreciated. Hell, if you solved my problem I'd send you some monies through PayPal.
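
    Since the behaviour reproduces on Ubuntu, a hedged way to capture evidence of what happens under load (assumes the lm-sensors package is installed and sensors-detect has been run once):

        # Refresh temperatures, any exposed fan RPM, and per-core clocks every second while gaming
        watch -n 1 'sensors; grep "cpu MHz" /proc/cpuinfo'

    A log of temperatures climbing to the ceiling while the reported fan speed never changes is also useful ammunition if the unit ends up going back to Acer under warranty.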

  • College network - can I point non-domain student computers to our SUS server?

    - by Joel Coel
    Since I started here 3 months ago, one of the things that's really bothered me about the way this network is set up is something that shows up on the daily bandwidth consumption report. I get a list of top-visited sites by hits and by size, and invariably the top site (to the point that it's bigger than all the other top sites combined) is au.download.windowsupdate.com. We're pulling in ~30GB/day in Windows updates. This is every day, not just after a patch Tuesday. After a patch day, it jumps closer to 40GB for a couple of days. The key here is that almost none of it is from machines that I'm responsible for. My machines are for the most part fully patched, and when they're not, they'll pull from a SUS server, so new updates are downloaded only once. It used to be closer to 50GB/day because most of the machines in our computer labs use DeepFreeze and weren't applying updates correctly, but that's fixed now. So the problem is definitely student-owned machines in the dorms, some of which are re-downloading the same updates in the background each day, over and over. I'd love to have these machines start pulling from our SUS server. Then, if they don't ever actually install them, at least they're not leeching bandwidth from our public internet connection. Any ideas on how to resolve the situation?

  • Arch Linux: How to handle patches which only you will use?

    - by user12932
    I'm using freerdp together with xmonad, and it has been giving me a lot of trouble. The super key (or "Windows key") is my mod key in xmonad, and it has been interfering with my freerdp usage rather annoyingly. Whenever I switched workspaces (or did anything else in xmonad involving the super key), Windows (controlled by the freerdp instance in focus) registered a keypress as well. This event, combined with the loss of focus, got the super key stuck in Windows indefinitely: pressing the keys d and r would first show my desktop, then open the Run dialog (as if I were pressing the Windows key constantly). I've tried several versions of freerdp, but all exhibited this annoying behavior. So I resorted to patching freerdp myself to just ignore the left super key on my keyboard. I love free software for a lot of reasons (especially the ability to alter things like this myself); however, I still find it annoying to patch and rebuild freerdp on every version (and dependency) change. How do you deal with situations like this? Is there even a "right way" to resolve this issue?
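
    On Arch, the usual way to keep a one-off patch manageable is to carry it in a local PKGBUILD rather than patching an installed tree; a rough sketch, assuming the freerdp packaging files have been checked out locally and the patch file name is a placeholder:

        # Drop the patch next to the PKGBUILD, add it to the source=() array, and apply
        # it in the PKGBUILD's prepare() (or at the top of build() on older makepkg), e.g.:
        #   patch -p1 -d "$srcdir/freerdp-$pkgver" < "$srcdir/ignore-left-super.patch"

        # Then rebuild and install; rerun (or script) this whenever the package updates
        makepkg -si

    It does not remove the rebuild-on-update chore, but it keeps the patch, the build, and the dependency handling in one reproducible place, and pacman still tracks the result as a normal package.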

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts to the rest of the processes. I would prefer it if the process began on a Linux host, since I imagine that it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work that they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OSX servers, too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option. So I guess what I'm asking is if there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!
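
    One hedged possibility that stays off Cygwin is winexe, a psexec-like client from the Samba ecosystem that runs commands on Windows from a Linux host (the domain, host, and script path below are placeholders):

        # Kick off the provisioning script on the Windows Server 2008 box from Linux
        winexe -U 'EXAMPLEDOM\deploy' //winserver01 \
            'powershell.exe -ExecutionPolicy Bypass -File C:\scripts\new-machine.ps1'

    WinRM-based remoting is the other common route; either way the Linux host stays the orchestrator and the Windows box only needs its native tooling.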

  • How to set up daisy-chained routers for separate sub-nets?

    - by joe
    This question seems to be similar to others, but I'll take a shot anyway. A client recently switched ISPs from TDS to Comcast Business Class. Before the switch, they had 5 static IP addresses assigned. Now they'll have a single IP address that will change whenever Comcast decides to change it. The issue is that this internet connection will be shared between two companies, both having (and wanting to keep) their own private subnets. Because TDS was supplying multiple IP addresses to the one location, I was able to put each company's router directly on the switch. Now, with Comcast, they only get one IP address, meaning there has to be a main router in front of the subnet routers. Luckily, the cable modem has a built-in router, which I would like to connect to each company's router, and still have DHCP enabled on all of them. Question: what do I need to do to the subnet routers to keep them separate from each other, but still allow internet access through the main router? I would love to say "I tried this" and give you links, but everything I find on the internet only mentions daisy-chaining routers with DHCP disabled.
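
    A hedged sketch of one common layout: each company's router takes its WAN address from the modem's LAN and runs NAT and DHCP for its own distinct subnet behind it (all addresses below are examples, not a prescription):

        # Modem/main router LAN:  192.168.0.1/24 (DHCP on)
        #   Company A router:  WAN 192.168.0.2, LAN 192.168.1.1/24 (DHCP on for company A)
        #   Company B router:  WAN 192.168.0.3, LAN 192.168.2.1/24 (DHCP on for company B)

        # Optional, if a subnet router exposes iptables (e.g. DD-WRT firmware): explicitly
        # drop traffic headed for the other company's subnet
        iptables -I FORWARD -d 192.168.2.0/24 -j DROP

    Because each subnet sits behind its own NAT, unsolicited traffic from the other company's LAN is already blocked; the explicit rule just makes the separation deliberate rather than incidental.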

  • Looking for suggestions for hosting Windows 2000 Server in the cloud / VPS / etc?

    - by JohnyD
    I have a Windows 2000 Server, currently virtualized in Hyper-V, that I would like to get running off-site as a backup (cloud, VPS, etc.). You can't virtualize in EC2, and I'm fairly certain there are no Server 2000 AMIs floating about (correct me if I'm wrong!). If anyone has a recommendation on how I can get a virtualized Windows 2000 Server running in a secure, remote environment, I would be grateful. As far as locations go, I'd be interested in North America as well as Australia and Europe. In a nutshell, we're ploughing our way out of a legacy codebase and this server is the last that remains of the legacy apps. However, it is still very much used by our clients. Everything is backed up each night (data, images, etc.) to tape, which is then taken offsite. However, in the event of a fire I would love to have a backup legacy server to point DNS records to. So while I am rebuilding from the ashes, our services would already be available. It would save a lot of time and make my managers all the more happy (and that's what it's all about, riighhtt? :D) Thank you all for your suggestions. Please let me know if I've left out any important information. Additional info: the legacy codebase does not function properly on Server 2003.

  • How to prevent Gnome-shell's Alt+Tab from grouping windows from similar apps?

    - by wleoncio
    I love pretty much everything about how Gnome Shell handles app switching through Alt+Tab. My one gripe with it, though, is how it forces the user to use Alt+` to switch between windows of the same app. This is very annoying for me, because now I have to keep in mind whether the last window I was using belonged to the same app as the current window or not. Definitely a nuisance for power users who think in terms of "windows I'm working with" instead of "applications I'm working on". I've tried the AlternateTab extension ( https://extensions.gnome.org/extension/15/alternatetab/ ), but it looks way too ugly to me. Not to mention that in the end all I want is to remap Alt+(key above Tab) to Alt+Tab in this application. I guess one option would be to just tweak Gnome Shell. My guess is that I should tinker with the altTab.js file at /usr/share/gnome-shell/js/ui/, but the file is too long and overwhelming for someone like me who doesn't know JavaScript. Does anyone know how I can make Gnome Shell stop grouping windows by application?
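
    On newer GNOME releases the switcher bindings are exposed as plain gsettings keys, so a hedged alternative to editing altTab.js is to rebind window-by-window switching directly (whether these keys exist depends on the GNOME version installed):

        # Make Alt+Tab cycle through individual windows instead of grouped applications
        gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab']"
        gsettings set org.gnome.desktop.wm.keybindings switch-applications "['<Super>Tab']"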

  • Best grep-like tool

    - by e-satis
    I do in-file search a lot, and I used to love grep. Then I learned of the existence of egrep, so I switched to benefit from the advanced regexps. Then I discovered the Eclipse search tool. Much easier to use than grep. Then I found ack: fast, easy, powerful. And now I use grin, which is smooth for Pythonistas. I know there are also a couple of tools of this kind with a GUI. So what tool do you use, and why do you think it's the best? Practical features generally are:

        - fast to fire up and use;
        - speedy processing;
        - automatically ignores useless files;
        - colored output;
        - outputs lines, filename, context;
        - allows complex regexps;
        - allows custom filtering and output;
        - GUI + command line integration;
        - lets you open an editor from the result set.

    There are some related posts on SO:

        http://stackoverflow.com/questions/87350/what-are-good-grep-tool-for-windows
        http://stackoverflow.com/questions/981601/colorized-grep-viewing-the-entire-file-with-highlighting
        http://stackoverflow.com/questions/1028107/is-there-some-unix-util-that-will-allow-me-to-grep-multiple-files-with-little-type
        http://stackoverflow.com/questions/1027906/unix-find-grep-syntax-vs-awk
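
    For a concrete comparison, a hedged sketch of the same search done with plain GNU grep and with ack (the pattern and paths are placeholders):

        # GNU grep: recursive, line numbers, color, skip binaries, limit to Python files
        grep -rnI --color=auto --include='*.py' 'connect_timeout' src/

        # ack: skips VCS directories and binaries by default, filters by file type
        ack --python 'connect_timeout' src/

    Most of the tools mentioned above differ less in what they can match than in how many of those flags they make the default.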

  • How can I get Windows 8 to automatically disable touch when I am using my Wacom pen and turn it back on when I am not

    - by Robert
    I have an HP convertible tablet computer which I just upgraded to Windows 8. The problem (which existed under Windows 7 as well) is that this tablet has both a capacitive touch screen (with multi-touch) AND a Wacom-type tablet built into the screen that works using electromagnetic resonance with the provided stylus. My use case: most of the time I am happy using my fingers and the touch interface for navigation and whatnot. However, when I want to get down to serious note-taking/drawing, I want to use the Wacom functionality. The problem is that any comfortable writing position has me resting my arm/hand on the screen, which activates the touch technology (despite supposed palm-detection algorithms) and completely screws up my input paradigm. My ideal solution: ideally, since Wacom technology senses when the pen is "close" to the screen, I would love to have touch be automatically disabled whenever the Wacom pen is detected, and turned back on when it is out of range. This would allow me to switch seamlessly between the two input methods, and since I NEVER want to use both at once, it would work perfectly for me. An acceptable alternative: as a next best option, it would be great to be able to turn off the touch functionality (leaving the Wacom in place) whenever I entered specific apps (e.g. OneNote, Photoshop, Gimp, Pencil, etc.) and then have it turn back on when I left that app. A worst case that at least lets me use my PC: if I could create a shortcut (tile or otherwise) that flips touch on and off without going all the way through the nested computer settings, that would be better than nothing. Thanks in advance for the help with one or more of the above.

  • Centos iptables configuration for Wordpress and Gmail smtp

    - by Fabrizio
    Let me start off by saying that I'm a CentOS newbie, so all info, links and suggestions are very welcome! I recently set up a hosted server with CentOS 6 and configured it as a webserver. The websites running on it are nothing special, just some low-traffic projects. I tried to configure the server as default as possible, but I like it to be secure as well (no FTP, custom SSH port). Getting my WordPress to run as desired, I'm running into some connection problems. Two things are not working:

        - installing plugins and updates through ssh2 (failed to connect to localhost:sshportnumber)
        - sending emails from my site using the Gmail SMTP (Failed to connect to server: Permission denied (13))

    I have the feeling that these are both related to the iptables configuration, because I've tried everything else (I think). I tried opening up the firewall to accept traffic for port 465 (Gmail SMTP) and the SSH port (let's say this port is 8000), but both issues remain. SSH connections from the terminal are working fine, though. After each change I tried implementing, I restarted the iptables service. This is my iptables configuration (using vim):

        # Generated by iptables-save v1.4.7 on Sun Jun 1 13:20:20 2014
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -o lo -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 8000 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 465 -j ACCEPT
        COMMIT
        # Completed on Sun Jun 1 13:20:20 2014

    Are there any (obvious) issues with my iptables setup considering the above-mentioned issues? Saying that the firewall is doing exactly nothing in this state is also an answer... And again, if you have any other suggestions for me to increase security (considering the basic things I do with this box), I would love to hear them, also the obvious ones! Thanks!
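
    The rules above leave the OUTPUT policy at ACCEPT, so outbound SMTP isn't blocked by iptables itself; on CentOS, a "Permission denied (13)" from a web application making outbound connections is often SELinux rather than the firewall. A hedged set of checks (adjust the port to the real SSH port):

        # Is anything actually listening on the custom SSH port?
        netstat -tlnp | grep ':8000'

        # Is SELinux enforcing, and is Apache/PHP allowed to make outbound network connections?
        getenforce
        getsebool httpd_can_network_connect

        # If that boolean is off and SELinux turns out to be the culprit, enable it persistently
        setsebool -P httpd_can_network_connect 1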

  • Upgrade CentOS 5 to PHP 5.2 or 5.3 [recommended way?]

    - by solid
    We are using Zend Framework, and in version 2, PHP 5.2 will be the minimum requirement. We love CentOS and we'd like to keep using it, but PHP 5.1 just won't do anymore when developing web applications with Zend Framework. I found several links to solutions that upgrade via external repositories:

        http://serverfault.com/questions/106801/recommended-method-to-upgrade-php-5-1-6-to-5-2-x-on-centos-5
        http://www.webtatic.com/blog/2009/05/installing-php-526-on-centos-5/
        http://www.webtatic.com/blog/2009/06/php-530-on-centos-5/

    We'd like to see another solution using an "official?" CentOS repository, if any is available. We only need to upgrade PHP; the rest of the CentOS setup is fine the way it is. For us, it's important, however, to keep the yum cycle intact using the normal repositories. So in short: is it even possible to upgrade only PHP, by using an external repo or otherwise, while still upgrading all our other packages safely through normal yum usage? Thanks for your help!
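
    One thing worth checking before reaching for third-party repos: from CentOS 5.6 onwards the stock repositories ship a parallel php53 package set, so PHP 5.3 can come through the normal yum channel. A hedged sketch (the old php packages have to be removed first, and the exact module list depends on what is currently installed):

        # See whether the base repos offer the 5.3 package set on this release
        yum list available 'php53*'

        # Swap the old packages for their 5.3 counterparts (adjust the module list to match)
        yum remove php php-cli php-common php-gd php-mysql
        yum install php53 php53-cli php53-common php53-gd php53-mysql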

  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet, there are situations where I can't - especially on machines not under my continuous administration. In this case, I am always faced with the task of using my administrative user to traverse directories, via Windows Explorer, where regular users do not have "read" permissions. The two possible approaches to this problem so far:

        - change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs but just look into the folder's contents
        - use an elevated cmd.exe prompt along with a bunch of command-line utilities - this usually takes a lot of time when browsing through large and/or complex directory structures

    What I would love to see would be a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need for downloading or installing anything), are very welcome, too. I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders - unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token and a re-authentication (and thus the creation of a new token) would not happen when accessing localhost.
