Search Results

Search found 3427 results on 138 pages for 'nerds rule'.


  • Stop Screenlets from minimizing

    - by Capt.Nemo
    I've set up my screenlets exactly the way I want them, but I don't have quick access to any of them. The screenlet config offers me the following: keep above, keep below, sticky, lock, treat as widget. Of these, only "treat as widget" seemed to be of any use here; I looked at it in detail and thought it was what I was looking for. It might have been a workaround for the issue (instead of minimizing, I would just press F9), but it means the widget hides itself from the normal desktop, which is not what I want. What I want is that on pressing Ctrl+Alt+D or Super+D, I should see the desktop with my screenlets still there; I don't want them to minimize with the rest of the windows. As a last resort, I've thought of a solution using Compiz to declare the screenlet windows as non-minimizing, but surely there must be a better way than that. (Instructions for this would be helpful as well; I'm not sure what to enter in the rule matches.)
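    For the Compiz route, a hedged sketch: find what the screenlet windows report as their window class, then match on it in the Window Rules plugin (the "Screenlets" class below is an assumption; verify it with xprop first).

        # Run this, then click a screenlet to see its window class:
        xprop WM_CLASS
        # e.g. WM_CLASS(STRING) = "ClockScreenlet", "Screenlets"

        # In CompizConfig Settings Manager, enable the "Window Rules" plugin
        # and use the reported class in the match for the "Above" or "Sticky"
        # rule, e.g.:
        #   class=Screenlets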

    Read the article

  • Fixed timestep and interpolation question

    - by Eric
    I'm following Glenn Fiedler's excellent Fix Your Timestep! tutorial to step my 2D game. The problem I'm facing is in the interpolation phase at the end. My game has a Tween function which lets me tween properties of my game entities: scale, shear, position, color, rotation, etc. I'm curious how I'd interpolate these values, since there are a lot of them. My first thought is to keep a previous value of every property (colorPrev, scalePrev, etc.) and interpolate between those. Is this the correct method? To interpolate my characters I use their velocity: renderPosition = position + (velocity * interpolation), but I cannot apply that to color, for example. So what is the preferred method to interpolate the various properties of an entity? Is there any rule of thumb to use?
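    For reference, a minimal Python-style sketch of the previous/current snapshot approach from Fiedler's article; the names (Entity, lerp, alpha) are illustrative, not from the question.

        def lerp(a, b, alpha):
            # Linear interpolation between the previous and current value.
            return a + (b - a) * alpha

        class Entity:
            def __init__(self, position, scale, rotation):
                self.position = position
                self.scale = scale
                self.rotation = rotation
                # Snapshots taken before each fixed update:
                self.prev_position = position
                self.prev_scale = scale
                self.prev_rotation = rotation

            def begin_fixed_update(self):
                # Copy current -> previous before the simulation advances.
                self.prev_position = self.position
                self.prev_scale = self.scale
                self.prev_rotation = self.rotation

            def render_state(self, alpha):
                # alpha = accumulator / dt, exactly as in Fix Your Timestep!
                return (lerp(self.prev_position, self.position, alpha),
                        lerp(self.prev_scale, self.scale, alpha),
                        lerp(self.prev_rotation, self.rotation, alpha))

    The same pattern covers every property; colors would be lerped per channel and angles along the shortest arc, but the snapshot bookkeeping is identical.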

    Read the article

  • CSS specificity: Why isn't CSS specificity weight of 10 or more class selectors greater than 1 id selector? [migrated]

    - by ajc
    While going through the CSS specificity concept, I understood that it is calculated in four parts: 1) inline (1000), 2) id (100), 3) class (10), 4) HTML elements (1), and that the CSS rule with the highest value is applied to the corresponding element. I tried the following example, creating more than 10 nested classes:

        <div class="a1">
          ....
          <div class="a13" id="id1"> TEXT COLOR </div>
          ...
        </div>

    with the CSS as:

        .a1 .a2 .a3 .a4 .a5 .a6 .a7 .a8 .a9 .a10 .a11 .a12 .a13 { color : red; }
        #id1 { color: blue; }

    Now, even though there are 13 classes in this case, for a supposed weight of 130, which would be greater than the id's 100, the id still wins (Result - JSFiddle).
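    For reference, specificity is compared per category from left to right rather than summed as a decimal, which is why no number of classes can outrank an id; a short illustration:

        /* Specificity is a tuple, compared column by column:
           #id1               -> (inline 0, id 1, class 0,  element 0)
           .a1 .a2 ... .a13   -> (inline 0, id 0, class 13, element 0)
           The id column is compared first, so 1 id beats 13 classes. */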

    Read the article

  • Is it bad for SEO to have internal redirected links? [closed]

    - by Jonas Lindqvist
    I have a large number of pages with similar but not identical content. Example: site.com/dream_dictionary_flying and site.com/dream_interpretation_flying. The problem is that although they are not identical, they are sometimes on the edge of being duplicate content. The 301 redirect in .htaccess is simple and can be done in a minute, BUT changing all the existing links on the whole site from "/something" to "/something_else" would take ages; it would be thousands of manual changes taking hundreds of hours. My question is this: is it bad for SEO to have internal links that are redirected, or rather, HOW bad is it? For the human user it would not matter at all, but from what I have experienced, the search engines don't like it. Is there any rule of thumb here? Please come back with your thoughts and experience on this. Thanks!
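    For reference, the redirect half really is a one-liner per page in .htaccess (a sketch using the question's example paths):

        Redirect 301 /dream_dictionary_flying /dream_interpretation_flying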

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me!

    Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah, I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of POP email addresses and even an Exchange address to my client's server... times have changed.

    What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond, and then the junk email would stop, but that hasn't happened yet, any more than you'll find high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their cars... but that's another whole topic and I digress.

    So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes, I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service, just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money?

    Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter, huh... so I took a look at their site and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble, and here's what I found. I installed it and had to do a couple of things:

    1) SPAMfighter stuffed the SPAMfighter folder into my client's Exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter client settings for Outlook, I told it to put all spam there.

    2) It didn't seem to be doing anything at first. There's a ribbon button where you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.

    3) I sent email to support and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first thing up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule in Outlook and told Outlook not to use the junk folder, thinking it was interfering.

    4) Day 2 seemed to go about like Day 1... but I hung in there.

    5) Day 3 is a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally, I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :)

    Here's a screenshot of the stats after just about 48 hours on board. Note that all the ones blocked by me were during Days 1 and 2... I've blocked none today, and everything is blocked.

    Stay in the 'Light!

    Read the article

  • htaccess execution order and priority

    - by ChrisRamakers
    Can anyone explain to me in what order Apache executes .htaccess files residing at different levels of the same path, and how the rewrite rules therein are prioritized? For example, why doesn't the rewrite rule in the first .htaccess below work, while the one in /blog takes priority?

    .htaccess in /:

        RewriteEngine on
        RewriteBase /
        RewriteRule ^blog offline.html [L]

    .htaccess in /blog:

        RewriteEngine On
        RewriteBase /blog/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /blog/index.php [L]

    P.S. I'm not simply looking for an answer but for a way to understand the Apache/mod_rewrite internals... why is more important to me than how to fix this :) Thanks!
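    As background, hedged only in the details: mod_rewrite merges per-directory configuration from the root downwards, and the deepest .htaccess that contains rewrite directives replaces the parent's rewrite configuration outright, so the rule in / is never consulted for /blog requests. The child can opt back in to the parent's rules:

        # In /blog/.htaccess -- note that inherited parent rules run
        # *after* the local ones, not before:
        RewriteOptions Inherit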

    Read the article

  • Is there an equivalent to the Mac OS X software Hazel that runs on Ubuntu?

    - by Stuart Woodward
    Is there an equivalent to the Mac OS X software Hazel that runs on Ubuntu? "Hazel watches whatever folders you tell it to, automatically organizing your files according to the rules you create. It features a rule interface [..]. Have Hazel move files around based on name, date, type, what site/email address it came from [..] and much more. Automatically put your music in your Music folder, movies in Movies. Keep your downloads off the desktop and put them where they are supposed to be." This question probably won't make sense unless you have used Hazel, but basically you can define rules via the GUI to move and rename files automatically, making for an automated workflow.
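    One rough equivalent on Ubuntu is a watch-and-move loop built on inotify-tools; a sketch (the folders and patterns are illustrative, and this is nowhere near Hazel's rule GUI):

        # Requires: sudo apt-get install inotify-tools
        inotifywait -m ~/Downloads -e create -e moved_to --format '%f' |
        while read f; do
            case "$f" in
                *.mp3)       mv ~/Downloads/"$f" ~/Music/  ;;
                *.avi|*.mkv) mv ~/Downloads/"$f" ~/Videos/ ;;
            esac
        done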

    Read the article

  • FTP over ssh hangs on "Connection established: Waiting for Welcome Message"

    - by Siriss
    I have looked far and wide and have not quite found the answer. I am running Ubuntu 11.10, OpenSSH, and vsftpd. I am trying to configure my FTP to tunnel over SSH. My SSH connection works just fine, and I can create the tunnel to my FTP port. When I go to use FileZilla to connect, it hangs at "Waiting for Welcome Message". I think it is an iptables issue, but I can't seem to figure out what needs to be changed. When I take iptables down, it connects just fine. I don't want to open any more external ports, just my one SSH port, and I can't seem to get the internal port-forwarding rule right; I always end up opening it to the outside. I would love some help if anyone has any ideas, and I hope I have made it clear. Thank you in advance!!
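    A hedged guess at the missing piece: traffic arriving through an SSH tunnel comes in on the loopback interface, so iptables can accept it without exposing any new external port; a sketch:

        # Allow loopback (tunneled) traffic and replies to it:
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    Worth ruling out separately: plain FTP opens a second data connection that will not ride the single tunneled control port, which can also stall a session right after it connects.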

    Read the article

  • XAMPP: Daemon is already running, but it's NOT Apache

    - by TedvG
    This one is giving me a headache... I have installed XAMPP for Linux 1.7.7 on Ubuntu 12.10. I haven't installed the latest version because of the new security "feature" which makes XAMPP so secure I can't get it running... but that's another story. After it installed and ran OK for a couple of months, I now get the famous "XAMPP: Another web server daemon is already running." error while starting XAMPP. I've googled extensively and can rule out the following:

    - There is no other Apache installation, just XAMPP
    - There are no apache or apache2 services running
    - There are no services running that use port 80 (checked with netstat -an | grep -w 80)

    I have also done a fresh install of XAMPP 1.7.7, but that gives me the same result. I think I have tried every solution on the first two result pages of Google and am nowhere nearer to a solution. Can anyone give me pointers on how to find the mysterious web daemon that is already running?
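    One hedged place to look (a guess at the lampp script's heuristics, not a quote from it): the check tends to trip on anything resembling a running httpd, or on leftover pid files, rather than on the port itself. With a default /opt/lampp install:

        # Anything matching httpd in the process list can trip the check:
        ps aux | grep -i httpd | grep -v grep
        # A stale pid file left behind by a crash can have the same effect:
        ls -l /opt/lampp/logs/httpd.pid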

    Read the article

  • "make" and "make install" never work

    - by Nirmik
    I have been trying this for a long time now, and this time it has gotten on my nerves... The make and make install commands, used to install a program from an extracted tarball, never work for some reason. The make command gives me the error:

        make: *** No targets specified and no makefile found. Stop.

    and the make install command gives me the error:

        make: *** No rule to make target `install'. Stop.

    Why are these commands not executing? What should I do to solve this issue?
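    Both errors report the same underlying condition: there is no Makefile in the current directory, which for a classic configure-based tarball usually means ./configure has not been run yet. The usual flow, as a sketch with an illustrative tarball name:

        tar xzf program-1.0.tar.gz    # illustrative name
        cd program-1.0
        ./configure                   # generates the Makefile
        make
        sudo make install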

    Read the article

  • Balancing aggressive invites

    - by Nils Munch
    I am designing a trading card game for mobiles, with the possibility to add cards to your collection using Gems, acquired through victories and in-app purchases. I am thinking of increasing the spread of the game with a tracking system on game invites, enabling the user to invite a friend to play the game. If the friend doesn't own the game client (which is free), he will be offered to download it. If he joins the game, the original player earns X amount of Gems as a reward. There can only be one player per mobile device, which should rule out some harvesting. My question is: how do you think this structure would be received? All invites are mail-based, unless the player already exists in the game world (then he gets an in-game invitation). I have set a flood filter, so a player can only invite a friend (without the client installed) once a month.

    Read the article

  • Redirect pages to fix crawl errors

    - by sarah
    Google is giving me crawl errors for pages that I have removed, like www.mysite.com/mypage.html. I want to redirect these pages to the new page www.mysite.com/mysite/mypage. I tried to do that using .htaccess, but instead of fixing the problem, the crawl errors increased and a new one appeared: www.mysite.com/www.mysite.com. This is my .htaccess file:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>
        # END WordPress

    Should I add this after the rewrite rule, or should I do something else?

        RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301]
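    For what it's worth, a hedged sketch of the usual placement: per-directory rules run top to bottom, so a redirect has to come before WordPress's catch-all, and wants an L flag so processing stops once it matches:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /sitename/
        # Redirects for removed pages go first (pagename is illustrative):
        RewriteRule ^pagename\.html$ http://www.sitename.com/pagename [R=301,L]
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /sitename/index.php [L]
        </IfModule>

    The doubled www.mysite.com/www.mysite.com error is the classic symptom of a redirect target that mod_rewrite treated as a relative path, so keeping the absolute http:// URL (or a leading slash) in the target matters.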

    Read the article

  • ANTLRWorks 2: Early Access Preview 10

    - by Geertjan
    I took a quick look at how the ANTLRWorks 2 project is getting on... and discovered that today, March 23, the new early access preview 10 was released: http://www.antlr.org/wiki/display/ANTLR4/1.+Overview I downloaded it immediately and was impressed when browsing through the Java.g file that I also found on the ANTLR site. On the page above, the following enhancements are listed: tooltips for rule references; a fix, at last, for the navigator update bug; major improvements to code completion; a fix for legacy mode; and many performance and stability updates. I've blogged before about how the developers on the above project consider their code completion to be "scary fast". Some discussions have taken place about how code developed by the ANTLRWorks team could be contributed to the NetBeans project, since NetBeans IDE and ANTLRWorks 2 are both based on the NetBeans Platform.

    Read the article

  • How can I secretly ban someone (hellban) in phpBB?

    - by Dan Fabulich
    The trolls are driving me crazy; I'd like to experiment with a secret ban. http://www.codinghorror.com/blog/2011/06/suspension-ban-or-hellban.html "A hellbanned user is invisible to all other users, but crucially, not himself. From their perspective, they are participating normally in the community, but nobody ever responds to them. They can no longer disrupt the community because they are effectively a ghost. It's a clever way of enforcing the 'don't feed the troll' rule in the community. When nothing they post ever gets a response, a hellbanned user is likely to get bored or frustrated and leave." It looks like there's no way to do this out of the box with phpBB. Is there a way to hack it in?
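    One hedged sketch of a minimal hack (the user_hellbanned column is an invented addition; phpbb_users, poster_id, and $user->data['user_id'] are standard phpBB 3 fixtures):

        -- Flag the banned account:
        ALTER TABLE phpbb_users ADD user_hellbanned TINYINT(1) NOT NULL DEFAULT 0;

        // Then, in the post-display loop (e.g. viewtopic.php), skip posts by
        // hellbanned users for every viewer except the poster himself:
        if ($row['user_hellbanned'] && $row['poster_id'] != $user->data['user_id'])
        {
            continue; // the troll still sees his own posts as normal
        }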

    Read the article

  • Identity verification

    - by acjohnson55
    On a site that I'm working on, I'm trying to find ways of enforcing a one-person, one-account rule. In general, we'd like to do this by providing options to authenticate users with third-party services that provide this assurance. For example, it's possible to authenticate with Facebook and check whether the user is considered "verified" by Facebook (which means they must have provided either a phone number or credit card). This is roughly the level of identity verification we require--we're not doing banking or anything like that. But we want to give the user options. My question is, who else, besides Facebook, provides this? (uncertain of the proper SE forum, please comment if there's a better SE site to ask this)

    Read the article

  • Group method parameter or individual parameter?

    - by Nassign
    I would like to ask about design considerations for method parameters. I usually have to decide between passing individual variables as parameters and grouping them into a class or dictionary as one parameter. Is there a rule for when to use individual parameters versus a class or dictionary that groups them?

    - Individual parameters: straightforward, strongly typed.
    - Dictionary parameter: very extensible, like an HTTP request, but cannot be strongly typed.
    - Class parameter: extensible by adding members to the parameter class, and strongly typed.

    I am looking for a design reference on when to use which. Note: I am not sure if this question is valid on Programmers, but I definitely think it would be closed on Stack Overflow; if it is still not valid here, please point me to the proper page.
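    A quick sketch of the three shapes side by side (Python purely for illustration; all names are invented):

        from dataclasses import dataclass

        # Individual parameters: self-documenting and type-checkable, but the
        # signature changes every time a field is added.
        def create_user(name: str, email: str, age: int) -> None: ...

        # Dictionary parameter: extensible like an HTTP request, but nothing
        # stops a caller from misspelling or omitting a key.
        def create_user_from_dict(data: dict) -> None: ...

        # Class parameter: extensible *and* strongly typed -- add a field to
        # the class without touching every call site.
        @dataclass
        class CreateUserRequest:
            name: str
            email: str
            age: int

        def create_user_from_request(req: CreateUserRequest) -> None: ...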

    Read the article

  • Multi-client web application: should I use custom user controls or a common user control?

    - by ValidfroM
    Say my company is going to build a complicated ASP.NET Web Forms education system. One of the modules is web-based registration. To make it flexible, we have decided to use user controls (.ascx) with a rule engine (workflow) regulating all the business logic behind them. Thus, in the future, we can simply configure the basic existing rules, or add new ones, for different clients (rules stored in the DB or in XML per client). Now the question is how to deal with the user controls (.ascx). My opinion is to build a different user control from scratch for each client; other voices argue for reusing the existing user controls.

    Read the article

  • Rules of Holes #5: Seek Help to Get Out of the Hole

    - by ArnieRowland
    You are moving along, doing good work, maintaining a steady pace. All seems to be going well for you. Then BAM! A Hole just grabbed you. How the heck did that happen? What went wrong? How did you fall into a Hole? Definitely, you will want to do a post-mortem and try to tease out what missteps led you into the Hole. Certainly you will want to use this opportunity to enhance your Hole-avoidance skills. But your first priority is to get out of this Hole right NOW. Consider the Fifth Rule of Holes...(read more)

    Read the article

  • Preventing Duplicates on Google

    - by abel
    I am currently using a rewrite rule to enable access to .php pages without the .php extension. However, to prevent old links from breaking, the pages can still be accessed via links containing the .php extension too. For example, domain.com/page.php can now be accessed at domain.com/page, and all the links on the website now use the domain.com/page form. However, older incoming links will still point to the .php pages, meaning Google will index both URLs and mark them as duplicates. I have two plans to remedy the situation: 1) Use a PHP 301 redirect: when a page is accessed with the .php extension, redirect it individually using a 301 redirect issued from PHP. 2) Use a canonical tag: place a canonical tag on each page, pointing to the ".php"-less version. My question: are both methods equally efficacious in preventing Google from indexing my ".php" pages? Which method should be preferred, by convention or otherwise?
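    A minimal sketch of both options, using the question's domain.com/page example (exactly how the original URI surfaces will depend on the rewrite rule):

        <?php
        // Option 1: 301 away from the .php form of the current URL.
        $uri = $_SERVER['REQUEST_URI'];
        if (substr($uri, -4) === '.php') {
            header('Location: http://domain.com' . substr($uri, 0, -4), true, 301);
            exit;
        }
        ?>
        <!-- Option 2: declare one canonical version in each page's head: -->
        <link rel="canonical" href="http://domain.com/page">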

    Read the article

  • How do I allow all possible IPs for Gmail servers through my ufw firewall?

    - by nomadicME
    I am currently using the following rule:

        ufw allow out from my_local_ip to any port 587

    This is a little too lax for my liking. I would like to tighten it up and restrict it to only Gmail's SMTP server IP addresses, but they are always changing. I used to wait until an outgoing email didn't make it to its destination, then check syslog for the IP address that was blocked, then add that to the ufw configuration script. However, I now need much more reliability. Is there any way to use smtp.gmail.com in ufw? I don't think so, but thought I would ask. Any other ideas? Thanks.
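    Probably not directly: ufw rules are written in terms of IP addresses, so a hostname would not track Google's changing servers. One hedged alternative is to allow the netblocks Google publishes through its SPF records, re-checking them occasionally since they do change; a sketch:

        # Google lists its sending ranges via SPF:
        dig +short TXT _spf.google.com          # points at _netblocks.google.com etc.
        dig +short TXT _netblocks.google.com    # yields the ip4: ranges
        # Then allow each published range (the range below is an example only):
        sudo ufw allow out from my_local_ip to 74.125.0.0/16 port 587 proto tcp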

    Read the article

  • CAM XML Editor version 2.2.1 now available.

    - by drrwebber
    The CAM Editor v2.2.1 release is now available, with lots of nice enhancements, a CAMV performance boost, and important bug fixes for DoD, NIEM, and LEXS schemas. The download is available from the CAM XML Editor Resource Site. The CAM editor is the leading open source XML editor/validation/schema designer for rapidly building and deploying complete XML information exchanges. It provides a visual WYSIWYG structure with rule entry wizards and drag-and-drop dictionary components, and will import, analyze, and refactor existing XML Schema. Oracle is a proud sponsor of the project and of its use on the NIEM.gov initiative. It creates XSD schema + JAXB bindings, mind map or UML models (XMI), XML test suite examples, and HTML documentation + spreadsheets (NIEM IEPDs). XSD schema export comes in default, flatten, NIEM, and OASIS modes, and it generates canonical component dictionaries from schema sets, ERwin models, or spreadsheets.

    Read the article

  • How long should I keep 301 redirecting pages from a deprecated domain?

    - by ElHaix
    I had an old domain that I have deprecated, with all requests 301-redirected to my new site. The new site is now receiving a decent amount of traffic, but I don't know how much of it is 301-redirected from the old site, and a site:[old site] query still shows several thousand pages indexed. Since all pages from the old site are 301-redirected, will they ever be removed from the index as long as the old domain name is active? As a rule of thumb, I've seen 90 days suggested for any significant site change. When is it safe to burn the old domain?

    Read the article

  • Should a primary key be immutable?

    - by Vincent Malgrat
    A recent question on Stack Overflow provoked a discussion about the immutability of primary keys. I had thought of it as a kind of rule that primary keys should be immutable, and that if there is a chance some day a primary key would be updated, you should use a surrogate key. However, this is not in the SQL standard, and some RDBMSs' "cascade update" feature allows a primary key to change. So my question is: is it still bad practice to have a primary key that may change? What are the cons, if any, of having a mutable primary key?
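    For reference, the "cascade update" feature mentioned above, as a sketch over an illustrative schema; changing the parent key silently rewrites every referencing row, which is one of the costs under discussion:

        CREATE TABLE countries (
            code CHAR(2) PRIMARY KEY
        );

        CREATE TABLE cities (
            name         VARCHAR(100),
            country_code CHAR(2),
            FOREIGN KEY (country_code)
                REFERENCES countries (code) ON UPDATE CASCADE
        );

        -- UPDATE countries SET code = 'UK' WHERE code = 'GB';
        -- ...touches every matching row in cities as a side effect.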

    Read the article

  • How to verify the Liskov substitution principle in an inheritance hierarchy?

    - by Songo
    Inspired by this answer, the Liskov Substitution Principle requires that:

    1. Preconditions cannot be strengthened in a subtype.
    2. Postconditions cannot be weakened in a subtype.
    3. Invariants of the supertype must be preserved in a subtype.
    4. History constraint (the "history rule"): objects are regarded as being modifiable only through their methods (encapsulation). Since subtypes may introduce methods that are not present in the supertype, the introduction of these methods may allow state changes in the subtype that are not permissible in the supertype. The history constraint prohibits this.

    I was hoping someone would post a class hierarchy that violates these 4 points and show how to solve the violations accordingly. I'm looking for an elaborate explanation, for educational purposes, on how to identify each of the 4 points in a hierarchy and the best way to fix each one. Note: I was hoping to post a code sample for people to work on, but the question itself is about how to identify the faulty hierarchies :)
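    As a taste of point 1, a minimal sketch of a strengthened precondition (the class names are illustrative):

        class FileStore:
            def save(self, data: bytes) -> None:
                # Precondition: any bytes object is acceptable.
                ...

        class SmallFileStore(FileStore):
            def save(self, data: bytes) -> None:
                # Strengthened precondition: callers that were valid against
                # FileStore now fail, so substitution breaks -- an LSP violation.
                if len(data) > 1024:
                    raise ValueError("data too large")
                ...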

    Read the article

  • SEO Benefits of adding a Tumblr feed to site

    - by Paul
    A client of ours has a CMS-driven blog on his hotel site. He would like to use the blog to add depth to his site and gain SEO benefits from the blog's content. The current blog is a basic header/text field and doesn't contain any tagging or meta features. Unfortunately we don't have a .NET developer on our team to alter the existing blog and add meta/tagging, and there isn't budget to hire one, so I considered using a Tumblr blog: setting it up externally, giving it a blog.hotelname.com address, and feeding it into the existing page via Tumblr's JS, which basically does a document.write into the page, which we can style. I understand from a previous post (Poor CMS blog vs Tumblr embed) that as a general rule most search engines ignore JS-created content, but will the above approach act as an improvement on the existing system for now, given that the blog will be set up externally with its own URL and also feed into the existing site? Cheers, Paul

    Read the article
