Search Results

Search found 4286 results on 172 pages for 'spliced news'.


  • drop-down menu will not display outside containing div in IE7

    - by playahabana
    I am tearing my hair out over this. I have a drop-down menu using CSS and jQuery (thanks to Soh Tanaka) and it works perfectly in Firefox, Safari, Google Chrome and IE 8, but in IE 7 it will not drop down outside the banner div. It does drop below the nav div, however. I have moved the nav div higher in the banner and the result is the same: the menu drops until it reaches the border of the banner div and then vanishes. Below is the CSS. This is my first website and I have only a limited understanding of what I am doing. The drop-down menu uses transparent PNGs as links (I know, I know... but it's what the boss wants). Could someone please take a quick look at the CSS below and let me know what is wrong? Is this some form of the IE z-index bug? I have tried all different combinations of z-index and still can't get a different result. The HTML is below as well. Thank you in advance. #banner { position: relative; width: 62.5em; height: 12em; background-color: #46280A; background-image: url('images/includes/banner2.jpg'); background-repeat: no-repeat; background-position: center; -moz-box-shadow: -4px 6px 8px #000; -webkit-box-shadow: -4px 6px 8px #000; box-shadow: -4px 6px 8px #000; /* For IE 8 */ -ms-filter: "progid:DXImageTransform.Microsoft.Shadow(Strength=8, Direction=225, Color='#000000')"; /* For IE 5.5 - 7 */ filter: progid:DXImageTransform.Microsoft.Shadow(Strength=8, Direction=225, Color='#000000'); z-index: 1; } /*------------------------------------SCROLLER---------------------------------------------*/ #headlines{ position: absolute; top: 1.3em; right: 2.75em; overflow: hidden; height: 2.5em; width: 24em; background-color: #000000; display: block; z-index: 3; } #news{ position: relative; height: 3.1em; line-height: 2.5em; font-size: 0.8em color: #FFFF99; white-space: nowrap; overflow: hidden; font-family: Georgia,Arial; } #scrollerglass{ position: absolute; top: 0.95em; right: 2em; height: 52px; width: 410px; border: none; padding: 0.2em 0em 0em 0em; line-height: 0.7em; text-align: center; background-image: url('images/includes/scrollerglass.png'); background-color: transparent; background-repeat: no-repeat; background-position: center; opacity: 20; z-index: 10; } #scrollerglass a i { visibility: hiddn ; } /*-------------------------------------NAVIGATION-----------------------------------------*/ #nav { position: absolute; top: 5.8em; left: 0.2em; font-family: trebuchet, sans-serif; font-size: 1em; line-height: 3.75em; text-align: center; color: #FFFF00; z-index: 3; } ul.navlist { list-style: none; padding: 0em; margin: 1em; float: left; width: 62.5em; background: transparent; font-size: 1em; } ul.navlist li { position: relative; /*--Declare X and Y axis base for sub navigation--*/ float: left; margin: 0em 1.4em; padding: 0em 0.7em 0em 0em; z-index: 1; } ul.navlist li a{ display: block; text-decoration: none; float: left; border: 0px solid; } ul.navlist li img{ border: 0px solid; } ul.navlist li span { /*--trigger styles--*/ width: 1.2em; height: 5.25em; float: left; background: url(images/links/downlogo.png) no-repeat center top; } ul.navlist li span.subhover { background-position: center bottom; cursor: pointer; } ul.navlist li ul.navdrop { list-style: none; position: absolute; float: left; top: 5.3em; left: -2.4em; height: 15.0em; width: 11.25em; margin: 0; padding: 0.5em 0em 0em 0em; display: none; background-position: center; background-image: url('images/includes/slider.jpg'); background-color: transparent; background-repeat: no-repeat; -moz-box-shadow: -4px 6px 8px #000; 
-webkit-box-shadow: -4px 6px 8px #000; box-shadow: -4px 6px 8px #000; /* For IE 8 */ -ms-filter: "progid:DXImageTransform.Microsoft.Shadow(Strength=8, Direction=225, Color='#000000')"; /* For IE 5.5 - 7 */ filter: progid:DXImageTransform.Microsoft.Shadow(Strength=8, Direction=225, Color='#000000'); z-index:1; } ul.navlist li ul.navdrop li{ margin: 0em 2.3em 0em 0em; padding: 1em 0em 0em 0em; width: 8em; clear: both; } html ul.navlist li ul.navdrop li a { border: 0px solid; width: 11.25em; } html ul.navlist li ul.navdrop li a:hover { background: transparent; } <div id="banner"> <div id="headlines"> <div id="news"> Whatever we want to promote </div> </div> <div id="scrollerglass"> <a href="vintagecigars.php"> <i>------s-c-r-o-l-l-e-r - - - -l-i-n-k-s--------<br /> <br>------s-c-r-o-l-l-e-r - - - -l-i-n-k-s------</i></a> </div> <div id="nav"> <ul class="navmenu"> <li><a href="index.php"><img src="images/links/home.png" alt="Home" ></a></li> <li><a href="ourbar.php"><img src="images/links/ourbar.png" alt="Our Bar" ></a> <ul class="navdrop"> <li ><a href="ourcocktails.php"><img src="images/links/cockteles.png" alt="Our Cocktails" ></a></li> <li ><a href="celebrate.php"><img src="images/links/celebrate.png" alt="Celebrate in Style" ></a></li> </ul> </li> <li><a href="ourcigars.php"><img src="images/links/ourcigars.png" alt="Our Cigars" ></a> <ul class="navdrop"> <li ><a href="edicioneslimitadas.php"><img src="images/links/edicioneslimitadas.png" alt="Edition Limitadas" ></a></li> <li ><a href="cigartasting.php"><img src="images/links/cigartasting.png" alt="Cigar Tastings" ></a></li> </ul> </li> <li><a href="personalroller.php"><img src="images/links/personalcigar.png" alt="Personal Cigar Roller" ></a></li> <li><a href="galleryentrance.php"><img src="images/links/photogallery.png" alt="Photo Gallery" ></a></li> <li><a href="contactus.php"><img src="images/links/contactus.png" alt="Contact Us" ></a></li> </ul></div></div><!--end banner-->
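
    This looks like the classic IE7 stacking-context limitation: every positioned ancestor with a z-index (here #banner with z-index: 1 and ul.navlist li with z-index: 1) starts its own stacking context in IE7, so the absolutely positioned ul.navdrop can never paint above content outside the banner. A minimal, hedged sketch of the usual workaround, reusing the selector names from the CSS above (the exact values are illustrative, and the .hover class is assumed to be whatever class the jQuery adds on mouseover):

        /* Sketch: raise the whole ancestor chain of the open menu above #banner,
           because IE7 clamps children to the stacking context of their nearest
           positioned ancestor. */
        #nav { z-index: 100; }                     /* higher than #banner (z-index: 1) */
        ul.navlist li { z-index: 10; }
        ul.navlist li:hover,
        ul.navlist li.hover { z-index: 1000; }     /* the hovered item wins */
        ul.navlist li ul.navdrop { z-index: 1001; }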

    Read the article

  • Which web solution should I use for my project?

    - by BenIOs
    I'm going to create a fairly large (from my point of view, anyway) web project with a friend. We will create a site with roads and other road-related info. Our calculation is that we will have around 100k items in our database. Each item will contain some information like location, name, etc. (about 30 things each). We are counting on a few hundred thousand unique visitors per month. The 100k items and their locations (which will be searchable) will be the main part of the site, but we will also have some articles, comments, news and later on some more social functions (accounts, forums, picture uploads, etc.). We were going to use Google App Engine to develop our project since it is really scalable and free (at least for a while). But I'm actually starting to doubt that App Engine is right for us. It seems to be geared towards web apps and not sites like ours. Which system (language/framework, etc.) would you recommend we use? It doesn't really matter whether we already know the language (we like learning new stuff), but it would be good if it's something that is future-proof.

    Read the article

  • database design - empty fields

    - by imanc
    Hey, I am currently debating an issue with a guy on my dev team. He believes that empty fields are bad news. For instance, if we have a customer details table that stores data for customers from different countries, each country has a slightly different address configuration, plus 1-2 extra fields; e.g. French customer details may also store an entry code and floor/level, plus title fields (madame, etc.), while South Africa would have a security number, and so on. Given that we're talking about minor variances, my idea is to put all of the fields into the table and use what is needed on each form. My colleague believes we should have a separate table with the extra data, e.g. customer_info_fr. But this seems to totally defeat the purpose of a combined table in the first place. His argument is that empty fields/columns are bad, but I'm struggling to find justification in terms of database design principles for or against this argument and the preferred solutions. Another option is a separate mini EAV table that stores extra data with parent_id, key, val fields. Or to serialise extra data into an extra_data column in the main customer_data table. I think I am confused because what I'm discussing is not covered by 3NF, which is what I would typically use as a reference for how to structure data. So my question, specifically: if you have slight variances in data for each record (1-2 different fields, for instance), what is the best way to proceed?
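
    For reference, a minimal sketch of the "mini EAV" variant mentioned above, in generic SQL (all table and column names here are illustrative, not taken from the real schema): shared columns stay in the wide customer table, and the handful of country-specific values live in one narrow key/value side table.

        -- Sketch: common fields in the main table, per-country extras as rows.
        CREATE TABLE customer_details (
            id       INT PRIMARY KEY,
            country  CHAR(2)      NOT NULL,
            name     VARCHAR(100) NOT NULL,
            address  VARCHAR(255) NOT NULL
        );

        CREATE TABLE customer_extra (
            parent_id INT         NOT NULL REFERENCES customer_details(id),
            attr_key  VARCHAR(50) NOT NULL,   -- e.g. 'entry_code', 'security_number'
            attr_val  VARCHAR(255),
            PRIMARY KEY (parent_id, attr_key)
        );

    The trade-off is the usual EAV one: no NULL-heavy columns, but values lose their native types and per-country validation has to move into application code.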

    Read the article

  • ajax on parked domain

    - by Daryl
    I'm currently writing this jQuery and for some reason (I don't know why) it works on the normal domain, but on the parked domain it doesn't. Normal domain - http://www.thefinishedbox.com Parked domain - http://www.tfbox.com If you scroll down to the colony news and hit the "click me" link you'll see it retrieves data via jQuery AJAX on the normal domain, but on the parked domain it won't. Here is the jQuery code I have so far (it's pretty standard): $(function() { $.ajaxSetup({ cache: false }); var ajax_load = "Load me plz"; // load() functions var loadUrl = "http://thefinishedbox.com/wp-content/themes/tfbox-beta/test.php"; $('.overlay').css({ opacity: '0' }); $('.toggle').click(function() { $('.overlay').css({ display: 'block' }).animate({ opacity: '1' }, 300); $(".overlay .content").html(ajax_load).load(loadUrl); return false; }); $('.close').click(function() { $('.overlay').animate({ opacity: '0' }, 300); $('.overlay').queue(function() { $(this).css({ display: 'none' }); $(this).dequeue(); }); return false; }); }); I'm a complete noob when it comes to AJAX so any help would be massively appreciated.
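
    The behaviour described is consistent with the browser's same-origin policy rather than a jQuery problem: loadUrl is hard-coded to http://thefinishedbox.com/..., so when the same page is served from the parked domain the .load() call becomes a cross-domain request and is blocked. A hedged sketch of the usual fix is to let the URL follow whatever host served the page (the path is taken from the snippet above):

        // Sketch: keep the AJAX request on the current origin so the same code
        // works on thefinishedbox.com and on the parked tfbox.com alias.
        // Option 1: a root-relative path, resolved against the current host.
        var loadUrl = "/wp-content/themes/tfbox-beta/test.php";

        // Option 2: build the absolute URL from the current location instead.
        // var loadUrl = window.location.protocol + "//" + window.location.host +
        //               "/wp-content/themes/tfbox-beta/test.php";

    Either way the request stays on the domain the visitor is actually browsing, so no cross-domain restriction applies.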

    Read the article

  • Handling Email Bouncebacks in Rails

    - by aressidi
    Hi there, I've built a very basic CRM system for my rails app that allows me to send weekly user activity digests with custom text and create multi-part marketing messages which I can configure and send through a basic admin interface. I'm happy with what I've put together on the send-side of things (minus the fact that I haven't tried to volume test its capabilities), but I'm concerned about how to handle bounce-backs. I came across this plugin with associated scripts: http://github.com/kovyrin/bounces-handler I'm using Google Apps to handle my mail and really don't know enough about Perl to want to mess with the above plugin - I have enough headaches. I'm looking for a simple solution for processing bounce-backs in Rails. All my email will go out from an address like this which will be managed in Google Apps: "[email protected]." What's the best workflow for this? Can anyone post an example solution they're using keeping in mind the fact that I'm using Google Apps for the mail? Any guidance, links, or basic workflow best-practices to handle this would be greatly appreciated. Thanks! -A

    Read the article

  • PHP OOP class sensitive to counter field name

    - by Mac Taylor
    Hey guys, recently I managed to write a class for my stories, and everything is fine except for the counter field that stores a page's hits. Here is my class: class nk_posts { var $data = array(); public function _data() { global $db; $result = $db->sql_query(" SELECT bt_stories.*, bt_tags.*, bt_topics.*, group_concat(DISTINCT bt_tags.tag ) as mytags, group_concat(DISTINCT bt_topics.topicname ) as mytopics FROM bt_stories,bt_tags,bt_topics WHERE CONCAT( ' ', bt_stories.tags, ' ' ) LIKE CONCAT( '%', bt_tags.tid, '%' ) AND CONCAT( ' ', bt_stories.associated, ' ' ) LIKE CONCAT( '%', bt_topics.topicid, '%' ) AND time<=now() AND section='news' AND approved='1' GROUP BY bt_stories.sid ORDER BY bt_stories.sid DESC "); while ($this->data = $db->sql_fetchrow($result)) { $this->sid = $this->data['sid']; $this->title = $this->data['title']; $this->counter = $this->data['counter']; //------Rest of Fields ------ $this->_output(); } } public function _output() { themeindex( $this->sid, $this->title, $this->counter, //------Rest of Fields ------ ); } } The problem: this class can't show the counter field's value, but if I change the counter field name to something else, like hit, everything works fine. I'm sure it would be okay if I wrote it the normal PHP/MySQL way, but I need this to be OOP. Any comment on why it's sensitive to the counter field name?
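
    A likely explanation (a hedged guess, since the table definitions aren't shown) is that SELECT bt_stories.*, bt_tags.*, bt_topics.* pulls a column named counter from more than one of the joined tables, and the later one overwrites bt_stories.counter in the fetched row array, so the OOP style isn't the issue at all. Aliasing the column you actually want sidesteps the collision; a minimal sketch for the body of _data() (story_counter is an illustrative alias, and the omitted clauses are unchanged from the query above):

        // Sketch (inside _data()): alias the stories counter explicitly so a
        // same-named column in bt_tags or bt_topics cannot overwrite it.
        $result = $db->sql_query("
            SELECT bt_stories.*,
                   bt_stories.counter AS story_counter,
                   group_concat(DISTINCT bt_tags.tag)         AS mytags,
                   group_concat(DISTINCT bt_topics.topicname) AS mytopics
            FROM bt_stories, bt_tags, bt_topics
            -- same WHERE / GROUP BY / ORDER BY as in the class above
        ");
        while ($this->data = $db->sql_fetchrow($result)) {
            $this->counter = $this->data['story_counter'];   // unambiguous key
            // ...rest of the fields as before...
            $this->_output();
        }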

    Read the article

  • Vacancy Tracking Algorithm implementation in C++

    - by Dave
    I'm trying to use the vacancy tracking algorithm to perform transposition of multidimensional arrays in C++. The arrays come as void pointers, so I'm using address manipulation to perform the copies. Basically, there is an algorithm that starts with an offset and works its way through the whole 1-d representation of the array like swiss cheese, knocking out other offsets until it gets back to the original one. Then you have to start at the next untouched offset and do it again. You repeat until all offsets have been touched. Right now, I'm using a std::set just to fill up all possible offsets (0 up to the product of the dimensions of the array). Then, as I go through the algorithm, I erase from the set. I figured this would be fastest because I need to randomly access offsets in the tree/set and delete them, and then quickly find the next untouched/undeleted offset. First of all, filling up the set is very slow and it seems like there must be a better way: it individually calls new for every insert, so if I have 5 million offsets there are 5 million allocations, plus the constant re-balancing of the tree, which as you know is not fast for a pre-sorted sequence. Second, deleting is slow as well. Third, assuming 4-byte data types like int and float, I'm actually using up the same amount of memory as the array itself to store this list of untouched offsets. Fourth, determining whether there are any untouched offsets and getting one of them is fast -- a good thing. Does anyone have suggestions for any of these issues?
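
    Since the offsets form a dense range 0..N-1, one common alternative (a sketch under that assumption, not tied to the original void-pointer code) is a flat bit vector: one bit per offset, so marking and testing are O(1), there are no per-insert allocations or tree rebalances, and the memory cost drops to roughly N/8 bytes instead of a tree node per offset.

        #include <cstddef>
        #include <vector>

        // Sketch: walk every permutation cycle once, tracking visited offsets in a
        // bit vector instead of a std::set of untouched offsets.  'next' stands in
        // for whatever step the vacancy-tracking transpose uses to follow a cycle.
        void walk_cycles(std::size_t total, std::size_t (*next)(std::size_t)) {
            std::vector<bool> touched(total, false);    // ~total/8 bytes, no allocation per insert

            for (std::size_t start = 0; start < total; ++start) {
                if (touched[start]) continue;           // 'start' leads a fresh, untouched cycle
                std::size_t cur = start;
                do {
                    touched[cur] = true;
                    cur = next(cur);                    // move the element / follow the cycle
                } while (cur != start);
            }
        }

    Finding the next untouched offset is then a forward scan over bits, so across the whole transpose it stays linear in the number of elements.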

    Read the article

  • UnknownHostException again!

    - by Nitesh Panchal
    Hello, I posted a question previously and everyone answered that there is some problem with DNS, but I changed my DNS to many addresses and now I have the most reliable one, Google DNS: 8.8.8.8. Still I get the same UnknownHostException. What can be the problem? This is my code: DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.parse("http://rss.news.yahoo.com/rss/india"); In fact, if I pass something very common as the address, like http://google.com, I still get the same error. Please help me :(. I have my submissions tomorrow. Thanks in advance :) EDIT: If I type the same address in my Mozilla, it works great. So I am sure that there is no DNS problem. 2nd EDIT: I found this link http://www.ehow.com/how_4747553_fix-unknownhostexception-java-applications-ubuntu.html but when I run the command sudo apt-get install lib32nss-mdns I get "package not found". Somebody even mentioned -Djava.net.preferIPv4Stack=true, but where do I write this -D statement? I am using NetBeans 6.8 to run my web application.
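
    On the -Djava.net.preferIPv4Stack=true question: -D options are arguments to the JVM, not Java statements, so they normally go in the run configuration (in NetBeans, roughly Project Properties -> Run -> VM Options) or on the java command line. A hedged sketch of the programmatic equivalent, wrapped around the parsing code from the question as a standalone test (FeedFetch is an illustrative class name, not from the original project):

        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;

        public class FeedFetch {
            public static void main(String[] args) throws Exception {
                // Equivalent of passing -Djava.net.preferIPv4Stack=true to the JVM;
                // it must be set before the first network access to have any effect.
                System.setProperty("java.net.preferIPv4Stack", "true");

                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                DocumentBuilder db = dbf.newDocumentBuilder();
                Document doc = db.parse("http://rss.news.yahoo.com/rss/india");
                System.out.println(doc.getDocumentElement().getNodeName());
            }
        }

    In a deployed web application the VM option is usually set on the application server's startup parameters instead of in code.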

    Read the article

  • how to retrieve data from db and display it in list?

    - by raji
    In the code below I am passing catID to db.getNews(catID): super.onCreate(savedInstanceState); setContentView(R.layout.newslist); Bundle extras = getIntent().getExtras(); int catID = extras.getInt("cat_id"); mInflater = (LayoutInflater)getSystemService(Context.LAYOUT_INFLATER_SERVICE); final ListView lv = (ListView) findViewById(R.id.category); lv.setAdapter(new ArrayAdapter<String>(this, R.layout.newslist, db.getNews(catID)){ public View getView(int position, View convertView, ViewGroup parent) { View row; if (null == convertView) { row = mInflater.inflate(R.layout.newslist, null); } else { row = convertView; } TextView tv = (TextView) row.findViewById(R.id.newslist_text); tv.setText(getItem(position)); return row; } }); The getNews(int catID) function is below: public NewsItemCursor getNews(int catID) { String sql = "SELECT title FROM news WHERE catid = " + catID + " ORDER BY id ASC"; SQLiteDatabase d = getReadableDatabase(); NewsItemCursor c = (NewsItemCursor) d.rawQueryWithFactory( new NewsItemCursor.Factory(), sql, null, null); c.moveToFirst(); d.close(); return c; } But I'm getting an error saying the ArrayAdapter constructor is undefined... Can anyone help me resolve this and retrieve the data so it displays in the list?
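
    ArrayAdapter<String> has no constructor that takes a Cursor, which is why the compiler reports the adapter constructor as undefined; a cursor is usually either wrapped in a CursorAdapter or copied into a List<String> first. A hedged sketch of the second, simpler route, reusing the names from the snippet above (and note that getNews() probably should not close the database while the cursor it returns is still in use):

        // Sketch (inside onCreate): pull the titles out of the cursor into a
        // List<String> that ArrayAdapter<String> actually accepts.
        // Assumes: import java.util.ArrayList; import java.util.List;
        //          import android.database.Cursor;
        List<String> titles = new ArrayList<String>();
        Cursor c = db.getNews(catID);
        try {
            if (c.moveToFirst()) {
                int titleCol = c.getColumnIndexOrThrow("title");
                do {
                    titles.add(c.getString(titleCol));
                } while (c.moveToNext());
            }
        } finally {
            c.close();   // close the cursor here, not the database inside getNews()
        }

        lv.setAdapter(new ArrayAdapter<String>(this, R.layout.newslist,
                                               R.id.newslist_text, titles));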

    Read the article

  • Unable to set up SSL support for Apache 2 on Debian

    - by Francesco
    I am trying to set up ssl support for Apache 2 on Debian. Versions are: Debian GNU/Linux 6.0 apache2 2.2.16-6+squeeze1 I followed a lot of how-tos for days but I couldn't make it work. Here are my steps and configuration files (ServerName and DocumentRoot are changed for privacy, in case tell me): # mkdir /etc/apache2/ssl # openssl req $@ -new -x509 -days 365 -nodes -out /etc/apache2/apache.pem -keyout /etc/apache2/apache.pem at this point I've a doubt about permissions on apache.pem, at this step they are -rw-r--r-- 1 root root Maybe it has to belong to www-data? Then I enable ssl-mod with # a2enmod ssl # /etc/init.d/apache2 restart I modify /etc/apache2/sites-available/default-ssl in this way (I put port 8080 because I need port 443 for another purpose): <VirtualHost *:8080> SSLEngine on SSLCertificateFile /etc/apache2/ssl/apache.pem SSLCertificateKeyFile /etc/apache2/ssl/apache.pem ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options Indexes FollowSymLinks AllowOverride All </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> <VirtualHost *:8080> DocumentRoot /home/user1/public_html/ ServerName first.server.org # Other directives here </VirtualHost> <VirtualHost *:8080> DocumentRoot /home/user2/public_html/ ServerName second.server.org # Other directives here </VirtualHost> I have to point out that the same configuration works on http (it is a copy of /etc/apache2/sites-available/default with some differences - port and ssl support). My /etc/apache2/ports.conf is the following: # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz #NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. #NameVirtualHost *:8080 Listen 8080 </IfModule> <IfModule mod_gnutls.c> Listen 8080 </IfModule> Any suggestion? Thanks
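
    One detail that stands out (offered as a hedged observation, not a confirmed diagnosis): the openssl command above writes both key and certificate to /etc/apache2/apache.pem, while the default-ssl vhost reads /etc/apache2/ssl/apache.pem, so Apache may simply be failing to find the file it was told to load. A sketch of generating the self-signed pair directly at the path the vhost expects, with the key kept root-only:

        # Sketch: create the combined key+certificate where default-ssl expects it.
        sudo mkdir -p /etc/apache2/ssl
        sudo openssl req -new -x509 -days 365 -nodes \
             -out    /etc/apache2/ssl/apache.pem \
             -keyout /etc/apache2/ssl/apache.pem
        sudo chmod 600 /etc/apache2/ssl/apache.pem   # contains the private key; Apache reads it as root at startup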

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (Quad Core CPU, 4 GB RAM, Win7 Home Premium 64-bit) but I have a problem with burning .dvd images to Double Layer (8.5 GB) DVDs. I wasted too many DVD+R DL discs but to no avail. Here is a short explanation of what I did: I'm using ImgBurn v2.5.0.0 (latest version). I'm trying to burn an image file (.dvd) which is together with the related .iso file in the same folder. In ImgBurn, I select the file with .dvd extension, and set writing speed to 2.4x. Burning process starts normally, but around 7% of the process, it gives a I/O Write Error, which is as follows: I wasted 3 discs (Magic, Made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing. My DVD writer is LG GH22NP20 with IDE connection type. I updated its firmware from 1.04 to 2.00 but no success in burning again. Then my cousin brought his LG (an older model) which, he claims, was successful in writing DL discs with the same brand (Magic). I plugged off my LG and plugged the older one in, and tried to burn the image again. It also gave an I/O Error even without standing till 7%. I tried another burning program (CloneCD), but failed again. Then I bought other brands (TDK and VERBATIM) and tried to burn the image. Burning process started successfully, but around 14% (for Verbatim) and 25% (for TDK) failed again. Here is a screeny from ImgBurn: I've burned lots of 4.7 GB DVD+Rs and DVD-Rs successfully, even without a single error, with this LG DVD writer, but this case is very bothering for me. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help. Edit: My burner works on my cousin's machine. So the problem must be related to my system. What could be the reason? Latest news: I borrowed an external USB DVD writer from a friend, which is PHILIPS SPD3000CC (an old model). Guess what! It's burning DVD+R DLs successfully! How come an internal DVD writer of a brand new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with not IDE, but SATA connection...

    Read the article

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just setup ActiveDirectory in my company and imported all linux users and groups. On the linux client: (configured to ask ldap in nsswitch.conf): If i do a common ldapsearch to the AD ldap server i get the complete number of about 2580 users. But if i do this it only gets a part of all users, 1221 in number: getent passwd | wc -l Running it with strace shows kind of attempt to reconnect My ideas were: Does the linux authentication procedure run ldapsearch with a parameter incompatible to AD ldap ? Or probably it is a encoding issue. The windows user are entered in AD with all kind of characters. Maybe someone could shed light on this and give a hint how to debug that further!? Here's our ldap.conf host audc01.mycompany.de audc03.mycompany.de base ou=location,dc=mycompany,dc=de ldap_version 3 binddn cn=manager,ou=location,dc=mycompany,dc=de bindpw Password timelimit 120 idle_timelimit 3600 nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub nss_base_group ou=location,dc=mycompany,dc=de?sub # RFC 2307 (AD) mappings nss_map_objectclass posixAccount User # nss_map_objectclass shadowAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute cn sAMAccountName # Display Name nss_map_attribute gecos cn ## nss_map_attribute homeDirectory unixHomeDirectory nss_map_attribute loginShell msSFU30LoginShell # PAM attributes pam_login_attribute sAMAccountName # Location based login pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de pam_member_attribute msSFU30PosixMember ## pam_lookup_policy yes pam_filter objectclass=User nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data and here the stacktrace from strace getent passwd poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}]) read(4, "0\204\0\0\0A\2\1", 8) = 8 read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63 stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0 geteuid32() = 12560 getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0 getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0 time(NULL) = 1297684722 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 munmap(0xb7617000, 1721) = 0 close(3) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 write(4, "0\5\2\1\5B\0", 7) = 7 shutdown(4, 2 /* send and receive */) = 0 close(4) = 0 shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor) close(-1) = -1 EBADF (Bad file descriptor) exit_group(0) = ?
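
    One possibility worth ruling out (a hedged guess): Active Directory caps a single search at its MaxPageSize (1000 entries by default) unless the client requests paged results, and the apparent reconnect in the strace output would fit a truncated enumeration. Assuming this is PADL's nss_ldap (the ldap.conf options shown match it), a sketch of the two lines usually added so getent can enumerate everything:

        # Sketch for /etc/ldap.conf (PADL nss_ldap): request the paged-results
        # control so AD returns the full user list in 1000-entry pages.
        nss_paged_results yes
        pagesize 1000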

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    What do you think about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euros for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services. Amazon doesn't charge for stopped instances. I was wondering: "Should I have known this before, or is Microsoft doing a bad job of communicating the pricing model clearly?" Quote from http://www.microsoft.com/windowsazure/pricing/ : "Compute time, measured in service hours: Windows Azure compute hours are charged only for when your application is deployed. When developing and testing your application, developers will want to remove the compute instances that are not being used to minimize compute hour billing. Partial compute hours are billed as full hours." I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just "stopped" them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application, but it is not running, can it still be regarded as being "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only uses (read: should only use / needs only) a few MB of hard drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for instances that are not running... well, that's Azure's choice, not mine. I don't want to start a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be clearer about their pricing model, or do you think it's clear enough? Second question: did anyone get refunded for a similar case? Thanks in advance! UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess it didn't reach any human being because I haven't heard anything back. So I made a telephone call today to a Dutch customer support representative (I live in Holland). She totally understood the problem and she's trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem. UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. However, the invoice has not been created yet, and my credit card will first be charged, after which it will be refunded, but hey, that's no problem for me! I'm glad with the way it was solved! Thanks, everybody!

    Read the article

  • Setting up SSL on apache on linux ubuntu

    - by ThomasReggi
    I'm trying to get SSL to run on my apache web server. I do not have the DNS for the domain setup yet is that an issue? How do I setup SSL on my web server? When I start apache it fails. root@vannevar:/etc/apache2/ssl# service apache2 start * Starting web server apache2 Action 'start' failed. The Apache error log may have more information. The log stats that it's unable to read the certificate. [Thu Jun 28 15:01:02 2012] [error] Init: Unable to read server certificate from file /etc/apache2/ssl/www.example.com.csr [Thu Jun 28 15:01:02 2012] [error] SSL Library Error: 218529960 error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag [Thu Jun 28 15:01:02 2012] [error] SSL Library Error: 218595386 error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error The contents of /etc/apache2/httpd.conf ServerName [SERVERIP] The contents of /etc/apache2/ports.conf # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz NameVirtualHost [SERVERIP]:443 NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> The contents of /etc/apache2/sites-available/www.example.com <VirtualHost *:80> ServerAdmin [email protected] ServerName example.com ServerAlias www.example.com DocumentRoot /srv/sites/example.com/public/ ErrorLog /srv/sites/example.com/logs/error.log CustomLog /srv/sites/example.com/logs/access.log combined </VirtualHost> <VirtualHost [SERVERIP]:443> SSLEngine On SSLCertificateFile /etc/apache2/ssl/www.example.com.csr SSLCertificateKeyFile /etc/apache2/ssl/www.example.com.key SSLCACertificateFile /etc/apache2/ssl/comodo.crt ServerAdmin [email protected] ServerName example.com ServerAlias www.example.com DocumentRoot /srv/sites/example.com/public/ ErrorLog /srv/sites/example.com/logs/error.log CustomLog /srv/sites/example.com/logs/access.log combined </VirtualHost>
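
    The error log already names the problem file: mod_ssl is being pointed at www.example.com.csr, and a .csr is a certificate signing request, not a certificate, hence the ASN.1 parse failure. Once the CA returns the signed certificate, the vhost would normally reference that .crt instead; a hedged sketch (file names assumed to follow the pattern already used in /etc/apache2/ssl):

        <VirtualHost [SERVERIP]:443>
            SSLEngine On
            # Sketch: the issued certificate, not the signing request.
            SSLCertificateFile      /etc/apache2/ssl/www.example.com.crt
            SSLCertificateKeyFile   /etc/apache2/ssl/www.example.com.key
            # If comodo.crt is the intermediate/CA bundle for the chain,
            # SSLCertificateChainFile is usually the directive intended here:
            SSLCertificateChainFile /etc/apache2/ssl/comodo.crt
            ServerName example.com
            DocumentRoot /srv/sites/example.com/public/
        </VirtualHost>

    The missing DNS record only affects how browsers reach the site, not whether Apache can start with the certificate.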

    Read the article

  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old running pretty much 24/7 - a HP server of some description. We have an Ultrium 448 cycling LTO2 400GB tapes daily and incrementally backing up approximately 100gb worth of data (20gb C:\ and system state, 40gb exchange, 40gb database for some crap marketing software) on BackupExec 10D. As of 5 months ago, the backups have been consistently failing with IO errors, bad reads and some write errors. When I say consistent, I mean every time and we haven't had a proper backup for the entire 5 months - So if the server explodes tomorrow, 7 years worth of data will just cease to exist. I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to an USB2.0 external drive. It was excruciatingly slow. In fact it was so slow it took 40 hours and it still wasn't finished. I ended up cancelling it and reconfiguring the selections again to reduce file size. This, however, isn't a permanent solution. I concluded that the IO error was either from a faulty tape drive (which has a tape stuck in there right now and not coming out) or from a dying SCSI controller. Neither of them are good news and both are extremely expensive to fix. I'm operating on an extremely low budget so have been looking at outsourcing the backups. A company in Sydney (where I'm located) offer incremental online backups via a NAS. It costs almost double a new tape drive but offers monthly repayments which will let us get through times when cash flow is minimal. It seems like a sweet deal but it is still a little bit pricey. So I'm looking for a cheaper, yet reliable solution. Maybe some in-house NAS or something offsite? The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future and I must be able to restore it to another server with different hardware.

    Read the article

  • Why is mount -a not mounting fuse drive properly when executed remotely (via Fabric)?

    - by Jim D
    This is a weird bug and I'm not sure where it's coming from. Here's a quick run down of what I'm doing. I'm trying to mount a FUSE drive to an Amazon EC2 instance running Ubuntu 10.10 using s3fs (FUSE over Amazon). s3fs is compiled from source according to the instructions etc. I've also added an entry to /etc/fstab so that the drive mounts on boot. Here's what /etc/fstab looks like: # /etc/fstab: static file system information. # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 LABEL=uec-rootfs / ext4 defaults 0 0 /dev/sda2 /mnt auto defaults,nobootwait,comment=cloudconfig 0 2 /dev/sda3 none swap sw,comment=cloudconfig 0 0 s3fs#mybucket /mnt/s3/mybucket fuse default_acl=public-read,use_cache=/tmp,allow_other 0 0 So the good news is that this works fine. On reboot the connection mounts correctly. I can also do: $ sudo umount /mnt/s3/mybucket $ sudo mount -a $ mountpoint /mnt/s3/mybucket /mnt/s3/mybucket is a mountpoint Great, right? Well here's the problem. I'm using Fabric to automate the process of building and managing this instance. I noticed I was getting this error message when using Fabric to build s3fs and set up the mount process: mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected I isolated it down the the problem and built a fabric task that reproduces the problem: def remount_s3fs(): sudo("mount -a") Which does: [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs' [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: mount -a [And yes, I was sure to unmount it before running this task.] When I check the mount using mountpoint I get: $ mountpoint /mnt/s3/mybucket mountpoint: /mnt/s3/mybucket: Transport endpoint is not connected Done. But if I run sudo mount -a at the command line, it works. Hrm. Here is that fab task output again, this time in full debug mode: [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] Executing task 'remount_s3fs' [ec2-xx-xx-xx-xx.compute-1.amazonaws.com] sudo: sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a" Again, I get that transport endpoint not connected error. I've also tried copying and pasting the exact command run into my ssh session (i.e. sudo -S -p 'sudo password:' /bin/bash -l -c "mount -a") and it works fine. So...that's my problem. Any ideas?
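
    The "Transport endpoint is not connected" symptom usually means the s3fs process that mount -a started was killed as soon as the remote session ended: Fabric allocates a pseudo-terminal for sudo and tears the session down when the task returns, which can take the still-initialising FUSE helper with it, whereas an interactive SSH session lingers. A hedged sketch of the workarounds commonly tried (Fabric 1.x sudo() accepts a pty flag; behaviour can vary by version and sshd settings):

        from fabric.api import sudo

        def remount_s3fs():
            # Sketch: skip the pseudo-terminal so the FUSE helper is not tied
            # to a pty that disappears the moment the task finishes.
            sudo("mount -a", pty=False)

            # Sometimes suggested as a stronger form: fully detach the command
            # from the session before Fabric closes the channel.
            # sudo("nohup mount -a >/dev/null 2>&1", pty=False)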

    Read the article

  • Windows Server 2008 R2 loses ability to connect to network share

    - by JamesB
    I could sure use some help with this one: I've got two Windows Server 2008 R2 x64 Terminal Servers, as well as several 2003 servers (DNS / Wins / AD / DC). On the two 2008 boxes, every now and then they will get in this mode where you can't map a drive to a random server. I say random server because it's not always the same server that you can't map to. Here is a summary of what I can and can't do: net view \\servername Sometimes this works, sometimes it does not. net view \\FQDN This always works. net view \\IPAddress This always works. ping servername Sometimes this works, sometimes it does not. ping FQDN This always works. ping IPAddress This always works. I've been looking all over for a solution to this. It sure seems like Microsoft would have a hotfix by now. The kicker to this is that it sometimes works great, especially after a reboot. It may run for 2 weeks just fine, but all of a sudden it will fail to resolve the remote server name. It will then be this way for a few days, then it might start working again. Also, while it's in the mode of not working, the other servers have no problem getting there. It's just these 2008 R2 Terminal Servers. Setting a static entry in the Hosts file and LMHosts does not make it work. All servers have static IPs and they are registered in DNS and Wins just fine. Here is a long thread on MS Technet of the exact same problem, but they don't have a good solution. Here is their workaround (It was from June of 2010): Good news - a hotfix is in the works and a workaround has been identified: Root cause is that since this is SMB1 all user sessions are on a single TCP connection to the remote server. The first user to initiate a connection to the remote SMB server has their logon-ID added to the structure defining the connection. If that user logs off all subsequent uses of that TCP session fail as the logon-id is no longer valid. As a workaround for now to keep the issue from happening you will want to have the user not logoff the Terminal Server only disconnect their sessions. Any word from anyone out there about a solution? Any help would sure be appreciated. Thanks, James

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500.000 entries, 50 GB disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on an iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean-t -p/htcache -l15G"), IOwait is going through the roof for several hours. Without any visible action. Only after hours, htcacheclean starts to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up in the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache. Only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read the a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space. In this 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect).

    Read the article

  • Oracle Announces New Oracle Exastack Program for ISV Partners

    - by pfolgado
    Oracle Exastack Program Enables ISV Partners to Leverage a Scalable, Integrated Infrastructure to Deliver Their Applications Tuned and Optimized for High-Performance News Facts Enabling Independent Software Vendors (ISVs) and other members of Oracle Partner Network (OPN) to rapidly build and deliver faster, more reliable applications to end customers, Oracle today introduced Oracle Exastack Ready, available now, and Oracle Exastack Optimized, available in fall 2011 through OPN. The Oracle Exastack Program focuses on helping ISVs run their solutions on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud -- integrated systems in which the software and hardware are engineered to work together. These products provide partners with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Leveraging the new Oracle Exastack Program in which applications can qualify as Oracle Exastack Ready or Oracle Exastack Optimized, partners can use available OPN resources to optimize their applications to run faster and more reliably -- providing increased performance to their end users. By deploying their applications on Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud, ISVs can reduce the cost, time and support complexities typically associated with building and maintaining a disparate application infrastructure -- enabling them to focus more on their core competencies, accelerating innovation and delivering superior value to customers. After qualifying their applications as Oracle Exastack Ready, partners can note to customers that their applications run on and support Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud component products including Oracle Solaris, Oracle Linux, Oracle Database and Oracle WebLogic Server. Customers can be confident when choosing a partner's Oracle Exastack Optimized application, knowing it has been tuned by the OPN member on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud with a goal of delivering optimum speed, scalability and reliability. Partners participating in the Oracle Exastack Program can also leverage their Oracle Exastack Ready and Oracle Exastack Optimized applications to advance to Platinum or Diamond level in OPN. Oracle Exastack Programs Provide ISVs a Reliable, High-Performance Application Infrastructure With the Oracle Exastack Program ISVs have several options to qualify and tune their applications with Oracle Exastack, including: Oracle Exastack Ready: Oracle Exastack Ready provides qualifying partners with specific branding and promotional benefits based on their adoption of Oracle products. If a partner application supports the latest major release of one of these products, the partner may use the corresponding logo with their product marketing materials: Oracle Solaris Ready, Oracle Linux Ready, Oracle Database Ready, and Oracle WebLogic Ready. Oracle Exastack Ready is available to OPN members at the Gold level or above. Additionally, OPN members participating in the program can leverage their Oracle Exastack Ready applications toward advancement to the Platinum or Diamond levels in the OPN Specialized program and toward achieving Oracle Exastack Optimized status. 
Oracle Exastack Optimized: When available, for OPN members at the Gold level or above, Oracle Exastack Optimized will provide direct access to Oracle technical resources and dedicated Oracle Exastack lab environments so OPN members can test and tune their applications to deliver optimal performance and scalability on Oracle Exadata Database Machine or Oracle Exalogic Elastic Cloud. Oracle Exastack Optimized will provide OPN members with specific branding and promotional benefits including the use of the Oracle Exastack Optimized logo. OPN members participating in the program will also be able to leverage their Oracle Exastack Optimized applications toward advancement to Platinum or Diamond level in the OPN Specialized program. Oracle Exastack Labs and ISV Enablement: Dedicated Oracle Exastack lab environments and related technical enablement resources (including Guided Learning Paths and Boot Camps) will be available through OPN for OPN members to further their knowledge of Oracle Exastack offerings, and qualify their applications for Oracle Exastack Optimized or Oracle Exastack Ready. Oracle Exastack labs will be available to qualifying OPN members at the Gold level or above. Partners are eligible to participate in the Oracle Exastack Ready program immediately, which will help them meet the requirements to attain Oracle Exastack Optimized status in the future. Guidelines for Oracle Exastack Optimized, as well as Oracle Exastack Labs will be available in fall 2011. Supporting Quotes "In order to effectively differentiate their software applications in the marketplace, ISVs need to rapidly deliver new capabilities and performance improvements," said Judson Althoff, Oracle senior vice president of Worldwide Alliances and Channels and Embedded Sales. "With Oracle Exastack, ISVs have the ability to optimize and deploy their applications with a complete, integrated and cloud-ready infrastructure that will help them accelerate innovation, unlock new features and functionality, and deliver superior value to customers." "We view performance as absolutely critical and a key differentiator," said Tom Stock, SVP of Product Management, GoldenSource. "As a leading provider of enterprise data management solutions for securities and investment management firms, with Oracle Exadata Database Machine, we see an opportunity to notably improve data processing performance -- providing high quality 'golden copy' data in a reduced timeframe. Achieving Oracle Exastack Optimized status will be a stamp of approval that our solution will provide the performance and scalability that our customers demand." "As a leading provider of Revenue Intelligence solutions for telecommunications, media and entertainment service providers, our customers continually demand more readily accessible, enriched and pre-analyzed information to minimize their financial risks and maximize their margins," said Alon Aginsky, President and CEO of cVidya Networks. "Oracle Exastack enables our solutions to deliver the power, infrastructure, and innovation required to transform our customers' business operations and stay ahead of the game." Supporting Resources Oracle PartnerNetwork (OPN) Oracle Exastack Oracle Exastack Datasheet Judson Althoff blog Connect with the Oracle Partner community at OPN on Facebook, OPN on LinkedIn, OPN on YouTube, or OPN on Twitter

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak’s new version (2.1) revealed to me few additional interesting features. For those of you who hadn’t read my previous reviews SafePeak and not familiar with it, here is a quick brief: SafePeak is in business of accelerating performance of SQL Server applications, as well as their scalability, without making code changes to the applications or to the databases. SafePeak performs database dynamic caching, by caching in memory result sets of queries and stored procedures while keeping all those cache correct and up to date. Cached queries are retrieved from the SafePeak RAM in microsecond speed and not send to the SQL Server. The application gets much faster results (100-500 micro seconds), the load on the SQL Server is reduced (less CPU and IO) and the application or the infrastructure gets better scalability. SafePeak solution is hosted either within your cloud servers, hosted servers or your enterprise servers, as part of the application architecture. Connection of the application is done via change of connection strings or adding reroute line in the c:\windows\system32\drivers\etc\hosts file on all application servers. For those who would like to learn more on SafePeak architecture and how it works, I suggest to read this vendor’s webpage: SafePeak Architecture. More interesting new features in SafePeak 2.1 In my previous review of SafePeak new I covered the first 4 things I noticed in the new SafePeak (check out my article “SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration”): Cache setup and fine-tuning – a critical part for getting good caching results Database templates Choosing which database to cache Monitoring and analysis options by SafePeak Since then I had a chance to play with SafePeak some more and here is what I found. 5. Analysis of SQL Performance (present and history): In SafePeak v.2.1 the tools for understanding of performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or a procedure execute) that arrives to SafePeak gets a SQL pattern, and after it is used again there are statistics for such pattern. An important part of this product is that it understands the dependencies of every pattern (list of tables, views, user defined functions and procs). From this understanding SafePeak creates important analysis information on performance of every object: response time from the database, response time from SafePeak cache, average response time, percent of traffic and break down of behavior. One of the interesting things this behavior column shows is how often the object is actually pdated. The break down analysis allows knowing the above information for: queries and procedures, tables, views, databases and even instances level. The data is show now on all arriving queries, both read queries (that can be cached), but also any types of updates like DMLs, DDLs, DCLs, and even session settings queries. The stats are being updated every 15 minutes and SafePeak dashboard allows going back in time and investigating what happened within any time frame. 6. 
Logon trigger, for making sure nothing corrupts SafePeak cache data If you have an application with many parts, many servers many possible locations that can actually update the database, or the SQL Server is accessible to many DBAs or software engineers, each can access some database directly and do some changes without going thru SafePeak – this can create a potential corruption of the data stored in SafePeak cache. To make sure SafePeak cache is correct it needs to get all updates to arrive to SafePeak, and if a DBA will access the database directly and do some changes, for example, then SafePeak will simply not know about it and will not clean SafePeak cache. In the new version, SafePeak brought a new feature called “Logon Trigger” to solve the above challenge. By special click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that actually monitors all connections and informs SafePeak on any connection that is coming not from SafePeak. In SafePeak dashboard there is an interface that allows to control which logins can be ignored based on login names and IPs, while the rest will invoke cache cleanup of SafePeak and actually locks SafePeak cache until this connection will not be closed. Important to note, that this does not interrupt any logins, only informs SafePeak on such connection. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. Configuration of this feature in SafePeak dashboard can be done here: Settings -> SQL instances management -> click on instance -> Logon Trigger tab. Other features: 7. User management ability to grant permissions to someone without changing its configuration and only use SafePeak as performance analysis tool. 8. Better reports for analysis of performance using 15 minute resolution charts. 9. Caching of client cursors 10. Support for IPv6 Summary SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability and peak spikes challenges. Especially if your apps are packaged or 3rd party, since no code changes are done. SafePeak can significantly increase response times, by reducing network roundtrip to the database, decreasing CPU resource usage, eliminating I/O and storage access. SafePeak team provides a free fully functional trial www.safepeak.com/download and actually provides a one-on-one assistance during such trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
    Ever wanted to use your server's OS for authenticating MySQL users ? Or the corporate LDAP repository ? Unfortunately options like the above are plentiful nowadays. And providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken the step into the right direction by providing an infrastructure allowing one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact the API supplied is so versatile that we took the possibility to re-design the current "native" authentication mechanism into a built-in always-on plugin ! OK, let me give you an example: Imagine we have a bunch of users defined in your OS, e.g. we have a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user. And there's a problem right there : MySQL's CREATE USER joro@localhost IDENTIFIED BY 'joros_password' statement needs a password. And this is a password in no way related to the password that joro have set up in the OS. What's worse : if joro changes his OS password this will in no way be reflected in MySQL. So he'll need to change his MySQL password in a separate step. Not very convenient, specially when you have a lot of users. This is a laborious setup for joro's DBA as well : he'll have to disable his access in both MySQL and the OS should he decides that joro's out of the "nice" list. Now mysql 5.5 to the rescue: Imagine that the smart DBA has created a MySQL server plugin that will check if the name of the user logging in is a valid and enabled OS name and if the password supplied to the mysql client matches the OS and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command : CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os'; Now joro can login to MySQL using his current OS password. Note : joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server. So you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational. What happens under the hood when joro logs in ? The server will find out by the user definition that it needs to use a non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if of course the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available) the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client side plugin and the OS user directory) if this is a valid login or not. If it is the login process will continue as usual, while if it's not the login will get rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above. 
Stay tuned for more advanced use cases like mapping groups of external users to a single MySQL user (so you won't have to have 1-to-1 mapping between your external user directory and your mysql user repository) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics from MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test. Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html Primary new sections: Pluggable authentication Proxy users Client plugin C API functions Revised sections: New PROXY privilege New proxies_priv grant table Passwords might be external New external_user and proxy_user system variables New --default-auth and --plugin-dir mysql options New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options() CREATE USER has IDENTIFIED WITH clause to specify auth plugin GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin The data structure for writing client plugins
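
    For the group-mapping use case teased above, the 5.5 building block is the PROXY privilege: many externally authenticated logins can be mapped onto one local account that actually carries the grants. A hedged sketch of the SQL involved (the plugin and account names are illustrative, not from the article):

        -- Sketch: external users validated by the 'auth_os' plugin act as the
        -- local 'staff_proxy' account, which holds the real privileges.
        CREATE USER 'external_staff'@'localhost' IDENTIFIED WITH 'auth_os';
        CREATE USER 'staff_proxy'@'localhost' IDENTIFIED BY 'strong_local_password';
        GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'staff_proxy'@'localhost';
        GRANT PROXY ON 'staff_proxy'@'localhost' TO 'external_staff'@'localhost';

    Whether a given external login is mapped, and to which local user, is ultimately decided by the server-side authentication plugin when it fills in the proxied user name.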

    Read the article

  • Best WordPress Video Themes for a Video Blog

    - by Matt
    WordPress has made blogging easy and fun, and there are plenty of video blog themes to pick from. Quality, however, is rare. We at JustSkins have gathered a list of high quality, tried and tested video themes. Themes built for vloggers are few, but some of them are brilliant premium WordPress themes. Here are the themes you can install on your vlog right now.

    On Demand 2.0: A fully featured premium WordPress video theme from Press75. Includes a theme options panel for customization and content management, post thumbnails, a drop down navigation menu, custom widgets and lots more. Demo | Price: $75 | DOWNLOAD
    VideoZoom: An outstanding premium WordPress video theme from WPZoom featuring standard video integration, plus it can play videos from all the popular video websites. VideoZoom also includes a featured video slider on the homepage, multiple post layout options, a theme options panel, WordPress 3.0 menus, custom backgrounds and more. Demo | Price: Single $69, Developer $149 | DOWNLOAD
    Vidley: Press75's easy to use premium WordPress video theme. It is full of great features and is a perfect choice if you intend to grow the site into a portal someday, since it scales to a news portal or portfolio layout. The theme is widget ready, lets you place Featured Content and Featured Category sections on the homepage, and its drop down menus are nifty. Demo | Price: $75 | DOWNLOAD
    Live: A premium WordPress video theme designed for streaming video and live event broadcasting. You can embed live broadcasts from third party services such as Ustream, and the theme features a prominent timer counting down to the next broadcast, rotating bumper images, Facebook and Twitter integration for viewer interaction, a theme admin options panel and more, making it one of a kind. Demo | Price: $99, Support License: $149 | DOWNLOAD
    Groovy Video: WooThemes is a pioneer in beautiful WordPress themes, and this one is built with the video blogger in mind. Groovy is a very colourful premium WordPress video blog theme. Creating video posts is as quick as pasting the video's embed code, the theme handles automatic video resizing, it ships with plenty of widgets, and it lets you pick the colour of your choice. Price: Single Use $70, Developer $150 | DOWNLOAD
    Video Flick: Another exciting video blogging theme by Press75. Video Flick works with any video service that provides embed code, and if you host your own videos it also supports FLV (Flash Video) and QuickTime formats. The theme lets you keep standard blog posts and/or video posts, with a light or dark color option. Demo | Price: $75 | DOWNLOAD
    Woo Tube: An excellent premium WordPress video theme from WooThemes. WooTube is a very easy video blog platform, with automatic video resizing, a completely widgetised sidebar and 7 different colour schemes to choose from. It can also be used as a normal blog or a gallery. A very wise choice! Price: Single Use $70, Developer $150 | DOWNLOAD
    eVid Theme: One of the nicest WordPress themes designed specifically for video bloggers. Simple to integrate videos from hosts such as YouTube, Vimeo, Veoh and MetaCafe. Demo | Price: $19 | DOWNLOAD
    Tubular: A premium WordPress video theme from StudioPress which can also be used as a simple website or blog. The theme is also available in a light color version. Demo | Price: $59.95 | DOWNLOAD
    Video Elements 2.0: Another beautiful premium WordPress video theme from Press75, redesigned to include the features you need to easily run and maintain a video blog on WordPress. Demo | Price: $75 | DOWNLOAD
    TV Elements 3.0: Includes a featured video carousel on the homepage which can display any number of videos, a featured category section which displays up to 12 channels, automatic thumbnails and lots more. Demo | Price: $75 | DOWNLOAD
    Wave: A beautiful premium WordPress video theme, flexible and super cool looking, with an earthy feel to the design. It has a featured video area and latest listings on the homepage. All in all a simple design with no fancy features. Demo | Price: $35 | DOWNLOAD

    Read the article

  • 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’

    - by Gopinath
    Humanity's quest to explore the surrounding planets and find out whether we can live there took a new shape today. NASA's Mars probing robot, Curiosity, blasted off on its nine month journey to Mars, where it will explore the planet for the possibility of life. Scientists say Curiosity is one of the most advanced rovers ever launched to probe for life on another planet. Here is the launch video and some analysis by a news reporter. Let's look at six interesting facts about the mission.

    1. It's as big as a car. Curiosity is the biggest rover NASA has ever launched to probe for life on other planets. It is as big as a car and almost double the size of its predecessor, the Spirit rover. Curiosity is around 9 feet 10 inches (3 meters) long, 9 feet 1 inch (2.8 meters) wide and 7 feet (2.1 meters) tall.

    2. Powered by plutonium, it runs 24x7 for 23 months. NASA's earlier Mars missions were solar powered, which limited what the rovers could do when the Sun was hiding; they could not traverse places that receive no sunlight. Curiosity, on the other hand, is equipped with a radioisotope power system that generates electricity from the heat emitted by the radioactive decay of plutonium. The plutonium weighs around 10 pounds and can generate the power the rover needs for close to 23 months. The best part of the new power system is that Curiosity can roam around in darkness or daylight, all year round.

    3. A rocket powered backpack for a science fiction style landing. Curiosity is so heavy that NASA could not use a parachute and airbags to drop the rover onto the surface of Mars as in previous missions. Instead, it will use a new, science fiction style landing mechanism similar to a sky crane heavy-lift helicopter. The landing begins with entry into the Mars atmosphere protected by a heat shield. At about 6 miles above the surface, the heat shield is jettisoned and a parachute is deployed to slow the rover's descent. At about 3 miles above the surface, the parachute is jettisoned and an eight motor rocket backpack takes over for a smooth, impact free landing. NASA has published an animation of the landing sequence, and if you want more detail on the landing process, a landing sequence picture is available on the NASA website.

    4. Equipped with a Star Wars style laser gun. Hollywood directors and novelists have always imagined aliens arriving on Earth in spaceships full of laser guns, blasting every object in their way. With Curiosity the equation changes: it carries a powerful laser gun that can beam a laser at rocks and vaporize them. This is not part of any assault mission; the laser gun will be used in experiments to detect life and understand nature.

    5. The most sophisticated laboratory, powered by 10 instruments. Around 10 state-of-the-art instruments are on board, and together they form the most advanced rover based lab NASA has ever built. Some instruments cut through rocks to examine them, others search for organic compounds. Mounted cameras can study targets from a distance, arm mounted instruments can study the targets they touch, and a microscopic lens attached to the arm can see and magnify objects as tiny as 12.5 micrometers.

    6. The rover carries 1.24 million names etched on silicon. In early June 2009 NASA launched a campaign called "Send Your Name to Mars" and around 1.24 million people registered their names through NASA's website. All 1.24 million names are etched on silicon chips mounted on Curiosity's deck. If you registered your name in the campaign, it may soon be on its way to Mars.

    Curiosity on the web. If you wish to follow the mission, here are a few links to help you: NASA's Curiosity Web Page, Curiosity on Facebook, @MarsCuriosity on Twitter, an artistic gallery image of the Mars rover Curiosity, and a printable sheet of the Curiosity mission [pdf]. Images credit: NASA.

    This article titled, 6 Interesting Facts About NASA's Mars Rover 'Curiosity', was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • ASP.NET MVC HandleError Attribute

    - by Ben Griswold
    Last Wednesday, I took a whopping 15 minutes out of my day and added ELMAH (Error Logging Modules and Handlers) to my ASP.NET MVC application. If you haven't heard the news (I hadn't until recently), ELMAH does a killer job of logging and reporting nearly all unhandled exceptions. As for handled exceptions, I've been using NLog, but since I was already playing with the ELMAH bits I thought I'd see if I couldn't replace it. Atif Aziz provided a quick solution in his answer to a Stack Overflow question. I'll let you consult his answer to see how one can subclass the HandleErrorAttribute and override the OnException method to get the bits working. I pretty much rolled the recommended logic into my application and it worked like a charm. Along the way, I uncovered a few HandleError facts to which I wasn't already privy. Most of my learning came from Steven Sanderson's book, Pro ASP.NET MVC Framework. I've flipped through a bunch of the book and spent time on specific sections; it's a really good read if you're looking to pick up an ASP.NET MVC reference. Anyway, my notes are found as comments in the following code snippet. I hope they clarify a few things for you too.

    public class LogAndHandleErrorAttribute : HandleErrorAttribute
    {
        public override void OnException(ExceptionContext context)
        {
            // A word from our sponsors:
            //     http://stackoverflow.com/questions/766610/how-to-get-elmah-to-work-with-asp-net-mvc-handleerror-attribute
            //     and Pro ASP.NET MVC Framework by Steven Sanderson
            //
            // Invoke the base implementation first. This should mark context.ExceptionHandled = true,
            // which stops ASP.NET from producing a "yellow screen of death." It also sets the
            // HTTP status code to 500 (internal server error).
            //
            // Assuming Custom Errors aren't off, the base implementation will trigger the application
            // to ultimately render the "Error" view from one of the following locations:
            //
            //     1. ~/Views/Controller/Error.aspx
            //     2. ~/Views/Controller/Error.ascx
            //     3. ~/Views/Shared/Error.aspx
            //     4. ~/Views/Shared/Error.ascx
            //
            // "Error" is the default view; however, a specific view may be provided as an attribute property.
            // A notable point: the Custom Errors defaultRedirect is not considered in the redirection plan.
            base.OnException(context);

            var e = context.Exception;

            // If the exception is unhandled, simply return and let ELMAH handle the unhandled exception.
            // Otherwise, try error signaling, which involves the fully configured pipeline (logging,
            // mailing, filtering and what have you). Failing that, see if the error should be filtered.
            // If not, simply log the exception.
            if (!context.ExceptionHandled
                || RaiseErrorSignal(e)
                || IsFiltered(context))
                return;

            LogException(e); // FYI: simple ELMAH logging doesn't handle mail notifications.
        }
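
    The excerpt above ends before showing how the attribute gets applied, so here is a minimal sketch of the two usual ways to wire up such an attribute; it is my own illustration, not part of the original article. The HomeController, its throwing action, and the MvcApplication class are hypothetical, and the global-filter registration assumes ASP.NET MVC 3 or later. Remember that the base HandleErrorAttribute behavior only kicks in when customErrors is enabled in web.config.

        // Hypothetical usage sketch (not from the original post).
        using System;
        using System.Web;
        using System.Web.Mvc;

        // Option 1: decorate an individual controller (or a shared base controller)
        // so its actions get the logging and handling behavior.
        [LogAndHandleError]
        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                // Deliberately throw so the attribute's OnException path can be exercised.
                throw new InvalidOperationException("Boom");
            }
        }

        // Option 2 (ASP.NET MVC 3+): register the attribute as a global filter in Global.asax.cs
        // so every controller gets the behavior without per-class attributes.
        public class MvcApplication : HttpApplication
        {
            protected void Application_Start()
            {
                GlobalFilters.Filters.Add(new LogAndHandleErrorAttribute());
                // Route registration, area registration, etc. would follow here.
            }
        }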

    Read the article

  • Top 4 Lame Tech Blogging Posts

    - by jkauffman
    From a consumption point of view, tech blogging is a great resource for one-off articles on niche subjects. If you spend any time reading tech blogs, you may find yourself running into several common, useless types of posts that tech bloggers slip into. Some of these lame posts may simply come from common nerd psychology, and others are probably due to lame, lemming-like laziness. I'm sure I'll do my fair share of fitting the mold, but I quickly get bored when I happen upon posts that hit these patterns without any real purpose or personal touch.

    1. The Content Regurgitation Posts. This is a common pattern fueled by the starving pan-handlers of the web traffic economy. These posts are terse opinions or addendums to an existing post, and they commonly lean on huge block quotes from the linked article that make up over 50% of the post itself. I've accidentally landed on these posts when I was only interested in the source material. Web links degrade as well, so if the source link is broken, then, well, I'm pretty steamed. I see this happen with simple opinions on technologies, Stack Overflow solutions, or various tech news such as posts from Microsoft. It's not uncommon to visit the linked article and see the author announce that he "added a blog post" as a response or summary of the topic. This is just rude, but those who do it are probably aware of that; it's a matter of winning that sweet, juicy web traffic. I doubt this leeching is fooling anybody these days. I would like to rally human dignity and urge people to avoid these types of posts and just leave a comment on the source material.

    2. The "Sorry I Haven't Posted In A While" Posts. This one is far too common. You'll most likely see this quote somewhere in the body of the offending post: "I have been really busy." If the poster is especially guilt-ridden, you'll see a few volleys of excuses. Here are some common reasons I've seen, listed from least to most painfully awkward: out of town; vague allusions to personal health problems (these typically include phrases like "sick", "treatment" and "all better now!"); "personal issues" (which I usually read as "divorce"); and graphic or specific personal health problems (maximum awkwardness potential is achieved if you see links to charity fund websites). I can't help but try to over-analyze why this occurs. Personally, I see it as an amalgamation of three plain factors: life happens; us nerds are duty-driven, and driven to guilt by personal inefficiencies; and tech blogs can become personal journals. I don't think we can do much about the first two, but on the third I think we could certainly contain our urges. I'm a pretty boring guy and, whether I like it or not, I have an unspoken duty to protect the world from hearing about my unremarkable existence. Nobody cares what kind of sandwich I'm eating. Similarly, if I disappear for a while, it's unlikely that anybody who happens upon my blog would care why. Rest assured, if I stop posting for a while due to a vasectomy, you will be the first to know.

    3. The "At A Conference" or "Conference Review" Posts. I don't know if I'm like everyone else on this one, but I have never been successfully interested in these posts. It even sounds like a good idea: if I can't make it to a particular conference (like the KCDC this year), wouldn't I be interested in a concentrated summary of events? Apparently, no! Within this realm, I've never read a post by a blogger that held my interest. What really baffles me is that, for whatever reason, I am genuinely engaged and interested when talking to someone in person about the same topic. I have noticed the same phenomenon when hearing about others' vacations. If someone sends me an email about their vacation, I gloss over it and forget about it quickly. In contrast, if I'm speaking to that individual in person about their vacation, I'm actually interested. I'm unsure why the written medium eradicates the intrigue. I was raised by a roaming pack of friendly wild video games, so that may be a factor.

    4. The "Top X Number of Y's That Z" Posts. I've seen this one crop up a lot more in the past few years. Here are some fabricated examples: 5 Easy Ways to Improve Your Code; Top 7 Good Habits Programmers Learn From Experience; The 8 Things to Consider When Giving Estimates; Top 4 Lame Tech Blogging Posts. These are attention-grabbing headlines, and I'd assume they rack up hits. In fact, I enjoy a good number of them. But I've also been drawn to articles like this only to find an endless list of identically formatted posts in the blog's archive sidebar, often with overlapping topics. These posts give the impression that the author prioritized and organized the points after comprehensive consideration of the topic. Did the author really weigh all the possibilities when identifying the "Top 4 Lame Tech Blogging Patterns"? Unfortunately, probably not. What a tool. To reiterate, I still enjoy the format, but I feel it is abused. Nowadays I'm pretty skeptical when approaching posts in this format. If these trends continue, my brain will filter these blog posts out just as effectively as it ignores the encroaching "do xxx with this one trick" advertisements.

    Conclusion. To active blog readers, I hope this guide saves you precious time by helping you identify lame blog posts at a glance. Save time and energy by skipping over the chaff of the internet! And if you author a blog, perhaps my insight will help you avoid the occasional urge to produce these needless filler posts.

    Read the article
