Search Results

Search found 12925 results on 517 pages for 'email routing'.


  • php zencart mod - having problems with attributes array

    - by user80151
    I inherited a zencart mod and can't figure out what's wrong. The customer selects a product and an attribute (model#). This is then sent to another form that they complete. When they submit the form, the product and the attribute should be included in the email sent. At this time, only the product is coming through. The attribute just says "array." The interesting part is, when I delete the line that prints the attribute, the products_options_names will print out. So I know that both the product and the products_options_names are working. The attribute is the only thing that is not working right. Here's what I believe to be the significant code. This is the page that has the form, so the attribute should already be passed to the form. //Begin Adding of New features //$productsimage = $product['productsImage']; $productsname = $product['productsName']; $attributes = $product['attributes']; $products_options_name = $value['products_options_name']; $arr_product_list[] = "<strong>Product Name:</strong> $productsname <br />"; $arr_product_list[] .= "<strong>Attributes:</strong> $attributes <br />"; $arr_product_list[] .= "<strong>Products Options Name:</strong> $products_options_name <br />"; $arr_product_list[] .= "---------------------------------------------------------------"; //End Adding of New features } // end foreach ($productArray as $product) ?> Above this, there is another section that has attributes: <?php echo $product['attributeHiddenField']; if (isset($product['attributes']) && is_array($product['attributes'])) { echo '<div class="cartAttribsList">'; echo '<ul>'; reset($product['attributes']); foreach ($product['attributes'] as $option => $value) { ?> Can anyone help me figure out what is wrong? I'm not sure if the problem is on this page or if the attribute isn't being passed to this page. TIA
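
    A note on the symptom: echoing a PHP array inside a string prints the literal word "Array", which is why the attribute shows up that way in the email. The sketch below is not the exact Zen Cart fix, just the general approach: flatten $product['attributes'] into readable text before adding it to the mail lines. The 'products_options_values_name' key is an assumption based on the loop shown above, so confirm the real key names first.

    ```php
    <?php
    // Rough sketch: build "Option: Value" text instead of echoing the array.
    // Key names are assumptions -- var_dump() one $value from the
    // foreach ($product['attributes'] as $option => $value) loop to confirm.
    $attribute_lines = array();

    if (isset($product['attributes']) && is_array($product['attributes'])) {
        foreach ($product['attributes'] as $option => $value) {
            $name   = isset($value['products_options_name']) ? $value['products_options_name'] : $option;
            $chosen = isset($value['products_options_values_name']) ? $value['products_options_values_name'] : '';
            $attribute_lines[] = trim($name . ': ' . $chosen);
        }
    }

    $productsname = $product['productsName'];

    $arr_product_list[] = "<strong>Product Name:</strong> $productsname <br />";
    $arr_product_list[] = "<strong>Attributes:</strong> " . implode(', ', $attribute_lines) . " <br />";
    $arr_product_list[] = "---------------------------------------------------------------";
    ```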

    Read the article

  • How can I maintain the last cookie value in Flex with JSP?

    - by praveen
    Hi All, my login form in flex when I login I have created a cookie in jsp like this name setValueCookie.jsp <%@ page language="java" import="java.util.* , javax.net.*" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <% String username = request.getParameter("value"); System.out.println("Email got in cookieSet = " + username); if(username==null) username=""; Date now = new Date(); String timestamp = now.toString(); Cookie cookie = new Cookie("username",username); cookie.setMaxAge(365 * 24 * 60 * 60); response.addCookie(cookie); %> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>DashBoard-Cookie</title> </head> <body> </body> </html> now using Http service request parameter i am passing username 'Value' to this jsp. and i am reading cookie value from getValueCookie.jsp like this <% String cookieName = "username"; Cookie cookies [] = request.getCookies (); Cookie myCookie = null; String result; if (cookies != null) { for (int i = 0; i < cookies.length; i++) { if (cookies [i].getName().equals (cookieName)) { myCookie = cookies[i]; break; } } } %> <data> <status><%=myCookie.getValue().toString()%></status> </data> through the httpservice value i am getting but if i open a new window or any new tab cookie value is not getting how can i solve this? Thanks in advance.

    Read the article

  • ASP.Net MVC + Live validation - how come the flagged text are all over the place?

    - by melaos
    hi guys, this is an asp.net mvc project and <% using (Html.BeginForm("ProductAdded", "Home")) { % Register Your Product <%= ViewData["MainHeader"]% <p><%=ViewData["IntroText"]%></p> <div style="display: none;"> <div id="regionThreePane"> <table border="0" cellpadding="0" cellspacing="0" frame="void" style="width: 100%"> <tr> <td width='250px'><select name="ProdLBox1" id="ProdLBox1" class="ProdLBox1" size="8"></select></td> <td width='250px'><select name="ProdLBox2" id="ProdLBox2" class="ProdLBox2" size="8"></select></td> <td width='250px'><select name="ProdLBox3" id="ProdLBox3" class="ProdLBox3" size="8"></select></td> </tr> </table> </div> i'm using live validation for my client side validation. var v_fname = new LiveValidation('Customer_FirstName', { validMessage: " " }, { onlyOnSubmit: true }); v_fname.add(Validate.Presence, { failureMessage: enterfirstname}); var v_lname = new LiveValidation('Customer_LastName', { validMessage: " " }); v_lname.add(Validate.Presence, { failureMessage: enterlastname }); var v_email = new LiveValidation('Customer_Email', { validMessage: " " }); v_email.add(Validate.Presence, { failureMessage: enteremail, validMessage: " " }); v_email.add(Validate.Email, { failureMessage: entervalidemail}); and what i notice is that after doing some button call: $(".btnAddProduct").click(function() { //Check first to see if there's anything to be added if (parseFloat($(".tboAddProduct").val()) < 1) { //TO DO: to replace with localized text var selectProductError = "Please select a product first"; $("#validationSummary").text(selectProductError); //alert("Please select a product first"); return false; } $(".PanelProductReg").show(); addProductRow($(".tboAddProductId").val(), $("#tboAddProduct").val()); }); what will happen is that the validation tags will start to appear for the whole page for all the input which are tag for the live validation. instead of just appearing when the controls are being higlighted and onblur. i'm using some ajax calls to get data and a lot of jquery to dynamically do the gui stuff. could any of this be causing some sort of an internal conflict? thanks

    Read the article

  • Repeat Customers Each Year (Retention)

    - by spazzie
    I've been working on this and I don't think I'm doing it right. |D Our database doesn't keep track of how many customers we retain so we looked for an alternate method. It's outlined in this article. It suggests you have this table to fill in: Year Number of Customers Number of customers Retained in 2009 Percent (%) Retained in 2009 Number of customers Retained in 2010 Percent (%) Retained in 2010 .... 2008 2009 2010 2011 2012 Total The table would go out to 2012 in the headers. I'm just saving space. It tells you to find the total number of customers you had in your starting year. To do this, I used this query since our starting year is 2008: select YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers from dbo.tblOrder where OrderDate >= '2008-01-01' and OrderDate <= '2008-12-31' group by YEAR(OrderDate) At the moment we just differentiate our customers by email address. Then you have to search for the same names of customers who purchased again in later years (ours are 2009, 10, 11, and 12). I came up with this. It should find people who purchased in both 2008 and 2009. SELECT YEAR(OrderDate) as 'Year',COUNT(distinct(billemail)) as Customers FROM dbo.tblOrder o with (nolock) WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail FROM dbo.tblOrder o1 with (nolock) WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1') AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail FROM dbo.tblOrder o2 with (nolock) WHERE o2.OrderDate BETWEEN '2009-1-1' AND '2010-1-1') --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1' AND o.BillEmail NOT LIKE '%@halloweencostumes.com' AND o.BillEmail NOT LIKE '' GROUP BY YEAR(OrderDate) So I'm just finding the customers who purchased in both those years. And then I'm doing an independent query to find those who purchased in 2008 and 2010, then 08 and 11, and then 08 and 12. This one finds 2008 and 2010 purchasers: SELECT YEAR(OrderDate) as 'Year',COUNT(distinct(billemail)) as Customers FROM dbo.tblOrder o with (nolock) WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail FROM dbo.tblOrder o1 with (nolock) WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1') AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail FROM dbo.tblOrder o2 with (nolock) WHERE o2.OrderDate BETWEEN '2010-1-1' AND '2011-1-1') --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1' AND o.BillEmail NOT LIKE '%@halloweencostumes.com' AND o.BillEmail NOT LIKE '' GROUP BY YEAR(OrderDate) So you see I have a different query for each year comparison. They're all unrelated. So in the end I'm just finding people who bought in 2008 and 2009, and then a potentially different group that bought in 2008 and 2010, and so on. For this to be accurate, do I have to use the same grouping of 2008 buyers each time? So they bought in 2009 and 2010 and 2011, and 2012? This is where I'm worried and not sure how to proceed or even find such data. Any advice would be appreciated! Thanks!

    Read the article

  • PHP, MySQL - My own version of SALT (I call salty) - Login Issue

    - by Fabio Anselmo
    Ok I wrote my own version of SALT I call it salty lol don't make fun of me.. Anyway the registration part of my script as follows is working 100% correctly. //generate SALTY my own version of SALT and I likes me salt.. lol function rand_string( $length ) { $chars = "ABCDEFGHIJKLMNOPQRSTUWXYZabcdefghijklmnopqrstuwxyz1234567890"; $size = strlen( $chars ); for( $i = 0; $i < $length; $i++ ) { $str .= $chars[ rand( 0, $size - 1 ) ]; } return $str; } $salty = rand_string( 256 ); //generate my extra salty pw $password = crypt('password'); $hash = $password . $salty; $newpass = $hash; //insert the data in the database include ('../../scripts/dbconnect.php'); //Update db record with my salty pw ;) // TESTED WITH AND WITHOUT SALTY //HENCE $password and $newpass mysql_query("UPDATE `Register` SET `Password` = '$password' WHERE `emailinput` = '$email'"); mysql_close($connect); However my LOGIN script is failing. I have it setup to TEST and echo if its login or not. It always returns FAILED. I entered the DB and changed the crypted salty pw to "TEST" and I got a SUCCESS. So my problem is somewhere in this LOGIN script I assume. Now I am not sure how to implement my $Salty in this. But also be advised that even without SALTY (just using crypt to store my pass) - I was still unable to perform a login successfully. And if you're gonna suggest i use blowfish - note that my webhost doesn't have it supported and i don't know how to install it. here's my login script: if (isset($_POST['formsubmitted'])) { include ('../../scripts/dbconnect.php'); $username = mysql_real_escape_string($_POST['username']); $password = crypt(mysql_real_escape_string($_POST['password'])); $qry = "SELECT ID FROM Register WHERE emailinput='$username' AND Password='$password'"; $result = mysql_query($qry); if(mysql_num_rows($result) > 0) { echo 'SUCCESS'; //START SESSION } else { echo 'FAILED'; //YOU ARE NOT LOGGED IN } } So what's wrong with this login? Why isn't it working just using the crypt/storing only crypt? How can i make it work storing both the crypt and randomly generated SALTY :) ? Ty advance
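
    For what it's worth, the usual reason this pattern fails: crypt() called without a salt picks a fresh random salt on every call, so the hash computed at login can never match the stored one, and the random 256-character "salty" string appended after hashing can't be reproduced at login unless it is stored as well. Below is a minimal sketch of the standard crypt() verify pattern (store the whole crypt() output, then pass it back in as the salt when checking). PDO is used here as a stand-in for the mysql_* calls, the table and column names are taken from the question, and on PHP 5.5+ password_hash()/password_verify() do this work for you.

    ```php
    <?php
    // Sketch only -- connection details are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');

    function register_user(PDO $pdo, $email, $plainPassword)
    {
        // crypt() with no explicit salt embeds the random salt it chose in the
        // result, so the whole string must be stored untouched.
        $hash = crypt($plainPassword);
        $stmt = $pdo->prepare('UPDATE Register SET Password = ? WHERE emailinput = ?');
        $stmt->execute(array($hash, $email));
    }

    function login_user(PDO $pdo, $email, $plainPassword)
    {
        $stmt = $pdo->prepare('SELECT ID, Password FROM Register WHERE emailinput = ?');
        $stmt->execute(array($email));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if (!$row) {
            return false;
        }
        // Re-hash the submitted password using the stored hash as the salt;
        // a match means the password is correct.
        return crypt($plainPassword, $row['Password']) === $row['Password'];
    }
    ```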

    Read the article

  • assign a model's attribute through association

    - by justcode
    I'm new to rails and working on a rails app and I'm stuck pondering this issue. I have three models class product < ActiveRecord::Base attr_accessible :name, :issn, :category validates_presence_of :name, :issn, :category validates_numericality_of :issn, :message => "has to be a number" has_many :user_products has_many :users, :through => :user_products end class UserProduct < ActiveRecord::Base attr_accessible :price, :category validates_presence_of :price, :category validates_numericality_of :price, :message = "has to be a number" belongs_to :user belongs_to :product end class user < ActiveRecord::Base # devise authentication here # Setup accessible (or protected) attributes for your model attr_accessible :email, :password, :password_confirmation, :remember_me has_many :user_products has_many :products, :through = :user_products end here is my new.html.erb <div class="MainBodyWrapper"> <div class="span8"> <div id="listBoxWrapper"> <fieldset> <%= form_for(@product, :html => { :class => "form-inline" }, :style => "margin-bottom: 60px" ) do |f| %> <div class="control-group"> <label class="control-label" for="name">name</label> <div class="controls"> <%= f.text_field :price, :class => 'input-xlarge input-name', :id => "name" %> </div> </div> <div class="listingButtons"> <button class="btn btn-info"></i>Add</button> <a class="btn">Upload Pictures (anytime)</a> </div> </fieldset> </div> </div> There are reasons why I want to setup the models this way. So the question is this: I want the user to enter the info for the product in the form but it also involves putting in the price of the product which exists in a different model/table (user_product) that is associated with product. How can I do this? You can see that my form_for uses @product. Any help will be appreciated.

    Read the article

  • What do I need to change to get this 'acts_as_rateable' Rails plugin working with this code from the

    - by tepidsam
    Hello! I'm working my way through the 'Foundation Rails 2' book. In Chapter 9, we are building a little "plugins" app. In the book, he installs the acts_as_rateable plugin found at http://juixe.com/svn/acts_as_rateable. This plugin doesn't appear to exist in 2010 (the page for this "old" plugin seems to be working again...it was down when I tried it earlier), so I found another acts_as_rateable plugin at http://github.com/azabaj/acts_as_rateable. These plugins are different and I'm trying to get the "new" plugin working with the code/project/app from the Foudations 2 book. I was able to use the migration generator included with the "new" plugin and it worked. Now,though, I'm a bit confused. In the book, he has us create a new controller to work with the plugin (the old plugin). Here's the code for that: class RatingsController < ApplicationController def create @plugin = Plugin.find(params[:plugin_id]) rating = Rating.new(:rating => params[:rating]) @plugin.ratings << average_rating redirect_to @plugin end end Then he had us change the routing: map.resources :plugins, :has_many => :ratings map.resources :categories map.root :controller => 'plugins' Next, we modified the following to the 'show' template so that it looks like this: <div id="rate_plugin"> <h2>Rate this plugin</h2> <ul class="star-rating"> <li> <%= link_to "1", @plugin.rate_it(1), :method => :post, :title => "1 star out of 5", :class => "one-star" %> </li> <li> <%= link_to "2", plugin_ratings_path(@plugin, :rating => 2), :method => :post, :title => "2 stars out of 5", :class => "two-stars" %> </li> <li> <%= link_to "3", plugin_ratings_path(@plugin, :rating => 3), :method => :post, :title => "3 stars out of 5", :class => "three-stars" %> </li> <li> <%= link_to "4", plugin_ratings_path(@plugin, :rating => 4), :method => :post, :title => "4 stars out of 5", :class => "four-stars" %> </li> <li> <%= link_to "5", plugin_ratings_path(@plugin, :rating => 5), :method => :post, :title => "5 stars out of 5", :class => "five-stars" %> </li> </ul> </div> The problem is that we're using code for a different plugin. When I try to actually "rate" one of the plugins, I get an "unknown attribute" error. That makes sense. I'm using attribute names for the "old" plugin, but I should be using attributes names for the "new" plugin. The problem is...I've tried using a variety of different attribute names and I keep getting the same error. From the README for the "new" plugin, I think I should be using some of these but I haven't been able to get them working: Install the plugin into your vendor/plugins directory, insert 'acts_as_rateable' into your model, then restart your application. class Post < ActiveRecord::Base acts_as_rateable end Now your model is extended by the plugin, you can rate it ( 1-# )or calculate the average rating. @plugin.rate_it( 4, current_user.id ) @plugin.average_rating #=> 4.0 @plugin.average_rating_round #=> 4 @plugin.average_rating_percent #=> 80 @plugin.rated_by?( current_user ) #=> rating || false Plugin.find_average_of( 4 ) #=> array of posts See acts_as_rateable.rb for further details! I'm guessing that I might have to make some "bigger" changes to get this plugin working. I'm asking this question to learn and only learn. The book gives code to get things working with the old plugin, but if one were actually building this app today it would be necessary to use this "new" plugin, so I would like to see how it could be used. Cheers! Any ideas on what I could change to get the "new" plugin working?

    Read the article

  • php parse error always on the last line

    - by is0lated
    I'm trying to read a comment that is stored in a mysql table. For some reason I always get a parse error on the last line of the file even if the last line is blank. I'm not sure if it's relevant but the connect.php works for putting the comment into the database. I'm using wampserver to host it and coding it by hand. I think that's it's something to do with the while loop, when I comment out the while(){ and the } near the end I just get a few missing variable errors as you would expect. I'm quite new to php coding so I'm pretty sure the problem will be something simple that I've either missed or not understood properly. Anyway, here's my code: <?php include "connect.php"; ?> <?php $sql = "SELECT * FROM main"; $result = mysql_query($sql) or die("Could not get posts from table"); while($rows=mysql_fetch_array($result)){ ?> <table bgcolor="green" align="center"> <tr> <td></td> </tr> <tr> <td><strong> <? echo $rows['name']; ?> </strong></td> </tr> <tr> <td> <? echo $rows['email']; ?> </td> </tr> <tr> <td> <? echo $rows['comment']; ?> </td> </tr> </table> <? } ?> Thanks for the help. :)
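
    A hedged guess at the cause: a parse error reported on the last line usually means a block opened earlier never gets closed as far as the parser can see. If short_open_tag is disabled on the WAMP install, the short <? } ?> block is not treated as PHP at all, so the { opened by the while loop is never closed. The sketch below produces the same output using only full <?php tags and the while/endwhile alternative syntax, which sidesteps that setting and makes unclosed blocks easier to spot; it assumes connect.php opens the mysql connection as before.

    ```php
    <?php
    // Sketch of the same page using full PHP tags only.
    include "connect.php";

    $result = mysql_query("SELECT * FROM main") or die("Could not get posts from table");
    ?>
    <?php while ($rows = mysql_fetch_array($result)) : ?>
    <table bgcolor="green" align="center">
      <tr><td></td></tr>
      <tr><td><strong><?php echo $rows['name']; ?></strong></td></tr>
      <tr><td><?php echo $rows['email']; ?></td></tr>
      <tr><td><?php echo $rows['comment']; ?></td></tr>
    </table>
    <?php endwhile; ?>
    ```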

    Read the article

  • SQL - How to retrieve the correct data using PHP

    - by Programatt
    I am new to SQL so please excuse my question if it is simple. I have a database with a few tables. 1 is a users table, the others are application tables that contain the users preferences for receiving notifications about that application based on the country they are interested in. What I want to do, is retrieve the e-mail address of all users that have an interest in that country. I am struggling to think about how to do this. I currently have the following query constructed, and the code to populate the values function check($string) { if (isset($_POST[$string])) { $print = implode(', ', $_POST[$string]); //Converts an array into a single string $imanageSQLArr = Array(); if (substr_count($print,'Benelux') > 0) { $imanageSQLArr[0] = "checked"; } else { $imanageSQLArr[0] = "off"; } if (substr_count($print, 'France') > 0) { $imanageSQLArr[1] = "checked"; } else { $imanageSQLArr[1] = "off"; } if (substr_count($print, 'Germany') > 0) { $imanageSQLArr[2] = "checked"; } else { $imanageSQLArr[2] = "off"; } if (substr_count($print, 'Italy') > 0) { $imanageSQLArr[3] = "checked"; } else { $imanageSQLArr[3] = "off"; } if (substr_count($print, 'Netherlands') > 0) { $imanageSQLArr[4] = "checked"; } else { $imanageSQLArr[4] = "off"; } if (substr_count($print, 'Portugal') > 0) { $imanageSQLArr[5] = "checked"; } else { $imanageSQLArr[5] = "off"; } if (substr_count($print, 'Spain') > 0) { $imanageSQLArr[6] = "checked"; } else { $imanageSQLArr[6] = "off"; } if (substr_count($print, 'Sweden') > 0) { $imanageSQLArr[7] = "checked"; } else { $imanageSQLArr[7] = "off"; } if (substr_count($print, 'Switzerland') > 0) { $imanageSQLArr[8] = "checked"; } else { $imanageSQLArr[8] = "off"; } if (substr_count($print, 'UK') > 0) { $imanageSQLArr[9] = "checked"; } else { $imanageSQLArr[9] = "off"; } and the query $tocheck = $db->prepare( "SELECT users.email FROM users,app WHERE users.id=app.userid AND BENELUX=:BENELUX AND FRANCE=:FRANCE AND GERMANY=:GERMANY AND ITALY=:ITALY AND NETHERLANDS=:NETHERLANDS AND PORTUGAL=:PORTUGAL AND SPAIN=:SPAIN AND SWEDEN=:SWEDEN AND SWITZERLAND=:SWITZERLAND AND UK=:UK"); $tocheck->execute($country); $row = $tocheck->fetchAll(); This does retrieve data, but only people who's preferences match EXACTLY what is put (so what they haven't selected is taken into account as much as what they have). Any help would be greatly appreciated.
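
    If the intent is "every user interested in at least one of the selected countries", the chain of ANDs needs to become ORs over just the ticked countries; ANDing all ten columns forces the stored preferences to match the submitted form exactly, unselected countries included. A sketch under the question's own assumptions (the country column names, the 'checked'/'off' values, and the $db prepare/execute handle):

    ```php
    <?php
    // Sketch: collect a condition for each country ticked on the form,
    // then OR them together so any one match is enough.
    $countries = array('BENELUX', 'FRANCE', 'GERMANY', 'ITALY', 'NETHERLANDS',
                       'PORTUGAL', 'SPAIN', 'SWEDEN', 'SWITZERLAND', 'UK');

    $conditions = array();
    foreach ($countries as $i => $col) {
        // $imanageSQLArr is the 'checked'/'off' array built from the form above
        if (isset($imanageSQLArr[$i]) && $imanageSQLArr[$i] === 'checked') {
            $conditions[] = "app.$col = 'checked'";
        }
    }

    if ($conditions) {
        $sql = "SELECT users.email
                FROM users
                JOIN app ON users.id = app.userid
                WHERE (" . implode(' OR ', $conditions) . ")";
        $tocheck = $db->prepare($sql);
        $tocheck->execute();
        $rows = $tocheck->fetchAll();
    }
    ```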

    Read the article

  • Rails nested form for belongs_to

    - by user1232533
    I'm new to rails and have some troubles with creating a nested form. My models: class User < ActiveRecord::Base belongs_to :company accepts_nested_attributes_for :company, :reject_if => :all_blank end class Company < ActiveRecord::Base has_many :users end Now i would like to create a new company from the user sign_up page (i use Devise btw) by given only a company name. And have a relation between the new User and new Company. In the console i can create a company for a existing User like this: @company = User.first.build_company(:name => "name of company") @company.save That works, but i can't make this happen for a new user, in my new user sign_up form i tried this (i know its wrong by creating a new User fist but im trying to get something working here..): <%= simple_form_for(resource, :as => resource_name, :html => { :class => 'form-horizontal' }, :url => registration_path(resource_name)) do |f| %> <%= f.error_notification %> <div class="inputs"> <% @user = User.new company = @user.build_company() %> <% f.fields_for company do |builder| %> <%= builder.input :name, :required => true, :autofocus => true %> <% end %> <%= f.input :email, :required => true, :autofocus => true %> <%= f.input :password, :required => true %> <%= f.input :password_confirmation, :required => true %> </div> <div class="form-actions"> <%= f.button :submit, :class => 'btn-primary', :value => 'Sign up' %> </div> I did my best to google for a solution/ example.. found some nested form examples but it's just not clear to me how to do this. Really hope somebody can help me with this. Any help on this would be appreciated. Thanks in advance! Greets, Daniel

    Read the article

  • How to avoid this Facebook app API login page?

    - by user1035140
    I got a problem regrading with my apps which is once I go to my apps, it sure will show me a login page instead of allow page? it always display the login page 1st then only display allow page, I had tried other apps, if I am 1st time user, It sure will appear the allow page only, it did not show me the login page. my question is how to I avoid my login page direct go to allow page? here is my login page picture here is my apps link https://apps.facebook.com/christmas_testing/ here is my facebook php jdk api coding <?php $fbconfig['appid' ] = "XXXXXXXXXXXXX"; $fbconfig['secret'] = "XXXXXXXXXXXXX"; $fbconfig['baseUrl'] = "myserverlink"; $fbconfig['appBaseUrl'] = "http://apps.facebook.com/christmas_testing/"; if (isset($_GET['code'])){ header("Location: " . $fbconfig['appBaseUrl']); exit; } if (isset($_GET['request_ids'])){ //user comes from invitation //track them if you need header("Location: " . $fbconfig['appBaseUrl']); } $user = null; //facebook user uid try{ include_once "facebook.php"; } catch(Exception $o){ echo '<pre>'; print_r($o); echo '</pre>'; } // Create our Application instance. $facebook = new Facebook(array( 'appId' => $fbconfig['appid'], 'secret' => $fbconfig['secret'], 'cookie' => true, )); //Facebook Authentication part $user = $facebook->getUser(); $loginUrl = $facebook->getLoginUrl( array( 'scope' => 'email,publish_stream,user_birthday,user_location,user_work_history,user_about_me,user_hometown' ) ); if ($user) { try { // Proceed knowing you have a logged in user who's authenticated. $user_profile = $facebook->api('/me'); } catch (FacebookApiException $e) { //you should use error_log($e); instead of printing the info on browser d($e); // d is a debug function defined at the end of this file $user = null; } } if (!$user) { echo "<script type='text/javascript'>top.location.href = '$loginUrl';</script>"; exit; } //get user basic description $userInfo = $facebook->api("/$user"); function d($d){ echo '<pre>'; print_r($d); echo '</pre>'; } ?

    Read the article

  • Issue with SQL query for activity stream/feed

    - by blabus
    I'm building an application that allows users to recommend music to each other, and am having trouble building a query that would return a 'stream' of recommendations that involve both the user themselves, as well as any of the user's friends. This is my table structure: Recommendations ID Sender Recipient [other columns...] -- ------ --------- ------------------ r1 u1 u3 ... r2 u3 u2 ... r3 u4 u3 ... Users ID Email First Name Last Name [other columns...] --- ----- ---------- --------- ------------------ u1 ... ... ... ... u2 ... ... ... ... u3 ... ... ... ... u4 ... ... ... ... Relationships ID Sender Recipient Status [other columns...] --- ------ --------- -------- ------------------ rl1 u1 u2 accepted ... rl2 u3 u1 accepted ... rl3 u1 u4 accepted ... rl4 u3 u2 accepted ... So for user 'u4' (who is friends with 'u1'), I want to query for a 'stream' of recommendations relevant to u4. This stream would include all recommendations in which either the sender or recipient is u4, as well as all recommendations in which the sender or recipient is u1 (the friend). This is what I have for the query so far: SELECT * FROM recommendations WHERE recommendations.sender IN ( SELECT sender FROM relationships WHERE recipient='u4' AND status='accepted' UNION SELECT recipient FROM relationships WHERE sender='u4' AND status='accepted') OR recommendations.recipient IN ( SELECT sender FROM relationships WHERE recipient='u4' AND status='accepted' UNION SELECT recipient FROM relationships WHERE sender='u4' AND status='accepted') UNION SELECT * FROM recommendations WHERE recommendations.sender='u4' OR recommendations.recipient='u4' GROUP BY recommendations.id ORDER BY datecreated DESC Which seems to work, as far as I can see (I'm no SQL expert). It returns all of the records from the Recommendations table that would be 'relevant' to a given user. However, I'm now having trouble also getting data from the Users table as well. The Recommendations table has the sender's and recipient's ID (foreign keys), but I'd also like to get the first and last name of each as well. I think I require some sort of JOIN, but I'm lost on how to proceed, and was looking for help on that. (And also, if anyone sees any areas for improvement in my current query, I'm all ears.) Thanks!

    Read the article

  • Windows using the DNS suffix search list on all lookups, even valid FQDNs. How to stop this?

    - by RealityGone
    When doing DNS lookups (specifically using nslookup, for some reason most things are not effected) Windows XP Pro SP3 is using the DNS suffix search list for every single one. Even for fully qualified domain names. For example I lookup "www.microsoft.com" but windows actually asks for "www.microsoft.com.eondream.com" (eondream.com is my primary domain). Now I can fix the issue by removing the Primary DNS suffix, but it seems to me that the DNS suffix search list should be for short, invalid names (where dots=0 or something). I'm sure I have a misconfiguration somewhere in windows but I don't know where. I've changed every option I can think of or find. Below is the output of ipconfig /all and nslookup (with debug & db2 enabled). This is using a static IP & (internal) DNS server. C:\ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : frayedlogic Primary Dns Suffix . . . . . . . : eondream.com Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : eondream.com Ethernet adapter Wireless Network Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Dell Wireless 1390 WLAN Mini-Card Physical Address. . . . . . . . . : 00-1B-FC-29-EB-6B Dhcp Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.13.32 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.13.13 DNS Servers . . . . . . . . . . . : 192.168.19.19 C:\nslookup Default Server: shardik.eondream.com Address: 192.168.19.19 set debug set db2 www.microsoft.com Server: shardik.eondream.com Address: 192.168.19.19 ------------ Got answer: HEADER: opcode = QUERY, id = 2, rcode = NOERROR header flags: response, want recursion, recursion avail. questions = 1, answers = 1, authority records = 0, additional = 0 QUESTIONS: www.microsoft.com.eondream.com, type = A, class = IN ANSWERS: - www.microsoft.com.eondream.com internet address = 208.69.36.132 ttl = 0 (0 secs) ------------ Non-authoritative answer: Name: www.microsoft.com.eondream.com Address: 208.69.36.132 (Note: it resolves to that IP because I use the opendns service and that is their suggestion page or whatever you want to call it) If I am reading the nslookup output correctly then it is not a problem with my DNS server because windows is actually asking for the incorrect domain.

    Read the article

  • How to use SSL with a WCF web service?

    - by Martin
    I have a web service in asp.net running and everything works fine. Now I need to access some methods in that web-service using SSL. It works perfect when I contact the web-service using http:// but with https:// I get "There was no endpoint listening at https://...". Can you please help me on how to set up my web.config to support both http and https access to my web service. I have tried to follow guidelines but I can't get it working. Some code: My TestService.svc: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class TestService { [OperationContract] [WebGet(ResponseFormat = WebMessageFormat.Json)] public bool validUser(string email) { return true; } } My Web.config: <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> <behaviors> <endpointBehaviors> <behavior name="ServiceAspNetAjaxBehavior"> <enableWebScript /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="ServiceBehavior"> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="ServiceBehavior" name="TestService"> <endpoint address="" behaviorConfiguration="ServiceAspNetAjaxBehavior" binding="webHttpBinding" bindingConfiguration="ServiceBinding" contract="TestService" /> </service> </services> <bindings> <webHttpBinding> <binding name="ServiceBinding" maxBufferPoolSize="1000000" maxReceivedMessageSize="1000000"> <readerQuotas maxDepth="1000000" maxStringContentLength="1000000" maxArrayLength="1000000" maxBytesPerRead="1000000" maxNameTableCharCount="1000000"/> </binding> </webHttpBinding> </bindings> </system.serviceModel>

    Read the article

  • JQM dialog is opening in a new page instead of a dialog

    - by K D
    Thank you for taking the time to read my question. I'm trying to get a dialog box to open using Jquery mobile. I followed the documentation and used the data-rel="dialog" notation along with the data-transition="pop". Instead of a dialog appearing on the same page, I get a brand new page with the dialog appearing. Can someone kindly assist me on how to fix this functionality. Here is my code for the initial main page: <article> <ul data-role="listview" data-split-icon="star" data-split-theme="d" data-inset="true"> <li><a href="#black_seed_desc" data-rel="dialog" ><img src="black_seed.jpg"/> <h3>Black Seed Oil</h3> </a> <a href="#black_seed_purchase" data-rel="dialog" data-transition="pop">Purchase Black Seed Oil</a> </li> </ul> </article> Here is my code for the dialog page: <div data-role="dialog" id="black_seed_purchase" data-theme="c"> <section data-role="content"> <h1>Purchase Black Seed Oil?</h1> <p>By purchasing Black Seed Oil you will receive an email receipt copy sent to you for your reference.</p> <a href="#purchase_blackseed" data-inline="true" data-corners="true" data-rel="back" data-role="button" data-shadow="true" data-iconshadow="true" data-wrapperrels="span"> <span> <span>Buy: $49.99</span> <span>&nbsp;</span> </span> </a> <a href="#" data-role="button" data-rel="back" data-inline="true" data-corners="true" data-wrapperrels="span" data-shadow="true" data-iconshawdow="true"> <span> Cancel </span> </a> </section> </div> Here is a working example. http://jsfiddle.net/Gajotres/w3ptm/?

    Read the article

  • Magento - How to select mysql rows by max value?

    - by Damodar Bashyal
    mysql> SELECT * FROM `log_customer` WHERE `customer_id` = 224 LIMIT 0, 30; +--------+------------+-------------+---------------------+-----------+----------+ | log_id | visitor_id | customer_id | login_at | logout_at | store_id | +--------+------------+-------------+---------------------+-----------+----------+ | 817 | 50139 | 224 | 2011-03-21 23:56:56 | NULL | 1 | | 830 | 52317 | 224 | 2011-03-27 23:43:54 | NULL | 1 | | 1371 | 136549 | 224 | 2011-11-16 04:33:51 | NULL | 1 | | 1495 | 164024 | 224 | 2012-02-08 01:05:48 | NULL | 1 | | 2130 | 281854 | 224 | 2012-11-13 23:44:13 | NULL | 1 | +--------+------------+-------------+---------------------+-----------+----------+ 5 rows in set (0.00 sec) mysql> SELECT * FROM `customer_entity` WHERE `entity_id` = 224; +-----------+----------------+---------------------------+----------+---------------------+---------------------+ | entity_id | entity_type_id | email | group_id | created_at | updated_at | +-----------+----------------+---------------------------+----------+---------------------+---------------------+ | 224 | 1 | [email protected] | 3 | 2011-03-21 04:59:17 | 2012-11-13 23:46:23 | +-----------+----------------+---------------------------+----------+--------------+----------+-----------------+ 1 row in set (0.00 sec) How can i search for customers who hasn't logged in for last 10 months and their account has not been updated for last 10 months. I tried below but failed. $collection = Mage::getModel('customer/customer')->getCollection(); $collection->getSelect()->joinRight(array('l'=>'log_customer'), "customer_id=entity_id AND MAX(l.login_at) <= '" . date('Y-m-d H:i:s', strtotime('10 months ago')) . "'")->group('e.entity_id'); $collection->addAttributeToSelect('*'); $collection->addFieldToFilter('updated_at', array( 'lt' => date('Y-m-d H:i:s', strtotime('10 months ago')), 'datetime'=>true, )); $collection->addAttributeToFilter('group_id', array( 'neq' => 5, )); Above tables have one customer for reference. I have no idea how to use MAX() on joins. Thanks UPDATE: This seems returning correct data, but I would like to do magento way using resource collection, so i don't need to do load customer again on for loop. $read = Mage::getSingleton('core/resource')->getConnection('core_read'); $sql = "select * from ( select e.*,l.login_at from customer_entity as e left join log_customer as l on l.customer_id=e.entity_id group by e.entity_id order by l.login_at desc ) as l where ( l.login_at <= '".date('Y-m-d H:i:s', strtotime('10 months ago'))."' or ( l.created_at <= '".date('Y-m-d H:i:s', strtotime('10 months ago'))."' and l.login_at is NULL ) ) and group_id != 5"; $result = $read->fetchAll($sql); I have loaded full shell script to github https://github.com/dbashyal/Magento-ecommerce-Shell-Scripts/blob/master/shell/suspendCustomers.php
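
    A hedged sketch of a collection-only version (not verified against any particular Magento release): the aggregate moves into the collection's select through the same Zend_Db_Select calls (joinLeft/group/having) already used above, so the filtered customers come back without reloading each one in the loop. The log_customer table name is used literally, as in the question; add your table prefix if you have one.

    ```php
    <?php
    // Sketch: customers outside group 5 whose latest login -- or, if they
    // never logged in, whose created_at -- is at least 10 months old.
    $cutoff = Mage::getSingleton('core/resource')
        ->getConnection('core_read')
        ->quote(date('Y-m-d H:i:s', strtotime('10 months ago')));

    $collection = Mage::getModel('customer/customer')->getCollection()
        ->addAttributeToSelect('*')
        ->addAttributeToFilter('group_id', array('neq' => 5));

    $collection->getSelect()
        ->joinLeft(
            array('l' => 'log_customer'),          // adjust for table prefix
            'l.customer_id = e.entity_id',
            array('last_login' => 'MAX(l.login_at)')
        )
        ->group('e.entity_id')
        ->having("MAX(l.login_at) <= $cutoff"
               . " OR (MAX(l.login_at) IS NULL AND e.created_at <= $cutoff)");

    foreach ($collection as $customer) {
        echo $customer->getEmail(), "\n";
    }
    ```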

    Read the article

  • MySQL Database Query - Codeigniter

    - by user2450349
    I am building an application with Codeigniter and need some help with a DB query. I have a table called users with the following fields: user_id, user_name, user_password, user_email, user_role, user_manager_id In my app, I pull all records from the user table using the following: function get_clients() { $this->db->select('*'); $this->db->where('user_role', 'client'); $this->db->order_by("user_name", "Asc"); $query = $this->db->get("users"); return $query->result_array(); } This works as expected, however when I display the results in the view, I also want to display a new column called Manager which will display the managers user_name field. The user_manager_id is the id of the user from the same table. Im guessing you can create an outer join on the same table but not sure. In the view, I am displaying the returned info as follows: <table class="table table-striped" id="zero-configuration"> <thead> <tr> <th>Name</th> <th>Email</th> <th>Manager</th> </tr> </thead> <tbody> <?php foreach($clients as $row) { ?> <tr> <td><?php echo $row['user_name']; ?> (<?php echo $row['user_username']; ?>)</td> <td><?php echo $row['user_email']; ?></td> <td><?php echo $row['???']; ?></td> </tr> <?php } ?> </tbody> </table> Any idea of how I can form the query and display the manager name is the view? Example: user_id user_name user_password user_email user_role user_manager_id 1 Ollie adjjk34jcd [email protected] client null 2 James djklsdfsdjk [email protected] client 1 When i query the database, i want to display results like this: Ollie [email protected] James [email protected] Ollie
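
    One way to read the requirement: it is a self-join -- join users back onto itself, aliased as the manager, and select the manager's user_name next to each client row. A sketch of the model method using CodeIgniter's query builder (table and column names as given above; the manager_name alias is just an illustrative choice):

    ```php
    <?php
    // Sketch: clients plus the name of the user their user_manager_id points to.
    function get_clients_with_manager()
    {
        $this->db->select('u.*, m.user_name AS manager_name');
        $this->db->from('users u');
        $this->db->join('users m', 'm.user_id = u.user_manager_id', 'left');
        $this->db->where('u.user_role', 'client');
        $this->db->order_by('u.user_name', 'asc');

        $query = $this->db->get();
        return $query->result_array();
    }
    ```

    In the view loop, the Manager cell then becomes <?php echo $row['manager_name']; ?>, which stays empty for clients with no manager (Ollie in the example data).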

    Read the article

  • IntentService android download and return file to Activity

    - by Andrew G
    I have a fairly tricky situation that I'm trying to determine the best design for. The basics are this: I'm designing a messaging system with a similar interface to email. When a user clicks a message that has an attachment, an activity is spawned that shows the text of that message along with a paper clip signaling that there is an additional attachment. At this point, I begin preloading the attachment so that when the user clicks on it - it loads more quickly. currently, when the user clicks the attachment, it prompts with a loading dialog until the download is complete at which point it loads a separate attachment viewer activity, passing in the bmp byte array. I don't ever want to save attachments to persistent storage. The difficulty I have is in supporting rotation as well as home button presses etc. The download is currently done with a thread and handler setup. Instead of this, I'd like the flow to be the following: User loads message as before, preloading begins of attachment as before (invisible to user). When the user clicks on the attachment link, the attachment viewer activity is spawned right away. If the download was done, the image is displayed. If not, a dialog is shown in THIS activity until it is done and can be displayed. Note that ideally the download never restarts or else I've wasted cycles on the preload. Obviously I need some persistent background process that is able to keep downloading and is able to call back to arbitrarily bonded Activities. It seems like the IntentService almost fits my needs as it does its work in a background thread and has the Service (non UI) lifecycle. However, will it work for my other needs? I notice that common implementations for what I want to do get a Messenger from the caller Activity so that a Message object can be sent back to a Handler in the caller's thread. This is all well and good but what happens in my case when the caller Activity is Stopped or Destroyed and the currently active Activity (the attachment viewer) is showing? Is there some way to dynamically bind a new Activity to a running IntentService so that I can send a Message back to the new Activity? The other question is on the Message object. Can I send arbitrarily large data back in this package? For instance, rather than send back that "The file was downloaded", I need to send back the byte array of the downloaded file itself since I never want to write it to disk (and yes this needs to be the case). Any advice on achieving the behavior I want is greatly appreciated. I've not been working with Android for that long and I often get confused with how to best handle asynchronous processes over the course of the Activity lifecycle especially when it comes to orientation changes and home button presses...

    Read the article

  • Data extract from website URL

    - by user2522395
    From this below script I am able to extract all links of particular website, But i need to know how I can generate data from extracted links especially like eMail, Phone number if its there Please help how i will modify the existing script and get the result or if you have full sample script please provide me. Private Sub btnGo_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGo.Click 'url must be in this format: http://www.example.com/ Dim aList As ArrayList = Spider("http://www.qatarliving.com", 1) For Each url As String In aList lstUrls.Items.Add(url) Next End Sub Private Function Spider(ByVal url As String, ByVal depth As Integer) As ArrayList 'aReturn is used to hold the list of urls Dim aReturn As New ArrayList 'aStart is used to hold the new urls to be checked Dim aStart As ArrayList = GrabUrls(url) 'temp array to hold data being passed to new arrays Dim aTemp As ArrayList 'aNew is used to hold new urls before being passed to aStart Dim aNew As New ArrayList 'add the first batch of urls aReturn.AddRange(aStart) 'if depth is 0 then only return 1 page If depth < 1 Then Return aReturn 'loops through the levels of urls For i = 1 To depth 'grabs the urls from each url in aStart For Each tUrl As String In aStart 'grabs the urls and returns non-duplicates aTemp = GrabUrls(tUrl, aReturn, aNew) 'add the urls to be check to aNew aNew.AddRange(aTemp) Next 'swap urls to aStart to be checked aStart = aNew 'add the urls to the main list aReturn.AddRange(aNew) 'clear the temp array aNew = New ArrayList Next Return aReturn End Function Private Overloads Function GrabUrls(ByVal url As String) As ArrayList 'will hold the urls to be returned Dim aReturn As New ArrayList Try 'regex string used: thanks google Dim strRegex As String = "<a.*?href=""(.*?)"".*?>(.*?)</a>" 'i used a webclient to get the source 'web requests might be faster Dim wc As New WebClient 'put the source into a string Dim strSource As String = wc.DownloadString(url) Dim HrefRegex As New Regex(strRegex, RegexOptions.IgnoreCase Or RegexOptions.Compiled) 'parse the urls from the source Dim HrefMatch As Match = HrefRegex.Match(strSource) 'used later to get the base domain without subdirectories or pages Dim BaseUrl As New Uri(url) 'while there are urls While HrefMatch.Success = True 'loop through the matches Dim sUrl As String = HrefMatch.Groups(1).Value 'if it's a page or sub directory with no base url (domain) If Not sUrl.Contains("http://") AndAlso Not sUrl.Contains("www") Then 'add the domain plus the page Dim tURi As New Uri(BaseUrl, sUrl) sUrl = tURi.ToString End If 'if it's not already in the list then add it If Not aReturn.Contains(sUrl) Then aReturn.Add(sUrl) 'go to the next url HrefMatch = HrefMatch.NextMatch End While Catch ex As Exception 'catch ex here. I left it blank while debugging End Try Return aReturn End Function Private Overloads Function GrabUrls(ByVal url As String, ByRef aReturn As ArrayList, ByRef aNew As ArrayList) As ArrayList 'overloads function to check duplicates in aNew and aReturn 'temp url arraylist Dim tUrls As ArrayList = GrabUrls(url) 'used to return the list Dim tReturn As New ArrayList 'check each item to see if it exists, so not to grab the urls again For Each item As String In tUrls If Not aReturn.Contains(item) AndAlso Not aNew.Contains(item) Then tReturn.Add(item) End If Next Return tReturn End Function

    Read the article

  • Display BLOB (image) through JSP

    - by jMarcel
    I have a code to show a chart o employees. The data (name, phone, photo etc) are stored in SQLServer and displayed through JSP. Showing the data is ok, except the image .jpg (stored in IMAGE=BLOB column). By the way, I've already got the image displayed (see code below), but I dont't know how to put it in the area defined in a .css (see code below, too), since the image got through the resultSet is loaded in the whole page in the browser. Does anyone knows how can I 'frame' the image ? <% Connection con = FactoryConnection_SQL_SERVER.getConnection("empCHART"); Statement stSuper = con.createStatement(); Statement stSetor = con.createStatement(); Blob image = null; byte[] imgData = null; ResultSet rsSuper = stSuper.executeQuery("SELECT * FROM funChart WHERE dept = 'myDept'"); if (rsSuper.next()) { image = rsSuper.getBlob(12); imgData = image.getBytes(1, (int) image.length()); response.setContentType("image/gif"); OutputStream o = response.getOutputStream(); //o.write(imgData); // even here we got the same as below. //o.flush(); //o.close(); --[...] <table style="margin: 0px; margin-top: 15px;"> <tr> <td id="photo"> <img title="<%=rsSuper.getString("empName").trim()%>" src="<%= o.wite(imageData); o.flush(); o.close(); %>" /> </td> </td> <td id="empData"> <h3><%=rsSuper.getString("empName")%></h3> <p><%=rsSuper.getString("Position")%></p> <p>Id:<br/><%=rsSuper.getString("id")%></p> <p>Phone:<br/><%=rsSuper.getString("Phone")%></p> <p>E-Mail:<br/><%=rsSuper.getString("Email")%></p> </td> </table> And here is the fragment supposed to frame the image: #photo { padding: 0px; vertical-align: middle; text-align: center; width: 170px; height: 220px; } Thanks in advance !

    Read the article

  • Using an alternate JSON Serializer in ASP.NET Web API

    - by Rick Strahl
    The new ASP.NET Web API that Microsoft released alongside MVC 4.0 Beta last week is a great framework for building REST and AJAX APIs. I've been working with it for quite a while now and I really like the way it works and the complete set of features it provides 'in the box'. It's about time that Microsoft gets a decent API for building generic HTTP endpoints into the framework. DataContractJsonSerializer sucks As nice as Web API's overall design is one thing still sucks: The built-in JSON Serialization uses the DataContractJsonSerializer which is just too limiting for many scenarios. The biggest issues I have with it are: No support for untyped values (object, dynamic, Anonymous Types) MS AJAX style Date Formatting Ugly serialization formats for types like Dictionaries To me the most serious issue is dealing with serialization of untyped objects. I have number of applications with AJAX front ends that dynamically reformat data from business objects to fit a specific message format that certain UI components require. The most common scenario I have there are IEnumerable query results from a database with fields from the result set rearranged to fit the sometimes unconventional formats required for the UI components (like jqGrid for example). Creating custom types to fit these messages seems like overkill and projections using Linq makes this much easier to code up. Alas DataContractJsonSerializer doesn't support it. Neither does DataContractSerializer for XML output for that matter. What this means is that you can't do stuff like this in Web API out of the box:public object GetAnonymousType() { return new { name = "Rick", company = "West Wind", entered= DateTime.Now }; } Basically anything that doesn't have an explicit type DataContractJsonSerializer will not let you return. FWIW, the same is true for XmlSerializer which also doesn't work with non-typed values for serialization. The example above is obviously contrived with a hardcoded object graph, but it's not uncommon to get dynamic values returned from queries that have anonymous types for their result projections. Apparently there's a good possibility that Microsoft will ship Json.NET as part of Web API RTM release.  Scott Hanselman confirmed this as a footnote in his JSON Dates post a few days ago. I've heard several other people from Microsoft confirm that Json.NET will be included and be the default JSON serializer, but no details yet in what capacity it will show up. Let's hope it ends up as the default in the box. Meanwhile this post will show you how you can use it today with the beta and get JSON that matches what you should see in the RTM version. What about JsonValue? To be fair Web API DOES include a new JsonValue/JsonObject/JsonArray type that allow you to address some of these scenarios. JsonValue is a new type in the System.Json assembly that can be used to build up an object graph based on a dictionary. It's actually a really cool implementation of a dynamic type that allows you to create an object graph and spit it out to JSON without having to create .NET type first. JsonValue can also receive a JSON string and parse it without having to actually load it into a .NET type (which is something that's been missing in the core framework). This is really useful if you get a JSON result from an arbitrary service and you don't want to explicitly create a mapping type for the data returned. 
For serialization you can create an object structure on the fly and pass it back as part of an Web API action method like this:public JsonValue GetJsonValue() { dynamic json = new JsonObject(); json.name = "Rick"; json.company = "West Wind"; json.entered = DateTime.Now; dynamic address = new JsonObject(); address.street = "32 Kaiea"; address.zip = "96779"; json.address = address; dynamic phones = new JsonArray(); json.phoneNumbers = phones; dynamic phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); phone = new JsonObject(); phone.type = "Home"; phone.number = "808 123-1233"; phones.Add(phone); //var jsonString = json.ToString(); return json; } which produces the following output (formatted here for easier reading):{ name: "rick", company: "West Wind", entered: "2012-03-08T15:33:19.673-10:00", address: { street: "32 Kaiea", zip: "96779" }, phoneNumbers: [ { type: "Home", number: "808 123-1233" }, { type: "Mobile", number: "808 123-1234" }] } If you need to build a simple JSON type on the fly these types work great. But if you have an existing type - or worse a query result/list that's already formatted JsonValue et al. become a pain to work with. As far as I can see there's no way to just throw an object instance at JsonValue and have it convert into JsonValue dictionary. It's a manual process. Using alternate Serializers in Web API So, currently the default serializer in WebAPI is DataContractJsonSeriaizer and I don't like it. You may not either, but luckily you can swap the serializer fairly easily. If you'd rather use the JavaScriptSerializer built into System.Web.Extensions or Json.NET today, it's not too difficult to create a custom MediaTypeFormatter that uses these serializers and can replace or partially replace the native serializer. 
Here's a MediaTypeFormatter implementation using the ASP.NET JavaScriptSerializer:using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using System.IO; namespace Westwind.Web.WebApi { public class JavaScriptSerializerFormatter : MediaTypeFormatter { public JavaScriptSerializerFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type== typeof(JsonArray) ) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var ser = new JavaScriptSerializer(); string json; using (var sr = new StreamReader(stream)) { json = sr.ReadToEnd(); sr.Close(); } object val = ser.Deserialize(json,type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var ser = new JavaScriptSerializer(); var json = ser.Serialize(value); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } Formatter implementation is pretty simple: You override 4 methods to tell which types you can handle and then handle the input or output streams to create/parse the JSON data. Note that when creating output you want to take care to still allow JsonValue/JsonObject/JsonArray types to be handled by the default serializer so those objects serialize properly - if you let either JavaScriptSerializer or JSON.NET handle them they'd try to render the dictionaries which is very undesirable. 
If you'd rather use Json.NET here's the JSON.NET version of the formatter:// this code requires a reference to JSON.NET in your project #if true using System; using System.Net.Http.Formatting; using System.Threading.Tasks; using System.Web.Script.Serialization; using System.Json; using Newtonsoft.Json; using System.IO; using Newtonsoft.Json.Converters; namespace Westwind.Web.WebApi { public class JsonNetFormatter : MediaTypeFormatter { public JsonNetFormatter() { SupportedMediaTypes.Add(new System.Net.Http.Headers.MediaTypeHeaderValue("application/json")); } protected override bool CanWriteType(Type type) { // don't serialize JsonValue structure use default for that if (type == typeof(JsonValue) || type == typeof(JsonObject) || type == typeof(JsonArray)) return false; return true; } protected override bool CanReadType(Type type) { if (type == typeof(IKeyValueModel)) return false; return true; } protected override System.Threading.Tasks.Taskobject OnReadFromStreamAsync(Type type, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext) { var task = Taskobject.Factory.StartNew(() = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; var sr = new StreamReader(stream); var jreader = new JsonTextReader(sr); var ser = new JsonSerializer(); ser.Converters.Add(new IsoDateTimeConverter()); object val = ser.Deserialize(jreader, type); return val; }); return task; } protected override System.Threading.Tasks.Task OnWriteToStreamAsync(Type type, object value, System.IO.Stream stream, System.Net.Http.Headers.HttpContentHeaders contentHeaders, FormatterContext formatterContext, System.Net.TransportContext transportContext) { var task = Task.Factory.StartNew( () = { var settings = new JsonSerializerSettings() { NullValueHandling = NullValueHandling.Ignore, }; string json = JsonConvert.SerializeObject(value, Formatting.Indented, new JsonConverter[1] { new IsoDateTimeConverter() } ); byte[] buf = System.Text.Encoding.Default.GetBytes(json); stream.Write(buf,0,buf.Length); stream.Flush(); }); return task; } } } #endif   One advantage of the Json.NET serializer is that you can specify a few options on how things are formatted and handled. You get null value handling and you can plug in the IsoDateTimeConverter which is nice to product proper ISO dates that I would expect any Json serializer to output these days. Hooking up the Formatters Once you've created the custom formatters you need to enable them for your Web API application. To do this use the GlobalConfiguration.Configuration object and add the formatter to the Formatters collection. Here's what this looks like hooked up from Application_Start in a Web project:protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers // optional RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // Add JavaScriptSerializer formatter instead - add at top to make default //config.Formatters.Insert(0, new JavaScriptSerializerFormatter()); // Add Json.net formatter - add at the top so it fires first! 
// This leaves the old one in place so JsonValue/JsonObject/JsonArray still are handled config.Formatters.Insert(0, new JsonNetFormatter()); } One thing to remember here is the GlobalConfiguration object which is Web API's static configuration instance. I think this thing is seriously misnamed given that GlobalConfiguration could stand for anything and so is hard to discover if you don't know what you're looking for. How about WebApiConfiguration or something more descriptive? Anyway, once you know what it is you can use the Formatters collection to insert your custom formatter. Note that I insert my formatter at the top of the list so it takes precedence over the default formatter. I also am not removing the old formatter because I still want JsonValue/JsonObject/JsonArray to be handled by the default serialization mechanism. Since they process in sequence and I exclude processing for these types JsonValue et al. still get properly serialized/deserialized. Summary Currently DataContractJsonSerializer in Web API is a pain, but at least we have the ability with relatively limited effort to replace the MediaTypeFormatter and plug in our own JSON serializer. This is useful for many scenarios - if you have existing client applications that used MVC JsonResult or ASP.NET AJAX results from ASMX AJAX services you can plug in the JavaScript serializer and get exactly the same serializer you used in the past so your results will be the same and don't potentially break clients. JSON serializers do vary a bit in how they serialize some of the more complex types (like Dictionaries and dates for example) and so if you're migrating it might be helpful to ensure your client code doesn't break when you switch to ASP.NET Web API. Going forward it looks like Microsoft is planning on plugging in Json.Net into Web API and make that the default. I think that's an awesome choice since Json.net has been around forever, is fast and easy to use and provides a ton of functionality as part of this great library. I just wish Microsoft would have figured this out sooner instead of now at the last minute integrating with it especially given that Json.Net has a similar set of lower level JSON objects JsonValue/JsonObject etc. which now will end up being duplicated by the native System.Json stuff. It's not like we don't already have enough confusion regarding which JSON serializer to use (JavaScriptSerializer, DataContractJsonSerializer, JsonValue/JsonObject/JsonArray and now Json.net). For years I've been using my own JSON serializer because the built in choices are both limited. However, with an official endorsement of Json.Net I'm happily moving on to use that in my applications. Let's see and hope Microsoft gets this right before ASP.NET Web API goes gold. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API, AJAX, ASP.NET.

    Read the article

  • HTG Reviews the CODE Keyboard: Old School Construction Meets Modern Amenities

    - by Jason Fitzpatrick
    There’s nothing quite as satisfying as the smooth and crisp action of a well built keyboard. If you’re tired of mushy keys and cheap-feeling keyboards, a well-constructed mechanical keyboard is a welcome respite from the $10 keyboard that came with your computer. Read on as we put the CODE mechanical keyboard through the paces.

What is the CODE Keyboard? The CODE keyboard is a collaboration between manufacturer WASD Keyboards and Jeff Atwood of Coding Horror (the guy behind the Stack Exchange network and Discourse forum software). Atwood’s focus was incorporating the best of traditional mechanical keyboards and the best of modern keyboard usability improvements. In his own words: The world is awash in terrible, crappy, no name how-cheap-can-we-make-it keyboards. There are a few dozen better mechanical keyboard options out there. I’ve owned and used at least six different expensive mechanical keyboards, but I wasn’t satisfied with any of them, either: they didn’t have backlighting, were ugly, had terrible design, or were missing basic functions like media keys. That’s why I originally contacted Weyman Kwong of WASD Keyboards way back in early 2012. I told him that the state of keyboards was unacceptable to me as a geek, and I proposed a partnership wherein I was willing to work with him to do whatever it takes to produce a truly great mechanical keyboard. Even the ardent skeptic who questions whether Atwood has indeed created a truly great mechanical keyboard certainly can’t argue with the position he starts from: there are so many agonizingly crappy keyboards out there. Even worse, in our opinion, is that unless you’re a typist of a certain vintage there’s a good chance you’ve never actually typed on a really nice keyboard. Those that didn’t start using computers until the mid-to-late 1990s most likely have always typed on modern mushy-key keyboards and never known the joy of typing on a really responsive and crisp mechanical keyboard. Is our preference for and love of mechanical keyboards shining through here? Good. We’re not even going to try and hide it. So where does the CODE keyboard stack up in the pantheon of keyboards? Read on as we walk you through the simple setup and our experience using the CODE.

Setting Up the CODE Keyboard Although the setup of the CODE keyboard is essentially plug and play, there are two distinct setup steps that you likely haven’t had to perform on a previous keyboard. Both highlight the degree of care put into the keyboard and the amount of customization available. Inside the box you’ll find the keyboard, a micro USB cable, a USB-to-PS2 adapter, and a tool which you may be unfamiliar with: a key puller. We’ll return to the key puller in a moment. Unlike the majority of keyboards on the market, the cord isn’t permanently affixed to the keyboard. What does this mean for you? Aside from the obvious need to plug it in yourself, it makes it dead simple to repair your own keyboard cord if it gets attacked by a pet, mangled in a mechanism on your desk, or otherwise damaged. It also makes it easy to take advantage of the cable routing channels on the underside of the keyboard to route your cable exactly where you want it. While we’re staring at the underside of the keyboard, check out those beefy rubber feet. By peripherals standards they’re huge (and there are six instead of the usual four). Once you plunk the keyboard down where you want it, it might as well be glued down, the rubber feet work so well. 
After you’ve secured the cable and adjusted it to your liking, there is one more task before you plug the keyboard into the computer. On the bottom left-hand side of the keyboard, you’ll find a small recess in the plastic with some dip switches inside: The dip switches are there to switch hardware functions for various operating systems, keyboard layouts, and to enable/disable function keys. By toggling the dip switches you can change the keyboard from QWERTY mode to Dvorak mode and Colemak mode, the two most popular alternative keyboard configurations. You can also use the switches to enable Mac functionality (for Command/Option keys). One of our favorite little toggles is the SW3 dip switch: you can disable the Caps Lock key; goodbye accidentally pressing Caps when you mean to press Shift. You can review the entire dip switch configuration chart here. The quick-start for Windows users is simple: double check that all the switches are in the off position (as seen in the photo above) and then simply toggle SW6 on to enable the media and backlighting function keys (this turns the menu key on the keyboard into a function key as typically found on laptop keyboards). After adjusting the dip switches to your liking, plug the keyboard into an open USB port on your computer (or into your PS/2 port using the included adapter).

Design, Layout, and Backlighting The CODE keyboard comes in two flavors, a traditional 87-key layout (no number pad) and a traditional 104-key layout (number pad on the right hand side). We identify the layout as traditional because, despite some modern trappings and sneaky shortcuts, the actual form factor of the keyboard from the shape of the keys to the spacing and position is as classic as it comes. You won’t have to learn a new keyboard layout and spend weeks conditioning yourself to a smaller than normal backspace key or a PgUp/PgDn pair in an unconventional location. Just because the keyboard is very conventional in layout, however, doesn’t mean you’ll be missing modern amenities like media-control keys. The following additional functions are hidden in the F11, F12, Pause button, and the 2×6 grid formed by the Insert and Delete rows: keyboard illumination brightness, keyboard illumination on/off, mute, and then the typical play/pause, forward/backward, stop, and volume +/- in the Insert and Delete rows, respectively. While we weren’t sure what we’d think of the function-key system at first (especially after retiring a Microsoft Sidewinder keyboard with a huge and easily accessible volume knob on it), it took less than a day for us to adapt to using the Fn key, located next to the right Ctrl key, to adjust our media playback on the fly. Keyboard backlighting is a largely hit-or-miss undertaking but the CODE keyboard nails it. Not only does it have pleasant and easily adjustable through-the-keys lighting, but the key switches the keys themselves are attached to are mounted to a steel plate with white paint. Enough of the light reflects off the interior cavity of the keys and then diffuses across the white plate to provide nice even illumination in between the keys. Highlighting the steel plate beneath the keys brings us to the actual construction of the keyboard. It’s rock solid. The 87-key model, the one we tested, is 2.0 pounds. The 104-key is nearly a half pound heavier at 2.42 pounds. Between the steel plate, the extra-thick PCB board beneath the steel plate, and the thick ABS plastic housing, the keyboard has a very solid feel to it. 
Combine that heft with the previously mentioned thick rubber feet and you have a tank-like keyboard that won’t budge a millimeter during normal use.

Examining The Keys This is the section of the review the hardcore typists and keyboard ninjas have been waiting for. We’ve looked at the layout of the keyboard, we’ve looked at the general construction of it, but what about the actual keys? There are a wide variety of keyboard construction techniques but the vast majority of modern keyboards use a rubber-dome construction. The key is floated in a plastic frame over a rubber membrane that has a little rubber dome for each key. The press of the physical key compresses the rubber dome downwards and a little bit of conductive material on the inside of the dome’s apex connects with the circuit board. Despite the near ubiquity of the design, many people dislike it. The principal complaint is that dome keyboards require a complete compression to register a keystroke; keyboard designers and enthusiasts refer to this as “bottoming out”. In other words, to register the “b” key, you need to completely press that key down. As such it slows you down and requires additional pressure and movement that, over the course of tens of thousands of keystrokes, adds up to a whole lot of wasted time and fatigue. The CODE keyboard features key switches manufactured by Cherry, a company that has manufactured key switches since the 1960s. Specifically the CODE features Cherry MX Clear switches. These switches feature the same classic design as the other Cherry switches (such as the MX Blue and Brown switch lineups) but they are significantly quieter (yes this is a mechanical keyboard, but no, your neighbors won’t think you’re firing off a machine gun) as they lack the audible click found in most Cherry switches. This isn’t to say that the keyboard doesn’t have a nice audible key press sound when the key is fully depressed, but the key mechanism doesn’t create a loud click sound when triggered. One of the great features of the Cherry MX Clear is a tactile “bump” that indicates the key has been compressed enough to register the stroke. For touch typists the very subtle tactile feedback is a great indicator that you can move on to the next stroke and provides a welcome speed boost. Even if you’re not trying to break any word-per-minute records, that little bump when pressing the key is satisfying. The Cherry key switches, in addition to providing a much more pleasant typing experience, are also significantly more durable than dome-style key switches. Rubber dome switch membrane keyboards are typically rated for 5-10 million contacts whereas the Cherry mechanical switches are rated for 50 million contacts. You’d have to write the next War and Peace and follow that up with A Tale of Two Cities: Zombie Edition, and then turn around and transcribe them both into a dozen different languages to even begin putting a tiny dent in the lifecycle of this keyboard. So what do the switches look like under the classically styled keys? You can take a look yourself with the included key puller. Slide the loop between the keys and then gently beneath the key you wish to remove: Wiggle the key puller gently back and forth while exerting a gentle upward pressure to pop the key off. You can repeat the process for every key, if you ever find yourself needing to extract piles of cat hair, Cheeto dust, or other foreign objects from your keyboard. 
There it is, the naked switch, the source of that wonderful crisp action with the tactile bump on each keystroke. The last feature worthy of a mention is the N-key rollover functionality of the keyboard. This is a feature you simply won’t find on non-mechanical keyboards, and even gaming keyboards typically only have any sort of key rollover on the high-frequency keys like WASD. So what is N-key rollover and why do you care? On a typical mass-produced rubber-dome keyboard you cannot simultaneously press more than two keys as the third one doesn’t register. PS/2 keyboards allow for unlimited rollover (in other words you can’t out-type the keyboard as all of your keystrokes, no matter how fast, will register); if you use the CODE keyboard with the PS/2 adapter you gain this ability. If you don’t use the PS/2 adapter and use the native USB, you still get 6-key rollover (and the CTRL, ALT, and SHIFT don’t count towards the 6), so realistically you still won’t be able to out-type the computer as even the more finger-twisting keyboard combos and high speed typing will still fall well within the 6-key rollover. The rollover absolutely doesn’t matter if you’re a slow hunt-and-peck typist, but if you’ve read this far into a keyboard review there’s a good chance that you’re a serious typist and that kind of quality construction and high-number key rollover is a fantastic feature.

The Good, The Bad, and the Verdict We’ve put the CODE keyboard through the paces, we’ve played games with it, typed articles with it, left lengthy comments on Reddit, and otherwise used and abused it like we would any other keyboard.

The Good: The construction is rock solid. In an emergency, we’re confident we could use the keyboard as a blunt weapon (and then resume using it later in the day with no ill effect on the keyboard). The Cherry switches are an absolute pleasure to type on; the Clear variety found in the CODE keyboard offer a really nice middle-ground between the gun-shot clack of a louder mechanical switch and the quietness of a lesser-quality dome keyboard without sacrificing quality. Touch typists will love the subtle tactile bump feedback. The dip switch system makes it very easy for users on different systems and with different keyboard layout needs to switch between operating system and keyboard layouts. If you’re investing a chunk of change in a keyboard it’s nice to know you can take it with you to a different operating system or “upgrade” it to a new layout if you decide to take up Dvorak-style typing. The backlighting is perfect. You can adjust it from a barely-visible glow to a blazing light-up-the-room brightness. Whatever your intensity preference, the white-coated steel backplate does a great job diffusing the light between the keys. You can easily remove the keys for cleaning (or to rearrange the letters to support a new keyboard layout). The weight of the unit combined with the extra thick rubber feet keeps it planted exactly where you place it on the desk.

The Bad: While you’re getting your money’s worth, the $150 price tag is a shock when compared to the $20-60 price tags you find on lower-end keyboards. People used to large dedicated media keys independent of the traditional key layout (such as the large buttons and volume controls found on many modern keyboards) might be put off by the Fn-key style media controls on the CODE.

The Verdict: The keyboard is clearly and heavily influenced by the needs of serious typists. 
Whether you’re a programmer, transcriptionist, or just somebody that wants to leave the lengthiest article comments the Internet has ever seen, the CODE keyboard offers a rock solid typing experience. Yes, $150 isn’t pocket change, but the quality of the CODE keyboard is so high and the typing experience is so enjoyable, you’re easily getting ten times the value you’d get out of purchasing a lesser keyboard. Even compared to other mechanical keyboards on the market, like the Das Keyboard, you’re still getting more for your money as other mechanical keyboards don’t come with the lovely-to-type-on Cherry MX Clear switches, backlighting, and hardware-based operating system keyboard layout switching. If it’s in your budget to upgrade your keyboard (especially if you’ve been slogging along with a low-end rubber-dome keyboard) there’s no good reason not to pick up a CODE keyboard. Key animation courtesy of Geekhack.org user Lethal Squirrel.

    Read the article

  • CodePlex Daily Summary for Thursday, March 01, 2012

    CodePlex Daily Summary for Thursday, March 01, 2012Popular ReleasesMetodología General Ajustada - MGA: 01.09.08: Cambios John: Cambios en el MDI: Habilitación del menú e ícono de Imprimir. Deshabilitación de menú Ayuda y opciones de Importar y Exportar del menú Proyectos temporalmente. Integración con código de Crystal Report. Validaciones con Try-Catch al generar los reportes, personalización de los formularios en estilos y botones y validación de selección de tipo de reporte. Creación de instalador con TODOS los cambios y la creación de las carpetas asociadas a los RPT.WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.01: Whats NewAdded New Plugin "Ventrian News Articles Link Selector" to select an Article Link from the News Article Module (This Plugin is not visible by default in your Toolbar, you need to manually add the 'newsarticleslinks' to your toolbarset) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEditorNewsArticlesLinks.png File-Browser: Added Paging to the Files List. You can define the Page Size in the Options (Default Value: 20) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEdito...MyRouter (Virtual WiFi Router): MyRouter 1.0 (Beta): A friendlier User Interface. A logger file to catch exceptions so you may send it to use to improve and fix any bugs that may occur. A feedback form because we always love hearing what you guy's think of MyRouter. Check for update menu item for you to stay up to date will the latest changes. Facebook fan page so you may spread the word and share MyRouter with friends and family And Many other exciting features were sure your going to love!WPF Sound Visualization Library: WPF SVL 0.3 (Source, Binaries, Examples, Help): Version 0.3 of WPFSVL. This includes three new controls: an equalizer, a digital clock, and a time editor.Thai Flood Watch: Thai Flood Watch - Source: non commercial use only ** This project supported by Department of Computer Science KhonKaen University Thailand.ZXing.Net: ZXing.Net 0.4.0.0: sync with rev. 2196 of the java version important fix for RGBLuminanceSource generating barcode bitmaps Windows Phone demo client (only tested with emulator, because I don't have a Windows Phone) Barcode generation support for Windows Forms demo client Webcam support for Windows Forms demo clientOrchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes.NET Assembly Information: Assembly Information 2.1.0.1: - Fixed the issue in which AnyCPU binaries were shown as 32bit - Added support to show the errors in-case if some dlls failed to load.FluentData -Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: - QueryValues method - Added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls. 
Thanks to t...SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with following changes since last version: Added stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving list of online computers from last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...Kinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API Updates: SOLID Extract Method for Archives (7Zip and RAR). ExtractAllEntries method on Archive classes will extract archives as a streaming file. This can offer better 7Zip extraction performance if any of the entries are solid. The IsSolid method on 7Zip archives will return true if any are solid. Removed IExtractionListener was removed in favor of events. Unit tests show example. Bug fixes: PPMd passes tests plus other fixes (Thanks Pavel) Zip used to always write a Post Descri...Social Network Importer for NodeXL: SocialNetImporter(v.1.3): This new version includes: - Download new networks for Facebook fan pages. - New options for downloading more posts - Bug fixes To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns") Open NodeXL template and you can access the new importer from the "Import" menuASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1 the REST-services Framework is compatible with ASP.NET Routing model as well with CRUD (Create, Read, Update, and Delete) principle. These two are often important when building REST API functionality within your application. It also includes ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. New version includes Metadata Explorer providing ability exploring the existing services that becomes essential as the number ...SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make it this version work with SQL 2012. Version 2 already has 2012 working, but am still developing the UI in version 2, so this is just an interim fix to allow user to monitor SQL 2012.Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features: Feature: One-Click Enabling of Pager Setting Feature: Cache Sliders for Performance Feature: Configurable Cache Setting Enhancement: Transitions can be Selected Bug: Secure Folder Images not Viewable Bug: Sliders Disappear on Postback Bug: Remote Images Cause Error Bug: Deleted Images Cause Error System Requirements DotNetNuke v06.00.00 or newer .Net Framework v3.5 SP1 or newer SQL Server 2005 or newerImage Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd still like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report. 
Done Preview 3 Fix: 3206 3076 3077 5688 Fix: 7420 Fix: 7527 Fix: 7576 7612 Preview 2 6308 6309 Fix: 7339 Fix: 7357 Preview 1 UI...Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important - If Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off Fixes a bug with with crashing on XP due to a bug in t...Media Companion: MC 3.432b Release: General Now remembers window location. Catching a few more exceptions when an image is blank TV A couple of UI tweaks Movies Fixed the actor name displaying HTML Fixed crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg" New CSV template for HTML output function Added <createdate> tag for HTML output A couple of UI tweaks Known Issues Multiepisodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...New Projectsabac: abac cn websiteAION Launcher: simple aion launcher...just edit the background image of your choosing inside the code and other things such as the links for the buttons and the ip adress and port of the serverAXTFSTool: Dynamics AX tool that connects to your project's TFS and lists the objects your colleagues have changed. Written in C#, still under development and improvements. Useful for team leaders, deployment managers, etc.cookieTopo: Topo map viewerCrmFetchKit.js: Simple Library at allows the execution of fetchxml queries via JavaScript for Dynamics CRM 2011 (using the new WCF endpoints). Like the CrmRestKit this framework uses the promise/A capacities of jQuery. The code and the idea for this framework bases on the CrmServiceToolkit (http://crmtoolkit.codeplex.com/) developed by Daniel Cai. cy univerX engine: ????????DNSAPI.NET: A common API for managing DNS servers on Windows. This project is based on the work I started back in 2002 when I needed to create a web front-end for Windows' DNS server using the .Net framework. The plan is to expand on the project and include support for the BIND server on Windows too. ego.net: ego.netfdTFS: Team Foundation Server Source Control Plugin for FlashDevelopGeoWPS: GeoWPS is an implementation of the OGC WPS. It will be developed in C#. IThink: A new project.King Garden: Boy King's .net practical projects.King Garret: Boy King's .net learning projects.LottoCheck: Follow LottoNot-Terraria: This is a like terraria game but NOT terrariaPassword Protector: Password Protector SharePoint 2010 BlobCache Manager: Manage your web application's blobcache settings directly in the central administration.SharePoint 2010 SilverLight Multiple File Uploader: SharePoint 2010 SilverLight Multiple File Uploader for Documents Libraries with MetaData.Sharepoint Tool Collection: I want to Integrate Various Utilities of Sharepoint at one place. It is for easy working of user or developer. Ex-1. A utility which takes some params & csv file and upload 100s of items on the sharepoint list easily. Ex-2 A utility to upload documents in a library. etc.SQLCLR Cmd Exec Framework Example: For users of MS SQL Server, xp_cmdshell is a utility that we usually want to have disabled. However there are still cases where calling a command line is needed. 
This project provides an framework/example to make command line calls. It is not meant as an xp_cmdshell replacement but as a workaround.Symmetric Designs Python 3.2: Symmetric Designs for Python 3.2 helps graphical artists to design and develop their own designs freestyle. It uses the pygame module for Python 3.2. It can also be analysed in order to get a grasp of graphics programming in Python.Terminsoft open CLR libraries: Terminsoft open CLR libraries. The first is Terminsoft.Intervals, intended for modeling the sets of intervals with elements, the comparison operation is defined for. The second is Terminsoft.Syntax, intended for text parsing and transformation and built upon regular expressions.Thai Flood Watch: Thai Flood Watch provides useful information, up-to-date and visual access to the major canal in Bangkok, Thailand using data from department of drainage and sewerage. Easily monitor river and canal flow information in Bangkok area, right from your hand.TheNerd: Sample video game source code. Using Sunburn.Unity.WebAPI: A library that allows simple Integration of Microsoft's Unity IoC container with ASP.NET's WebAPI. This project includes a bespoke DependencyResolver that creates a child container per HTTP request and disposes of all registered IDisposable instances at the end of the request.Wholemy.RemoteTouch: The project is a remote touch-sensitive keyboard with a customizable interface which allows to supplement control of another computer, regardless of the wires. For example, if you have not so fast Tablet PC - a client and a fast desktop computer - the server using the network.WindowPlace: WindowPlace makes it possible to save Window positions and sizes to a profile. Switching between profiles will effortlessly move and resize your windows. Help improve productivity - especially for multi-monitor systems. Developed in C# using WPF and a few Windows API calls in the background. WP Error Manager (Devv.Core.WPErrorManager): Library to log, handle and report errors on Windows Phone 7 apps. Fully customizable and extremely easy to implement. Works with any WP7 app. Tested with the emulator, Nokia Lumia 800 and Samsung Focus Flash.WPMatic: Windows Phone7 App to manage Homematic (eQ-3) Devices. The App is like the Homematic Central Configuration Unit (CCU) in German.www.Nabaza.com Freeware and Ebooks: www.Nabaza.com Freeware and Ebooks by William R. NabazaZap: Zap is a light weight .NET communication framework. It is designed for programs running in local area network. Zap provides code generation tool that enables user to call remote methods, add/remote event listener to remote objects, while hides the lower details.

    Read the article

  • OpenVPN not connecting

    - by LandArch
    There have been a number of post similar to this, but none seem to satisfy my need. Plus I am a Ubuntu newbie. I followed this tutorial to completely set up OpenVPN on Ubuntu 12.04 server. Here is my server.conf file ################################################# # Sample OpenVPN 2.0 config file for # # multi-client server. # # # # This file is for the server side # # of a many-clients <-> one-server # # OpenVPN configuration. # # # # OpenVPN also supports # # single-machine <-> single-machine # # configurations (See the Examples page # # on the web site for more info). # # # # This config should work on Windows # # or Linux/BSD systems. Remember on # # Windows to quote pathnames and use # # double backslashes, e.g.: # # "C:\\Program Files\\OpenVPN\\config\\foo.key" # # # # Comments are preceded with '#' or ';' # ################################################# # Which local IP address should OpenVPN # listen on? (optional) local 192.168.13.8 # Which TCP/UDP port should OpenVPN listen on? # If you want to run multiple OpenVPN instances # on the same machine, use a different port # number for each one. You will need to # open up this port on your firewall. port 1194 # TCP or UDP server? proto tcp ;proto udp # "dev tun" will create a routed IP tunnel, # "dev tap" will create an ethernet tunnel. # Use "dev tap0" if you are ethernet bridging # and have precreated a tap0 virtual interface # and bridged it with your ethernet interface. # If you want to control access policies # over the VPN, you must create firewall # rules for the the TUN/TAP interface. # On non-Windows systems, you can give # an explicit unit number, such as tun0. # On Windows, use "dev-node" for this. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel if you # have more than one. On XP SP2 or higher, # you may need to selectively disable the # Windows firewall for the TAP adapter. # Non-Windows systems usually don't need this. ;dev-node MyTap # SSL/TLS root certificate (ca), certificate # (cert), and private key (key). Each client # and the server must have their own cert and # key file. The server and all clients will # use the same ca file. # # See the "easy-rsa" directory for a series # of scripts for generating RSA certificates # and private keys. Remember to use # a unique Common Name for the server # and each of the client certificates. # # Any X509 key management system can be used. # OpenVPN can also use a PKCS #12 formatted key file # (see "pkcs12" directive in man page). ca "/etc/openvpn/ca.crt" cert "/etc/openvpn/server.crt" key "/etc/openvpn/server.key" # This file should be kept secret # Diffie hellman parameters. # Generate your own with: # openssl dhparam -out dh1024.pem 1024 # Substitute 2048 for 1024 if you are using # 2048 bit keys. dh dh1024.pem # Configure server mode and supply a VPN subnet # for OpenVPN to draw client addresses from. # The server will take 10.8.0.1 for itself, # the rest will be made available to clients. # Each client will be able to reach the server # on 10.8.0.1. Comment this line out if you are # ethernet bridging. See the man page for more info. ;server 10.8.0.0 255.255.255.0 # Maintain a record of client <-> virtual IP address # associations in this file. 
If OpenVPN goes down or # is restarted, reconnecting clients can be assigned # the same virtual IP address from the pool that was # previously assigned. ifconfig-pool-persist ipp.txt # Configure server mode for ethernet bridging. # You must first use your OS's bridging capability # to bridge the TAP interface with the ethernet # NIC interface. Then you must manually set the # IP/netmask on the bridge interface, here we # assume 10.8.0.4/255.255.255.0. Finally we # must set aside an IP range in this subnet # (start=10.8.0.50 end=10.8.0.100) to allocate # to connecting clients. Leave this line commented # out unless you are ethernet bridging. server-bridge 192.168.13.101 255.255.255.0 192.168.13.105 192.168.13.200 # Configure server mode for ethernet bridging # using a DHCP-proxy, where clients talk # to the OpenVPN server-side DHCP server # to receive their IP address allocation # and DNS server addresses. You must first use # your OS's bridging capability to bridge the TAP # interface with the ethernet NIC interface. # Note: this mode only works on clients (such as # Windows), where the client-side TAP adapter is # bound to a DHCP client. ;server-bridge # Push routes to the client to allow it # to reach other private subnets behind # the server. Remember that these # private subnets will also need # to know to route the OpenVPN client # address pool (10.8.0.0/255.255.255.0) # back to the OpenVPN server. push "route 192.168.13.1 255.255.255.0" push "dhcp-option DNS 192.168.13.201" push "dhcp-option DOMAIN blahblah.dyndns-wiki.com" ;push "route 192.168.20.0 255.255.255.0" # To assign specific IP addresses to specific # clients or if a connecting client has a private # subnet behind it that should also have VPN access, # use the subdirectory "ccd" for client-specific # configuration files (see man page for more info). # EXAMPLE: Suppose the client # having the certificate common name "Thelonious" # also has a small subnet behind his connecting # machine, such as 192.168.40.128/255.255.255.248. # First, uncomment out these lines: ;client-config-dir ccd ;route 192.168.40.128 255.255.255.248 # Then create a file ccd/Thelonious with this line: # iroute 192.168.40.128 255.255.255.248 # This will allow Thelonious' private subnet to # access the VPN. This example will only work # if you are routing, not bridging, i.e. you are # using "dev tun" and "server" directives. # EXAMPLE: Suppose you want to give # Thelonious a fixed VPN IP address of 10.9.0.1. # First uncomment out these lines: ;client-config-dir ccd ;route 10.9.0.0 255.255.255.252 # Then add this line to ccd/Thelonious: # ifconfig-push 10.9.0.1 10.9.0.2 # Suppose that you want to enable different # firewall access policies for different groups # of clients. There are two methods: # (1) Run multiple OpenVPN daemons, one for each # group, and firewall the TUN/TAP interface # for each group/daemon appropriately. # (2) (Advanced) Create a script to dynamically # modify the firewall in response to access # from different clients. See man # page for more info on learn-address script. ;learn-address ./script # If enabled, this directive will configure # all clients to redirect their default # network gateway through the VPN, causing # all IP traffic such as web browsing and # and DNS lookups to go through the VPN # (The OpenVPN server machine may need to NAT # or bridge the TUN/TAP interface to the internet # in order for this to work properly). 
;push "redirect-gateway def1 bypass-dhcp" # Certain Windows-specific network settings # can be pushed to clients, such as DNS # or WINS server addresses. CAVEAT: # http://openvpn.net/faq.html#dhcpcaveats # The addresses below refer to the public # DNS servers provided by opendns.com. ;push "dhcp-option DNS 208.67.222.222" ;push "dhcp-option DNS 208.67.220.220" # Uncomment this directive to allow different # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. ;client-to-client # Uncomment this directive if multiple clients # might connect with the same certificate/key # files or common names. This is recommended # only for testing purposes. For production use, # each client should have its own certificate/key # pair. # # IF YOU HAVE NOT GENERATED INDIVIDUAL # CERTIFICATE/KEY PAIRS FOR EACH CLIENT, # EACH HAVING ITS OWN UNIQUE "COMMON NAME", # UNCOMMENT THIS LINE OUT. ;duplicate-cn # The keepalive directive causes ping-like # messages to be sent back and forth over # the link so that each side knows when # the other side has gone down. # Ping every 10 seconds, assume that remote # peer is down if no ping received during # a 120 second time period. keepalive 10 120 # For extra security beyond that provided # by SSL/TLS, create an "HMAC firewall" # to help block DoS attacks and UDP port flooding. # # Generate with: # openvpn --genkey --secret ta.key # # The server and each client must have # a copy of this key. # The second parameter should be '0' # on the server and '1' on the clients. ;tls-auth ta.key 0 # This file is secret # Select a cryptographic cipher. # This config item must be copied to # the client config file as well. ;cipher BF-CBC # Blowfish (default) ;cipher AES-128-CBC # AES ;cipher DES-EDE3-CBC # Triple-DES # Enable compression on the VPN link. # If you enable it here, you must also # enable it in the client config file. comp-lzo # The maximum number of concurrently connected # clients we want to allow. ;max-clients 100 # It's a good idea to reduce the OpenVPN # daemon's privileges after initialization. # # You can uncomment this out on # non-Windows systems. user nobody group nogroup # The persist options will try to avoid # accessing certain resources on restart # that may no longer be accessible because # of the privilege downgrade. persist-key persist-tun # Output a short status file showing # current connections, truncated # and rewritten every minute. status openvpn-status.log # By default, log messages will go to the syslog (or # on Windows, if running as a service, they will go to # the "\Program Files\OpenVPN\log" directory). # Use log or log-append to override this default. # "log" will truncate the log file on OpenVPN startup, # while "log-append" will append to it. Use one # or the other (but not both). ;log openvpn.log ;log-append openvpn.log # Set the appropriate level of log # file verbosity. # # 0 is silent, except for fatal errors # 4 is reasonable for general usage # 5 and 6 can help to debug connection problems # 9 is extremely verbose verb 3 # Silence repeating messages. At most 20 # sequential messages of the same message # category will be output to the log. ;mute 20 I am using Windows 7 as the Client and set that up accordingly using the OpenVPN GUI. 
That conf file is as follows: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. blahblah.dyndns-wiki.com 1194 ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. ;remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) user nobody group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca "C:\\Program Files\OpenVPN\config\\ca.crt" cert "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.crt" key "C:\\Program Files\OpenVPN\config\\ChadMWade-THINK.key" # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 Not sure whats left to do.

    Read the article

  • Solaris 11.1: Changes to included FOSS packages

    - by alanc
    Besides the documentation changes I mentioned last time, another place you can see Solaris 11.1 changes before upgrading is in the online package repository, now that the 11.1 packages have been published to http://pkg.oracle.com/solaris/release/, as the “0.175.1.0.0.24.2” branch. (Oracle Solaris Package Versioning explains what each field in that version string means.) When you’re ready to upgrade to the packages from either this repo, or the support repository, you’ll want to first read How to Update to Oracle Solaris 11.1 Using the Image Packaging System by Pete Dennis, as there are a couple issues you will need to be aware of to do that upgrade, several of which are due to changes in the Free and Open Source Software (FOSS) packages included with Solaris, as I’ll explain in a bit. Solaris 11 can update more readily than Solaris 10 In the Solaris 10 and older update models, the way the updates were built constrained what changes we could make in those releases. To change an existing SVR4 package in those releases, we created a Solaris Patch, which applied to a given version of the SVR4 package and replaced, added or deleted files in it. These patches were released via the support websites (originally SunSolve, now My Oracle Support) for applying to existing Solaris 10 installations, and were also merged into the install images for the next Solaris 10 update release. (This Solaris Patches blog post from Gerry Haskins dives deeper into that subject.) Some of the restrictions of this model were that package refactoring, changes to package dependencies, and even just changing the package version number, were difficult to do in this hybrid patch/OS update model. For instance, when Solaris 10 first shipped, it had the Xorg server from X11R6.8. Over the first couple years of update releases we were able to keep it up to date by replacing, adding, & removing files as necessary, taking it all the way up to Xorg server release 1.3 (new version numbering begun after the X11R7 split of the X11 tree into separate modules gave each module its own version). But if you run pkginfo on the SUNWxorg-server package, you’ll see it still displayed a version number of 6.8, confusing users as to which version was actually included. We stopped upgrading the Xorg server releases in Solaris 10 after 1.3, as later versions added new dependencies, such as HAL, D-Bus, and libpciaccess, which were very difficult to manage in this patching model. (We later got libpciaccess to work, but HAL & D-Bus would have been much harder due to the greater dependency tree underneath those.) Similarly, every time the GNOME team looked into upgrading Solaris 10 past GNOME 2.6, they found these constraints made it so difficult it wasn’t worthwhile, and eventually GNOME’s dependencies had changed enough it was completely infeasible. Fortunately, this worked out for both the X11 & GNOME teams, with our management making the business decision to concentrate on the “Nevada” branch for desktop users - first as Solaris Express Desktop Edition, and later as OpenSolaris, so we didn’t have to fight to try to make the package updates fit into these tight constraints. Meanwhile, the team designing the new packaging system for Solaris 11 was seeing us struggle with these problems, and making this much easier to manage for both the development teams and our users was one of their big goals for the IPS design they were working on. 
Now that we’ve reached the first update release to Solaris 11, we can start to see the fruits of their labors, with more FOSS updates in 11.1 than we had in many Solaris 10 update releases, keeping software more up to date with the upstream communities. Of course, just because we can more easily update now, doesn’t always mean we should or will do so, it just removes the package system limitations from forcing the decision for us. So while we’ve upgraded the X Window System in the 11.1 release from X11R7.6 to 7.7, the Solaris GNOME team decided it was not the right time to try to make the jump from GNOME 2 to GNOME 3, though they did update some individual components of the desktop, especially those with security fixes like Firefox. In other parts of the system, decisions as to what to update were prioritized based on how they affected other projects, or what customer requests we’d gotten for them. So with all that background in place, what packages did we actually update or add between Solaris 11.0 and 11.1? Core OS Functionality One of the FOSS changes with the biggest impact in this release is the upgrade from Grub Legacy (0.97) to Grub 2 (1.99) for the x64 platform boot loader. This is the cause of one of the upgrade quirks, since to go from Solaris 11.0 to 11.1 on x64 systems, you first need to update the Boot Environment tools (such as beadm) to a new version that can handle boot environments that use the Grub2 boot loader. System administrators can find the details they need to know about the new Grub in the Administering the GRand Unified Bootloader chapter of the Booting and Shutting Down Oracle Solaris 11.1 Systems guide. This change was necessary to be able to support new hardware coming into the x64 marketplace, including systems using UEFI firmware or booting off disk drives larger than 2 terabytes. For both platforms, Solaris 11.1 adds rsyslog as an optional alternative to the traditional syslogd, and OpenSCAP for checking security configuration settings are compliant with site policies. Note that the support repo actually has newer versions of BIND & fetchmail than the 11.1 release, as some late breaking critical fixes came through from the community upstream releases after the Solaris 11.1 release was frozen, and made their way to the support repository. These are responsible for the other big upgrade quirk in this release, in which to upgrade a system which already installed those versions from the support repo, you need to either wait for those packages to make their way to the 11.1 branch of the support repo, or follow the steps in the aforementioned upgrade walkthrough to let the package system know it's okay to temporarily downgrade those. Developer Stack While Solaris 11.0 included Python 2.7, many of the bundled python modules weren’t packaged for it yet, limiting its usability. For 11.1, many more of the python modules include 2.7 versions (enough that I filtered them out of the below table, but you can always search on the package repository server for them. For other language runtimes and development tools, 11.1 expands the use of IPS mediated links to choose which version of a package is the default when the packages are designed to allow multiple versions to install side by side. For instance, in Solaris 11.0, GNU automake 1.9 and 1.10 were provided, and developers had to run them as either automake-1.9 or automake-1.10. 
In Solaris 11.1, when automake 1.11 was added, also added was a /usr/bin/automake mediated link, which points to the automake-1.11 program by default, but can be changed to another version by running the pkg set-mediator command. Mediated links were also used for the Java runtime & development kits in 11.1, changing the default versions to the Java 7 releases (the 1.7.0.x package versions), while allowing admins to switch links such as /usr/bin/javac back to Java 6 if they need to for their site, to deal with Java 7 compatibility or other issues, without having to update each usage to use the full versioned /usr/jdk/jdk1.6.0_35/bin/javac paths for every invocation. Desktop Stack As I mentioned before, we upgraded from X11R7.6 to X11R7.7, since a pleasant coincidence made the X.Org release dates line up nicely with our feature & code freeze dates for this release. (Or perhaps it wasn’t so coincidental, after all, one of the benefits of being the person making the release is being able to decide what schedule is most convenient for you, and this one worked well for me.) For the table below, I’ve skipped listing the packages in which we use the X11 “katamari” version for the Solaris package version (mainly packages combining elements of multiple upstream modules with independent version numbers), since they just all changed from 7.6 to 7.7. In the graphics drivers, we worked with Intel to update the Intel Integrated Graphics Processor support to support 3D graphics and kernel mode setting on the Ivy Bridge chipsets, and updated Nvidia’s non-FOSS graphics driver from 280.13 to 295.20. Higher up in the desktop stack, PulseAudio was added for audio support, and liblouis for Braille support, and the GNOME applications were built to use them. The Mozilla applications, Firefox & Thunderbird moved to the current Extended Support Release (ESR) versions, 10.x for each, to bring up-to-date security fixes without having to be on Mozilla’s agressive 6 week feature cycle release train. Detailed list of changes This table shows most of the changes to the FOSS packages between Solaris 11.0 and 11.1. As noted above, some were excluded for clarity, or to reduce noise and duplication. All the FOSS packages which didn't change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris. 
Package11.011.1 archiver/unrar 3.8.5 4.1.4 audio/sox 14.3.0 14.3.2 backup/rdiff-backup 1.2.1 1.3.3 communication/im/pidgin 2.10.0 2.10.5 compress/gzip 1.3.5 1.4 compress/xz not included 5.0.1 database/sqlite-3 3.7.6.3 3.7.11 desktop/remote-desktop/tigervnc 1.0.90 1.1.0 desktop/window-manager/xcompmgr 1.1.5 1.1.6 desktop/xscreensaver 5.12 5.15 developer/build/autoconf 2.63 2.68 developer/build/autoconf/xorg-macros 1.15.0 1.17 developer/build/automake-111 not included 1.11.2 developer/build/cmake 2.6.2 2.8.6 developer/build/gnu-make 3.81 3.82 developer/build/imake 1.0.4 1.0.5 developer/build/libtool 1.5.22 2.4.2 developer/build/makedepend 1.0.3 1.0.4 developer/documentation-tool/doxygen 1.5.7.1 1.7.6.1 developer/gnu-binutils 2.19 2.21.1 developer/java/jdepend not included 2.9 developer/java/jdk-6 1.6.0.26 1.6.0.35 developer/java/jdk-7 1.7.0.0 1.7.0.7 developer/java/jpackage-utils not included 1.7.5 developer/java/junit 4.5 4.10 developer/lexer/jflex not included 1.4.1 developer/parser/byaccj not included 1.14 developer/parser/java_cup not included 0.10 developer/quilt 0.47 0.60 developer/versioning/git 1.7.3.2 1.7.9.2 developer/versioning/mercurial 1.8.4 2.2.1 developer/versioning/subversion 1.6.16 1.7.5 diagnostic/constype 1.0.3 1.0.4 diagnostic/nmap 5.21 5.51 diagnostic/scanpci 0.12.1 0.13.1 diagnostic/wireshark 1.4.8 1.8.2 diagnostic/xload 1.1.0 1.1.1 editor/gnu-emacs 23.1 23.4 editor/vim 7.3.254 7.3.600 file/lndir 1.0.2 1.0.3 image/editor/bitmap 1.0.5 1.0.6 image/gnuplot 4.4.0 4.6.0 image/library/libexif 0.6.19 0.6.21 image/library/libpng 1.4.8 1.4.11 image/library/librsvg 2.26.3 2.34.1 image/xcursorgen 1.0.4 1.0.5 library/audio/pulseaudio not included 1.1 library/cacao 2.3.0.0 2.3.1.0 library/expat 2.0.1 2.1.0 library/gc 7.1 7.2 library/graphics/pixman 0.22.0 0.24.4 library/guile 1.8.4 1.8.6 library/java/javadb 10.5.3.0 10.6.2.1 library/java/subversion 1.6.16 1.7.5 library/json-c not included 0.9 library/libedit not included 3.0 library/libee not included 0.3.2 library/libestr not included 0.1.2 library/libevent 1.3.5 1.4.14.2 library/liblouis not included 2.1.1 library/liblouisxml not included 2.1.0 library/libtecla 1.6.0 1.6.1 library/libtool/libltdl 1.5.22 2.4.2 library/nspr 4.8.8 4.8.9 library/openldap 2.4.25 2.4.30 library/pcre 7.8 8.21 library/perl-5/subversion 1.6.16 1.7.5 library/python-2/jsonrpclib not included 0.1.3 library/python-2/lxml 2.1.2 2.3.3 library/python-2/nose not included 1.1.2 library/python-2/pyopenssl not included 0.11 library/python-2/subversion 1.6.16 1.7.5 library/python-2/tkinter-26 2.6.4 2.6.8 library/python-2/tkinter-27 2.7.1 2.7.3 library/security/nss 4.12.10 4.13.1 library/security/openssl 1.0.0.5 (1.0.0e) 1.0.0.10 (1.0.0j) mail/thunderbird 6.0 10.0.6 network/dns/bind 9.6.3.4.3 9.6.3.7.2 package/pkgbuild not included 1.3.104 print/filter/enscript not included 1.6.4 print/filter/gutenprint 5.2.4 5.2.7 print/lp/filter/foomatic-rip 3.0.2 4.0.15 runtime/java/jre-6 1.6.0.26 1.6.0.35 runtime/java/jre-7 1.7.0.0 1.7.0.7 runtime/perl-512 5.12.3 5.12.4 runtime/python-26 2.6.4 2.6.8 runtime/python-27 2.7.1 2.7.3 runtime/ruby-18 1.8.7.334 1.8.7.357 runtime/tcl-8/tcl-sqlite-3 3.7.6.3 3.7.11 security/compliance/openscap not included 0.8.1 security/nss-utilities 4.12.10 4.13.1 security/sudo 1.8.1.2 1.8.4.5 service/network/dhcp/isc-dhcp 4.1 4.1.0.6 service/network/dns/bind 9.6.3.4.3 9.6.3.7.2 service/network/ftp (ProFTPD) 1.3.3.0.5 1.3.3.0.7 service/network/samba 3.5.10 3.6.6 shell/conflict 0.2004.9.1 0.2010.6.27 shell/pipe-viewer 1.1.4 1.2.0 shell/zsh 4.3.12 4.3.17 
system/boot/grub 0.97 1.99 system/font/truetype/liberation 1.4 1.7.2 system/library/freetype-2 2.4.6 2.4.9 system/library/libnet 1.1.2.1 1.1.5 system/management/cim/pegasus 2.9.1 2.11.0 system/management/ipmitool 1.8.10 1.8.11 system/management/wbem/wbemcli 1.3.7 1.3.9.1 system/network/routing/quagga 0.99.8 0.99.19 system/rsyslog not included 6.2.0 terminal/luit 1.1.0 1.1.1 text/convmv 1.14 1.15 text/gawk 3.1.5 3.1.8 text/gnu-grep 2.5.4 2.10 web/browser/firefox 6.0.2 10.0.6 web/browser/links 1.0 1.0.3 web/java-servlet/tomcat 6.0.33 6.0.35 web/php-53 not included 5.3.14 web/php-53/extension/php-apc not included 3.1.9 web/php-53/extension/php-idn not included 0.2.0 web/php-53/extension/php-memcache not included 3.0.6 web/php-53/extension/php-mysql not included 5.3.14 web/php-53/extension/php-pear not included 5.3.14 web/php-53/extension/php-suhosin not included 0.9.33 web/php-53/extension/php-tcpwrap not included 1.1.3 web/php-53/extension/php-xdebug not included 2.2.0 web/php-common not included 11.1 web/proxy/squid 3.1.8 3.1.18 web/server/apache-22 2.2.20 2.2.22 web/server/apache-22/module/apache-sed 2.2.20 2.2.22 web/server/apache-22/module/apache-wsgi not included 3.3 x11/diagnostic/xev 1.1.0 1.2.0 x11/diagnostic/xscope 1.3 1.3.1 x11/documentation/xorg-docs 1.6 1.7 x11/keyboard/xkbcomp 1.2.3 1.2.4 x11/library/libdmx 1.1.1 1.1.2 x11/library/libdrm 2.4.25 2.4.32 x11/library/libfontenc 1.1.0 1.1.1 x11/library/libfs 1.0.3 1.0.4 x11/library/libice 1.0.7 1.0.8 x11/library/libsm 1.2.0 1.2.1 x11/library/libx11 1.4.4 1.5.0 x11/library/libxau 1.0.6 1.0.7 x11/library/libxcb 1.7 1.8.1 x11/library/libxcursor 1.1.12 1.1.13 x11/library/libxdmcp 1.1.0 1.1.1 x11/library/libxext 1.3.0 1.3.1 x11/library/libxfixes 4.0.5 5.0 x11/library/libxfont 1.4.4 1.4.5 x11/library/libxft 2.2.0 2.3.1 x11/library/libxi 1.4.3 1.6.1 x11/library/libxinerama 1.1.1 1.1.2 x11/library/libxkbfile 1.0.7 1.0.8 x11/library/libxmu 1.1.0 1.1.1 x11/library/libxmuu 1.1.0 1.1.1 x11/library/libxpm 3.5.9 3.5.10 x11/library/libxrender 0.9.6 0.9.7 x11/library/libxres 1.0.5 1.0.6 x11/library/libxscrnsaver 1.2.1 1.2.2 x11/library/libxtst 1.2.0 1.2.1 x11/library/libxv 1.0.6 1.0.7 x11/library/libxvmc 1.0.6 1.0.7 x11/library/libxxf86vm 1.1.1 1.1.2 x11/library/mesa 7.10.2 7.11.2 x11/library/toolkit/libxaw7 1.0.9 1.0.11 x11/library/toolkit/libxt 1.0.9 1.1.3 x11/library/xtrans 1.2.6 1.2.7 x11/oclock 1.0.2 1.0.3 x11/server/xdmx 1.10.3 1.12.2 x11/server/xephyr 1.10.3 1.12.2 x11/server/xorg 1.10.3 1.12.2 x11/server/xorg/driver/xorg-input-keyboard 1.6.0 1.6.1 x11/server/xorg/driver/xorg-input-mouse 1.7.1 1.7.2 x11/server/xorg/driver/xorg-input-synaptics 1.4.1 1.6.2 x11/server/xorg/driver/xorg-input-vmmouse 12.7.0 12.8.0 x11/server/xorg/driver/xorg-video-ast 0.91.10 0.93.10 x11/server/xorg/driver/xorg-video-ati 6.14.1 6.14.4 x11/server/xorg/driver/xorg-video-cirrus 1.3.2 1.4.0 x11/server/xorg/driver/xorg-video-dummy 0.3.4 0.3.5 x11/server/xorg/driver/xorg-video-intel 2.10.0 2.18.0 x11/server/xorg/driver/xorg-video-mach64 6.9.0 6.9.1 x11/server/xorg/driver/xorg-video-mga 1.4.13 1.5.0 x11/server/xorg/driver/xorg-video-openchrome 0.2.904 0.2.905 x11/server/xorg/driver/xorg-video-r128 6.8.1 6.8.2 x11/server/xorg/driver/xorg-video-trident 1.3.4 1.3.5 x11/server/xorg/driver/xorg-video-vesa 2.3.0 2.3.1 x11/server/xorg/driver/xorg-video-vmware 11.0.3 12.0.2 x11/server/xserver-common 1.10.3 1.12.2 x11/server/xvfb 1.10.3 1.12.2 x11/server/xvnc 1.0.90 1.1.0 x11/session/sessreg 1.0.6 1.0.7 x11/session/xauth 1.0.6 1.0.7 x11/session/xinit 1.3.1 1.3.2 x11/transset 
0.9.1 1.0.0 x11/trusted/trusted-xorg 1.10.3 1.12.2 x11/x11-window-dump 1.0.4 1.0.5 x11/xclipboard 1.1.1 1.1.2 x11/xclock 1.0.5 1.0.6 x11/xfd 1.1.0 1.1.1 x11/xfontsel 1.0.3 1.0.4 x11/xfs 1.1.1 1.1.2

P.S. To get the version numbers for this table, I ran a quick perl script over the output from:

% pkg contents -H -r -t depend -a type=incorporate -o fmri \
    `pkg contents -H -r -t depend -a type=incorporate -o fmri entire@0.5.11,5.11-0.175.1.0.0.24` \
    | sort > /tmp/11.1
% pkg contents -H -r -t depend -a type=incorporate -o fmri \
    `pkg contents -H -r -t depend -a type=incorporate -o fmri entire@0.5.11,5.11-0.175.0.0.0.2` \
    | sort > /tmp/11.0

    Read the article

< Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >