Search Results

Search found 8465 results on 339 pages for '40 love'.


  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
Introduction jQuery fits naturally into pages built on top of the ASP.NET MVC Framework. ASP.NET MVC keeps things very well organized and makes it quite hard to write messy code, largely because the pattern pushes you toward clean separation (you can still make a mess if you really want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row and data rows. With ASP.NET MVC we can do this quite easily, but the result will be a plain HTML table that only displays data; it does not include sorting, pagination or the other advanced features we were used to getting from the ASP.NET WebForms GridView. Yes, there is the WebGrid MVC helper, but what if we want to build something from a plain table in our own clean style? In one of my recent projects I have been using the jQuery tablesorter plugin together with its companion tablesorter.pager plugin. You don't need to know jQuery to make this work… You do need a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this post is how to attach these plugins to a plain HTML table and a pagination div, and give your table advanced sorting and pagination features.   Demo Project Resources The resources I'm using for this demo project are shown in the following Solution Explorer screenshot: Content/images – folder that contains the up/down arrow images, pagination buttons, etc. You can freely replace them with your own, but keep the names the same if you don't want to change anything in the CSS we will build later. Content/Site.css – the main CSS theme, where we will also add the theme for our table. Controllers/HomeController.cs – the controller I'm using for this project. Models/Person.cs – for this demo, I'm using a Person class. Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the scripts required to make the magic happen. Views/Home/Index.cshtml – the Index view (Razor view engine). The other items are not important for the demo. ASP.NET MVC 1. Model In this demo I use only one class, Person, which defines a Person entity with several properties. You can use your own model, perhaps one that reads data from a database or any other resource. Person.cs public class Person {     public string Name { get; set; }     public string Surname { get; set; }     public string Email { get; set; }     public int? Phone { get; set; }     public DateTime? DateAdded { get; set; }     public int? Age { get; set; }     public Person(string name, string surname, string email,         int? phone, DateTime? dateadded, int? age)     {         Name = name;         Surname = surname;         Email = email;         Phone = phone;         DateAdded = dateadded;         Age = age;     } } 2. View In our example we have only one page, Index.cshtml, which uses the Razor view engine. Razor is my favorite view engine for ASP.NET MVC because it's intuitive, fluid and keeps your code clean. 3. Controller Since this is a simple example with one page, we use a single HomeController.cs with two methods: Index, of type ActionResult, and GetPeople(), which creates and returns the list of people.
HomeController.cs public class HomeController : Controller {     //     // GET: /Home/     public ActionResult Index()     {         ViewBag.People = GetPeople();         return View();     }     public List<Person> GetPeople()     {         List<Person> listPeople = new List<Person>();                  listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070,DateTime.Now, 25));                     listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));         listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));         listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));         listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));         listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));         listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));         listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));         listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));         listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));         listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));         listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));         listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));         return listPeople;     } }   TABLE CSS/HTML DESIGN Now, lets start with the implementation. First of all, lets create the table structure and the main CSS. 1. HTML Structure @{     Layout = null;     } <!DOCTYPE html> <html> <head>     <title>ASP.NET & jQuery</title>     <!-- referencing styles, scripts and writing custom js scripts will go here --> </head> <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th> value </th>                 </tr>             </thead>             <tbody>                 <tr>                     <td>value</td>                 </tr>             </tbody>             <tfoot>                 <tr>                     <th> value </th>                 </tr>             </tfoot>         </table>         <div id="pager">                      </div>     </div> </body> </html> So, this is the main structure you need to create for each of your tables where you want to apply the functionality we will create. Of course the scripts are referenced once ;). As you see, our table has class tablesorter and also we have a div with id pager. In the next steps we will use both these to create the needed functionalities. 
The complete Index.cshtml coded to get the data from controller and display in the page is: <body>     <div>         <table class="tablesorter">             <thead>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </thead>             <tbody>                 @{                     foreach (var p in ViewBag.People)                     {                                 <tr>                         <td>@p.Name</td>                         <td>@p.Surname</td>                         <td>@p.Email</td>                         <td>@p.Phone</td>                         <td>@p.DateAdded</td>                     </tr>                     }                 }             </tbody>             <tfoot>                 <tr>                     <th>Name</th>                     <th>Surname</th>                     <th>Email</th>                     <th>Phone</th>                     <th>Date Added</th>                 </tr>             </tfoot>         </table>         <div id="pager" style="position: none;">             <form>             <img src="@Url.Content("~/Content/images/first.png")" class="first" />             <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />             <input type="text" class="pagedisplay" />             <img src="@Url.Content("~/Content/images/next.png")" class="next" />             <img src="@Url.Content("~/Content/images/last.png")" class="last" />             <select class="pagesize">                 <option selected="selected" value="5">5</option>                 <option value="10">10</option>                 <option value="20">20</option>                 <option value="30">30</option>                 <option value="40">40</option>             </select>             </form>         </div>     </div> </body> So, mainly the structure is the same. I have added @Razor code to create table with data retrieved from the ViewBag.People which has been filled with data in the home controller. 2. CSS Design The CSS code I’ve created is: /* DEMO TABLE */ body {     font-size: 75%;     font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;     color: #232323;     background-color: #fff; } table { border-spacing:0; border:1px solid gray;} table.tablesorter thead tr .header {     background-image: url(images/bg.png);     background-repeat: no-repeat;     background-position: center right;     cursor: pointer; } table.tablesorter tbody td {     color: #3D3D3D;     padding: 4px;     background-color: #FFF;     vertical-align: top; } table.tablesorter tbody tr.odd td {     background-color:#F0F0F6; } table.tablesorter thead tr .headerSortUp {     background-image: url(images/asc.png); } table.tablesorter thead tr .headerSortDown {     background-image: url(images/desc.png); } table th { width:150px;            border:1px outset gray;            background-color:#3C78B5;            color:White;            cursor:pointer; } table thead th:hover { background-color:Yellow; color:Black;} table td { width:150px; border:1px solid gray;} PAGINATION AND SORTING Now, when everything is ready and we have the data, lets make pagination and sorting functionalities 1. 
jQuery script references <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" /> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script> 2. jQuery Sorting and Pagination script   <script type="text/javascript">     $(function () {         $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })         .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });     }); </script> So, with only two lines of code, I'm using both the tablesorter and tablesorterPager plugins and passing some options to each. Options added: tablesorter - widthFixed: true – gives the columns a fixed width. tablesorter - sortList: [[0,0]] – an array of instructions for per-column sorting and direction in the format [[columnIndex, sortDirection], ... ], where columnIndex is a zero-based index for your columns left-to-right and sortDirection is 0 for ascending and 1 for descending. A valid argument that sorts ascending first by column 1 and then column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/). tablesorterPager – container: $("#pager") – tells the plugin which pager container to use, the div with id pager in our case. tablesorterPager – size: the default page size; I read the currently selected value, so if you move the selected attribute to another option in your select list, that number of rows becomes the default per page for the table as well. END RESULTS 1. Table once the page is loaded (default results per page is 5 and the table is automatically sorted by the 1st column, as sortList specifies) 2. Sorted by Phone descending 3. Changed pagination to 10 items per page 4. Sorted by Phone and Name (use SHIFT to sort on multiple columns) 5. Sorted by Date Added 6. Page 3, 5 items per page   ADDITIONAL ENHANCEMENTS We can make additional enhancements to the table, such as adding a search box for each column. I will cover this in one of my next blogs. Stay tuned. DEMO PROJECT You can download the demo project source code from HERE. CONCLUSION Once you finish with the demo, run your page and view the source code. You will be amazed by the purity of your code. Client-side pagination can be very useful. One of its benefits is performance, but if you have thousands of rows in your tables you will get the opposite result. Hence, it is sometimes a good idea to do the pagination on the back end, and the best compromise is to combine both approaches. I use at most 500 rows on the client side, and once the user reaches the last page we can trigger an AJAX request that fetches the next 500 rows using server-side pagination of the same data. I would like to recommend the following blog post http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx, which will help you understand how to return paged results from a repository. I hope this post was helpful for you. Wait for my next posts ;). Please do let me know your feedback. Best Regards, Hajan
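As a companion to the server-side pagination idea in the conclusion above, here is a minimal sketch of what the server-side half of that compromise could look like. It is only an illustration under stated assumptions: the action name GetPeoplePage, the 500-row default and the reuse of the demo's GetPeople() method are hypothetical, not part of the original project.

// A hypothetical action added inside the demo's HomeController (needs "using System.Linq;").
// It returns one chunk of rows as JSON; the client can request the next chunk via AJAX
// once the user reaches the last client-side page.
public ActionResult GetPeoplePage(int pageIndex, int pageSize = 500)
{
    var page = GetPeople()                 // the same in-memory list from the article;
        .OrderBy(p => p.Name)              // a stable order is required for predictable paging
        .Skip(pageIndex * pageSize)        // skip the rows the client already has
        .Take(pageSize)                    // return only the next chunk
        .ToList();

    return Json(page, JsonRequestBehavior.AllowGet);
}

In a real application the Skip/Take would be pushed down to the database query rather than applied to an in-memory list, so only the requested page is ever materialized.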

    Read the article

  • Dynamic JSON Parsing in .NET with JsonValue

    - by Rick Strahl
    So System.Json has been around for a while in Silverlight, but it's relatively new for the desktop .NET framework and now moving into the lime-light with the pending release of ASP.NET Web API which is bringing a ton of attention to server side JSON usage. The JsonValue, JsonObject and JsonArray objects are going to be pretty useful for Web API applications as they allow you dynamically create and parse JSON values without explicit .NET types to serialize from or into. But even more so I think JsonValue et al. are going to be very useful when consuming JSON APIs from various services. Yes I know C# is strongly typed, why in the world would you want to use dynamic values? So many times I've needed to retrieve a small morsel of information from a large service JSON response and rather than having to map the entire type structure of what that service returns, JsonValue actually allows me to cherry pick and only work with the values I'm interested in, without having to explicitly create everything up front. With JavaScriptSerializer or DataContractJsonSerializer you always need to have a strong type to de-serialize JSON data into. Wouldn't it be nice if no explicit type was required and you could just parse the JSON directly using a very easy to use object syntax? That's exactly what JsonValue, JsonObject and JsonArray accomplish using a JSON parser and some sweet use of dynamic sauce to make it easy to access in code. Creating JSON on the fly with JsonValue Let's start with creating JSON on the fly. It's super easy to create a dynamic object structure. JsonValue uses the dynamic  keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs using JsonValue:[TestMethod] public void JsonValueOutputTest() { // strong type instance var jsonObject = new JsonObject(); // dynamic expando instance you can add properties to dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1977; album.Songs = new JsonArray() as dynamic; dynamic song = new JsonObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JsonObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces proper JSON just as you would expect: {"AlbumName":"Dirty Deeds Done Dirt Cheap","Artist":"AC\/DC","YearReleased":1977,"Songs":[{"SongName":"Dirty Deeds Done Dirt Cheap","SongLength":"4:11"},{"SongName":"Love at First Feel","SongLength":"3:10"}]} The important thing about this code is that there's no explicitly type that is used for holding the values to serialize to JSON. I am essentially creating this value structure on the fly by adding properties and then serialize it to JSON. This means this code can be entirely driven at runtime without compile time restraints of structure for the JSON output. Here I use JsonObject() to create a new object and immediately cast it to dynamic. JsonObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JsonValue/JsonObject these values are stored in pseudo collections of key value pairs that are exposed as properties through the DynamicObject functionality in .NET. 
The syntax gets a little tedious only if you need to create child objects or arrays that have to be explicitly defined first. Other than that, the syntax looks like normal object access syntax. Always remember, though, that these values are dynamic - which means no IntelliSense and no compiler type checking. It's up to you to ensure that the values you create are accessed consistently and without typos in your code. Note that you can also access the JsonValue instance directly and get access to the underlying type. This means you can assign properties by string, which can be useful for fully data-driven JSON generation from other structures. Below you can see both styles of access next to each other:// strong type instance var jsonObject = new JsonObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JsonValue internally stores property keys and values in collections and you can iterate over them at runtime. You can also manipulate the collections if you need to get the object structure to look exactly like you want. Again, if you've used ExpandoObject before, JsonObject/JsonValue behave very similarly. Reading JSON strings into JsonValue The JsonValue structure supports importing JSON via the Parse() and Load() methods, which read JSON data from a string or from various streams, respectively. Essentially JsonValue includes the core JSON parsing needed to turn a JSON string into a collection of JsonValue objects that can then be referenced using familiar dynamic object syntax. Here's a simple example:[TestMethod] public void JsonValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"",""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JsonValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties, which is parsed into a JsonValue object and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JsonPrimitive and I have to assign them to their appropriate types first before I can do type comparisons. The dynamic properties will automatically cast to the expected type as long as the compiler can resolve the type of the assignment or usage. The AreEqual() method doesn't, as it expects two object instances, and comparing json.Company to "West Wind" is comparing two different types (JsonPrimitive to String), which fails. So the intermediary assignment is required to make the test pass. The JSON structure can be much more complex than this simple example.
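Before moving on to a more complex structure, it's worth noting the stream-based Load() counterpart mentioned above. The following is only a small sketch under stated assumptions (the helper method and the endpoint URL are hypothetical, not from the original article):

using System.Json;
using System.Net;

public static class JsonLoadExample
{
    public static dynamic LoadFromUrl(string url)
    {
        // Hypothetical endpoint; any stream containing JSON works the same way.
        var request = WebRequest.Create(url);
        using (var response = request.GetResponse())
        using (var stream = response.GetResponseStream())
        {
            // Load() parses directly from the stream - no intermediate string required.
            return JsonValue.Load(stream);
        }
    }
}

From there the result can be treated as dynamic and walked exactly like the Parse() examples.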
Here's another example of an array of albums serialized to JSON and then parsed through with JsonValue():[TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1977, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B00008BXJ4/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""67280fb8"", ""AlbumName"": ""Echoes, Silence, Patience & Grace"", ""Artist"": ""Foo Fighters"", ""YearReleased"": 2007, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/41mtlesQPVL._SL500_AA280_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/B000UFAURI/ref=as_li_ss_tl?ie=UTF8&tag=westwindtechn-20&linkCode=as2&camp=1789&creative=390957&creativeASIN=B000UFAURI"", ""Songs"": [ { ""AlbumId"": ""67280fb8"", ""SongName"": ""The Pretender"", ""SongLength"": ""4:29"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Let it Die"", ""SongLength"": ""4:05"" }, { ""AlbumId"": ""67280fb8"", ""SongName"": ""Erase/Replay"", ""SongLength"": ""4:13"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; dynamic albums = JsonValue.Parse(jsonString); foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName ); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName);}   It's pretty sweet how easy it becomes to parse even complex JSON and then just run through the object using object syntax, yet without an explicit type in the mix. In fact it looks and feels a lot like if you were using JavaScript to parse through this data, doesn't it? And that's the point… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET  Web Api  JSON

    Read the article

  • The How-To Geek Holiday Gift Guide (Geeky Stuff We Like)

    - by The Geek
    Welcome to the very first How-To Geek Holiday Gift Guide, where we’ve put together a list of our absolute favorites to help you weed through all of the junk out there to pick the perfect gift for anybody. Though really, it’s just a list of the geeky stuff we want. We’ve got a whole range of items on the list, from cheaper gifts that most anybody can afford, to the really expensive stuff that we’re pretty sure nobody is giving us. Stocking Stuffers Here’s a couple of ideas for items that won’t break the bank. LED Keychain Micro-Light   Magcraft 1/8-Inch Rare Earth Cube Magnets Best little LED keychain light around. If they don’t need the penknife of the above item this is the perfect gift. I give them out by the handfuls and nobody ever says anything but good things about them. I’ve got ones that are years old and still running on the same battery.  Price: $8   Geeks cannot resist magnets. Jason bought this pack for his fridge because he was sick of big clunky magnets… these things are amazing. One tiny magnet, smaller than an Altoid mint, can practically hold a clipboard right to the fridge. Amazing. I spend more time playing with them on the counter than I do actually hanging stuff.  Price: $10 Lots of Geeky Mugs   Astronomy Powerful Green Laser Pointer There’s loads of fun, geeky mugs you can find on Amazon or anywhere else—and they are great choices for the geek who loves their coffee. You can get the Caffeine mug pictured here, or go with an Atari one, Canon Lens, or the Aperture mug based on Portal. Your choice. Price: $7   No, it’s not a light saber, but it’s nearly bright enough to be one—you can illuminate low flying clouds at night or just blind some aliens on your day off. All that for an extremely low price. Loads of fun. Price: $15       Geeky TV Shows and Books Sometimes you just want to relax and enjoy a some TV or a good book. Here’s a few choices. The IT Crowd Fourth Season   Doctor Who, Complete Fifth Series Ridiculous, funny show about nerds in the IT department, loved by almost all the geeks here at HTG. Justin even makes this required watching for new hires in his office so they’ll get his jokes. You can pre-order the fourth season, or pick up seasons one, two, or three for even cheaper. Price: $13   It doesn’t get any more nerdy than Eric’s pick, the fifth all-new series of Doctor Who, where the Daleks are hatching a new master plan from the heart of war-torn London. There’s also alien vampires, humanoid reptiles, and a lot more. Price: $52 Battlestar Galactica Complete Series   MAKE: Electronics: Learning Through Discovery Watch the epic fight to save the human race by finding the fabled planet Earth while being hunted by the robotic Cylons. You can grab the entire series on DVD or Blu-ray, or get the seasons individually. This isn’t your average sci-fi TV show. Price: $150 for Blu-ray.   Want to learn the fundamentals of electronics in a fun, hands-on way? The Make:Electronics book helps you build the circuits and learn how it all works—as if you had any more time between all that registry hacking and loading software on your new PC. Price: $21       Geeky Gadgets for the Gadget-Loving Geek Here’s a few of the items on our gadget list, though lets be honest: geeks are going to love almost any gadget, especially shiny new ones. 
Klipsch Image S4i Premium Noise-Isolating Headset with 3-Button Apple Control   GP2X Caanoo MAME/Console Emulator If you’re a real music geek looking for some serious quality in the headset for your iPhone or iPod, this is the pair that Alex recommends. They aren’t terribly cheap, but you can get the less expensive S3 earphones instead if you prefer. Price: $50-100   Eric says: “As an owner of an older version, I can say the GP2X is one of my favorite gadgets ever. Touted a “Retro Emulation Juggernaut,” GP2X runs Linux and may be the only open source software console available. Sounds too good to be true, but isn’t.” Price: $150 Roku XDS Streaming Player 1080p   Western Digital WD TV Live Plus HD Media Player If you do a lot of streaming over Netflix, Hulu Plus, Amazon’s Video on Demand, Pandora, and others, the Roku box is a great choice to get your content on your TV without paying a lot of money.  It’s also got Wireless-N built in, and it supports full 1080P HD. Price: $99   If you’ve got a home media collection sitting on a hard drive or a network server, the Western Digital box is probably the cheapest way to get that content on your TV, and it even supports Netflix streaming too. It’ll play loads of formats in full HD quality. Price: $99 Fujitsu ScanSnap S300 Color Mobile Scanner   Doxie, the amazing scanner for documents Trevor said: “This wonderful little scanner has become absolutely essential to me. My desk used to just be a gigantic pile of papers that I didn’t need at the moment, but couldn’t throw away ‘just in case.’ Now, every few weeks, I’ll run that paper pile through this and then happily shred the originals!” Price: $300   If you don’t scan quite as often and are looking for a budget scanner you can throw into your bag, or toss into a drawer in your desk, the Doxie scanner is a great alternative that I’ve been using for a while. It’s half the price, and while it’s not as full-featured as the Fujitsu, it might be a better choice for the very casual user. Price: $150       (Expensive) Gadgets Almost Anybody Will Love If you’re not sure that one of the more geeky presents is gonna work, here’s some gadgets that just about anybody is going to love, especially if they don’t have one already. Of course, some of these are a bit on the expensive side—but it’s a wish list, right? Amazon Kindle       The Kindle weighs less than a paperback book, the screen is amazing and easy on the eyes, and get ready for the kicker: the battery lasts at least a month. We aren’t kidding, either—it really lasts that long. If you don’t feel like spending money for books, you can use it to read PDFs, and if you want to get really geeky, you can hack it for custom screensavers. Price: $139 iPod Touch or iPad       You can’t go wrong with either of these presents—the iPod Touch can do almost everything the iPhone can do, including games, apps, and music, and it has the same Retina display as the iPhone, HD video recording, and a front-facing camera so you can use FaceTime. Price: $229+, depending on model. The iPad is a great tablet for playing games, browsing the web, or just using on your coffee table for guests. It’s well worth buying one—but if you’re buying for yourself, keep in mind that the iPad 2 is probably coming out in 3 months. Price: $500+ MacBook Air  The MacBook Air comes in 11” or 13” versions, and it’s an amazing little machine. It’s lightweight, the battery lasts nearly forever, and it resumes from sleep almost instantly. 
Since it uses an SSD drive instead of a hard drive, you’re barely going to notice any speed problems for general use. So if you’ve got a lot of money to blow, this is a killer gift. Price: $999 and up. Stuck with No Idea for a Present? Gift Cards! Yeah, you’re not going to win any “thoughtful present” awards with these, but you might just give somebody what they really want—the new Angry Birds HD for their iPad, Cut the Rope, or anything else they want. ITunes Gift Card   Amazon.com Gift Card Somebody in your circle getting a new iPod, iPhone, or iPad? You can get them an iTunes gift card, which they can use to buy music, games or apps. Yep, this way you can gift them a copy of Angry Birds if they don’t already have it. Or even Cut the Rope.   No clue what to get somebody on your list? Amazon gift cards let them buy pretty much anything they want, from organic weirdberries to big screen TVs. Yeah, it’s not as thoughtful as getting them a nice present, but look at the bright side: maybe they’ll get you an Amazon gift card and it’ll balance out. That’s the highlights from our lists—got anything else to add? Share your geeky gift ideas in the comments.

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been in meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.   Have a look at some of the Flickr pages. Uncle Spike The B-Men – Woolverine The 2011 BoyZ iN Sink reunion tour turned out to be their last Error 404 – Flibble not found Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several courses in photography. Although his work was for a while ignored by the more conventional world of ‘art’ photography they became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website , the source of Tech news from the ‘Cambridge cluster’ Put really simply Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house,Red Gate Software as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers. 
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
A book has been the single-most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear picture that publishers might hear, or it gives me the confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizing that it’d be just the perfect gift for their difficult to buy for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing to having a physical thing to hold in your hands. If the project is successful is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there is is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light in the format it deserves.  

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #003

    - by pinaldave
Here is the list of curated articles from SQLAuthority.com across all these years. Instead of listing every article, I have selected a few of my favorites and listed them here with additional notes below each one. Let me know which of the following is your favorite article from memory lane. 2006 This was the first year of my blogging and I was learning lots of new things as I went. I was indeed an infant in blogging a few years ago. However, as time passed I learned a lot. This was a year of experiments and new learning. 2007 Working as a full-time DBA I often encountered various errors, and I started to learn how to avoid those errors and to document them. ERROR Msg 5174 Each file size must be greater than or equal to 512 KB Whenever I see this error I wonder why someone is trying to create such an extremely small database. It does not matter what I think, though; I keep seeing this error often in the industry. The solution to the error is equally interesting – just create a larger database. Dilbert Humor This was my very first encounter with database humor and I started to love it. It does not matter how many times we read this cartoon, it does not get old. Generate Script with Data from Database – Database Publishing Wizard Generating a schema script with data is one of the most frequently performed tasks among SQL Server data professionals. There are many ways to do it. In the above article I demonstrated how we can use the Database Publishing Wizard to accomplish it. It was new to me at that time, but I still have not seen much adoption of it in the industry. Here is one of my videos where I demonstrate how we can generate data with schema. 2008 Delete Backup History – Cleanup Backup History Deleting backup history is important too, but it should be done carefully. If this is not carried out at regular intervals, there is a good chance that MSDB will fill up with all the old history. Every organization is different. Some would like to keep the history for 30 days and some for a year, but there should be some limit. One should regularly archive the database backup history. South Asia MVP Open Days 2008 This was my very first year as a Microsoft MVP. I indeed had a big blast at the event and the fun was incredible. After this event I have attended many different MVP events, but the fun and learning this particular event offered were amazing, and just like me many others have not been able to forget it. Here are other links related to the event: South Asia MVP Open Day 2008 – Goa South Asia MVP Open Day 2008 – Goa – Day 1 South Asia MVP Open Day 2008 – Goa – Day 2 South Asia MVP Open Day 2008 – Goa – Day 3 2009 Enable or Disable Constraint  This is a very simple script, but I personally keep forgetting it, so I blogged it. To this day I keep referencing it again and again, as sometimes a very little thing is hard to remember. Policy Based Management – Create, Evaluate and Fix Policies This article covers one of the most spectacular features of SQL Server 2008 – policy-based management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administrative assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.
SQLPASS 2009 – My Very First SQLPASS Experience Just brilliant! I had never experienced such a thing in my life. SQL, SQL and SQL – SQL all around! I am listing my own reasons here in order of importance to me. Networking with SQL fellows and experts Putting faces to names or avatars Learning and improving my SQL skills Understanding the structure of the largest SQL Server professional association Attending my favorite training sessions Since then I have never missed this event even once. It is my favorite event and something that keeps me going. Here are additional posts related to SQLPASS 2009. SQL PASS Summit, Seattle 2009 – Day 1 SQL PASS Summit, Seattle 2009 – Day 2 SQL PASS Summit, Seattle 2009 – Day 3 SQL PASS Summit, Seattle 2009 – Day 4 2010 Get All the Information of Database using sys.databases Even though we believe that we know everything about our databases, there is a lot we do not know about them. This little script lets us see many details about our databases that we may not be familiar with. Run it on your server today and see how well you know your database. Reducing CXPACKET Wait Stats for High Transactional Database While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or benefit, or not. I discuss this in the article – a bit long, but insightful for sure. Error related to Database in Use There are many database management operations in SQL Server which require exclusive access to the database, and it is not always possible to get it. When a database is online in SQL Server, applications or system threads often access it. This means the database cannot be accessed exclusively, and the operations which require this access throw an error. There is a very easy method to overcome this minor issue – a single-line script can give you exclusive access to the database. Difference between DATETIME and DATETIME2 Developers have found the root cause of the problem when dealing with date functions – when date/time values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results do not match each other as expected. In this blog post I go over the very interesting details of the difference between DATETIME and DATETIME2. History of SQL Server Database Encryption I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology. The same discussion led to talking about the history of encryption tools in SQL Server. Michael pointed me to page 18 of his encryption book. He explicitly gave me permission to reproduce the relevant part of the history from his book. 2011 Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY Sometimes an interesting feature and a smart audience make a total difference. For the last two days I have been writing about the SQL Server 2012 functions FIRST_VALUE and LAST_VALUE. I created a puzzle which was very interesting and got many people to attempt to resolve it.
It was based on following two articles: Introduction to FIRST_VALUE and LAST_VALUE Introduction to FIRST_VALUE and LAST_VALUE with OVER clause I even provided the hint about how one can solve this problem. The best part was many people solved the problem without using hints! Try your luck!  A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available This is a great problem and everybody would love to have it. We had it and we loved it. Our book got out of stock in 48 hours of releasing and stocks were empty. We faced many issues and learned many valuable lessons. Some we were able to avoid in the future and some we are still facing it as those problems have no solutions. However, since that day – our books never gone out of stock. This inspiring learning story for us and I am confident that you will love to read it as well. Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012 SQL Server 2012 introduces new analytical function LEAD() and LAG(). This function accesses data from a subsequent row (for lead) and previous row (for lag) in the same result set without the use of a self-join . It will be very difficult to explain this in words so I will attempt small example to explain you this function. I had a fantastic time writing this blog post and I am very confident when you read it, you will like the same. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. “The new phone books are here!  The new phone books are here!  I’m a somebody!” - Steve Martin in The Jerk It is a Dell Precision M4500 with an Intel i7 Core 2.8 GHZ running 64-bit Windows 7 with a 15.6” widescreen, 8 GB RAM, 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32–bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward.  I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let’s just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on-board a new employee and a new contract developer next week.  But that’s a different story for a different time. But the main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them: Tool Purpose Virtual Clone Drive Easily mount an ISO image as a DVD Drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools. SQL Server 2008 R2 Developer Edition We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but intended for development use. SQL Server 2005 Developer Edition (BIDS ONLY) The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server.  There is some different format and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon in some manner similar to Visual Studio now allows you to pick which version of the .NET Framework you are coding against. Visual Studio 2010 Premium All of our application development is in ASP.NET, and we might as well use the tool designed for it. I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev. Vault Professional Client Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional. 
Red-Gate Developer Bundle with the SQL Source Control update for Vault I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate’s first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraine’s trying to understand somebody else’s code when their indenting was nonexistant, or worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer’s copy of the database in sync with one another. Fiddler Helps you watch the whole traffic stream on web visits.  Haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8. Funduc Search & Replace Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  Fantastic tool, and a bargain price. SciTE SciTE is a Scintilla based Text Editor and it is a fantastic, light-weight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible. Infragistics Controls We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and coming soon MVC/JQuery. WinZip - WITH Command-Line add-in The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned Build packages are zip files. XML Notepad Haven’t used this a lot myself, but one of my team really likes it for examining large XML files. LINQPad Again, haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills which will come in handy as we implement Entity Framework. SQL Sentry Plan Explorer SQL Server Show Plan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for just compressing the graphical plan into more readable layout. Araxis Merge A great DIFF and Merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally, I like the cross-edit capabilities of Araxis Merge.  
For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases. Idera SQL Admin Toolset A great collection of tools including a password checker to help analyze your SQL Server for weak user passwords, a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files. Idera SQL Job Manager This free tool provides a nice calendar view of SQL Server Job Schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view on potential resource conflicts across multiple instances of SQL Server. DBFViewer 2000 I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate them to SQL Server. Balsamiq Mockups We are still in evaluation-mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn which definitely has some psychological benefits when communicating to users, too. FeedDemon I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync.  I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article. Synergy+ In my office, I run four monitors across two computers all with one mouse and keyboard.  Synergy is the magic software that makes this work. TweetDeck I’m not the most active Tweeter in the world, but when I want to check-in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns setup to make it easy to monitor #sqlpass, #PASSProfDev, and short term events like #sqlsat68.   Whew!  That’s a lot.  No wonder it took me a couple of days to get everything setup the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain.  This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types.
What is a Tuple?
The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate.  Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order.  That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int.  The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.  Some people tend to love tuples because they give you a quick way to combine multiple values into one result.  This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of.  They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine.  In these cases, you do not need to define a class; simply create a tuple containing the types you wish to return, and you are ready to go. On the other hand, there are some people who see tuples as a crutch in object-oriented design.  They may view the tuple as a very watered down class with very little inherent semantic meaning.  As an example, what if you saw this in a piece of code: 1: var x = new Tuple<int, int>(2, 5); What are the contents of this tuple?  If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question.  The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as: 1: public sealed class RetrySettings 2: { 3: public int TimeoutSeconds { get; set; } 4: public int MaxRetries { get; set; } 5: } Here, the meaning of each int in the class is much more clear, but it's a bit more work to create the class and can clutter a solution with extra classes. So, what's the correct way to go?  That's a tough call.  You will have people who will argue quite well for one or the other.  For me, I consider the Tuple to be a tool that makes it easy to collect values together.  There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived, so the meaning isn't easily lost, and I feel this is a good compromise.  If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class. Finally, it should be noted that tuples are immutable.  That means they are assigned a value at construction, and that value cannot be changed.  Now, of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to.
Tuples from 1 to N
Tuples come in all sizes: you can have as few as one element in your tuple, or as many as you like.  However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly.
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments: 1: // a singleton tuple of integer 2: Tuple<int> x; 3:  4: // or more 5: Tuple<int, double> y; 6:  7: // up to seven 8: Tuple<int, double, char, double, int, string, uint> z; For anything eight and above, we have to nest tuples inside of tuples.  The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks at runtime to make sure that the type is a Tuple.  This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property. 1: // an 8-tuple 2: Tuple<int, int, int, int, int, double, char, Tuple<string>> t8; 3:  4: // a 9-tuple 5: Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9; 6:  7: // a 16-tuple 8: Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int,int>>> t16; Notice that on the 16-tuple we had to have a nested tuple in the nested tuple.  Since the tuple can only support up to seven items, and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.
Constructing tuples
Constructing tuples is just as straightforward as declaring them.  That said, you have two distinct ways to do it.  The first is to construct the tuple explicitly yourself: 1: var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927); This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively.  Make sure the order of the arguments supplied matches the order of the types!  Also notice that we can't half-assign a tuple or create a default tuple.  Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time. Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods.  These methods (much like C++'s std::make_pair function) will infer the types from the method call so you don't have to type them in.  This can dramatically reduce the amount of typing required, especially for complex tuples! 1: // this 4-tuple is typed Tuple<int, double, string, char> 2: var t4 = Tuple.Create(42, 3.1415927, "Love", 'X'); Notice how much easier it is to use the factory methods and infer the types?  This can cut down on typing quite a bit when constructing tuples.  The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton as we described before in nested tuples.
Accessing tuple members
Accessing a tuple's members is simplicity itself… mostly.  The properties for accessing up to the first seven items are Item1, Item2, …, Item7.  If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner.  Once again, keep in mind that these are read-only properties and cannot be changed.
1: // for septuples and below, use the Item properties 2: var t1 = Tuple.Create(42, 3.14); 3:  4: Console.WriteLine("First item is {0} and second is {1}", 5: t1.Item1, t1.Item2); 6:  7: // for octuples and above, use Rest to retrieve nested tuple 8: var t9 = new Tuple<int, int, int, int, int, int, int, 9: Tuple<int, int>>(1,2,3,4,5,6,7,Tuple.Create(8,9)); 10:  11: Console.WriteLine("The 8th item is {0}", t9.Rest.Item1);
Tuples are IStructuralComparable and IStructuralEquatable
Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples.  These interfaces, IStructuralComparable and IStructuralEquatable, make it easy to compare two tuples for equality and ordering.  This is invaluable for sorting, and makes it easy to use tuples as a compound key for a dictionary (one of my favorite uses)! Why is this so important?  Remember when we said that some folks think tuples are too generic and you should define a custom class?  This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task!  Thankfully the tuple does this all for you through the explicit implementations of these interfaces. For equality, two tuples are equal if all elements are equal between the two tuples, that is, if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on.  For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1, and sees which one has a smaller Item1.  If one has a smaller Item1, it is the smaller tuple.  However, if both Item1 are the same, it compares Item2 and so on. For example: 1: var t1 = Tuple.Create(1, 3.14, "Hi"); 2: var t2 = Tuple.Create(1, 3.14, "Hi"); 3: var t3 = Tuple.Create(2, 2.72, "Bye"); 4:  5: // true, t1 == t2 because all items are == 6: Console.WriteLine("t1 == t2 : " + t1.Equals(t2)); 7:  8: // false, t2 != t3 because at least one item is different 9: Console.WriteLine("t2 == t3 : " + t2.Equals(t3)); The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there, you'll have to manually cast to the appropriate interface: 1: // true because t1.Item1 < t3.Item1, if had been same would check Item2 and so on 2: Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0)); So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList: 1: var tupleDict = new Dictionary<Tuple<int, double, string>, string>(); 2:  3: tupleDict.Add(t1, "First tuple"); 4: tupleDict.Add(t3, "Second tuple"); 5:  6: // note: adding t2 as well would throw an ArgumentException, because t2 equals t1 and is already a key Because Tuple overrides GetHashCode(), and its IStructuralEquatable implementation creates this hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1 day chart, 1 week chart, etc.): 1: // the account number (string) and number of days (int) are key to get cached chart 2: var chartCache = new Dictionary<Tuple<string, int>, IChart>();
Summary
The System.Tuple, like any tool, is best used where it will achieve a greater benefit.  I wouldn't advise overusing them on objects with a large scope, or it can become difficult to maintain.  However, when used properly in a well-defined scope they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter. Technorati Tags: C#,.NET,Tuple,Little Wonders
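To make the two uses called out in the summary concrete (returning multiple values from a method and using a tuple as a compound dictionary key), here is a minimal sketch; the ParseHostAndPort method, the key values, and the cached strings are hypothetical examples, not code from the original post:

using System;
using System.Collections.Generic;

static class TupleSketch
{
    // Returns two values without out/ref parameters by packing them into a Tuple.
    static Tuple<string, int> ParseHostAndPort(string address)
    {
        string[] parts = address.Split(':');
        return Tuple.Create(parts[0], int.Parse(parts[1]));
    }

    static void Main()
    {
        var endpoint = ParseHostAndPort("localhost:8080");
        Console.WriteLine("Host = {0}, Port = {1}", endpoint.Item1, endpoint.Item2);

        // Compound key: structural equality and hashing mean a freshly created,
        // equal tuple finds the cached entry.
        var cache = new Dictionary<Tuple<string, int>, string>();
        cache[Tuple.Create("ACCT-1", 7)] = "1 week chart";

        string chart;
        if (cache.TryGetValue(Tuple.Create("ACCT-1", 7), out chart))
        {
            Console.WriteLine(chart); // prints "1 week chart"
        }
    }
}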

    Read the article

  • The Microsoft Ajax Library and Visual Studio Beta 2

    - by Stephen Walther
    Visual Studio 2010 Beta 2 was released this week and one of the first things that I hope you notice is that it no longer contains the latest version of ASP.NET AJAX. What happened? Where did AJAX go? Just like Sting and The Police, just like Phil Collins and Genesis, just like Greg Page and the Wiggles, AJAX has gone out of band! We are starting a solo career.
A Name Change
First things first. In previous releases, our Ajax framework was named ASP.NET AJAX. We have now changed the name of the framework to the Microsoft Ajax Library. There are two reasons behind this name change. First, the members of the Ajax team got tired of explaining to everyone that our Ajax framework is not tied to the server-side ASP.NET framework. You can use the Microsoft Ajax Library with ASP.NET Web Forms, ASP.NET MVC, PHP, Ruby on Rails, and even pure HTML applications. Our framework can be used as a client-only framework, and having the word ASP.NET in our name was confusing people. Second, it was time to start spelling the word Ajax like everyone else. Notice that the name is the Microsoft Ajax Library and not the Microsoft AJAX library. Microsoft originally used upper case AJAX because AJAX was originally an acronym for Asynchronous JavaScript and XML. And, according to Strunk and Wagnell, acronyms should be all uppercase. However, Ajax is one of those words that have migrated from acronym status to “just a word” status. So whenever you hear one of your co-workers talk about ASP.NET AJAX, gently correct your co-worker and say “It is now called the Microsoft Ajax Library.”
Why OOB?
But why move out-of-band (OOB)? The short answer is that we have had approximately 6 preview releases of the Microsoft Ajax Library over the last year. That’s a lot. We pride ourselves on being agile. Client-side technology evolves quickly. We want to be able to get a preview version of the Microsoft Ajax Library out to our customers, get feedback, and make changes to the library quickly. Shipping the Microsoft Ajax Library out-of-band keeps us agile and enables us to continue to ship new versions of the library even after ASP.NET 4 ships.
Showing Love for JavaScript Developers
One area in which we have received a lot of feedback is around making the Microsoft Ajax Library easier to use for developers who are comfortable with JavaScript. We also wanted to make it easy for jQuery developers to take advantage of the innovative features of the Microsoft Ajax Library. To achieve these goals, we’ve added the following features to the Microsoft Ajax Library (these features are included in the latest preview release that you can download right now):
A simplified imperative syntax – We wanted to make it brain-dead simple to create client-side Ajax controls when writing JavaScript.
A client script loader – We wanted the Microsoft Ajax Library to load all of the scripts required by a component or control automatically.
jQuery integration – We love the jQuery selector syntax. We wanted to make it easy for jQuery developers to use the Microsoft Ajax Library without changing their programming style.
If you are interested in learning about these new features of the Microsoft Ajax Library, I recommend that you read the following blog post by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2009/10/15/announcing-microsoft-ajax-library-preview-6-and-the-microsoft-ajax-minifier.aspx
Downloading the Latest Version of the Microsoft Ajax Library
Currently, the best place to download the latest version of the Microsoft Ajax Library is directly from the ASP.NET CodePlex project: http://aspnet.codeplex.com/ As I write this, the current version is Preview 6. The next version is coming out at the PDC.
Summary
I’m really excited about the future of the Microsoft Ajax Library. Moving outside of the ASP.NET framework provides us the flexibility to remain agile and continue to innovate aggressively. The latest preview release of the Microsoft Ajax Library includes several major new features including a client script loader, jQuery integration, and a simplified client control creation syntax.

    Read the article

  • ASP.NET AJAX, jQuery and AJAX Control Toolkit&ndash;the roadmap

    - by Harish Ranganathan
    The opinions mentioned herein are solely mine and do not reflect those of my employer. I wanted to post this for a long time but couldn’t.  I have been an ASP.NET developer for quite some time and have worked with versions 1.1, 2.0, and 3.5, as well as the latest 4.0. With ASP.NET 2.0 and Visual Studio 2005 came the era of AJAX and rich UI style web applications.  So, ASP.NET AJAX (codenamed “ATLAS”) was released almost a year later.  This was called ASP.NET 2.0 AJAX Extensions.  This release was supported further with Visual Studio 2005 Service Pack 1. The initial release of ASP.NET AJAX had 3 components:
ASP.NET AJAX Library – Client library that is used internally by the server controls, as well as scripts that can be used to write hand-coded Ajax-style pages.
ASP.NET AJAX Extensions – Server controls, i.e. the ScriptManager, ScriptManagerProxy, UpdatePanel, UpdateProgress and Timer server controls.  They work pretty much like other server controls in terms of development and render client-side behavior automatically.
AJAX Control Toolkit – Set of server controls that extend a behavior or a capability, e.g. the AutoCompleteExtender.
The AJAX Control Toolkit was a separate download from CodePlex, while the first two get installed when you install ASP.NET AJAX Extensions. With Visual Studio 2008, ASP.NET AJAX made its way into the runtime.  So one doesn’t need to separately install the AJAX Extensions.  However, the AJAX Control Toolkit still remained a community project that can be downloaded from CodePlex.  By then, the toolkit had close to 30 controls. So, the approach was clear, viz., client-side programming using the ASP.NET AJAX Library and a server-side model using built-in controls (UpdatePanel) and/or the AJAX Control Toolkit. However, with Visual Studio 2008 Service Pack 1, we also added support for the increasingly popular jQuery library.  That is, you can use jQuery along with ASP.NET and would also get IntelliSense for jQuery in Visual Studio 2008. Some of you who have played with Visual Studio 2010 Beta and .NET Framework 4 Beta would also have explored the new AJAX Library, which had a lot of templates, live bindings, etc.  But, overall, the road map ahead is much simpler. For client-side programming using JavaScript for implementing AJAX in ASP.NET, the recommendation is to use jQuery, which will be shipped along with Visual Studio and provides IntelliSense as well. For server-side programming, you can use the server controls like UpdatePanel etc., and also the AJAX Control Toolkit, which has close to 40 controls now.  The AJAX Control Toolkit still remains a separate download at CodePlex.  You can download the different versions for different versions of ASP.NET at http://ajaxcontroltoolkit.codeplex.com/ The Microsoft AJAX Library will still be available through the CDN (Content Delivery Network) channels.  You can view the CDN resources at http://www.asp.net/ajaxlibrary/CDN.ashx Similarly, even jQuery and the toolkit are available as CDN resources in case you choose not to download them and have them as a part of your application. I think this makes AJAX development pretty simple.  Earlier, having the Microsoft AJAX Library as well as jQuery for client-side scripting made it confusing to decide which one to use.  With this roadmap, it is simple and clear. You can read more on this at http://ajax.asp.net I hope this post provided some clarity on the AJAX roadmap, as best I could decipher it from various product teams. Cheers!!!
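As a small illustration of the CDN option mentioned above, here is a minimal sketch of one way to add a CDN-hosted script reference from Web Forms code-behind; the page class name and the exact script URL are placeholder assumptions (check the CDN page referenced above for the current paths), and it assumes the page markup already contains an asp:ScriptManager control:

using System;
using System.Web.UI;

public partial class _Default : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // GetCurrent returns the ScriptManager declared on the page (or null if there isn't one).
        ScriptManager manager = ScriptManager.GetCurrent(this);
        if (manager != null)
        {
            // Placeholder URL - substitute the path listed on the CDN page above.
            manager.Scripts.Add(new ScriptReference("http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js"));
        }
    }
}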

    Read the article

  • Developer’s Life – Every Developer is a Batman

    - by Pinal Dave
    Batman is one of the darkest superheroes in the fantasy canon.  He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents.  Despite his dark back story, he possesses a lot of admirable abilities that I feel bear comparison to developers. Batman has the distinct advantage that his alter ego, Bruce Wayne, is a millionaire (or billionaire in today’s reboots).  This means that he can spend his time working on his athletic abilities, building a secret lair, and investing his money in cool tools.  This might not be true for developers (well, most developers), but I still think there are many parallels. So how are developers like Batman? Well, read on for my list of reasons.
Develop Skills
Batman works on his skills.  He didn’t get the strength to scale Gotham’s skyscrapers by inheriting his powers or suffering an industrial accident.  Developers also hone their skills daily.  They might not be doing pull-ups and scaling buildings, but I think their skills are just as impressive.
Clear Goals
Batman is driven to build a better Gotham.  He knows that the criminal who killed his parents was a small-time thief, not a super villain – so he has larger goals in mind than simply chasing one villain.  He wants his city as a whole to be better.  Developers are also driven to make things better.  It can be easy to get hung up on one problem, but in the end it is best to focus on the well-being of the system as a whole.
Ultimate Teamplayers
Batman is the hero Gotham needs – even when that means appearing to be the bad guy.  Developers probably know that feeling well.  Batman takes the fall for a crime he didn’t commit, and developers often have to deliver bad news about the limitations of their networks and servers.  It’s not always a job filled with glory and thanks, but someone has to do it.
Always Ready
Batman and the Boy Scouts have this in common – they are always prepared.  Let’s add developers to this list.  Batman has an amazing tool belt with gadgets and gizmos, and let’s not even get into all the functions of the Batmobile!  Developers’ strengths might be the knowledge and skills they have developed, not tools they can carry in a utility belt, but that doesn’t make them any less impressive.
100% Dedication
Bruce Wayne cultivates the personality of a playboy, never keeping the same girlfriend for long and spending his time partying.  Even though he hides it, his driving force is his deep concern and love for his friends and the city as a whole.  Developers also care a lot about their company and employees – even when it is driving them crazy.  You do your best work when you care about your job on a personal level.
Quality Output
Batman believes the city deserves to be saved.  The citizens might have a love-hate relationship with both Batman and Bruce Wayne, and employees might not always appreciate developers.  Batman and developers, though, keep working for the best of everyone. I hope you are all enjoying reading about developers-as-superheroes as much as I am enjoying writing about them.  Please tell me how else developers are like superheroes in the comments – especially if you know any developers who are faster than a speeding bullet and can leap tall buildings in a single bound. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. By using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds coding by reducing the amount of typing required when creating a view-specific model and its metadata. To use this template, download the template provided at the bottom. Set two values in the template file. The first one should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated. string inputFile = @"Northwind.edmx"; string suffix = "AutoMetadata"; Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have. One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the T4 template. //------------------------------------------------------------------------------ // <auto-generated> // This code was generated from a template. // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------   using System; using System.ComponentModel; using System.ComponentModel.DataAnnotations;   namespace NorthwindSales.ModelsAutoMetadata { public partial class CustomerAutoMetadata { [DisplayName("Customer ID")] [Required] [StringLength(5)] public string CustomerID { get; set; } [DisplayName("Company Name")] [Required] [StringLength(40)] public string CompanyName { get; set; } [DisplayName("Contact Name")] [StringLength(30)] public string ContactName { get; set; } [DisplayName("Contact Title")] [StringLength(30)] public string ContactTitle { get; set; } [DisplayName("Address")] [StringLength(60)] public string Address { get; set; } [DisplayName("City")] [StringLength(15)] public string City { get; set; } [DisplayName("Region")] [StringLength(15)] public string Region { get; set; } [DisplayName("Postal Code")] [StringLength(10)] public string PostalCode { get; set; } [DisplayName("Country")] [StringLength(15)] public string Country { get; set; } [DisplayName("Phone")] [StringLength(24)] public string Phone { get; set; } [DisplayName("Fax")] [StringLength(24)] public string Fax { get; set; } } } The generated class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute. namespace MyProject.Models{ [MetadataType(typeof(CustomerAutoMetadata))] public partial class Customer { }} You can also copy the code in the generated metadata class and create your own ViewModel class. Note that the template is super basic and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suit your requirements. Standard disclaimer follows: Use At Your Own Risk, Works on my machine running VS 2010 RTM/ASP.NET MVC 2 AutoMetaData.zip Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?
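As a quick illustration of how the generated metadata pays off at runtime, here is a minimal sketch of an MVC 2 controller action that relies on those DataAnnotations attributes during model binding; the CustomerController class, the Save action, and the view names are hypothetical and are not part of the template download:

using System.Web.Mvc;
using MyProject.Models;

public class CustomerController : Controller
{
    [HttpPost]
    public ActionResult Save(Customer customer)
    {
        // The [Required] and [StringLength] rules from CustomerAutoMetadata are applied
        // during model binding because Customer is decorated with [MetadataType].
        if (!ModelState.IsValid)
        {
            return View("Edit", customer);
        }

        // persist the customer here ...
        return RedirectToAction("Index");
    }
}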

    Read the article

  • MAMP + Python MySQLDB - trouble installing

    - by Frederico
    I'm currently running the latest version of MAMP on my Snow Leopard OSX, and I'm trying to install MySQLDB. Downloaded: MySQL-python-1.2.3c1 I went into the setup_posix.py and adjusted the location of the mysql_config to the one in MAMP: mysql_config.path = "/Applications/MAMP/Library/bin/mysql_config" When trying to build I get the error below. Could anyone give me a hand please: creating build/temp.macosx-10.6-universal-2.6 gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -Dversion_info=(1,2,3,'gamma',1) -D_version_=1.2.3c1 -I/Applications/MAMP/Library/include/mysql -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.6-universal-2.6/_mysql.o -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL _mysql.c:36:23: error: my_config.h: No such file or directory _mysql.c:38:19: error: mysql.h: No such file or directory _mysql.c:39:26: error: mysqld_error.h: No such file or directory _mysql.c:40:20: error: errmsg.h: No such file or directory _mysql.c:76: error: expected specifier-qualifier-list before ‘MYSQL’ _mysql.c:90: error: expected specifier-qualifier-list before ‘MYSQL_RES’ _mysql.c: In function ‘_mysql_Exception’: _mysql.c:120: warning: implicit declaration of function ‘mysql_errno’ _mysql.c:120: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:123: error: ‘CR_MAX_ERROR’ undeclared (first use in this function) _mysql.c:123: error: (Each undeclared identifier is reported only once _mysql.c:123: error: for each function it appears in.) _mysql.c:131: error: ‘CR_COMMANDS_OUT_OF_SYNC’ undeclared (first use in this function) _mysql.c:132: error: ‘ER_DB_CREATE_EXISTS’ undeclared (first use in this function) _mysql.c:133: error: ‘ER_SYNTAX_ERROR’ undeclared (first use in this function) _mysql.c:134: error: ‘ER_PARSE_ERROR’ undeclared (first use in this function) _mysql.c:135: error: ‘ER_NO_SUCH_TABLE’ undeclared (first use in this function) _mysql.c:136: error: ‘ER_WRONG_DB_NAME’ undeclared (first use in this function) _mysql.c:137: error: ‘ER_WRONG_TABLE_NAME’ undeclared (first use in this function) _mysql.c:138: error: ‘ER_FIELD_SPECIFIED_TWICE’ undeclared (first use in this function) _mysql.c:139: error: ‘ER_INVALID_GROUP_FUNC_USE’ undeclared (first use in this function) _mysql.c:140: error: ‘ER_UNSUPPORTED_EXTENSION’ undeclared (first use in this function) _mysql.c:141: error: ‘ER_TABLE_MUST_HAVE_COLUMNS’ undeclared (first use in this function) _mysql.c:170: error: ‘ER_DUP_ENTRY’ undeclared (first use in this function) _mysql.c:213: warning: implicit declaration of function ‘mysql_error’ _mysql.c:213: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:213: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_server_init’: _mysql.c:308: warning: label ‘finish’ defined but not used _mysql.c:234: warning: unused variable ‘item’ _mysql.c:233: warning: unused variable ‘groupc’ _mysql.c:233: warning: unused variable ‘i’ _mysql.c:233: warning: unused variable ‘cmd_argc’ _mysql.c:232: warning: unused variable ‘s’ _mysql.c: In function ‘_mysql_ResultObject_Initialize’: _mysql.c:363: error: ‘MYSQL_RES’ undeclared (first use in this function) _mysql.c:363: error: ‘result’ undeclared (first use in this function) 
_mysql.c:368: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:368: error: ‘fields’ undeclared (first use in this function) _mysql.c:377: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:380: warning: implicit declaration of function ‘mysql_use_result’ _mysql.c:380: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:382: warning: implicit declaration of function ‘mysql_store_result’ _mysql.c:382: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:383: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:386: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:389: warning: implicit declaration of function ‘mysql_num_fields’ _mysql.c:390: error: ‘_mysql_ResultObject’ has no member named ‘nfields’ _mysql.c:391: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:392: warning: implicit declaration of function ‘mysql_fetch_fields’ _mysql.c:438: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_traverse’: _mysql.c:450: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:451: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_clear’: _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:463: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_Initialize’: _mysql.c:475: error: ‘MYSQL’ undeclared (first use in this function) _mysql.c:475: error: ‘conn’ undeclared (first use in this function) _mysql.c:500: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:501: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:525: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:547: warning: implicit declaration of function ‘mysql_init’ _mysql.c:547: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: warning: implicit declaration of function ‘mysql_options’ _mysql.c:550: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: error: ‘MYSQL_OPT_CONNECT_TIMEOUT’ undeclared (first use in this function) _mysql.c:554: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:554: error: ‘MYSQL_OPT_COMPRESS’ undeclared (first use in this function) _mysql.c:555: error: ‘CLIENT_COMPRESS’ undeclared (first use in this function) _mysql.c:558: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:558: error: ‘MYSQL_OPT_NAMED_PIPE’ undeclared (first use in this function) _mysql.c:560: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:560: error: ‘MYSQL_INIT_COMMAND’ undeclared (first use in this function) _mysql.c:562: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:562: error: ‘MYSQL_READ_DEFAULT_FILE’ undeclared (first use in this function) _mysql.c:564: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:564: error: ‘MYSQL_READ_DEFAULT_GROUP’ undeclared (first use in this function) _mysql.c:567: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:567: error: ‘MYSQL_OPT_LOCAL_INFILE’ undeclared (first use in this 
function) _mysql.c:575: warning: implicit declaration of function ‘mysql_real_connect’ _mysql.c:575: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:590: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_traverse’: _mysql.c:671: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:672: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_clear’: _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:681: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_close’: _mysql.c:696: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:698: warning: implicit declaration of function ‘mysql_close’ _mysql.c:698: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:700: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_affected_rows’: _mysql.c:722: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:723: warning: implicit declaration of function ‘mysql_affected_rows’ _mysql.c:723: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_debug’: _mysql.c:739: warning: implicit declaration of function ‘mysql_debug’ _mysql.c: In function ‘_mysql_ConnectionObject_dump_debug_info’: _mysql.c:757: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:759: warning: implicit declaration of function ‘mysql_dump_debug_info’ _mysql.c:759: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_autocommit’: _mysql.c:783: warning: implicit declaration of function ‘mysql_query’ _mysql.c:783: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_commit’: _mysql.c:806: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_rollback’: _mysql.c:828: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_errno’: _mysql.c:940: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:941: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_error’: _mysql.c:956: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:957: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:957: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_escape_string’: _mysql.c:981: warning: implicit declaration of function ‘mysql_escape_string’ _mysql.c: In function ‘_mysql_escape’: _mysql.c:1088: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_describe’: _mysql.c:1168: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1168: error: ‘fields’ undeclared (first use in this function) _mysql.c:1171: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1172: error: ‘_mysql_ResultObject’ has no member 
named ‘result’ _mysql.c:1173: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1184: warning: implicit declaration of function ‘IS_NOT_NULL’ _mysql.c: In function ‘_mysql_ResultObject_field_flags’: _mysql.c:1204: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1204: error: ‘fields’ undeclared (first use in this function) _mysql.c:1207: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1208: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1209: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: At top level: _mysql.c:1250: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_tuple’: _mysql.c:1256: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1258: warning: implicit declaration of function ‘mysql_fetch_lengths’ _mysql.c:1258: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1258: warning: assignment makes pointer from integer without a cast _mysql.c:1261: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1262: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1275: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_dict’: _mysql.c:1280: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1280: error: ‘fields’ undeclared (first use in this function) _mysql.c:1282: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1284: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1284: warning: assignment makes pointer from integer without a cast _mysql.c:1285: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1288: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1289: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1314: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_dict_old’: _mysql.c:1319: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1319: error: ‘fields’ undeclared (first use in this function) _mysql.c:1321: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1323: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1323: warning: assignment makes pointer from integer without a cast _mysql.c:1324: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1327: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1328: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1350: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘mysql_fetch_row’: _mysql.c:1361: error: ‘MYSQL_ROW’ undeclared (first use in this function) _mysql.c:1361: error: expected ‘;’ before ‘row’ _mysql.c:1365: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:1366: error: ‘row’ undeclared (first use in this function) _mysql.c:1366: warning: implicit declaration of function ‘mysql_fetch_row’ _mysql.c:1366: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1369: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1372: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1380: error: too many arguments to function ‘convert_row’ _mysql.c: In function ‘_mysql_ResultObject_fetch_row’: _mysql.c:1404: error: expected declaration specifiers or ‘...’ 
before ‘MYSQL_ROW’ _mysql.c:1419: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1431: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:1445: warning: implicit declaration of function ‘mysql_num_rows’ _mysql.c:1445: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ConnectionObject_character_set_name’: _mysql.c:1512: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_get_client_info’: _mysql.c:1603: warning: implicit declaration of function ‘mysql_get_client_info’ _mysql.c:1603: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_get_host_info’: _mysql.c:1617: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1618: warning: implicit declaration of function ‘mysql_get_host_info’ _mysql.c:1618: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1618: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_get_proto_info’: _mysql.c:1632: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1633: warning: implicit declaration of function ‘mysql_get_proto_info’ _mysql.c:1633: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_get_server_info’: _mysql.c:1647: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1648: warning: implicit declaration of function ‘mysql_get_server_info’ _mysql.c:1648: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1648: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_info’: _mysql.c:1664: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1665: warning: implicit declaration of function ‘mysql_info’ _mysql.c:1665: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1665: warning: assignment makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_insert_id’: _mysql.c:1697: error: ‘my_ulonglong’ undeclared (first use in this function) _mysql.c:1697: error: expected ‘;’ before ‘r’ _mysql.c:1699: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1701: error: ‘r’ undeclared (first use in this function) _mysql.c:1701: warning: implicit declaration of function ‘mysql_insert_id’ _mysql.c:1701: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_kill’: _mysql.c:1718: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1720: warning: implicit declaration of function ‘mysql_kill’ _mysql.c:1720: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_field_count’: _mysql.c:1739: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1741: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ResultObject_num_fields’: _mysql.c:1756: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1757: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_num_rows’: _mysql.c:1772: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1773: error: ‘_mysql_ResultObject’ has no member named ‘result’ 
_mysql.c: In function ‘_mysql_ConnectionObject_ping’: _mysql.c:1802: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1803: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1805: warning: implicit declaration of function ‘mysql_ping’ _mysql.c:1805: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_query’: _mysql.c:1826: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1828: warning: implicit declaration of function ‘mysql_real_query’ _mysql.c:1828: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_select_db’: _mysql.c:1856: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1858: warning: implicit declaration of function ‘mysql_select_db’ _mysql.c:1858: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_shutdown’: _mysql.c:1877: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1879: warning: implicit declaration of function ‘mysql_shutdown’ _mysql.c:1879: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_stat’: _mysql.c:1904: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1906: warning: implicit declaration of function ‘mysql_stat’ _mysql.c:1906: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1906: warning: assignment makes pointer from integer without a cast _mysql.c: In function ‘_mysql_ConnectionObject_store_result’: _mysql.c:1927: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1928: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:1937: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ConnectionObject_thread_id’: _mysql.c:1966: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1968: warning: implicit declaration of function ‘mysql_thread_id’ _mysql.c:1968: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_use_result’: _mysql.c:1988: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1989: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:1998: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ConnectionObject_dealloc’: _mysql.c:2016: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_repr’: _mysql.c:2028: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2029: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ResultObject_data_seek’: _mysql.c:2047: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2048: warning: implicit declaration of function ‘mysql_data_seek’ _mysql.c:2048: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_row_seek’: _mysql.c:2061: error: ‘MYSQL_ROW_OFFSET’ undeclared (first use in this function) _mysql.c:2061: error: expected ‘;’ before ‘r’ _mysql.c:2063: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2064: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:2069: error: ‘r’ undeclared (first use in this function) _mysql.c:2069: warning: implicit declaration of function ‘mysql_row_tell’ _mysql.c:2069: 
error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:2070: warning: implicit declaration of function ‘mysql_row_seek’ _mysql.c:2070: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_row_tell’: _mysql.c:2082: error: ‘MYSQL_ROW_OFFSET’ undeclared (first use in this function) _mysql.c:2082: error: expected ‘;’ before ‘r’ _mysql.c:2084: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2085: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:2090: error: ‘r’ undeclared (first use in this function) _mysql.c:2090: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:2091: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: In function ‘_mysql_ResultObject_dealloc’: _mysql.c:2099: warning: implicit declaration of function ‘mysql_free_result’ _mysql.c:2099: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: At top level: _mysql.c:2330: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:2337: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:2344: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:2351: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:2358: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:2421: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:2421: error: initializer element is not constant _mysql.c:2421: error: (near initialization for ‘_mysql_ResultObject_memberlist[0].offset’) _mysql.c: In function ‘_mysql_ConnectionObject_getattr’: _mysql.c:2443: error: ‘_mysql_ConnectionObject’ has no member named ‘open’

    Read the article

  • Is HR/Recruitment Really Ready For Innovative Candidates

    - by david.talamelli
    Before I begin this blog post, I want to acknowledge that there are some great HR/Recruitment people out there who are innovative and are leading the way in using new means to successfully attract and connect with talented people. For those of you who fit in this category, please keep thinking outside the square - just because what you do may not be the norm doesn't mean it is bad. Ok, with that acknowledgment out of the way - Earlier this morning (I started this post Friday morning) I came across this online profile via a tweet from Philip Tusing I love the information that Jason has put on his web-pages. From his work Jason clearly demonstrates not only his skills/experience but also I love how he relates his experience and shows how it will help an employer and what the value add of having him on your team is. Looking at Jason's profile makes me think though, is HR/Recruitment in general terms ready to deal with innovative candidates. Sure most Recruiters are online in some form or another, but how many actually have a process that is flexible enough to deal with someone who may not fit into your processes. Is your company's recruitment practice proactive enough to find Jason's web-pages? I am not sure what he is doing in terms of a job search, but if he is not mailing a resume or replying to ads on a Job Board - hopefully Jason comes up on some of the candidate searching you are doing. Once you find this information, would the information Jason provides fit nicely into your Applicant Tracking System or your Database? If not, how much of the intangible information are you losing and potentially not passing on to a Hiring Manager. I think what has worked in the past will not necessarily work in the future. Candidates want to work somewhere they will be challenged and learn and grow. If your HR/Recruitment team displays processes that take don't necessarily convey this message, this potentially could turn people away who were once interested in your company. For example (and I have to admit I still do some of these things myself), once calling up and having a talk to a candidate a company may say: 1) HR Question: Send me in a copy of your resume - Candidate Reply - you actually already have my resume, the web-page is http:// 2) HR Question:Come in for a chat so we can get to know you - Candidate Reply - if this is the basis of a meeting, you already know me and my thoughts by looking at my online links (blog, portfolio, homepage, etc...) These questions if not handled properly could potentially turn a candidate from being interested in your company to not being interested in your company. It potentially could demonstrate that your company is not social media savvy or maybe give the impression of not really being all that innovative. A candidate may think, if this company isn't able to take information I have provided in the public forum and use it, is it really a company I want to work for? I think when liaising with candidates a company should utilise the information the person has provided in the public domain. A candidate may inadvertantly give you answers to many of the questions you are seeking on their online presence and save everyone time instead of having to fill out forms or paperwork. If you build this into your conversations with your candidates it becomes a much more individualised service you are providing and really demonstrates to a candidate you are thinking of them as an individual. 
Yes, I know we need to have processes in place and I am not saying don't work to those processes, but don't let process take away a candidate's individuality. Don't let your process inadvertently scare away the top candidates that you may want in your company. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • View Weather Underground Forecasts in Google Chrome

    - by Asian Angel
    If you like a simple straightforward interface for keeping up with weather forecasts then join us as we look at the Weather Underground extension for Google Chrome. Weather Underground in Action As soon as you click on the “Toolbar Icon” you will need to enter a location. Keep in mind that you will need to enter the “city and country” if using that option. Going with less information will yield an “error”. Note: The extension did not work for some Asian locations during our tests. In honor of the Olympics we chose Vancouver, Canada. You can hover over the “Toolbar Button” to see the current conditions or click to view the current day’s conditions, the current day’s forecast, and the forecast for the following three days. It is a simple straightforward interface. Note: There are no options to worry with. Clicking on the “Detailed Forecast Link” in the drop-down window will take you to the Weather Underground webpage for your location. Clicking on the “Weather Underground Link” in the drop-down window will take you to the Weather Underground U.S. Homepage. Additional Weather Underground Fun Since we were focusing on Weather Underground we have an extra bit of fun for you. If you love being able to view a “large scale” map of your location with current conditions and forecast combined then you might want to have a look at Weather Underground’s “wxmap webpage”. Using the link below you can access the basic starting page where you will be asked to enter your location. Once you have entered the information you will see the default “Terrain View” for your location and a “Current Conditions & Forecast Window” in the lower left corner. You can modify how your map looks by choosing from “Temperature, Precipitation, Clouds, Satellite, Hybrid, & Terrain” views. Going full screen in your browser with this gives your monitor a wonderful and unique look that will have your family & friends asking you how you did it. Note: Terrain View shown here. Clicking on the “Settings Link” in the upper left corner will let you tweak your map view very nicely. Conclusion If you love using Weather Underground for your weather forecasts then you can add a “double dose” of goodness to your browser. Links Download the Weather Underground extension (Google Chrome Extensions) Access the Full Screen Weather Underground Map & Forecast for your area

    Read the article

  • My new laptop - with a really nice battery option

    - by Rob Farley
    It was about time I got a new laptop, and so I made a phone-call to Dell to discuss my options. I decided not to get an SSD from them, because I’d rather choose one myself – the sales guy tells me that changing the HD doesn’t void my warranty, so that’s good (incidentally, I’d love to hear people’s recommendations for which SSD to get for my laptop). Unfortunately this machine only has one HD slot, but I figure that I’ll put lots of stuff onto external disks anyway. The machine I got was a Dell Studio XPS 16. It’s red (which suits my company), but also has the Intel® Core™ i7-820QM Processor, which is 4 Cores/8 Threads. Makes for a pretty Task Manager, but nothing like the one I saw at SQLBits last year (at 96 cores), or the one that my good friend James Rowland-Jones writes about here. But the reason for this post is actually something in the software that comes with the machine – you know, the stuff that most people uninstall at the earliest opportunity. I had just reinstalled the operating system, and was going through the utilities to get the drivers up-to-date, when I noticed that one of the Dell applications included an option to disable battery charging. So I installed it. And sure enough, I can tell the battery not to charge now. Clearly Dell see it as a temporary option, and one that’s designed for when you’re on a plane. But for me, I most often use my laptop with the power plugged in, which means I don’t need to have my battery continually topping itself up. So I really love this option, but I feel like it could go a little further. I’d like “Not Charging” to be the default option, and let me set it when I want to charge it (which should theoretically make my battery last longer). I also intend to work out how this option works, so that I can script it and put it into my StartUp options (so it can be the Default setting). Actually – if someone has already worked this out and can tell me what it does, then please feel free to let me know. Even better would be an external switch. I had a switch on my old laptop (a Dell Latitude) for WiFi, so that I could turn that off before I turned on the computer (this laptop doesn’t give me that option – no physical switch for flight mode). I guess it just means I’ll get used to leaving the WiFi off by default, and turning it on when I want it – might save myself some battery power that way too. Soon I’ll need to take the plunge and sync my iPhone with the new laptop. I’m a little worried that I might lose something – Apple’s messages about how my stuff will be wiped and replaced with what’s on the PC don’t fill me with confidence, as it’s a new PC that doesn’t have stuff on it. But having a new machine is definitely a nice experience, and one that I can recommend. I’m sure when I get around to buying an SSD I’ll feel like it’s shiny and new all over again!
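    For the WiFi-off-by-default idea, a startup script is one way to approach it. The lines below are only a hedged sketch, not Dell's documented method for anything: they use the standard netsh command, and the adapter name "Wireless Network Connection" is an assumption (check the real name with "netsh interface show interface"). This does not touch the Dell battery-charging option, which still needs Dell's own utility.

    REM startup.bat - sketch: turn the wireless adapter off at logon (must run elevated)
    REM "Wireless Network Connection" is an assumed adapter name; list yours with: netsh interface show interface
    netsh interface set interface name="Wireless Network Connection" admin=disabled

    REM a second script (or desktop shortcut) can turn it back on when needed:
    REM netsh interface set interface name="Wireless Network Connection" admin=enabled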

    Read the article

  • What if “Microsoft” were in our shoes? About Windows Phone

    - by Vijaya Malla
    This is what I think about Microsoft Windows Phone. If Microsoft were in our shoes, looking at the various phones available - their configurations, memory, front-facing cameras and so on - it would see that it disappointed the USA customer base again by not bringing the Nokia Lumia 800 here. The Past: A few years ago, a few business people were on their BlackBerrys and a few gadget lovers were on crappy Windows OS devices. The world was going along fine till Apple came out with a revolutionary device, the iPhone, which completely changed our perception of the phone and of how great a smartphone can be. It changed not just the phone but the whole technology industry. The romantic appeal of the phone and its smooth touch and feel made everyone want to get one of those bad boys. Sales went up not just for Apple but for AT&T too. Even though everyone complained about AT&T's signal strength, everyone wanted to be on it because they had iPhones. The whole world wanted the iPhone back then except Microsoft, which offered a few comments on how it was not going to last in the market. But it did great and rocked the industry. A few years later, with the iPhone and Android taking over the smartphone market, Microsoft realized that it should be in the game too. It worked on the design and gave us the best mobile OS ever. Everyone thinks that iOS is a great OS for phones, but if you have touched a Windows Phone and used it for real then you will realize its strengths. So last year we welcomed Windows Phone 7. The Present: Windows Phone 7 has the fastest-growing market. The phones are cheap, and you can buy one from any carrier out there. The phone became smarter and smarter with the recent “Mango” (7.5) update, and with the collaboration with Nokia, Microsoft created a new eco-system for smartphones with the best smartphone hardware and the best smartphone software. Everyone in the world was excited about the collaboration. As we flew over cloud nine imagining Nokia-made Windows Phones, we all heard good news from Nokia at “Nokia World”. Nokia showed the world what the best hardware-making company can do with the Windows Phone 7.5 OS. The Nokia Lumia 800 and 710 took the spotlight. Everyone here in the USA and all over the world wanted to own a Nokia Lumia 800 because of the design, the software, and the proprietary apps from Nokia (maps, ESPN, Drive and music). If the USA market had had the Nokia Lumia 800, it would have been the best step Microsoft and Nokia had ever made in their history in the smartphone market. With all the numbers going to Android and iPhone, it's not clear why Microsoft/Nokia did not release the Lumia 800 here in the USA. It's unclear whether Microsoft has learnt the lesson or not; if it has, I guess Microsoft needs to get the Nokia Lumia 800 to the USA. The Future: This is where we hope we get the best from Microsoft. I was an iPhone user (I used the 2G, 3G, 3GS and 4) and then moved to Windows Phone, and I never felt as happy with my iPhones. From the day Nokia announced the partnership with Microsoft and said they were going to come up with a new Nokia Windows Phone, I was dreaming of my Nokia phone, but it looks like it is not going to happen any time soon. My thoughts about the market: Nokia has the biggest market base in the world. Even though people have moved to Android or iPhone over the years, in other parts of the world, like India and China, people still love to use Nokia.
Everyone who uses a Windows Phone now will wait for the day when the Nokia Lumia comes to the USA, but what either or both of the companies should do for better market share is make a very aggressive move with the hardware and bet on the devices. I am pretty sure that it will work. Everyone here in the USA would like to have a dual-core Windows Phone with a front-facing camera and all the other crazy things that Android/Apple phones offer. I think we just have to wait for that day and hope that day comes soon. Love Microsoft and Nokia. Thank you for reading.

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a “Who Wrote This?” contest to see if readers could tell their three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three “mystery authors.” Topic 1: Working with Bad Managers Mystery Author A – “Working with bad managers means working against my own happiness, and I’ve come to learn that there’s no changing bad managers.” I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – “Mentor your manager just like you would mentor a junior DBA.” Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – “The trick to working for all bad managers is to remember that they aren’t your parent. Take charge of your career.” We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – “Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur.” Communication is so important. I cannot overemphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – “The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time.” I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – “What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another.” You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – “Every job is temporary, but your reputation stays with you.” Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with a co-worker like that. Mystery Author B – “Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone’s desk drawer.” Sometimes humor really is the best policy! Mystery Author C – “Oh no, it’s that guy.” This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Things to install on a new machine – revisited

    - by RoyOsherove
    As I prepare to get a new dev machine at work, I'm writing down the things I am going to install on it before writing the first line of code on that machine: Control Freak Tools: Everything Search Engine – a free and amazingly fast search engine for files all over your machine (just file names, not inside files). This is so fast I use it almost as a replacement for my start menu, but it's also great for finding those files that get hidden and tucked away in dark places on my system. Ever had a situation where you needed to see exactly how many copies of X.dll were hiding on your machine and where? This tool is perfect for that. Google Chrome. It's just fast. Very fast. And Firefox has become the IE of alternative browsers in terms of speed and memory. Don't even get me started on IE. TweetDeck – get a complete view of what's up on Twitter. Total Commander – still my favorite file manager, over five years now. KatMouse – will scroll any window you're hovering over, even if it's not the active window, when you use the scroll wheel on it. PowerIso or Daemon Tools – for loading up ISO images of discs. LogMeIn Ignition – quick access to your LogMeIn computers. For online backup: JungleDisk or BackBlaze. KeePass – save important passwords. MS Security Essentials – free antivirus that's quiet and doesn't make a mess of your system. For home: uTorrent – a torrent client that can read RSS feeds (like the ones from ezrss.it). Camtasia Studio and SnagIt – for recording and capturing the screen, and then adding cool effects on top. Foxit PDF Reader – much faster than Adobe Reader. Toddler Keys (for home) – for when your baby wants to play with your keyboard. Live Writer – for writing blog posts. For Lenovo ThinkPads – Lenovo System Update – if you have a “custom” system instead of the one that came built in, this will keep all your Lenovo drivers up to date. FileZilla – for FTP stuff. All the utils from Sysinternals (or try the live links), especially: AutoRuns for deciding what's really going to load at startup, and procmon to see what's really going on with processes in your system.   Developer stuff: Reflector. Pure magic. Time saver. See the source code of any compiled assembly. Resharper. Great for productivity and navigation across your source code. FinalBuilder – a commercial build automation tool. Love it. Much better than any XML-based time hog out there. TeamCity – a great visual and friendly server to manage continuous integration. Powerful features. Test Lint – a free add-in for VS 2010 that I helped create, which checks your unit tests for possible problems and hints you about them. TestDriven.NET – a great test runner for VS 2008 and 2010 with some powerful features. VisualSVN – a commercial tool if you use Subversion. A very reliable add-in for VS 2008 and 2010. Beyond Compare – a powerful file and directory comparison tool. I love the fact that you can right-click in Windows Explorer on any file and select “select left side to compare”, then right-click on another file and select “compare with left side”. Great usability thought! PostSharp 2.0 – for adding system-wide concepts into your code (tracing, exception management). Goes great hand in hand with... SmartInspect – a powerful framework and viewer for tracing in your application. Lots of hidden features. Crypto Obfuscator – a relatively new obfuscation tool for .NET that seems to do the job very well. Crypto Licensing – from the same company – finally a licensing solution that seems to really fit what I needed. And it works.
Fiddler 2 – great for debugging and tracing HTTP traffic to and from your app. Debugging Tools for Windows and DebugDiag – great for debugging scenarios. Still wanting more? I think this should keep you busy for a while. Regulator and Regulazy – for testing and generating regular expressions. Notepad2 – for quick editing and viewing with syntax highlighting.

    Read the article

  • So No TECH job so far.

    - by Ratman21
    Oh, I found some temp work for the US Census and I have managed to keep the house (so far) but, it looks like I/we are going to have to do a short sale and the temp job will be ending soon. On top of that it looks like the unemployment fund for me is drying up. I will have about one month left after the Census job is done. I am now down to applying for work at KFC. This is the type of work I started with, before I was a tech geek, and really I didn't think I would be doing this kind of work in my later years but, I have a wife and kid. So I have got to suck it up and do it. Oh, and here is my new resume… go ahead, I know you want to tear it up. I really don't care any more. Scott L. Newman 45219 Dutton Way, Callahan, FL 32011 H: (904)879-4880 C: (352)356-0945 E: [email protected] Web: http://beingscottnewman.webs.com/ OBJECTIVE: To obtain a Network or Technical Support position. KEYWORD SUMMARY: CompTIA A+, Network+, and Security+ Certified; Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG. QUALIFICATIONS SUMMARY: Twenty years' experience in computer operations, technical support, and technical writing. Also have two and a half years' experience in internet/intranet operations. PROFESSIONAL EXPERIENCE: October 2009 – Present*: Volunteer Web site and PC technician – Part time, True Faith Christian Fellowship Church – Callahan, FL. Project: Create and maintain the web site for the Church to give it worldwide exposure. Aug 2008 – September 2009*: Volunteer Church sound and video technician – Part time, Thomas Creek Baptist Church – Callahan, FL. *Note: These jobs were for learning and/or keeping skills updated while looking for a tech job and training for new skills. February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL (FNIS acquired Certegy in 2005 and out of 20 personnel, I was one of three kept on.) August 2003 to February 2005: Senior NetOps Operator, Certegy, St. Pete, FL. (In August 2003, Certegy terminated its contract with EDS and out of 40 personnel, I was one of six kept on.) Projects: Creation and update of the listing and placement for all raised-floor equipment at the St. Pete site. The listing was made up of a floor plan of the raised floor and equipment-rack diagrams showing the placement of all devices, using Visio. This was cross-referenced with an inventory Excel document showing which dept was responsible for each device. Sole creator of the Network Operation and Server Operation procedures guide (NetOps Guide). Expertise: Resolving circuit and/or router issues, or assisting the circuit carrier in resolving issues, from the company Network Operation Center (NOC), as well as resolving application problems or assisting application support in their resolution. July 1999 to August 2003: Senior NetOps Operator, EDS (Certegy Account), St. Pete, FL. Same expertise and ongoing projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999.) January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St. Pete & Tampa, FL. Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues. EDUCATION:
• New Horizons Computer Learning Center, Jacksonville, Florida - CompTIA A+, Security+, and Network+ Certified. Currently working on CCNA Certification 07/30/10 • Mott Community College, Flint, Michigan – Associates Degree - Data Processing and General Education • Currently studying Japanese

    Read the article

  • Oracle Embedded - Half-day session - 14/Apr/10

    - by Claudia Costa
    We invite you to take part in an event that Oracle will hold on 14 April, dedicated to solutions for embedded systems. Oracle has always been the undisputed leader - in terms of performance, reliability and scalability - in database management systems for the critical business applications of large organisations. Today, however, critical applications are deployed not only in data centers, but increasingly on mobile devices, in network infrastructure and in special-purpose systems. For that reason, Oracle's commitment to developing the best data management products now extends from the data center to so-called edge and embedded applications. Oracle today offers the most complete range of embedded technologies on the market, both for ISVs and for device and equipment manufacturers, letting you choose the embeddable database and middleware products that best fit your technical requirements: · Oracle Database 11g · Oracle Database Lite 11g · Oracle Berkeley DB · Oracle TimesTen In-Memory Database · Oracle Fusion Middleware 11g · Java for Business. More information about Oracle embedded products here. According to IDC, Oracle is today the world leader in the embedded database market, with a 28.2% market share in 2008, growing at a rate 40% higher than its closest competitor and 50% above the market average. Along with the breadth of its technology offering, Oracle also provides licensing and pricing models that fit the needs of those who use these technology components as parts of a final integrated solution sold to their own end customers. Oracle is committed to contributing to your company's success. By putting your interests above all else, your company wins and Oracle wins. And, most important of all, the customers win, as they receive the best possible solutions. In short, Oracle's embedded solutions give you: · Better products · More satisfied customers · Greater profitability for your solutions. Agenda: · Oracle and Embedded · Embedded Market Trends · Oracle portfolio: Oracle Database 11g o Oracle Berkeley DB o Oracle Database Lite o Oracle TimesTen o Oracle Fusion Middleware · Demo: Berkeley DB · Embedded Software Licensing (ESL) Models ----------------------------------------------------------------------------------- Click here and register. Time and venue: 9:30 - 13:00, Oracle, Lagoas Park - Edf. 8, Porto Salvo. For more information, please contact: Melissa Lopes 214235194

    Read the article

  • Restyling RadDataPager for WPF and Silverlight

    A small but powerful control has joined the great family of Telerik XAML controls with the recent release. RadDataPager is the result of increasing demand from our customers who need to work with large amounts of server data presented in small portions at the client side. The primary goal was to make a powerful data paging control that can be used in any scenario requiring getting and presenting portions of data. You can see how well RadDataPager works with paged data in Rossen's announcement. Yet, being a XAML control, it also challenged us to make one more visually appealing and highly customizable control. It offers a slick look in all available WPF/Silverlight Telerik themes: As you can see, the default design targets mainly business scenarios. At the same time, being an example of a truly lookless XAML control, RadDataPager allows restyling with minimal effort, this way widening the possible range of applications. Let's see what we can do with a few lines of XAML: <Style x:Key="buttonStyle" TargetType="ButtonBase" > <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ButtonBase"> <Grid Width="40" Background="Black"> <Ellipse StrokeThickness="2" VerticalAlignment="Center" Width="15" Height="15" HorizontalAlignment="Center" Fill="Gray" /> <Ellipse Visibility="{Binding IsCurrent, Converter={StaticResource VisibilityConverter}}" VerticalAlignment="Center" Height="16" Fill="White" HorizontalAlignment="Center" Width="16"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> ... <telerik:RadDataPager NumericButtonStyle="{StaticResource buttonStyle}" ... And the result: The iPhone-style thing below the RadCoverFlow is actually our RadDataPager.
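    To round the snippet out, here is a minimal usage sketch showing where the restyled pager could sit next to an items control. It assumes the Source/PageSize/PagedSource members RadDataPager exposes (check the Telerik documentation for your release), that the telerik XML namespace is mapped as in the article, and the names radPager and People are made up for illustration:

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <!-- shows only the current page exposed by the pager -->
        <ListBox Grid.Row="0"
                 ItemsSource="{Binding PagedSource, ElementName=radPager}"
                 DisplayMemberPath="Name" />
        <!-- Source is the full collection; PageSize controls how many items appear per page;
             NumericButtonStyle plugs in the custom template from above -->
        <telerik:RadDataPager x:Name="radPager" Grid.Row="1"
                              Source="{Binding People}"
                              PageSize="5"
                              NumericButtonStyle="{StaticResource buttonStyle}" />
    </Grid>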

    Read the article

  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
    I'm trying to set up root certification authority, subordinate certification authority and to generate the client certificates signed by any of this CA that nginx 0.7.67 on Debian Squeeze will accept. My problem is that root CA signed client certificate works fine while subordinate CA signed one results in "400 Bad Request. The SSL certificate error". Step 1: nginx virtual host configuration: server { server_name test.local; access_log /var/log/nginx/test.access.log; listen 443 default ssl; keepalive_timeout 70; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/client.pem; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; location / { proxy_pass http://testsite.local/; } } Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article): # mkdir ~/pki && cd ~/pki # mkdir rootCA subCA # cp -v /etc/ssl/openssl.cnf rootCA/ # cd rootCA/ # mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber # cp -Rvp * ../subCA/ Almost no changes was made to rootCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/rootca.crt # The CA certificate ... private_key = $dir/private/rootca.key # The private key and to subCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/subca.crt # The CA certificate ... private_key = $dir/private/subca.key # The private key Step 3: Self-signed root CA certificate generation: # openssl genrsa -out ./private/rootca.key -des3 2048 # openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []: Step 4: Subordinate CA certificate generation: # cd ../subCA # openssl genrsa -out ./private/subca.key -des3 2048 # openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf Enter pass phrase for ./private/subca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: Step 5: Subordinate CA certificate signing by root CA certificate: # cd ../rootCA/ # openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ... # cd ../subCA/ # cp -v ../rootCA/newcerts/01.pem certs/subca.crt Step 6: Server certificate generation and signing by root CA (for nginx virtual host): # cd ../rootCA # openssl genrsa -out ./private/server.key -des3 2048 # openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf Enter pass phrase for ./private/server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:test.local Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in server.csr -out certs/server.crt -config openssl.cnf Step 7: Client #1 certificate generation and signing by root CA: # openssl genrsa -out ./private/client1.key -des3 2048 # openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf Enter pass phrase for ./private/client1.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #1 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf Step 8: Client #1 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt Step 9: Client #2 certificate generation and signing by subordinate CA: # cd ../subCA/ # openssl genrsa -out ./private/client2.key -des3 2048 # openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf Enter pass phrase for ./private/client2.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #2 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf Step 10: Client #2 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt Step 11: Passing server certificate and private key to nginx (performed with OS superuser privileges): # cd ../rootCA/ # cp -v certs/server.crt /etc/nginx/ssl/ # cp -v private/server.key /etc/nginx/ssl/ Step 12: Passing root and subordinate CA certificates to nginx (performed with OS superuser privileges): # cat certs/rootca.crt > /etc/nginx/ssl/client.pem # cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem client.pem file look like this: # cat /etc/nginx/ssl/client.pem -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) ... -----BEGIN CERTIFICATE----- MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0 ... -----END CERTIFICATE----- It looks like everything is working fine: # service nginx reload # Reloading nginx configuration: Enter PEM pass phrase: # nginx. # Step 13: Installing *.p12 certificates in browser (Firefox in my case) gives the problem I've mentioned above. Client #1 = 200 OK, Client #2 = 400 Bad request/The SSL certificate error. Any ideas what should I do? 
Update 1: Results of SSL connection test attempts: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts Enter pass phrase for tmp/testcert/client1.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- Certificate chain 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0 ... -----END CERTIFICATE----- 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- --- Server certificate subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca --- Acceptable client certificate CA names /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca --- SSL handshake has read 3395 bytes and written 2779 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967 Session-ID-ctx: Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7 Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: 0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07 .^...m@#[email protected]. 0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58 ..+.......Fn.g.X 0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c ..e.gZ....N.=nsL 0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86 .V..$.H.86...1.. ... 0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1 LS9.........j... Compression: 1 (zlib compression) Start Time: 1359989684 Timeout : 300 (sec) Verify return code: 0 (ok) --- Everything seems fine with Client #2 and root CA certificate but request returns 400 Bad Request error: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 ... 
Compression: 1 (zlib compression) Start Time: 1359989989 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request Server: nginx/0.7.67 Date: Mon, 04 Feb 2013 15:00:43 GMT Content-Type: text/html Content-Length: 231 Connection: close <html> <head><title>400 The SSL certificate error</title></head> <body bgcolor="white"> <center><h1>400 Bad Request</h1></center> <center>The SSL certificate error</center> <hr><center>nginx/0.7.67</center> </body> </html> closed Verification fails with Client #2 certificate and subordinate CA certificate: # openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify error:num=19:self signed certificate in certificate chain verify return:0 ... Compression: 1 (zlib compression) Start Time: 1359990354 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Still getting 400 Bad Request error with concatenated CA certificates and Client #2 (but still everything ok with Client #1): # cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- ... Compression: 1 (zlib compression) Start Time: 1359990772 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Update 2: I've managed to recompile nginx with enabled debug. 
Here is the part of successfull conection by Client #1 track: 2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3 2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512 2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024 2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16 2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake 2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU /ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1" And here is the part of unsuccessfull conection by Client #2 track: 2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3 2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975 2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024 2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16 2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake 2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 
38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1" So I'm getting OpenSSL error #20 and then #27. According to verify documentation: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate the issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found. 27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted the root CA is not marked as trusted for the specified purpose.
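    One hedged observation on the configuration above: nginx's ssl_verify_depth directive defaults to 1, which is only enough for client certificates issued directly by a CA listed in ssl_client_certificate. A root -> subordinate -> client chain is two levels deep, so verification can fail exactly as shown for Client #2 unless the depth is raised. A minimal sketch of the change, added to the server block from Step 1 (not a confirmed fix for this exact setup, but worth ruling out):

    # client.pem already holds the concatenated root and subordinate CA certificates
    ssl_client_certificate /etc/nginx/ssl/client.pem;
    ssl_verify_client      on;
    # default is 1; allow the extra (subordinate) CA level in the client chain
    ssl_verify_depth       2;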

    Read the article

  • Oracle Embedded - 14 April

    - by Claudia Costa
    We invite you to take part in an event that Oracle will hold on 14 April, dedicated to solutions for embedded systems. Oracle has always been the undisputed leader - in terms of performance, reliability and scalability - in database management systems for the critical business applications of large organisations. Today, however, critical applications are deployed not only in data centers, but increasingly on mobile devices, in network infrastructure and in special-purpose systems. For that reason, Oracle's commitment to developing the best data management products now extends from the data center to so-called edge and embedded applications. Oracle today offers the most complete range of embedded technologies on the market, both for ISVs and for device and equipment manufacturers, letting you choose the embeddable database and middleware products that best fit your technical requirements: · Oracle Database 11g · Oracle Database Lite 11g · Oracle Berkeley DB · Oracle TimesTen In-Memory Database · Oracle Fusion Middleware 11g · Java for Business. More information about Oracle embedded products here. According to IDC, Oracle is today the world leader in the embedded database market, with a 28.2% market share in 2008, growing at a rate 40% higher than its closest competitor and 50% above the market average. Along with the breadth of its technology offering, Oracle also provides licensing and pricing models that fit the needs of those who use these technology components as parts of a final integrated solution sold to their own end customers. Oracle is committed to contributing to your company's success. By putting your interests above all else, your company wins and Oracle wins. And, most important of all, the customers win, as they receive the best possible solutions. In short, Oracle's embedded solutions give you: · Better products · More satisfied customers · Greater profitability for your solutions. Agenda: · Oracle and Embedded · Embedded Market Trends · Oracle portfolio: Oracle Database 11g o Oracle Berkeley DB o Oracle Database Lite o Oracle TimesTen o Oracle Fusion Middleware · Demo: Berkeley DB · Embedded Software Licensing (ESL) Models ----------------------------------------------------------------------------------- Click here and register. Time and venue: 9:30 - 18:00, Oracle, Lagoas Park - Edf. 8, Porto Salvo. For more information, please contact: Melissa Lopes 214235194

    Read the article

  • Oracle Embedded - Porto (29/Apr/10)

    - by Claudia Costa
    We invite you to take part in an event that Oracle will hold on 29 April in Porto, dedicated to solutions for embedded systems. Oracle has always been the undisputed leader - in terms of performance, reliability and scalability - in database management systems for the critical business applications of large organisations. Today, however, critical applications are deployed not only in data centers, but increasingly on mobile devices, in network infrastructure and in special-purpose systems. For that reason, Oracle's commitment to developing the best data management products now extends from the data center to so-called edge and embedded applications. Oracle today offers the most complete range of embedded technologies on the market, both for ISVs and for device and equipment manufacturers, letting you choose the embeddable database and middleware products that best fit your technical requirements: · Oracle Database 11g · Oracle Database Lite 11g · Oracle Berkeley DB · Oracle TimesTen In-Memory Database · Oracle Fusion Middleware 11g · Java for Business. According to IDC, Oracle is today the world leader in the embedded database market, with a 28.2% market share in 2008, growing at a rate 40% higher than its closest competitor and 50% above the market average. Along with the breadth of its technology offering, Oracle also provides licensing and pricing models that fit the needs of those who use these technology components as parts of a final integrated solution sold to their own end customers. In short, Oracle's embedded solutions give you: · Better products · More satisfied customers · Greater profitability for your solutions. More information about Oracle embedded products here. Agenda: · Oracle and Embedded · Embedded Market Trends · Oracle portfolio: Oracle Database 11g o Oracle Berkeley DB o Oracle Database Lite o Oracle TimesTen o Oracle Fusion Middleware · Demo: Berkeley DB · Embedded Software Licensing (ESL) Models --------------------------------------------------------------------------- Click here and register. Time and venue: 15:00 - 18:00, Hotel Infante Sagres | Praça D. Filipa De Lencastre, 62 | 4050-259 | Porto. For more information, please contact: Melissa Lopes 214235194

    Read the article

  • Error while installing emacs23 from Software Center

    - by vrcmr
    Trying to install emacs in Software Center Ubuntu 12.04 got this error. installArchives() failed: Selecting previously unselected package emacs23. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 182385 files and directories currently installed.) Unpacking emacs23 (from .../emacs23_23.3+1-1ubuntu9_i386.deb) ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Setting up emacs23 (23.3+1-1ubuntu9) ... update-alternatives: using /usr/bin/emacs23-x to provide /usr/bin/emacs (emacs) in auto mode. emacs-install emacs23 install/dictionaries-common: Byte-compiling for emacsen flavour emacs23 Warning: Lisp directory `/usr/share/emacs/23.3/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Error: charsets directory (/usr/share/emacs/23.3/etc/charsets) does not exist. Emacs will not function correctly without the character map files. Please check your installation! Warning: Could not find simple.el nor simple.elc Cannot open load file: bytecomp emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 3. dpkg: error processing emacs23 (--configure): subprocess installed post-installation script returned error exit status 255 No apport report written because MaxReports is reached already Errors were encountered while processing: emacs23 Error in function: Setting up emacs23 (23.3+1-1ubuntu9) ... emacs-install emacs23 install/dictionaries-common: Byte-compiling for emacsen flavour emacs23 Warning: Lisp directory `/usr/share/emacs/23.3/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Error: charsets directory (/usr/share/emacs/23.3/etc/charsets) does not exist. Emacs will not function correctly without the character map files. Please check your installation! Warning: Could not find simple.el nor simple.elc Cannot open load file: bytecomp emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 3. dpkg: error processing emacs23 (--configure): subprocess installed post-installation script returned error exit status 255
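    A hedged recovery sketch (not a guaranteed fix): the warnings about missing /usr/share/emacs/23.3 directories suggest a half-configured emacs23/emacs23-common installation, and a commonly tried workaround is to purge the emacs23 packages, clear leftovers, and reinstall. The commands below are plain apt-get; the package names are the standard Ubuntu 12.04 ones:

    # remove the broken packages and their configuration
    sudo apt-get purge emacs23 emacs23-common emacs23-bin-common
    sudo apt-get autoremove
    # refresh the package lists and reinstall
    sudo apt-get update
    sudo apt-get install emacs23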

    Read the article

< Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >