Search Results

Search found 6460 results on 259 pages for 'spam filter'.

Page 135/259

  • SPF from neutral to positive

    - by denonth
    I need to configure an SPF rule for my domain. The problem is that I can receive all email normally, but when I send to some recipients my messages arrive as spam, and in the delivery failure reports I always see SPF as neutral. How can I change this setting? I found a Microsoft tool and it gave me something like this: v=spf1 a mx mx:mail.code2future.com +all. Where do I insert this? I have cPanel Pro 1.0 (RC1). Thank you
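
    A note and a sketch, hedged since the exact UI depends on the cPanel build: the SPF string is published as a TXT record on the domain itself, either through cPanel's DNS zone editor (if your build exposes one) or at whoever hosts your DNS. In BIND-style zone file terms it looks like this (the domain name is illustrative):

        ; illustrative zone entry: publish the SPF string in a TXT record at the domain apex
        example.com.    3600    IN    TXT    "v=spf1 a mx mx:mail.code2future.com ~all"

    One caution: +all tells receiving servers that any host may send for the domain, which defeats the point of SPF; ~all (softfail) or -all (fail) is almost always what you want when the goal is moving recipients from a neutral result to a pass for your own servers.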

    Read the article

  • Fix mail that has been put in deferred folder

    - by Safado
    We have a little more deferred mail than usual on our Postfix server, so I started to look through the messages to make sure we weren't hacked and sending out spam. Everything is fine; it turns out that a bot had filled out our Request Info form multiple times with bad info. However, I did find one that was a legitimate request for more information about our company, and I noticed that it isn't sending because they fat-fingered the address with gmal.com. Is there a way I can correct that and have Postfix send it out? This is on a CentOS server.
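
    There is no supported way to edit a queued message in place, but a common workaround is to extract the message, fix the recipient, and re-inject it. A hedged sketch; the queue ID and addresses are hypothetical, and note that postcat prints queue metadata ahead of the headers which must be stripped before re-sending:

        # list the queue and note the queue ID of the stuck message
        postqueue -p

        # dump the queued message to a file (queue ID is hypothetical)
        postcat -q 1A2B3C4D5E > /tmp/request.eml

        # edit /tmp/request.eml: delete the queue metadata above the headers,
        # then re-send it to the corrected address
        sendmail -f webform@ourdomain.example corrected-user@gmail.com < /tmp/request.eml

        # once the copy is delivered, remove the original from the queue
        postsuper -d 1A2B3C4D5E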

    Read the article

  • Need help sending mass emails [closed]

    - by Jose
    Possible Duplicate: Prevent mail being marked as spam. I have an Ubuntu server from which I want to send out several thousand emails, each containing a generic header and a unique PDF attachment containing an invoice. What I want to know is: what would be the best way to accomplish this? Also, are there any other programs I need to install, such as anti-virus / verification tools? I would like to know what the procedures for this normally are, as I would prefer that the emails are not sent to the junk folder of the clients receiving them. Any tips would be appreciated. Thanks
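
    For the per-recipient PDF attachment itself, Python's standard library is enough; a minimal sketch, assuming a local MTA on the server and illustrative addresses and paths:

        import smtplib
        from email.message import EmailMessage

        def send_invoice(recipient, pdf_path):
            msg = EmailMessage()
            msg["Subject"] = "Your invoice"
            msg["From"] = "billing@example.com"  # illustrative sender
            msg["To"] = recipient
            msg.set_content("Please find your invoice attached.")
            with open(pdf_path, "rb") as f:
                # attach this recipient's unique invoice PDF
                msg.add_attachment(f.read(), maintype="application",
                                   subtype="pdf", filename="invoice.pdf")
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)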

    Read the article

  • How do I set up a mailing list for over 2000 people?

    - by John Hoffman
    I manage IT for a large organization with over 2000 people. The administration would like to send an email (often with up to 5 Word document attachments) about once per week. They also want to categorize people. For instance, about 40 people are "teachers," about 1800 people are "students," and about 10 people are "board members." They may send an email to any combination of these categories. Here's what I have tried: I bought a server, installed Ubuntu server edition on it, and wrote a Python script to send out their emails. However, pretty soon, mail services such as Gmail and Yahoo began to treat the mail I send out as spam. I'm guessing this is because I am sending an email with 2000+ people in the "bcc" field. Maintaining this server also takes a lot of time. How do I set up a mailing list for over 2000 people? Must I defer to paid services?
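
    Deliverability aside, one script-level change that usually helps is sending one message per recipient instead of a single message with 2000+ addresses in the bcc field. A hedged sketch of that loop (addresses, category data and the local-MTA assumption are all illustrative):

        import smtplib
        from email.message import EmailMessage

        # illustrative category lists; in practice these would come from a
        # directory or database
        recipients = {
            "teachers": ["t1@example.org", "t2@example.org"],
            "students": ["s1@example.org"],
            "board_members": ["b1@example.org"],
        }

        def send_to_categories(categories, subject, body):
            with smtplib.SMTP("localhost") as smtp:
                for category in categories:
                    for address in recipients[category]:
                        msg = EmailMessage()
                        msg["Subject"] = subject
                        msg["From"] = "announcements@example.org"
                        msg["To"] = address  # one recipient per message
                        msg.set_content(body)
                        smtp.send_message(msg)

        send_to_categories(["teachers", "board_members"], "Weekly update", "...")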

    Read the article

  • I can't upload mp3 files using CodeIgniter

    - by Drew
    There are lots of suggested fixes for this, but none of them work for me. I have tried modifying the upload class (using the following methods: http://codeigniter.com/forums/viewthread/148605/ and http://codeigniter.com/forums/viewthread/125441/). When I try to upload an mp3 I get the error message "The filetype you are attempting to upload is not allowed". Below is my code for my model and my form (I've got a very skinny controller). If someone could help me out with this I would be eternally grateful.

    Model:

        function do_upload()
        {
            $soundfig['upload_path']   = './uploads/nav';
            $soundfig['allowed_types'] = 'mp3';
            $this->load->library('upload', $soundfig);

            if ( ! $this->upload->do_upload()) {
                $error = array('error' => $this->upload->display_errors());
                return $error;
            } else {
                $sound      = $this->upload->data();
                $sound_path = 'uploads/nav/' . $sound['file_name'];

                if ($this->input->post('active') == '1') {
                    $active = '1';
                } else {
                    $active = '0';
                }

                $spam = array(
                    'sound'  => $sound_path,
                    'active' => $active,
                    'url'    => $this->input->post('url')
                );

                $id = $this->input->post('id');
                $this->db->where('id', $id);
                $this->db->update('NavItemData', $spam);
                return true;
            }
        }

    View (the form):

        <?php echo form_open_multipart('upload/do_upload'); ?>
        <?php if (isset($buttons)) : foreach ($buttons as $row) : ?>
            <h2><?php echo $row->name; ?></h2>
            <input type="file" name="userfile" size="20" />
            <input type="hidden" name="oldfile" value="<?php echo $row->image_url; ?>" />
            <input type="hidden" name="id" value="<?php echo $row->id; ?>" />
            <br /><br />
            <label>Url: </label><input type="text" name="url" value="<?php echo $row->url; ?>" /><br />
            <input type="checkbox" name="active" value="1" <?php if ($row->active == '1') { echo 'checked'; } ?> /><br /><br />
            <input type="submit" value="submit" />
        </form>
        <?php endforeach; ?>
        <?php endif; ?>
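
    For what it's worth, the usual culprit for this error is CodeIgniter's MIME map rather than the upload code itself: browsers submit mp3 files under several different content types (audio/mpeg, audio/mpg, audio/mp3, sometimes application/octet-stream), and if the one your browser sends is not listed under the mp3 key in application/config/mimes.php, the upload class rejects the file. A hedged sketch of a broadened entry; the exact array contents vary by CodeIgniter version:

        // application/config/mimes.php: accept the content types browsers
        // commonly send for mp3 uploads
        'mp3' => array('audio/mpeg', 'audio/mpg', 'audio/mpeg3', 'audio/mp3',
                       'application/octet-stream'),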

    Read the article

  • Problems uploading different file types in CodeIgniter

    - by Drew
    Below is the script I'm using to upload different files. All the solutions I've found deal only with multiple image uploads. I am totally stumped for a solution on this. Can someone tell me what it is I'm supposed to be doing to upload different file types in the same form? Thanks

        function do_upload()
        {
            $config['upload_path']   = './uploads/nav';
            $config['allowed_types'] = 'gif|jpg|png';
            $config['max_size']      = '2000';
            $this->load->library('upload', $config);

            if ( ! $this->upload->do_upload('userfile')) {
                $error = array('error' => $this->upload->display_errors());
                return $error;
            } else {
                $soundfig['upload_path']   = './uploads/nav';
                $soundfig['allowed_types'] = 'mp3|wav';
                $this->load->library('upload', $soundfig);

                if ( ! $this->upload->do_upload('soundfile')) {
                    $error = array('error' => $this->upload->display_errors());
                    return $error;
                } else {
                    $data  = $this->upload->data('userfile');
                    $sound = $this->upload->data('soundfile');

                    $full_path  = 'uploads/nav/' . $data['file_name'];
                    $sound_path = 'uploads/nav/' . $sound['file_name'];

                    if ($this->input->post('active') == '1') {
                        $active = '1';
                    } else {
                        $active = '0';
                    }

                    $spam = array(
                        'image_url' => $full_path,
                        'sound'     => $sound_path,
                        'active'    => $active,
                        'url'       => $this->input->post('url')
                    );

                    $id = $this->input->post('id');
                    $this->db->where('id', $id);
                    $this->db->update('NavItemData', $spam);
                    return true;
                }
            }
        }

    Here is my form:

        <?php echo form_open_multipart('upload/do_upload'); ?>
        <?php if (isset($buttons)) : foreach ($buttons as $row) : ?>
            <h2><?php echo $row->name; ?></h2>
            <input type="file" name="userfile" size="20" /><br />
            <input type="file" name="soundfile" size="20" />
            <input type="hidden" name="oldfile" value="<?php echo $row->image_url; ?>" />
            <input type="hidden" name="id" value="<?php echo $row->id; ?>" />
            <br /><br />
            <label>Url: </label><input type="text" name="url" value="<?php echo $row->url; ?>" /><br />
            <input type="checkbox" name="active" value="1" <?php if ($row->active == '1') { echo 'checked'; } ?> /><br /><br />
            <input type="submit" value="submit" />
        </form>
        <?php endforeach; ?>
        <?php endif; ?>
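
    One likely problem, offered as a hedged observation: in CodeIgniter, the second $this->load->library('upload', $soundfig) call is effectively a no-op, because the loader sees the library is already loaded and silently ignores the new configuration, so the second do_upload() still runs with the image settings (gif|jpg|png) and rejects the sound file. The upload class exposes initialize() for exactly this case; a sketch of the second block:

        // the upload library is already loaded, so swap in the new
        // configuration instead of loading it a second time
        $soundfig['upload_path']   = './uploads/nav';
        $soundfig['allowed_types'] = 'mp3|wav';
        $this->upload->initialize($soundfig);

        if ( ! $this->upload->do_upload('soundfile')) {
            // handle the error as before
        }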

    Read the article

  • Guarding against CSRF Attacks in ASP.NET MVC2

    - by srkirkland
    Alongside XSS (Cross Site Scripting) and SQL Injection, Cross-Site Request Forgery (CSRF) attacks represent the three most common and dangerous vulnerabilities in common web applications today. CSRF attacks are probably the least well known, but they are relatively easy to exploit and increasingly dangerous. For more information on CSRF attacks, see these posts by Phil Haack and Steve Sanderson.

    The recognized solution for preventing CSRF attacks is to put a user-specific token as a hidden field inside your forms, then check that the right value was submitted. It's best to use a random value which you've stored in the visitor's Session collection or in a cookie (so an attacker can't guess the value).

    ASP.NET MVC to the rescue

    ASP.NET MVC provides an HtmlHelper called AntiForgeryToken(). When you call <%= Html.AntiForgeryToken() %> in a form on your page you will get a hidden input and a cookie with a random string assigned. Next, on your target action you need to include [ValidateAntiForgeryToken], which handles the verification that the correct token was supplied.

    Good, but we can do better

    Using the AntiForgeryToken is actually quite an elegant solution, but adding [ValidateAntiForgeryToken] on all of your POST methods is not very DRY, and worse, can be easily forgotten. Let's see if we can make this easier on the programmer by moving from an "opt-in" model of protection to an "opt-out" model.

    Using AntiForgeryToken by default

    In order to mandate the use of the AntiForgeryToken, we're going to create an ActionFilterAttribute which will do the anti-forgery validation on every POST request. First, we need to create a way to opt out of this behavior, so let's create a quick action filter called BypassAntiForgeryToken:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
        public class BypassAntiForgeryTokenAttribute : ActionFilterAttribute
        {
        }

    Now we are ready to implement the main action filter, which will force anti-forgery validation on all POST actions within any class it is defined on:

        [AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
        public class UseAntiForgeryTokenOnPostByDefault : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                if (ShouldValidateAntiForgeryTokenManually(filterContext))
                {
                    var authorizationContext = new AuthorizationContext(filterContext.Controller.ControllerContext);

                    // Use the authorization of the anti-forgery token,
                    // which can't be inherited from because it is sealed
                    new ValidateAntiForgeryTokenAttribute().OnAuthorization(authorizationContext);
                }

                base.OnActionExecuting(filterContext);
            }

            /// <summary>
            /// We should validate the anti-forgery token manually if the following criteria are met:
            /// 1. The HTTP method must be POST
            /// 2. There is not an existing [ValidateAntiForgeryToken] attribute on the action
            /// 3. There is no [BypassAntiForgeryToken] attribute on the action
            /// </summary>
            private static bool ShouldValidateAntiForgeryTokenManually(ActionExecutingContext filterContext)
            {
                var httpMethod = filterContext.HttpContext.Request.HttpMethod;

                // 1. The HTTP method must be POST
                if (httpMethod != "POST") return false;

                // 2. There is not an existing anti-forgery token attribute on the action
                var antiForgeryAttributes =
                    filterContext.ActionDescriptor.GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false);

                if (antiForgeryAttributes.Length > 0) return false;

                // 3. There is no [BypassAntiForgeryToken] attribute on the action
                var ignoreAntiForgeryAttributes =
                    filterContext.ActionDescriptor.GetCustomAttributes(typeof(BypassAntiForgeryTokenAttribute), false);

                if (ignoreAntiForgeryAttributes.Length > 0) return false;

                return true;
            }
        }

    The code above is pretty straightforward: first we check to make sure this is a POST request, then we make sure there aren't any overriding anti-forgery token attributes on the action being executed. If we have a candidate, then we call the ValidateAntiForgeryTokenAttribute class directly and execute OnAuthorization() on the current authorization context. Now, with this new attribute on your base controller, you can start protecting your site from CSRF vulnerabilities:

        [UseAntiForgeryTokenOnPostByDefault]
        public class ApplicationController : System.Web.Mvc.Controller { }

        // Then for all of your controllers
        public class HomeController : ApplicationController { }

    What we accomplished

    If your base controller has the new default anti-forgery token attribute on it, then when you don't use <%= Html.AntiForgeryToken() %> in a form (or of course when an attacker doesn't supply one), the POST action will throw the descriptive error message "A required anti-forgery token was not supplied or was invalid". Attack foiled!

    In summary, I think having an anti-CSRF policy by default is an effective way to protect your websites, and it turns out it is pretty easy to accomplish as well. Enjoy!

    Read the article

  • How to implement a nice scanline effect using libgdx

    - by Alexandre GUIDET
    I am working on an old school platformer based on libgdx. My first attempt was to make a little texture and fill a rect above the whole screen, but it seems that I am messing around with the orthographic camera (I use two cameras, one for the tilemap and one to project the scanline filter). Sometimes the texture is stuck on the tilemap and sometimes it is too large and covers the whole screen in black. Is my approach of using two cameras correct? Does someone have a solution to achieve this retro effect using libgdx (see Maldita Castilla)? Thanks
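
    Two cameras is a workable approach; the usual trick is to give the overlay its own camera fixed to screen coordinates, never move it, and draw the overlay last. A hedged sketch (classes from com.badlogic.gdx.graphics; the texture name and size are illustrative, and the texture should be power-of-two so it can repeat vertically):

        // one-time setup: a screen-space camera just for the overlay
        OrthographicCamera uiCam = new OrthographicCamera();
        uiCam.setToOrtho(false, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        // illustrative 1x2 texture: one dark line, one transparent line
        Texture scanline = new Texture("scanline.png");
        scanline.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);

        // in render(), after drawing the tilemap with its own camera:
        batch.setProjectionMatrix(uiCam.combined);
        batch.begin();
        // tile the 2px texture down the screen by extending the V coordinate
        batch.draw(scanline, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(),
                   0f, 0f, 1f, Gdx.graphics.getHeight() / 2f);
        batch.end();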

    Read the article

  • Export a .FBX file in Unity3D at runtime

    - by Timothy Williams
    What I'm looking to do is to be able to export an object as a .FBX at runtime in Unity3D. I've made a C# script which can export a mesh filter or skinned mesh renderer to a .OBJ file at runtime, but .OBJ doesn't support the same kind of animations and skins that .FBX does. I've been researching this for a while; as of right now it looks like using the Autodesk FBX SDK or some other external .dll would be my best option. Does anyone know of external .dlls I could use for this? Or how to make calls to Autodesk's FBX SDK at runtime? Another option could possibly be to write the mesh information as a text file and then convert it to .FBX on export. Just looking for fellow programmers' thoughts or tips, or to see if this has been accomplished already. As far as I can tell there aren't any pre-existing scripts to export FBX at runtime in Unity.

    Read the article

  • How do I prevent a website being misclassified by Websense?

    - by Jeff Atwood
    I received the following email from a user of one of our websites: "This morning I tried to log into example.com and I was blocked by Websense at work because it is considered a 'social networking' site or something. I assume the Websense filter is maintained by a central location, so I'm hoping that by letting you guys know you can get it unblocked." Per Wikipedia, Websense is web filtering or Internet content-control software. This means one (or more) of our sites is being miscategorized by Websense as "social networking" and thus disallowed for access at any workplace that uses Websense to control what websites their users can and cannot access during work hours. (I know, they are monsters!) How do we dispute this Websense classification error, as our websites should generally be considered "information technology" and never "social networking"? How do we know what category Websense has put our sites in, so we can proactively make sure they're not wrong?

    Read the article

  • Excel 2010 & SSAS – Search Dimension Members

    - by Davide Mauri
    Today I connected my Excel 2010 to an Analysis Services 2008 cube and I got a very nice (and unexpected) surprise! It is now finally possible to search and filter dimension members directly from the combo box window. As you can easily imagine, for medium and big dimensions this is really, really, really useful!

    Read the article

  • Parallelism in .NET – Part 1, Decomposition

    - by Reed
    The first step in designing any parallelized system is decomposition. Decomposition is nothing more than taking a problem space and breaking it into discrete parts. When we want to work in parallel, we need to have at least two separate things that we are trying to run. We do this by taking our problem and decomposing it into parts. There are two common abstractions that are useful when discussing parallel decomposition: Data Decomposition and Task Decomposition. These two abstractions allow us to think about our problem in a way that helps lead us to correct decision making in terms of the algorithms we'll use to parallelize our routine.

    To start, I will make a couple of minor points. I'd like to stress that decomposition has nothing to do with specific algorithms or techniques. It's about how you approach and think about the problem, not how you solve the problem using a specific tool, technique, or library. Decomposing the problem is about constructing the appropriate mental model: once this is done, you can choose the appropriate design and tools, which is a subject for future posts. Decomposition, being unrelated to tools or specific techniques, is not specific to .NET in any way. It should be the first step to parallelizing a problem, and is valid using any framework, language, or toolset. However, this gives us a starting point: without a proper understanding of decomposition, it is difficult to understand the proper usage of specific classes and tools within the .NET framework.

    Data Decomposition is often the simpler abstraction to use when trying to parallelize a routine. In order to decompose our problem domain by data, we take our entire set of data and break it into smaller, discrete portions, or chunks. We then work on each chunk in the data set in parallel. This is particularly useful if we can process each element of data independently of the rest of the data. In a situation like this, there are some wonderfully simple techniques we can use to take advantage of our data. By decomposing our domain by data, we can very simply parallelize our routines. In general, we, as developers, should always be searching for data that can be decomposed.

    Finding data to decompose is fairly simple, in many instances. Data decomposition is typically used with collections of data. Any time you have a collection of items, and you're going to perform work on or with each of the items, you potentially have a situation where parallelism can be exploited. This is fairly easy to do in practice: look for iteration statements in your code, such as for and foreach. Granted, every for loop is not a candidate to be parallelized. If the collection is being modified as it's iterated, or the processing of elements depends on other elements, the iteration block may need to be processed in serial. However, if this is not the case, data decomposition may be possible.

    Let's look at one example of how we might use data decomposition. Suppose we were working with an image, and we were applying a simple contrast stretching filter. When we go to apply the filter, once we know the minimum and maximum values, we can apply this to each pixel independently of the other pixels. This means that we can easily decompose this problem based on data: we will do the same operation, in parallel, on individual chunks of data (each pixel).

    Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data. In order to decompose our problem domain by tasks, we need to think about our algorithm in terms of discrete operations, or tasks, which can then later be parallelized.

    Task decomposition, in practice, can be a bit trickier than data decomposition. Here, we need to look at what our algorithm actually does, and how it performs its actions. Once we have all of the basic steps taken into account, we can try to analyze them and determine whether there are any constraints in terms of shared data or ordering. There are no simple things to look for in terms of finding tasks we can decompose for parallelism; every algorithm is unique in terms of its tasks, so every algorithm will have unique opportunities for task decomposition.

    For example, say we want our software to perform some customized actions on startup, prior to showing our main screen. Perhaps we want to check for proper licensing, notify the user if the license is not valid, and also check for updates to the program. Once we verify the license, and that there are no updates, we'll start normally. In this case, we can decompose this problem into tasks: we have a few tasks, but there are at least two discrete, independent tasks (check licensing, check for updates) which we can perform in parallel. Once those are completed, we will continue on with our other tasks.

    One final note: Data Decomposition and Task Decomposition are not mutually exclusive. Often, you'll mix the two approaches while trying to parallelize a single routine. It's possible to decompose your problem based on data, then further decompose the processing of each element of data based on tasks. This just provides a framework for thinking about our algorithms, and for discussing the problem.
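
    To make the contrast-stretching example concrete, here is a minimal sketch of data decomposition using Parallel.For from .NET 4's Task Parallel Library (running slightly ahead of this series' later posts on tools; the pixel buffer and min/max values are illustrative):

        using System;
        using System.Threading.Tasks;

        static class ContrastFilter
        {
            // stretch each pixel's intensity to the full 0..255 range;
            // every element is independent, so the loop decomposes by data
            public static void Stretch(byte[] pixels, byte min, byte max)
            {
                double scale = 255.0 / (max - min);
                Parallel.For(0, pixels.Length, i =>
                {
                    int stretched = (int)((pixels[i] - min) * scale);
                    pixels[i] = (byte)Math.Min(255, Math.Max(0, stretched));
                });
            }
        }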

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 3

    - by rajbk
    We continue building our report in this three part series.

    Creating an ASP.NET report using Visual Studio 2010 - Part 1
    Creating an ASP.NET report using Visual Studio 2010 - Part 2

    Adding the ReportViewer control and filter drop downs

    Open the source code for index.aspx and add a ScriptManager control; this control is required for the ReportViewer control. Add a DropDownList for the categories and suppliers, then add the ReportViewer control. The markup after these steps is shown below:

        <div>
            <asp:ScriptManager ID="smScriptManager" runat="server">
            </asp:ScriptManager>
            <div id="searchFilter">
                Filter by: Category :
                <asp:DropDownList ID="ddlCategories" runat="server" />
                and Supplier :
                <asp:DropDownList ID="ddlSuppliers" runat="server" />
            </div>
            <rsweb:ReportViewer ID="rvProducts" runat="server">
            </rsweb:ReportViewer>
        </div>

    The dropdowns will display the categories and suppliers in the database. Changing the selection in the drop downs will cause the report to be filtered by the selections in the dropdowns. You will see how to do this in the next steps.

    Attach the RDLC to the ReportViewer control by clicking on the top right of the control, going to ReportViewer Tasks and selecting Products.rdlc. Resize the ReportViewer control by dragging at the bottom right corner. I set mine to 800px x 500px. You can also set this value in source view.

    Defining the data sources

    We will now define the data source used to populate the report. Go back to the "ReportViewer Tasks" and select "Choose Data Sources". Select a "New data source..", select "Object" and name your Data Source ID "odsProducts". In the next screen, choose "ProductRepository" as your business object, then choose "GetProductsProjected". The method requires a SupplierID and CategoryID. We will set these so that our data source gets the values from the drop down lists we defined earlier: set the parameter source to be of type "Control" and set the ControlIDs to be ddlSuppliers and ddlCategories respectively.

    We are now going to define the data source for our drop downs. Select the ddlCategories drop down and pick "Choose Data Source". Pick "Object" and give it an id "odsCategories". In the next screen choose "ProductRepository", select the GetCategories() method, then select "CategoryName" and "CategoryID". We are done defining the data source for the Category drop down; perform the same steps for the Suppliers drop down.

    Select each dropdown and set AppendDataBoundItems to true and AutoPostBack to true. AppendDataBoundItems is needed because we are going to insert an "All" list item with a value of empty. Go to each drop down and add this list item markup:

        <asp:ListItem Text="All" Value="" />

    Finally, double click on each drop down in the designer and add the following code in the code behind. This, along with the AutoPostBack="true" attribute, refreshes the report any time a drop down is changed.

        protected void ddlCategories_SelectedIndexChanged(object sender, EventArgs e)
        {
            rvProducts.LocalReport.Refresh();
        }

        protected void ddlSuppliers_SelectedIndexChanged(object sender, EventArgs e)
        {
            rvProducts.LocalReport.Refresh();
        }

    Compile your report and run the page. You should see the report rendered. Note that the tool bar in the ReportViewer control gives you a couple of options, including the ability to export the data to Excel, PDF or Word.

    Conclusion

    Through this three part series, we did the following: created a data layer for use by our RDLC; created an RDLC using the report wizard and defined a dataset for the report; used the report design surface to design our report, including adding a chart; used the ReportViewer control to attach the RDLC; connected our ReportViewer to a data source and took parameter values from the drop down lists; and used AutoPostBack to refresh the report when the dropdown selection changed.

    RDLCs allow you to create interactive reports including drill downs and grouping. For even more advanced reports you can use Microsoft® SQL Server™ Reporting Services with RDLs. With RDLs, the report is rendered on the report server instead of the web server. Another nice thing about RDLs is that you can define a parameter list for the report and it gets rendered automatically for you. RDLCs and RDLs both have their advantages and it's best to compare them and choose the right one for your requirements.

    Download VS2010 RTM Sample project NorthwindReports.zip

    Alfred Borden: Are you watching closely?

    Read the article

  • Holiday 2010 Personas Themes for Firefox

    - by Asian Angel
    Does your Firefox browser need a touch of holiday spirit to brighten it up? Then sit back and enjoy looking through these 20 wonderful holiday Personas themes that we have collected together for you. Note: The names and links for the themes are located above each image.

    Snoopy Christmas Tribute
    A Charlie Brown Christmas Celebration
    Winnie and Tigger Topping the Tree
    mickey & minnie – happy christmas
    Foxkeh as Rudolph the Red Nosed Rein-fox
    Santa and Frosty Ski Fun
    Santas Sleigh Ride
    Envol du traineau – christmas
    Adorable Santa
    Santas Hat 3
    Frosty the Snowmans Christmas Eve
    Snowmans Village
    Warm For Christmas
    Believe – Snow
    Christmas in the Forest
    Christmas Aurora
    Violet Xmas
    Homestead Christmas ANIMATED
    Christmas Window
    Christmas Tree Lights

    More Holiday Personas Themes Fun: Brighten Up Firefox for the Holidays (Our Holiday 2009 Personas Themes Collection), Winter Time Christmas Personas Theme for Firefox, Snowy Christmas House Personas Theme for Firefox

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    CodePlex Daily Summary for Sunday, March 07, 2010

    New Projects
    Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
    ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: a Numeric Text Box (an Extended NumericUpDown), a Splash Screen base fo...
    Automaton Home: Automaton is a home automation software built with an n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
    Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
    Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
    indiologic: Utilities of an Indio
    Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
    Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
    Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
    project euler solutions from mhinze: mhinze project euler solutions
    Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layer
    sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
    SuperSocket: SuperSocket, a socket application framework, can build FTP/SMTP/POP servers easily
    Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

    New Releases
    ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
    Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers, Milestone 1. In this development release implemented: Main interface (Work Item #5453), Database (Work Item #5523)
    C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
    Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...
    Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
    Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
    ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs the Library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
    Home Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box; Removed script files that aren't needed; Fixed & issue in directory path; Stylesheet...
    Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
    mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1"; New: Reaction to 12 LPT outputs "test q 8"; New: Reaction to all LPT pins "test pin 15"; New: Syntax: ...
    Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate a neural network from...
    Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.
    RoTwee: RoTwee 6.2.0.0: New feature is as next: 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with hashtag.
    Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want in the click of a button. All you got to do is click the button and browse via th...
    WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: Whats New: File Browser: Folders & Files View reworked; File Browser: Folders are displayed as TreeVi...
    WSDLGenerator: WSDLGenerator 0.0.0.4: replaced CommonLibrary.dll by CommandLineParser.dll; added better support for custom complex types

    Most Popular Projects
    MetaSharp; Silverlight Toolkit; ASP.NET Ajax Library; All-In-One Code Framework; Windows 7 USB/DVD Download Tool; ニコ生アラート; Windows Double Explorer; Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2; Caliburn: An Application Framework for WPF and Silverlight; ArkSwitch

    Most Active Projects
    Umbraco CMS; Rawr; SDS: Scientific DataSet library and tools; BlogEngine.NET; jQuery Library for SharePoint Web Services; patterns & practices – Enterprise Library; Ionics Isapi Rewrite Filter; Farseer Physics Engine; Fasterflect - A Fast and Simple Reflection API; Fluent Assertions

    Read the article

  • Desktop Fun: Merry Christmas Wallpaper Collection [Bonus Edition]

    - by Asian Angel
    Are you ready for all of the gifts, assorted goodies, and great food that are a part of Christmas? As part of the build-up to the festivities, we have a larger than normal set of wallpapers to help add those final bits of holiday cheer and decoration to your desktops. Note: Click on the picture to see the full-size image; these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen's resolution. For more Christmas desktop goodness be sure to check out our Merry Christmas icon packs & fonts collections (links at bottom)! Note: You can download an additional wallpaper of Rudolph by himself here. Note: There are two wallpapers from "Frosty Returns" available here and here. Note: The Garfield image will need to be slightly sharpened in a photo program and placed on a background to increase the height.

    Desktop Fun: Merry Christmas Icon Packs
    Desktop Fun: Merry Christmas Fonts

    Looking for more Merry Christmas wallpapers? Browse through our 2009 collection here: Awesome Holiday Themed Desktop Wallpapers. For more wallpapers be certain to see our great collections in the Desktop Fun section.

    Read the article

  • Using LogParser - part 2

    - by fatherjack
    Sample files: PersonAddress.csv, SalesOrderDetail.tsv

    In part 1 of this series we downloaded and installed LogParser and used it to list data from a csv file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate.

    If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes

        LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *) it has added two columns to the recordset: filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you are finished reviewing the data.

    You may have noticed that the files I am working with are different file types: one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to the other, LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

    Those familiar with SQL will not have to make a very big leap of faith to make adjustments to the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

    Or we could find all those records where the Order Quantity is equal to 25 and output them to an xml file:

        logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.

    Input and output file formats

    LogParser has a pretty impressive list of file formats that it can parse, and a good selection of output formats that will let you generate output in a form that is usable by whatever process or application you may be using.

    From any of these input formats:
    IISW3C: parses IIS log files in the W3C Extended Log File Format.
    IIS: parses IIS log files in the Microsoft IIS Log File Format.
    BIN: parses IIS log files in the Centralized Binary Log File Format.
    IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
    HTTPERR: parses HTTP error log files generated by Http.sys.
    URLSCAN: parses log files generated by the URLScan IIS filter.
    CSV: parses comma-separated values text files.
    TSV: parses tab-separated and space-separated values text files.
    XML: parses XML text files.
    W3C: parses text files in the W3C Extended Log File Format.
    NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
    TEXTLINE: returns lines from generic text files.
    TEXTWORD: returns words from generic text files.
    EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
    FS: returns information on files and directories.
    REG: returns information on registry values.
    ADS: returns information on Active Directory objects.
    NETMON: parses network capture files created by NetMon.
    ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
    COM: provides an interface to Custom Input Format COM Plugins.

    To any of these output formats:
    NAT: formats output records as readable tabulated columns.
    CSV: formats output records as comma-separated values text.
    TSV: formats output records as tab-separated or space-separated values text.
    XML: formats output records as XML documents.
    W3C: formats output records in the W3C Extended Log File Format.
    TPL: formats output records following user-defined templates.
    IIS: formats output records in the Microsoft IIS Log File Format.
    SQL: uploads output records to a table in a SQL database.
    SYSLOG: sends output records to a Syslog server.
    DATAGRID: displays output records in a graphical user interface.
    CHART: creates image files containing charts.

    So, you can query data from any of the input types and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources and specifically outputting to SQL format. See you there!
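
    As a taste of the CHART output, which isn't shown in the examples above, something along these lines should work with LogParser 2.2 (the chart type and output path are illustrative, and the CHART format requires the Office Web Components to be installed):

        logparser "SELECT OrderQty, COUNT(*) AS Orders INTO C:\LP\orders.gif FROM C:\LP\Sales_SalesOrderDetail.tsv GROUP BY OrderQty" -i:tsv -o:chart -chartType:ColumnClustered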

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.

    One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature.

    The following functionality is available in Algorithmia:

    Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can use it to make a class undo/redo aware, set properties, and use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.

    Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available so DFS-based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.

    Heaps. Heaps are data-structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one.

    Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.

    Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it is already stored in.

    PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or another format, and want to create an editable form of it without creating many editors.

    IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box.

    EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.

    Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key, instead of one with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer-based system, and some utility extension methods for the defined data-structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!

    Read the article

  • Improving the state of the art in API documentation sites

    - by Daniel Cazzulino
    Go straight to the site if you want: http://nudoq.org. You can then come back and continue reading :) Compare some of the most popular NuGet packages' API documentation sites: Json.NET, EntityFramework, NLog, Autofac. See the pattern? Huge navigation tree views, static content with no comments or community content, very hard (if not impossible) to search or filter, etc. These are the product of automated tools that were developed years ago, in a time when CHM help files were common and even expected from libraries. Nowadays, most of the top packages on NuGet.org don't even provide an online documentation site at all: it's such a hassle for such a crappy user experience in the end! The good news is that it doesn't have to be that way. Introducing NuDoq: a lot has changed since those early days of .NET. We now have NuGet packages and the awesome channel that is ...Read full article

    Read the article

  • Thunderbird compact is taking forever

    - by mulllhausen
    One day I came in to work and found that our development server, an Ubuntu box, had a full hard disk. I did a bit of investigation using the du command and it seems like Mozilla Thunderbird is the major culprit. After burning off some backups, the disk was left at 94%:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             895G  791G   59G  94% /
        none                  4.0G  300K  4.0G   1% /dev
        none                  4.0G  1.4M  4.0G   1% /dev/shm
        none                  4.0G  140K  4.0G   1% /var/run
        none                  4.0G     0  4.0G   0% /var/lock
        none                  4.0G     0  4.0G   0% /lib/init/rw

        $ cd
        $ du -ch | grep [0-9]G
        666G ./.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au
        666G ./.thunderbird/ccsmcruu.default/ImapMail
        667G ./.thunderbird/ccsmcruu.default
        667G ./.thunderbird
        2.2G ./.VirtualBox/Machines/iBike/Snapshots
        2.2G ./.VirtualBox/Machines/iBike
        2.2G ./.VirtualBox/Machines
        2.2G ./.VirtualBox
        670G .
        670G total

    I did some reading and found that Mozilla Thunderbird does not compact folders by default, i.e. all of the old emails that were sent to trash are still kept. One of the mailboxes used to get a lot of spam, so I guess this accounts for the 667GB. I opened up Thunderbird to see how much space the inbox actually takes up, and it turns out to be approximately 500MB, over 1000 times less than the stuff that has not been compacted over the years. So I right-clicked on the inbox directory in the tree on the left of Thunderbird and selected 'Compact'. I left it for about 12 hours, but even after that it still said 'compacting folder' on the status bar. I don't use Thunderbird on this PC (it belonged to a colleague who has left the company), however I do occasionally need to look through the inbox for references to the project I am working on, so deleting all traces of Thunderbird is not an option. My question is: is there any way I can monitor the progress of Thunderbird's compacting function? I would really like to know how long it is going to take. Also, is there any way I can speed up the compacting process?
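
    One low-tech way to watch progress, offered as a hedged suggestion: while compacting, Thunderbird writes the compacted copy of the mailbox to a temporary nstmp file alongside the original, so watching that file grow gives a rough progress indicator (the profile path below matches the du output above):

        # re-run every 60 seconds; the nstmp file should grow toward the
        # compacted size of the folder
        watch -n 60 'ls -lh ~/.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au'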

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer, in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only "transformation" Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this:

        SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60

    There are three types of replication:

    Transactional Replication. This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So, for example, when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same.

    Snapshot Replication. This type of replication makes an initial copy of the database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that will be stored in the snapshot folder, which is a normal folder on the file system. When all the schemas are ready, the data itself will be copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s).

    Merge Replication. Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, as with mobile devices that need to synchronize once in a while. Because I don't really see BI capabilities here, I will not explain this type of replication any further.

    Replication Services in a BI environment

    Transactional Replication can be very useful in BI environments. In my opinion you never want users running custom (SSRS) reports or PowerPivot solutions directly on your production database; it can slow down the system and cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments: if you don't need a near real-time copy of the database, you can choose this form of replication. Next to being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be, or whether it really works as well for this purpose as I expect, but I want to try this out soon!

    Read the article

  • $.fadeTo/fadeOut() operations on Table Rows in IE fail

    - by Rick Strahl
    Here's a small problem that one of my customers ran into a few days ago: he was playing around with some of the sample code I've put out for one of my simple jQuery demos, which deals with providing a simple pulse behavior plug-in:

        $.fn.pulse = function(time) {
            if (!time) time = 2000;
            // *** this == jQuery object that contains selections
            $(this).fadeTo(time, 0.20, function() {
                $(this).fadeTo(time, 1);
            });
            return this;
        }

    It's a very simplistic plug-in and it works fine for simple pulse animations. However, he ran into a problem where it didn't work when working with tables, specifically pulsing a table row in Internet Explorer. It works fine in FireFox and Chrome, but in IE not so much. It also works just fine in IE as long as you don't try it on tables or table rows specifically. Applying it against something like this (an ASP.NET GridView):

        var sel = $("#gdEntries>tbody>tr")
            .not(":first-child") // no header
            .not(":last-child")  // no footer
            .filter(":even")
            .addClass("gridalternate");

        // *** Demonstrate simple plugin
        sel.pulse(2000);

    fails in IE. No pulsing happens in any version of IE. After some additional experimentation with single rows and various ways of selecting each, and still failing, I've come to the conclusion that the various fade operations in jQuery simply won't work correctly in IE (any version). So even something as 'elemental' as this:

        var el = $("#gdEntries>tbody>tr").get(0);
        $(el).fadeOut(2000);

    is not working correctly. The item will stick around for 2 seconds and then magically disappear. Likewise:

        sel.hide().fadeIn(5000);

    also doesn't fade in, although the items become immediately visible in IE. Go figure that behavior out.

    Thanks to a tweet from red_square and a link he provided, here is a grid that explains what works and doesn't in IE (and most last-gen browsers) regarding opacity: http://www.quirksmode.org/js/opacity.html

    It appears from this link that table and row elements can't be made opaque, but td elements can. This means for the row selections I can force each of the td elements to be selected and then pulse all of those. Once you have the rows it's easy to explicitly select all the columns in those rows with .find("td"). Aha, the following actually works:

        var sel = $("#gdEntries>tbody>tr")
            .not(":first-child") // no header
            .not(":last-child")  // no footer
            .filter(":even")
            .addClass("gridalternate");

        // *** Demonstrate simple plugin
        sel.find("td").pulse(2000);

    A little unintuitive that, but it works.

    Stay away from <table> and <tr> fades

    The moral of the story is: stay away from TR, TH and TABLE fades and opacity. If you have to do it on tables, use the columns instead, and if necessary use .find("td") on your row(s) selector to grab all the columns. I've been surprised by this, um, revelation, since I use fadeOut in almost every one of my applications for deletion of items, and row deletions from grids are not uncommon, especially in older apps. But it turns out that fadeOut actually works in terms of behavior: it removes the item when the timeout's done, and because the fade is relatively short-lived and I don't extensively test IE code any more, I just never noticed that the fade wasn't happening.

    Note: this behavior, or rather lack thereof, appears to be specific to table, tr and th elements. I see no problems with other elements like <div> and <li> items. Chalk this one up to another of IE's shortcomings. Incidentally, I'm not the only one who has failed to address this in my simplistic plug-in: the jquery-ui pulsate effect also fails on table rows in the same way:

        sel.effect("pulsate", { times: 3 }, 2000);

    and it also works with the same workaround. If you're already using jquery-ui, definitely use this version of the plugin, which provides a few more options...

    Bottom line: be careful with table-based fade operations and remember that if you do need to fade, fade on columns.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in jQuery

    Read the article

  • How to Create Quality Photo Prints With Free Software

    - by Eric Z Goodnight
    Photoshop may be the professional standard for high quality photo prints, but that doesn't mean you have to pay hundreds of dollars for printing software. Freeware program Google Picasa can create excellent quality photo prints that'll only cost you a download. Picasa and other freeware graphics programs are hardly news to savvy geeks, although with a little patience, they can produce quality prints few could tell apart from thousand dollar graphics suites. Stay tuned for links to various graphics programs, and a simple how-to on getting the perfect print settings for your photos.

    Read the article

  • SOA Checklist

    - by pat.shepherd
    In a recent meeting, the customer brought up a valid question: "How do I know if a problem/system is a good candidate for using SOA (vs. using old but trusted techniques)?" I put this checklist together. If you can answer yes to 2 or more of these, it might well be a good candidate. This is V1, and I will likely update it over time. Comments (that are not spam or sales pitches) appreciated. Part of the conversation was also around the fact that SOA has two faces to it: one is the obvious reuse possibilities. The other, which often gets forgotten, is that SOA provides goodness in terms of simplifying integration even where opportunities to reuse those integrations are small; at least the integrations are standards-based and more flexible. I did not write a lot of verbiage about each of them; for example, "Business Process" implies that there is a set of step-wise actions that need to take place in a coordinated fashion, which includes integrating with systems (and sometimes people, for approvals and other human-only actions) in the process.

    Read the article

  • How to Use the Avira Rescue CD to Clean Your Infected PC

    - by The Geek
    When you've got a PC completely infected with viruses, sometimes it's best to reboot into a rescue disc and run a full virus scan from there. Here's how to use the Avira Rescue CD to clean an infected PC. We've previously covered how to clean an infected PC using the BitDefender or Kaspersky rescue disks, and loads of readers have written in saying thanks, and reporting that they were able to clean their PC easily. Be sure to check out our previous articles on the subject: How to Use the BitDefender Rescue CD to Clean Your Infected PC; How to Use the Kaspersky Rescue Disk to Clean Your Infected PC. Otherwise, keep reading for how it all works with Avira, a well-respected anti-virus solution.

    Read the article
