Search Results

Search found 11226 results on 450 pages for 'reverse thinking'.


  • Can you call FB.login inside a callback from other FB methods (like FB.getLoginStatus) without triggering popup blockers?

    - by Erik Kallevig
    I'm trying to set up a fairly basic authentication flow with the Facebook JavaScript SDK: check a user's logged-in status and permissions before performing an action, and prompt the user to log in with permissions if they are not. The flow is:

    1. The user types a message into a textarea on my site to post to their Facebook feed and clicks a 'post to facebook' button on my site.
    2. In response to the click, I check the user's logged-in status with FB.getLoginStatus.
    3. In the callback to FB.getLoginStatus, if the user is not logged in, I prompt them to log in (FB.login).
    4. In the callback to FB.login I then need to make sure they have the right permissions, so I call FB.api('/me/permissions') -- if they don't, I again prompt them to log in (FB.login).

    The problem I'm running into is that any time I call FB.login inside a callback to another FB method, the browser seems to lose track of the origin of execution (the click) and blocks the popup. Am I missing some way to prompt the user to log in after checking their status without the browser concluding that the popup is not user-initiated?

    I've currently fallen back to just calling FB.login() first regardless. The undesired side effect of this approach is that if the user is already logged in with permissions, the auth popup opens and closes immediately before continuing, which looks buggy despite being functional. Checking a user's login status and permissions before doing something seems like a common flow, so I feel like I'm missing something. Here's some example code.

        <div onclick="onClickPostBtn()">Post to Facebook</div>
        <script>
        // Callback to click on Post button.
        function onClickPostBtn() {
          // Check if logged in, prompt to do so if not.
          FB.getLoginStatus(function(response) {
            if (response.status === 'connected') {
              checkPermissions(response.authResponse.accessToken);
            } else {
              FB.login(function(){}, {scope: 'publish_stream'});
            }
          });
        }

        // Logged in, check permissions.
        function checkPermissions(accessToken) {
          FB.api('/me/permissions', {'access_token': accessToken}, function(response) {
            // Logged in and authorized for this site.
            if (response.data && response.data.length) {
              // Parse response object to check for permission here...
              if (hasPermission) {
                // Logged in with permission, perform some action.
              } else {
                // Logged in without proper permission, request login with permissions.
                FB.login(function(){}, {scope: 'publish_stream'});
              }
            // Logged in to FB but not authorized for this site.
            } else {
              FB.login(function(){}, {scope: 'publish_stream'});
            }
          });
        }
        </script>
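
    For illustration only (not from the original post): one commonly suggested way around the popup blocker is to fetch the login status ahead of time (for example right after FB.init) and keep the FB.login call directly inside the click handler, so it remains user-initiated. A rough TypeScript sketch, with FB declared loosely and postToFeed() standing in for the real posting logic:

    ```typescript
    // Illustrative sketch only; FB comes from the Facebook JS SDK and is declared loosely here.
    declare const FB: any;

    let loginStatus: string | null = null;

    // Ask for the status ahead of time (e.g. right after FB.init), not inside the click.
    FB.getLoginStatus((response: any) => {
      loginStatus = response.status;
    });

    // Runs directly from the user's click, so any FB.login popup stays user-initiated.
    function onClickPostBtn(): void {
      if (loginStatus === 'connected') {
        postToFeed();
      } else {
        FB.login((response: any) => {
          if (response.authResponse) {
            postToFeed();
          }
        }, { scope: 'publish_stream' });
      }
    }

    // Hypothetical helper that does the actual posting via FB.api.
    function postToFeed(): void {
      FB.api('/me/feed', 'post', { message: 'Hello' }, (response: any) => {
        // handle the result
      });
    }
    ```

    The trade-off is that the cached status can go stale, so a failed API call should still fall back to FB.login from a fresh user gesture.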

    Read the article

  • How to find same-value rectangular areas of a given size in a matrix most efficiently?

    - by neo
    My problem is very simple but I haven't found an efficient implementation yet. Suppose there is a matrix A like this:

        0 0 0 0 0 0 0
        4 4 2 2 2 0 0
        4 4 2 2 2 0 0
        0 0 2 2 2 1 1
        0 0 0 0 0 1 1

    Now I want to find all starting positions of rectangular areas in this matrix which have a given size. An area is a subset of A where all numbers are the same. Let's say width=2 and height=3. There are 3 areas which have this size:

        2 2    2 2    0 0
        2 2    2 2    0 0
        2 2    2 2    0 0

    The result of the function call would be a list of starting positions (x,y starting with 0) of those areas:

        List((2,1),(3,1),(5,0))

    The following is my current implementation. "Areas" are called "surfaces" here.

        case class Dimension2D(width: Int, height: Int)
        case class Position2D(x: Int, y: Int)

        def findFlatSurfaces(matrix: Array[Array[Int]], surfaceSize: Dimension2D): List[Position2D] = {
          val matrixWidth = matrix.length
          val matrixHeight = matrix(0).length
          var resultPositions: List[Position2D] = Nil
          for (y <- 0 to matrixHeight - surfaceSize.height) {
            var x = 0
            while (x <= matrixWidth - surfaceSize.width) {
              val topLeft = matrix(x)(y)
              val topRight = matrix(x + surfaceSize.width - 1)(y)
              val bottomLeft = matrix(x)(y + surfaceSize.height - 1)
              val bottomRight = matrix(x + surfaceSize.width - 1)(y + surfaceSize.height - 1)
              // investigate further if corners are equal
              if (topLeft == bottomLeft && topLeft == topRight && topLeft == bottomRight) {
                breakable {
                  for (sx <- x until x + surfaceSize.width; sy <- y until y + surfaceSize.height) {
                    if (matrix(sx)(sy) != topLeft) {
                      x = if (x == sx) sx + 1 else sx
                      break
                    }
                  }
                  // found one!
                  resultPositions ::= Position2D(x, y)
                  x += 1
                }
              } else if (topRight != bottomRight) {
                // can skip x a bit as there won't be a valid match in current row in this area
                x += surfaceSize.width
              } else {
                x += 1
              }
            }
          }
          return resultPositions
        }

    I already tried to include some optimizations, but I am sure there are far better solutions. Is there an existing Matlab function for this that I could port? I'm also wondering whether this problem has its own name, as I didn't know exactly what to google for. Thanks for thinking about it! I'm excited to see your proposals or solutions :)
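
    As a sketch of one possible speed-up (not from the question; names are illustrative): precompute, for every cell, how many cells directly below it hold the same value, then slide a window of the requested width across each row. This avoids re-scanning the full width × height block for every candidate position:

    ```typescript
    // down[r][c] = how many consecutive cells below (r,c), including (r,c), share its value.
    function findFlatAreas(grid: number[][], width: number, height: number): Array<[number, number]> {
      const rows = grid.length;
      const cols = rows > 0 ? grid[0].length : 0;
      const down: number[][] = Array.from({ length: rows }, () => new Array(cols).fill(1));

      for (let c = 0; c < cols; c++) {
        for (let r = rows - 2; r >= 0; r--) {
          if (grid[r][c] === grid[r + 1][c]) {
            down[r][c] = down[r + 1][c] + 1;
          }
        }
      }

      const result: Array<[number, number]> = [];
      for (let r = 0; r + height <= rows; r++) {
        let run = 0; // consecutive columns that can anchor a full-height block of the same value
        for (let c = 0; c < cols; c++) {
          const tallEnough = down[r][c] >= height;
          if (tallEnough && (run === 0 || grid[r][c] === grid[r][c - 1])) {
            run++;
          } else {
            run = tallEnough ? 1 : 0;
          }
          if (run >= width) {
            result.push([c - width + 1, r]); // (x, y) like the question's Position2D
          }
        }
      }
      return result;
    }
    ```

    With the example matrix and width=2, height=3 this reports (5,0), (2,1) and (3,1), matching the expected result, and the total work is O(rows × cols).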

    Read the article

  • Entity framework 4 many-to-many insertion?

    - by Saxman
    Hi all, I'm not very familiar with the many-to-many insertion process using Entity Framework 4, POCO. I have a blog with 3 tables: Post, Comment, and Tag. A Post can have many Tags and a Tag can be in many Posts. Here are the Post and Tag models: public class Tag { public int Id { get; set; } [Required] [StringLength(25, ErrorMessage = "Tag name can't exceed 25 characters.")] public string Name { get; set; } public virtual ICollection<Post> Posts { get; set; } } public class Post { public int Id { get; set; } [Required] [StringLength(512, ErrorMessage = "Title can't exceed 512 characters")] public string Title { get; set; } [Required] [AllowHtml] public string Content { get; set; } public string FriendlyUrl { get; set; } public DateTime PostedDate { get; set; } public bool IsActive { get; set; } public virtual ICollection<Comment> Comments { get; set; } public virtual ICollection<Tag> Tags { get; set; } } Now when I'm adding a new post, I'm not sure what would be the right way to do. I'm thinking that I'll have a textbox where I can select multiple tags for that post (this part is already done), in my controller, I will check to see if the tag is already exists or not, if not, then I will insert the new tag. But I'm not even sure based on the models that I've created for EF, will they create a PostsTags table, or they are creating just a Tags and a Posts table and links between the two? How would I insert the new Post and set the tags to that post? Is it just newPost.Tags = Tags (where Tags are the one that got selected, do I even need to check to see if they already exists?), and then something like _post.Add(newPost);? Thanks.

    Read the article

  • How to build this encryption system that allows multiple users/objects.

    - by Patrick
    Hello! I am trying to figure out how to create an optimal solution for my project. I made this simple picture in Photoshop to try to illustrate the problem and how i want it (if possible). Illustrative image Ill also try to explain it based on the picture. First off we have a couple of objects to the left, these objects all get encrypted with their own encryption key (EKey on the picture) and then stored in the database. On the other side we have different users placed into roles (one user can be in a lot of roles) and the roles are associated with different objects. So one person only has access the to the objects that the role provides. So for instance Role A might have access to Object A and B. Role B have access only to Object C and Role C have access to all objects. Nothing strange in that, right? Different roles have different objects that they can access. Now to the problem part. Each user has to login with his/her username/password and then he/she gets access to the objects that his/her roles provide. All the objects are encrypted so she needs to get a decryption key somehow. I don't want to store the encryption key as a text string on the server. It should be, if possible, decrypted using the users password (along with the role) or similar. That way you have to be a user on the server in order to decrypt an object an to work with it. I was thinking about making a public/private key encryption system, but i am kinda stuck on how to give the different users the decryption key to the objects. Since i need to be able to move users to and from roles, add new users, add new roles and create/delete objects. There will be one administrator that then adds some data to allow the users in that role to get the decryption key to decrypt the object. Nothing is static and i am trying to get a picture of how this can be built or if there is a far better solution. The only criteria are: -Encrypted objects. -Decryption key should not be stored as text. -Different users have access to different objects. -Does NOT have to have roles.
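
    For illustration, here is a minimal key-wrapping ("envelope encryption") sketch in TypeScript/Node along the lines being discussed: each object is encrypted with its own random data key, and that data key is stored only in wrapped form, encrypted under a key derived from a passphrase the server never keeps in clear. This is an assumption-laden sketch, not a reviewed security design:

    ```typescript
    // Illustrative sketch of key wrapping ("envelope encryption"); not a reviewed security design.
    import { randomBytes, scryptSync, createCipheriv, createDecipheriv } from 'crypto';

    // Each object gets its own random data key and is encrypted with it (object encryption not shown).
    const objectKey = randomBytes(32);

    // A role key is derived from a passphrase that is never stored in clear, plus a stored salt.
    const salt = randomBytes(16);
    function deriveRoleKey(passphrase: string): Buffer {
      return scryptSync(passphrase, salt, 32);
    }

    // Wrap the object key under the role key; only the wrapped copy is persisted.
    function wrapKey(dataKey: Buffer, roleKey: Buffer) {
      const iv = randomBytes(12);
      const cipher = createCipheriv('aes-256-gcm', roleKey, iv);
      const wrapped = Buffer.concat([cipher.update(dataKey), cipher.final()]);
      return { wrapped, iv, tag: cipher.getAuthTag() };
    }

    // At login, the passphrase re-derives the role key, which unwraps the object key in memory only.
    function unwrapKey(blob: { wrapped: Buffer; iv: Buffer; tag: Buffer }, roleKey: Buffer): Buffer {
      const decipher = createDecipheriv('aes-256-gcm', roleKey, blob.iv);
      decipher.setAuthTag(blob.tag);
      return Buffer.concat([decipher.update(blob.wrapped), decipher.final()]);
    }
    ```

    In a role-based setup the object key would be wrapped once per role that may read the object, so granting or revoking access means adding or deleting a wrapped copy rather than re-encrypting the object itself.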

    Read the article

  • How to check if a checkbox/radio button is checked in PHP

    - by user225269
    I have this html code: <tr> <td><label><input type="text" name="id" class="DEPENDS ON info BEING student" id="example">ID</label></td> </tr> <tr> <td> <label> <input type="checkbox" name="yr" class="DEPENDS ON info BEING student"> Year</label> </td> </tr> But I don't have any idea on how do I check this checkboxes if they are checked using php, and then output the corresponding data based on the values that are checked. Please help, I'm thinking of something like this. But of course it won't work, because I don't know how to equate checkboxes in php if they are checked: <?php $con = mysql_connect("localhost","root","nitoryolai123$%^"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("school", $con); $id = mysql_real_escape_string($_POST['idnum']); if($_POST['id'] == checked & $_POST['yr'] ==checked ){ $result2 = mysql_query("SELECT * FROM student WHERE IDNO='$id'"); echo "<table border='1'> <tr> <th>IDNO</th> <th>YEAR</th> </tr>"; while($row = mysql_fetch_array($result2)) { echo "<tr>"; echo "<td>" . $row['IDNO'] . "</td>"; echo "<td>" . $row['YEAR'] . "</td>"; echo "</tr>"; } echo "</table>"; } mysql_close($con); ?>

    Read the article

  • A versioning workflow for multiple similar (but not identical) deployments

    - by rs77
    I'm currently employed at a small non-tech organisation and have been given the role of coding the organisation's website. While I have enjoyed the task and have learnt a lot about web development, I've encountered a few issues that I'm hoping someone will be able to help me with, or at least point me in the right direction on.

    A little background: The site I work on has subdomains that each have their own separate WordPress installation - this has been the easiest "backend" admin panel for the type of user who will be responsible for updating content. Within the organisation I work under the Marketing Manager (MM) and I code according to his style guide and wire frames. While we had been working with only one subdomain since the beginning of the year, the project was relatively simple and straightforward. Lately, however, the workflow has become more complicated as our original subdomain has been copied over to the other subdomains. Each of the new subdomains receives minor edits to its stylesheets (eg. different pictures for the background, slightly different colours here and there, etc).

    The issue: At the moment managing all the different subdomains has been "bearable", but the straw that's breaking the camel's back is the slight reversions the MM requires now that the CEO has seen the final product. The problem I'm having with reversions in stylesheets is that the CEO will one week state that he likes change "X", and then, as the MM and I continue to modify the site (to now "Z"), will another week state that he wants us to change "X" to "W" while keeping most of the changes made in "Y".

    What I'm looking for is something that allows for:
    - tracking file changes
    - reverting changes made (or reverting back to 'a' from 'e' but including changes 'b' & 'c')
    - easily uploading the necessary files to their respective WP-theme installation

    Does anything out there come close to addressing these issues? If so, what? Thanks for any help!

    PS - I'm learning Git at the moment and it seems to handle the "tracking file changes" part quite nicely. I haven't learnt about the reverting-changes bit yet, though. Maybe for my final point I'm thinking of creating a shell script to automatically upload the files to their folders. Does Git do this too?

    Addendum (alexbbrown): I had a similar problem: I ran a custom version of MediaWiki where I installed various extensions in the versioned core (with svn). Each of the extensions required a section in the config file, but the config file also needed local configuration for each of several deployments. I could have implemented it using includes, but they would not be versioned; and rebasing branches each time is a chore. +50 experience points for a good answer in git.

    Read the article

  • Can't get jQuery to get focus on cloned input fields

    - by Rebel1Moon
    I have a page that needs to create dynamic form fields as often as the user needs, and I am trying to use Ajax to tie it in to my database for faster form entry and to prevent user typos. So, I have put my Ajax returned data into popup div, the user selects, then the form field is filled in. The problem comes on the cloned fields. They don't seem to want to bring up the popup div when focused. I am thinking it is something to do with when they get created/added to the DOM. Here is my JS that creates the clones: $(document).ready(function() { var regex = /^(.*)(\d)+$/i; var cloneIndex = $(".clonedInput").length; $("button.clone").live("click", function(){ $(this).parents(".clonedInput").clone() .appendTo("#course_container") .attr("id", "clonedInput" + cloneIndex) .find("*").each(function() { var id = this.id || ""; var match = id.match(regex) || []; if (match.length == 3) { this.id = match[1] + (cloneIndex); } }); cloneIndex++; numClones=cloneIndex-1; //alert("numClones "+numClones); }); Here is where I expect to be able to get focus on the correct cloned field and call the popup. The baker_equiv0 id is original code, whereas baker_equiv1 is the first clone. $('#baker_equiv0').focus(function() { \\ THIS CODE WORKS $('.popup').fadeIn(500); $('#results').empty(); // document.enter_data.baker_equiv1.value="test"; THIS LINE WORKS //alert("numClones "+numClones); }); $('#baker_equiv1').focus(function() { // THIS DOESN'T EVER FIRE alert("numClones "+numClones); $('.popup').fadeIn(500); $('#results').empty(); }); Here is the HTML with the form: <label for="baker_equiv" class="">Baker Equivalent: <span class="requiredField">*</span></label> <input type="text" class="cinputsa" name="baker_equiv[]" id="baker_equiv0" size="8" ONKEYUP="get_equiv(this.value);"> If I put this in the HTML code above, it works fine: onfocus="alert(this.id)" I'd also be interested in how to adjust the JS code to work based on the id array created rather than having to copy code for each potential set of fields clones, i.e., baker_equiv[] rather than baker_equiv0, baker_equiv1, etc. Thanks all!
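
    One direction worth illustrating (not from the original post): instead of binding focus to each id, delegate a single handler so inputs cloned later are covered automatically. A TypeScript sketch, with jQuery's $ declared loosely; .on() assumes jQuery 1.7+, while older versions would use .delegate() or .live() with the same idea:

    ```typescript
    // Illustrative sketch; $ is jQuery, declared loosely.
    declare const $: any;

    $(document).ready(() => {
      // 'focusin' bubbles (plain 'focus' does not), so one delegated handler covers
      // #baker_equiv0 and every clone appended to #course_container afterwards.
      $(document).on('focusin', 'input[name="baker_equiv[]"]', function (this: HTMLInputElement) {
        $('.popup').fadeIn(500);
        $('#results').empty();
        // this.id identifies which clone fired, e.g. "baker_equiv0", "baker_equiv1", ...
      });
    });
    ```

    Because the selector is the field's name attribute rather than a numbered id, there is no need to duplicate the handler for each clone.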

    Read the article

  • Translating a C# WCF app into Visual Basic

    - by MikeG
    I'm trying to write a simple/small Windows Communication Foundation service application in Visual Basic (but I am very novice in VB) and all the good examples I've found on the net are written in C#. So far I've gotten my WCF service application working but now I'm trying to add callback functionality and the program has gotten more complicated. In the C# example code I understand how everything works but I am having trouble translating into VB the portion of code that uses a delegate. Can someone please show the VB equivalent? Here is the C# code sample I'm using for reference: namespace WCFCallbacks { using System; using System.ServiceModel; [ServiceContract(CallbackContract = typeof(IMessageCallback))] public interface IMessage { [OperationContract] void AddMessage(string message); [OperationContract] bool Subscribe(); [OperationContract] bool Unsubscribe(); } interface IMessageCallback { [OperationContract(IsOneWay = true)] void OnMessageAdded(string message, DateTime timestamp); } } namespace WCFCallbacks { using System; using System.Collections.Generic; using System.ServiceModel; public class MessageService : IMessage { private static readonly List<IMessageCallback> subscribers = new List<IMessageCallback>(); //The code in this AddMessage method is what I'd like to see re-written in VB... public void AddMessage(string message) { subscribers.ForEach(delegate(IMessageCallback callback) { if (((ICommunicationObject)callback).State == CommunicationState.Opened) { callback.OnMessageAdded(message, DateTime.Now); } else { subscribers.Remove(callback); } }); } public bool Subscribe() { try { IMessageCallback callback = OperationContext.Current.GetCallbackChannel<IMessageCallback>(); if (!subscribers.Contains(callback)) subscribers.Add(callback); return true; } catch { return false; } } public bool Unsubscribe() { try { IMessageCallback callback = OperationContext.Current.GetCallbackChannel<IMessageCallback>(); if (!subscribers.Contains(callback)) subscribers.Remove(callback); return true; } catch { return false; } } } } I was thinking I could do something like this but I don't know how to pass the message string from AddMessage to DoSomething... Dim subscribers As New List(Of IMessageCallback) Public Sub AddMessage(ByVal message As String) Implements IMessage.AddMessage Dim action As Action(Of IMessageCallback) action = AddressOf DoSomething subscribers.ForEach(action) 'Or this instead of the above three lines: 'subscribers.ForEach(AddressOf DoSomething) End Sub Public Sub DoSomething(ByVal callback As IMessageCallback) 'I am also confused by: '((ICommunicationObject)callback).State 'Is that casting the callback object as type ICommunicationObject? 'How is that done in VB? End Sub

    Read the article

  • What are the Rails best practices for javascript templates in restful/resourceful controllers?

    - by numbers1311407
    First, 2 common (basic) approaches: # returning from some FoosController method respond_to do |format| # 1. render the javascript directly format.js { render :json => @foo.to_json } # 2. render the default template, say update.js.erb format.js { render } end # in update.js.erb $('#foo').html("<%= escape_javascript(render(@foo)) %>") These are obviously simple cases but I wanted to illustrate what I'm talking about. I believe that these are also the cases expected by the default responder in rails 3 (either the action-named default template or calling to_#{format} on the resource.) The Issues With 1, you have total flexibility on the view side with no worries about the template, but you have to manipulate the DOM directly via javascript. You lose access to helpers, partials, etc. With 2, you have partials and helpers at your disposal, but you're tied to the one template (by default at least). All your views that make JS calls to FoosController use the same template, which isn't exactly flexible. Three Other Approaches (none really satisfactory) 1.) Escape partials/helpers I need into javascript beforehand, then inserting them into the page after, using string replacement to tailor them to the results returned (subbing in name, id, etc). 2.) Put view logic in the templates. For example, looking for a particular DOM element and doing one thing if it exists, another if it does not. 3.) Put logic in the controller to render different templates. For example, in a polymorphic belongs to where update might be called for either comments/foo or posts/foo, rendering commnts/foos/update.js.erb versus posts/foos/update.js.erb. I've used all of these (and probably others I'm not thinking of). Often in the same app, which leads to confusing code. Are there best practices for this sort of thing? It seems like a common enough use-case that you'd want to call controllers via Ajax actions from different views and expect different things to happen (without having to do tedious things like escaping and string-replacing partials and helpers client side). Any thoughts?

    Read the article

  • How to optimize foreach loop in PHP

    - by vanneto
    First off, I know the title is generic and not fitting. I just couldn't think of a title that could describe my problem. I have a table Recipients in MySQL structured like this:

        id | email   | status
        1  | foo@bar | S
        2  | bar@baz | S
        3  | abc@def | R
        4  | sta@cko | B

    I need to convert the data into the following XML, depending on the status field. For example:

        <Recipients>
          <RecipientsSent> <!-- Have the 'S' status -->
            <recipient>foo@bar</recipient>
            <recipient>bar@baz</recipient>
          </RecipientsSent>
          <RecipientsRegexError>
            <recipient>abc@def</recipient>
          </RecipientsRegexError>
          <RecipientsBlocked>
            <recipient>sta@cko</recipient>
          </RecipientsBlocked>
        </Recipients>

    I have this PHP code to implement this ($recipients contains an associative array of the db table):

        <Recipients>
          <RecipientsSent>
          <?php
          foreach ($recipients as $recipient):
            if ($recipient['status'] == 'S'):
              echo "<recipient>" . $recipient['email'] . "</recipient>";
            endif;
          endforeach;
          ?>
          </RecipientsSent>
          <RecipientsRegexError>
          <?php
          foreach ($recipients as $recipient):
            if ($recipient['status'] == 'R'):
              echo "<recipient>" . $recipient['email'] . "</recipient>";
            endif;
          endforeach;
          ?>
          </RecipientsRegexError>
          <?php /** same loop for the B status */ ?>
        </Recipients>

    So, this means that if I have 1000 entries in the table and 4 different statuses that can be checked, there will be 4 loops, each one executing 1000 times. How can this be done in a more efficient manner? I thought about fetching four different sets from the database, meaning 4 different queries; would that be more efficient? I'm thinking it could be done with one loop but I can't come up with a solution. Any way this could be done with only one loop?
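
    The single-loop idea the question is reaching for, sketched in TypeScript for illustration (field names mirror the table; everything else is hypothetical): bucket the rows by status in one pass, then emit each group:

    ```typescript
    // One pass over the data, then one small pass per group to print.
    interface Recipient { email: string; status: 'S' | 'R' | 'B'; }

    function groupByStatus(recipients: Recipient[]): Record<string, string[]> {
      const buckets: Record<string, string[]> = { S: [], R: [], B: [] };
      for (const r of recipients) {
        buckets[r.status].push(r.email); // the only loop over the full table
      }
      return buckets;
    }

    function toXml(recipients: Recipient[]): string {
      const b = groupByStatus(recipients);
      const items = (emails: string[]) =>
        emails.map(e => `<recipient>${e}</recipient>`).join('');
      return '<Recipients>' +
        `<RecipientsSent>${items(b.S)}</RecipientsSent>` +
        `<RecipientsRegexError>${items(b.R)}</RecipientsRegexError>` +
        `<RecipientsBlocked>${items(b.B)}</RecipientsBlocked>` +
        '</Recipients>';
    }
    ```

    The same bucketing works in PHP with an array keyed by status, so the table is walked once regardless of how many statuses exist.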

    Read the article

  • Passing a pointer to preallocated memory as an input parameter and having the function fill it

    - by djones2010
    test code: void modify_it(char * mystuff) { char test[7] = "123456"; //last element is null i presume for c style strings here. //static char test[] = "123123"; //when i do this i thought i should be able to gain access to this bit of memory when the function is destroyed but that does not seem to be the case. //char * test = new char[7]; //this is also creating memory on stack and not the heap i reckon and gets destroyed once the function is done with. strcpy_s(mystuff,7,test); //this does the job as long as memory for mystuff has been allocated outside the function. mystuff = test; //this does not work. I know with c style strings you can't just do string assignments they have to be actually copied. in this case I was using this in conjunction with static char test thinking by having it as static the memory would not get destroyed and i can then simply point mystuff to test and be done with it. i would later have address the memory cleanup in the main function. but anyway this never worked. } int main(void) { char * mystuff = new char [7]; //allocate memory on heap where the pointer will point cool(mystuff); std::string test_case(mystuff); std::cout<<test_case.c_str(); //this is the only way i know how to use cout by making it into a string c++ string. delete [] mystuff; return 0; } in the case, of a static array in the function why would it not work. in the case, when i allocated memory using new in the function does it get created on the stack or heap? in the case, i have string which needs to be copied into a char * form. everything i see usually requires const char* instead of just char*. I know i could use reference to take care of this easy. Or char ** to send in the pointer and do it that way. But i just wanted to know if I could do it with just char *. Anyway your thoughts and comments plus any examples would be very helpful.

    Read the article

  • jQuery reference to (this) does not work?

    - by FFish
    I have this href link with text either "attivo" or "non attivo" User can set the item to 'active' or 'closed' in the database with an ajax request $.post() I have 2 questions for these: I can't get the reference to $(this) to work.. I tried it with a normal link and it works, but not wrapped in if/else?? How can I prevent the user from clicking more than one time on the link and submitting several request? Is this a valid concern? Do I need some sort of a small timer or something? First I was thinking about a javascript confirm message, but that's pretty annoying for this function.. HTML: <dl id='album-list'> <dt id="dt-2">some title</dt> <dd id="dd-2"> some description<br /> <div class='links-right'>status: <a class='toggle-active' href='#'>non attivo</a></div> </dd> </dl> <a class="test" href="#">test</a> JS: $('dd a.toggle-active').click(function() { var a_ref = $(this); var id = a_ref.parent().parent().attr('id').substring(3); if (a_ref.text() == "non attivo") { var new_active = "active"; // for db in english $.post("ajax-aa.php", {album_id:id, album_active:new_active}, function(data) { // alert("success"); a_ref.text("non attivo"); // change href text }); } else { var new_active = "closed"; // for db in english $.post("ajax-aa.php", {album_id:id, album_active:new_active}, function(data) { // alert("success"); a_ref.text("attivo"); // change href text }); } return false; }); $('a.test').click(function() { var a_ref = $(this); $.post("ajax-aa.php", {album_id:2, album_active:"active"}, function(data) { a_ref.text("changed"); }); return false; })
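
    To illustrate the double-click concern (this is a sketch, not the original code): mark the link busy while the request is in flight and ignore clicks until the response arrives, so no timer or confirm dialog is needed. TypeScript with jQuery's $ declared loosely:

    ```typescript
    // Illustrative sketch; $ is jQuery, declared loosely.
    declare const $: any;

    $('dd a.toggle-active').on('click', function (this: HTMLAnchorElement) {
      const link = $(this);
      if (link.data('busy')) {
        return false;                                    // ignore clicks while a request is pending
      }
      link.data('busy', true);

      const id = link.closest('dd').attr('id').substring(3);  // "dd-2" -> "2"
      const activating = link.text() === 'non attivo';

      $.post('ajax-aa.php',
        { album_id: id, album_active: activating ? 'active' : 'closed' },
        () => {
          link.text(activating ? 'attivo' : 'non attivo');     // flip the link text
          link.data('busy', false);                            // allow the next toggle
        });

      return false;
    });
    ```

    Capturing $(this) in a local variable before the asynchronous call, as above, also addresses the first question: the callback closes over that variable, so the reference survives inside both branches.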

    Read the article

  • Is it possible to make this Flex/Flash application safe?

    - by Frank
    I'm back with another Flex/Flash security question. I've already received some help from the community on this topic, but I'm still not quite sure this is the best way to do. Here's the thing. A flex web app, a lot of users (1000+), custom configuration of the application depending of the user group. Can I make this thing safe... or safer. For the moment, when a user comes to the application, there is only one configuration possible, but for the next version we've implented a multi-configuration protocol, this way : 1. The user connect to Default.aspx, server code process the windows credentials (whe are on intranet) and give the correct xml configuration file. 2. The flex app loads with the xml conf file as a flashvar and then the app 'builds' itself with the content of the xml file. As we know, since this is a flex application the swf is downloaded on the client computer and the xml file too. If more than one user connects to the app, from the same computer, the can possibly see the other xml file in the windows temp folder. The current directory of the application looks that way : Web site |-> default.aspx |-> index.swf |-> configAdmin.xml |-> configUserType1.xml |-> configUserType2.xml |-> com |-> a lot of swf and xml files I was first thinking making another directory (without read access for the client) containing all the configurations xml files, picking the right one, copying it to the client and deleting it afterwards. But it seems like I must let know the user know when downloading/deleting content on it's computer... I'm running out of ideas, so I hope you have some great ones. It's there are some design flaws (in the way the app is build, not in Flash :p) please share. I'm always looking forward to improve. Thanks Update : In browser Flash/Flex (without AIR that is) doesn't allow deleting file localy silently (on the client computer, where the application is). It's also not yet possible to get session data.

    Read the article

  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone... I am using SSIS 2005, SP2. My package has a package-scope user variable - let's call it user_var first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record in a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var the control flow then has a Data Flow Task - it goes and gets some source data, has a derived column which sets a column called run_id to user_var - and saves the data to a SQL destination In most cases (this template is used for many packages, running every day) this all works great. All of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen. In both cases, it was just less that 10,000 records that have run_id = 0. Since SSIS writes data out in 10,000 record blocks, this really makes me think that, for the first set of data written out, user_var was not yet set. Then, after that first block, for the rest of the data, run_id is set to a correct value. But control passed on to my data flow from the Execute SQL task - it would have seemed reasonable to me that it wouldn't go on until the SP has completed and user_var is set. Maybe it just runs the SP, but doesn't wait for it to complete? In both cases where this has happened there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.

    Read the article

  • Extending struts 2 "property" data tag

    - by John B.
    Hi, We are currently developing a project with Struts2. We have a module on which we display a large amount of data on read-only fields from a group of beans via the "property" Struts 2 data tag (i.e. <s:property value="aBeanProperty" /) on a jsp file. In some cases most of the fields might come empty, so a blank is being displayed on screen. Our customer is now requesting us to display default string (i.e. "N/A") whenever a property comes empty, so that it is displayed in place of the blank spaces currently shown. We are looking for a way to achieve this in a clean and maintainable way. The 'property' tag comes with a 'default' attribute on which one can define a default value in cases when the accessed property comes as null. However, most of our properties are empty strings, therefore it does not work in our case. Another solution we are thinking of is to define a base class for all of our beans, and define a util method which will validate if a string is null or empty and then return the default value. Then we would call this method from each bean getter. And yes this would be tiresome and kind of ugly :), therefore we are holding out on this one in case of a better solution. Now, we have in mind a solution which we think would be the best but have not had luck on how implement it. We are planning on extending the 'property' tag some way, defining a new 'default' attribute so that besides working on null properties, it also do so on empty strings ("", " ", etc). Therefore we would only need to replace the original s:property tag with our new custom tag, and the desired result would be achieved without touching java code. Do you have an idea on how to do this? Also, any other clever solution (maybe some sort of design pattern?) on how to default the values of a large amount of property beans are welcome too! (Or maybe, even there might be some tag that does this already in Struts2??) Thanks in advance.

    Read the article

  • Float addition: 2.5 + 2.5 = 4.0? (RPN calculator)

    - by AJ Clou
    The code below is my subprogram to do reverse polish notation calculations... basically +, -, *, and /. Everything works in the program except when I try to add 2.5 and 2.5 the program gives me 4.0... I think I have an idea why, but I'm not sure how to fix it... Right now I am reading all the numbers and operators in from command line as required by this assignment, then taking that string and using sscanf to get the numbers out of it... I am thinking that somehow the array that contains the three characters '2', '.', and '5', is not being totally converted to a float... instead i think just the '2' is. Could someone please take a look at my code and either confirm or deny this, and possibly tell me how to fix it so that i get the proper answer? Thank you in advance for any help! float fsm (char mystring[]) { int i = -1, j, k = 0, state = 0; float num1, num2, ans; char temp[10]; c_stack top; c_init_stack (&top); while (1) { switch (state) { case 0: i++; if ((mystring[i]) == ' ') { state = 0; } else if ((isdigit (mystring[i])) || (mystring[i] == '.')) { state = 1; } else if ((mystring[i]) == '\0') { state = 3; } else { state = 4; } break; case 1: temp[k] = mystring[i]; k++; i++; if ((isdigit (mystring[i])) || (mystring[i] == '.')) { state = 1; } else { state = 2; } break; case 2: temp[k] = '\0'; sscanf (temp, "%f", &num1); c_push (&top, num1); i--; k = 0; state = 0; break; case 3: ans = c_pop (&top); if (c_is_empty (top)) return ans; else { printf ("There are still items on the stack\n"); exit (0); case 4: num2 = c_pop (&top); num1 = c_pop (&top); if (mystring[i] == '+'){ ans = num1 + num2; return ans; } else if (mystring[i] == '-'){ ans = num1 - num2; return ans; } else if (mystring[i] == '*'){ ans = num1 * num2; return ans; } else if (mystring[i] == '/'){ if (num2){ ans = num1 / num2; return ans; } else{ printf ("Error: cannot divide by 0\n"); exit (0); } } c_push (&top, ans); state = 0; break; } } } } Here is my main program: #include <stdio.h> #include <stdlib.h> #include "boolean.h" #include "c_stack.h" #include <string.h> int main(int argc, char *argv[]) { char mystring[100]; int i; sscanf("", "%s", mystring); for (i=1; i<argc; i++){ strcat(mystring, argv[i]); strcat(mystring, " "); } printf("%.2f\n", fsm(mystring)); } and here is the header file with prototypes and the definition for c_stack: #include "boolean.h" #ifndef CSTACK_H #define CSTACK_H typedef struct c_stacknode{ char data; struct c_stacknode *next; } *c_stack; #endif void c_init_stack(c_stack *); boolean c_is_full(void); boolean c_is_empty(c_stack); void c_push(c_stack *,char); char c_pop(c_stack *); void print_c_stack(c_stack); boolean is_open(char); boolean is_brother(char, char); float fsm(char[]);

    Read the article

  • Efficient method of finding database rows that have *one or more* qualities from a list of seven qualities

    - by hithere
    Hello! For this question, I'm looking to see if anyone has a better idea of how to implement what I'm currently planning on implementing (below): I'm keeping track of a set of images, using a database. Each image is represented by one row. I want to be able to search for images, using a number of different search parameters. One of these parameters involves a search-by-color option. (The rest of the search stuff is currently working fine.) Images in this database can contain up to seven colors: -Red -Orange -Yellow -Green -Blue -Indigo -Violet Here are some example user queries: "I want an image that contains red." "I want an image that contains red and blue." "I want an image that contains yellow and violet." "I want an image that contains red, orange, yellow, green, blue, indigo and violet." And so on. Users make this selection through the use of checkboxes in an html form. They can check zero checkboxes, all seven, and anything in between. I'm curious to hear what people think would be the most efficient way to perform this database search. I have two possible options right now, but I feel like there must be something better that I'm not thinking of. (Option 1) -For each row, simply have seven additional fields in the database, one for each color. Each field holds a 1 or 0 (true/false) value, and I SELECT based on whatever the user has checked off. (I didn't like this solution so much, because it seemed kind of wasteful to add seven additional fields...especially since most pictures in this table will only have 3-4 colors max, though some could have up to 7. So that means I'm storing a lot of zeros.) Also, if I added more searchable colors later on (which I don't think I will, but it's always possible), I'd have to add more fields. (Option 2) -For each image row, I could have a "colors" text field that stores space-separated color names (or numbers for the sake of compactness). Then I could do a fulltext match against search through the fields, selecting rows that contain "red yellow green" (or "1 3 4"). But I kind of didn't want to do fulltext searching because I already allow a keyword search, and I didn't really want to do two fulltext searches per image search. Plus, if the database gets big, fulltext stuff might slow down. Any better options that I didn't think of? Thanks! Side Note: I'm using PHP to work with a MySQL database.
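
    A third option worth sketching (illustrative only; column and table names are hypothetical): encode the seven colours as bits of one small integer per image, so "contains red and blue" becomes a bitwise test instead of seven boolean columns or a fulltext match. In TypeScript:

    ```typescript
    // Each colour gets one bit; an image's colours collapse into a single integer.
    const COLOR_BITS: Record<string, number> = {
      red: 1, orange: 2, yellow: 4, green: 8, blue: 16, indigo: 32, violet: 64,
    };

    function toMask(colors: string[]): number {
      return colors.reduce((mask, c) => mask | COLOR_BITS[c], 0);
    }

    // Stored once per image row, e.g. in an INT column such as `color_mask`.
    const imageMask = toMask(['red', 'yellow', 'violet']); // 1 | 4 | 64 = 69

    // "image contains red AND blue" -> every requested bit must be present:
    const wanted = toMask(['red', 'blue']);                // 1 | 16 = 17
    const matchesAll = (imageMask & wanted) === wanted;

    // "image contains red OR blue" -> any requested bit present:
    const matchesAny = (imageMask & wanted) !== 0;

    // The equivalent MySQL predicate is roughly: WHERE (color_mask & 17) = 17
    ```

    This keeps one extra integer column per row and adding an eighth colour only means assigning the next unused bit.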

    Read the article

  • Auto not being recognised by the compiler, what would be the best replacement?

    - by user1719605
    So I have wrote a program that uses auto however the compiler doesn't seem to recognize it, probably it is an earlier compiler. I was wondering for my code, with are suitable variables to fix my code so that I do not need to use the auto keyword? I'm thinking a pointer to a string? or a string iterator, though I am not sure. #include <cstdlib> #include <string> #include <iostream> #include <unistd.h> #include <algorithm> using namespace std; int main(int argc, char* argv[]) { enum MODE { WHOLE, PREFIX, SUFFIX, ANYWHERE, EMBEDDED } mode = WHOLE; bool reverse_match = false; int c; while ((c = getopt(argc, argv, ":wpsaev")) != -1) { switch (c) { case 'w': // pattern matches whole word mode = WHOLE; break; case 'p': // pattern matches prefix mode = PREFIX; break; case 'a': // pattern matches anywhere mode = ANYWHERE; break; case 's': // pattern matches suffix mode = SUFFIX; break; case 'e': // pattern matches anywhere mode = EMBEDDED; break; case 'v': // reverse sense of match reverse_match = true; break; } } argc -= optind; argv += optind; string pattern = argv[0]; string word; int matches = 0; while (cin >> word) { switch (mode) { case WHOLE: if (reverse_match) { if (pattern != word) { matches += 1; cout << word << endl; } } else if (pattern == word) { matches += 1; cout << word << endl; } break; case PREFIX: if (pattern.size() <= word.size()) { auto res = mismatch(pattern.begin(), pattern.end(), word.begin()); if (reverse_match) { if (res.first != word.end()) { matches += 1; cout << word << endl; } } else if (res.first == word.end()) { matches += 1; cout << word << endl; } } break; case ANYWHERE: if (reverse_match) { if (!word.find(pattern) != string::npos) { matches += 1; cout << word << endl; } } else if (word.find(pattern) != string::npos) { matches += 1; cout << word << endl; } break; case SUFFIX: if (pattern.size() <= word.size()) { auto res = mismatch(pattern.rbegin(), pattern.rend(), word.rbegin()); if (reverse_match) { if (res.first != word.rend()) { matches = +1; cout << word << endl; } } else if (res.first == word.rend()) { matches = +1; cout << word << endl; } } break; case EMBEDDED: if (reverse_match) { if (!pattern.find(word) != string::npos) { matches += 1; cout << word << endl;} } else if (pattern.find(word) != string::npos) { matches += 1; cout << word << endl; } break; } } return (matches == 0) ? 1 : 0; } Thanks in advance!

    Read the article

  • Declaring an Ajax web service call's OnSuccess callback anonymously

    - by user333113
    I write a lot of ajax javascript code and have a little design problem which I'm not totally satisfied with. A lot of times I end up with writing something like this: var typeOfPopup; function RetrievePopupContent(_typeOfPopup) { switch (_typeOfPopup) { case Popup1: WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError); break; case Popup2: WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError); break; } typeOfPopup = _typeOfPopup; } function DisplayPopup(result) { switch (typeOfPopup) { case Popup1: $get('Popup1').innerHTML = result; break; case Popup2: $get('Popup2').innerHTML = result; break; } Allright. This is a simplified example of what I mean. Often I end up with a lot worse code I believe. What I don't like is the global state variabel outside the functions. One solution I wasn't thinking about when writing this code is to send a context object. I believe you could write something like this: function RetrievePopupContent(typeOfPopup) { switch (typeOfPopup) { case Popup1: WebService.RetrievePopup1Content(param1, param2, DisplayPopup, OnError, typeOfPopup); break; case Popup2: WebService.RetrievePopup2Content(param1, param2, DisplayPopup, OnError, typeOfPopup); break; } } function DisplayPopup(result, typeOfPopup) { switch (typeOfPopup) { case Popup1: $get('Popup1').innerHTML = result; break; case Popup2: $get('Popup2').innerHTML = result; break; } Is this the recommended way? What I also want to do is to be able to write something like this: function RetrievePopupContent(typeOfPopup) { switch (typeOfPopup) { case Popup1: WebService.RetrievePopup1Content(param1, param2, new function(result) { $get('Popup1').innerHTML = result; }, OnError); break; case Popup2: WebService.RetrievePopup2Content(param1, param2, new function(result) { $get('Popup2').innerHTML = result; }, OnError); break; } } Is this possible at all? To declare the callback function anonymous? I am grateful for all opinions on the two options I mentioned myself and also new alternatives to get rid of my global variables I have used this way.
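
    On the last question: yes, the success callback can be an inline (anonymous) function, and because it closes over the surrounding variables the global state disappears. A TypeScript sketch, with WebService standing in for the generated ASP.NET AJAX proxy and declared loosely:

    ```typescript
    // Illustrative sketch; WebService, $get and OnError stand in for the page's existing members.
    declare const WebService: any;
    declare function $get(id: string): HTMLElement;
    declare function OnError(err: any): void;

    function retrievePopupContent(typeOfPopup: string, param1: string, param2: string): void {
      // The anonymous callback closes over typeOfPopup, so no global variable is needed
      // and the switch inside DisplayPopup goes away.
      const display = (result: string) => {
        $get(typeOfPopup).innerHTML = result;
      };

      if (typeOfPopup === 'Popup1') {
        WebService.RetrievePopup1Content(param1, param2, display, OnError);
      } else {
        WebService.RetrievePopup2Content(param1, param2, display, OnError);
      }
    }
    ```

    Passing a context object as an extra argument, as mentioned in the question, also works; the closure simply makes the context implicit.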

    Read the article

  • Asynchronous While Loop?

    - by o7th Web Design
    I have a pretty great SqlDataReader wrapper in which I can map the output into a strongly typed list. What I am finding now is that on larger datasets with larger numbers of columns, performance could probably be a bit better if I can optimize my mapping. In thinking about this there is one section in particular that I am concerned about as it seems to be the heaviest hitter: while (_Rdr.Read()) { T newObject = new T(); for (int i = 0; i <= _Rdr.FieldCount - 1; ++i) { PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()]; if ((info != null) && info.CanWrite) { info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null); } } _en.Add(newObject); } _Rdr.Close(); What I would really like to know, is if there is a way that I can make this loop asyncronous? I feel that will make all the difference in the world with this beast :) Here is the entire Map method in case anyone can see where I can make further improvements on it... IList<T> Map<T> // Map our datareader object to a strongly typed list private static IList<T> Map<T>(IDataReader _Rdr) where T : new() { try { Type _t = typeof(T); List<T> _en = new List<T>(); Hashtable _ht = new Hashtable(); PropertyInfo[] _props = _t.GetProperties(); Parallel.ForEach(_props, info => { _ht[info.Name.ToUpper()] = info; }); while (_Rdr.Read()) { T newObject = new T(); for (int i = 0; i <= _Rdr.FieldCount - 1; ++i) { PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()]; if ((info != null) && info.CanWrite) { info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null); } } _en.Add(newObject); } _Rdr.Close(); return _en; }catch(Exception ex){ _Msg += "Wrapper.Map Exception: " + ex.Message; ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg); return default(IList<T>); } }

    Read the article

  • PHP array performance

    - by dfo
    Hi, this is my first question on Stackoverflow, please bear with me. I'm testing an algorithm for 2d bin packing and I've chosen PHP to mock it up as it's my bread-and-butter language nowadays. As you can see on http://themworks.com/pack_v0.2/oopack.php?ol=1 it works pretty well, but you need to wait around 10-20 seconds for 100 rectangles to pack. For some hard to handle sets it would hit the php's 30s runtime limit. I did some profiling and it shows that most of the time my script goes through different parts of a small 2d array with 0's and 1's in it. It either checks if certain cell equals to 0/1 or sets it to 0/1. It can do such operations million times and each times it takes few microseconds. I guess I could use an array of booleans in a statically typed language and things would be faster. Or even make an array of 1 bit values. I'm thinking of converting the whole thing to some compiled language. Is PHP just not good for it? If I do need to convert it to let's say C++, how good are the automatic converters? My script is just a lot of for loops with basic arrays and objects manipulations. Thank you! Edit. This function gets called more than any other. It reads few properties of a very simple object, and goes through a very small part of a smallish array to check if there's any element not equal to 0. function fits($bin, $file, $x, $y) { $flag = true; $xw = $x + $file->get_width();; $yh = $y + $file->get_height(); for ($i = $x; $i < $xw; $i++) { for ($j = $y; $j < $yh; $j++) { if ($bin[$i][$j] !== 0) { $flag = false; break; } } if (!$flag) break; } return $flag; }
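
    The "flat array of small values" idea mentioned above, sketched in TypeScript for illustration (a typed language makes the point; a similar layout can be mimicked in PHP with a string or SplFixedArray): one contiguous buffer indexed by arithmetic, with an early return instead of flag variables:

    ```typescript
    // Sketch of the flat typed-array idea for the bin-packing "fits" check.
    function fits(bin: Uint8Array, binHeight: number, x: number, y: number, w: number, h: number): boolean {
      for (let i = x; i < x + w; i++) {
        const base = i * binHeight;        // column-major, like $bin[$i][$j]
        for (let j = y; j < y + h; j++) {
          if (bin[base + j] !== 0) {
            return false;                  // bail out immediately, no $flag / break bookkeeping
          }
        }
      }
      return true;
    }
    ```

    The early return and the single flat buffer are the two changes most likely to help regardless of language; porting to a compiled language then mainly buys cheaper array indexing.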

    Read the article

  • C#/.NET Fundamentals: Choosing the Right Collection Class

    - by James Michael Hare
    The .NET Base Class Library (BCL) has a wide array of collection classes at your disposal which make it easy to manage collections of objects. While it's great to have so many classes available, it can be daunting to choose the right collection to use for any given situation. As hard as it may be, choosing the right collection can be absolutely key to the performance and maintainability of your application! This post will look at breaking down any confusion between each collection and the situations in which they excel. We will be spending most of our time looking at the System.Collections.Generic namespace, which is the recommended set of collections. The Generic Collections: System.Collections.Generic namespace The generic collections were introduced in .NET 2.0 in the System.Collections.Generic namespace. This is the main body of collections you should tend to focus on first, as they will tend to suit 99% of your needs right up front. It is important to note that the generic collections are unsynchronized. This decision was made for performance reasons because depending on how you are using the collections its completely possible that synchronization may not be required or may be needed on a higher level than simple method-level synchronization. Furthermore, concurrent read access (all writes done at beginning and never again) is always safe, but for concurrent mixed access you should either synchronize the collection or use one of the concurrent collections. So let's look at each of the collections in turn and its various pros and cons, at the end we'll summarize with a table to help make it easier to compare and contrast the different collections. The Associative Collection Classes Associative collections store a value in the collection by providing a key that is used to add/remove/lookup the item. Hence, the container associates the value with the key. These collections are most useful when you need to lookup/manipulate a collection using a key value. For example, if you wanted to look up an order in a collection of orders by an order id, you might have an associative collection where they key is the order id and the value is the order. The Dictionary<TKey,TVale> is probably the most used associative container class. The Dictionary<TKey,TValue> is the fastest class for associative lookups/inserts/deletes because it uses a hash table under the covers. Because the keys are hashed, the key type should correctly implement GetHashCode() and Equals() appropriately or you should provide an external IEqualityComparer to the dictionary on construction. The insert/delete/lookup time of items in the dictionary is amortized constant time - O(1) - which means no matter how big the dictionary gets, the time it takes to find something remains relatively constant. This is highly desirable for high-speed lookups. The only downside is that the dictionary, by nature of using a hash table, is unordered, so you cannot easily traverse the items in a Dictionary in order. The SortedDictionary<TKey,TValue> is similar to the Dictionary<TKey,TValue> in usage but very different in implementation. The SortedDictionary<TKey,TValye> uses a binary tree under the covers to maintain the items in order by the key. As a consequence of sorting, the type used for the key must correctly implement IComparable<TKey> so that the keys can be correctly sorted. 
The sorted dictionary trades a little bit of lookup time for the ability to maintain the items in order, thus insert/delete/lookup times in a sorted dictionary are logarithmic - O(log n). Generally speaking, with logarithmic time, you can double the size of the collection and it only has to perform one extra comparison to find the item. Use the SortedDictionary<TKey,TValue> when you want fast lookups but also want to be able to maintain the collection in order by the key. The SortedList<TKey,TValue> is the other ordered associative container class in the generic containers. Once again SortedList<TKey,TValue>, like SortedDictionary<TKey,TValue>, uses a key to sort key-value pairs. Unlike SortedDictionary, however, items in a SortedList are stored as an ordered array of items. This means that insertions and deletions are linear - O(n) - because deleting or adding an item may involve shifting all items up or down in the list. Lookup time, however is O(log n) because the SortedList can use a binary search to find any item in the list by its key. So why would you ever want to do this? Well, the answer is that if you are going to load the SortedList up-front, the insertions will be slower, but because array indexing is faster than following object links, lookups are marginally faster than a SortedDictionary. Once again I'd use this in situations where you want fast lookups and want to maintain the collection in order by the key, and where insertions and deletions are rare. The Non-Associative Containers The other container classes are non-associative. They don't use keys to manipulate the collection but rely on the object itself being stored or some other means (such as index) to manipulate the collection. The List<T> is a basic contiguous storage container. Some people may call this a vector or dynamic array. Essentially it is an array of items that grow once its current capacity is exceeded. Because the items are stored contiguously as an array, you can access items in the List<T> by index very quickly. However inserting and removing in the beginning or middle of the List<T> are very costly because you must shift all the items up or down as you delete or insert respectively. However, adding and removing at the end of a List<T> is an amortized constant operation - O(1). Typically List<T> is the standard go-to collection when you don't have any other constraints, and typically we favor a List<T> even over arrays unless we are sure the size will remain absolutely fixed. The LinkedList<T> is a basic implementation of a doubly-linked list. This means that you can add or remove items in the middle of a linked list very quickly (because there's no items to move up or down in contiguous memory), but you also lose the ability to index items by position quickly. Most of the time we tend to favor List<T> over LinkedList<T> unless you are doing a lot of adding and removing from the collection, in which case a LinkedList<T> may make more sense. The HashSet<T> is an unordered collection of unique items. This means that the collection cannot have duplicates and no order is maintained. Logically, this is very similar to having a Dictionary<TKey,TValue> where the TKey and TValue both refer to the same object. This collection is very useful for maintaining a collection of items you wish to check membership against. For example, if you receive an order for a given vendor code, you may want to check to make sure the vendor code belongs to the set of vendor codes you handle. 
In these cases a HashSet<T> is useful for super-quick lookups where order is not important. Once again, like in Dictionary, the type T should have a valid implementation of GetHashCode() and Equals(), or you should provide an appropriate IEqualityComparer<T> to the HashSet<T> on construction.

The SortedSet<T> is to HashSet<T> what the SortedDictionary<TKey,TValue> is to Dictionary<TKey,TValue>. That is, the SortedSet<T> is a binary tree where the key and value are the same object. This once again means that adding/removing/lookups are logarithmic - O(log n) - but you gain the ability to iterate over the items in order. For this collection to be effective, type T must implement IComparable<T> or you need to supply an external IComparer<T>.

Finally, the Stack<T> and Queue<T> are two very specific collections that allow you to handle a sequential collection of objects in very specific ways. The Stack<T> is a last-in-first-out (LIFO) container where items are added and removed from the top of the stack. Typically this is useful in situations where you want to stack actions and then be able to undo those actions in reverse order as needed. The Queue<T>, on the other hand, is a first-in-first-out container which adds items at the end of the queue and removes items from the front. This is useful for situations where you need to process items in the order in which they came, such as a print spooler or waiting lines.

So that's the basic collections. Let's summarize what we've learned in a quick reference table.

Collection       | Ordered? | Contiguous Storage? | Direct Access? | Lookup Efficiency        | Manipulate Efficiency | Notes
Dictionary       | No       | Yes                 | Via Key        | Key: O(1)                | O(1)                  | Best for high performance lookups.
SortedDictionary | Yes      | No                  | Via Key        | Key: O(log n)            | O(log n)              | Compromise of Dictionary speed and ordering, uses binary search tree.
SortedList       | Yes      | Yes                 | Via Key        | Key: O(log n)            | O(n)                  | Very similar to SortedDictionary, except tree is implemented in an array, so has faster lookup on preloaded data, but slower loads.
List             | No       | Yes                 | Via Index      | Index: O(1), Value: O(n) | O(n)                  | Best for smaller lists where direct access required and no ordering.
LinkedList       | No       | No                  | No             | Value: O(n)              | O(1)                  | Best for lists where inserting/deleting in middle is common and no direct access required.
HashSet          | No       | Yes                 | Via Key        | Key: O(1)                | O(1)                  | Unique unordered collection, like a Dictionary except key and value are same object.
SortedSet        | Yes      | No                  | Via Key        | Key: O(log n)            | O(log n)              | Unique ordered collection, like SortedDictionary except key and value are same object.
Stack            | No       | Yes                 | Only Top       | Top: O(1)                | O(1)*                 | Essentially same as List<T> except only process as LIFO.
Queue            | No       | Yes                 | Only Front     | Front: O(1)              | O(1)                  | Essentially same as List<T> except only process as FIFO.

The Original Collections: System.Collections namespace

The original collection classes are largely considered deprecated by developers and by Microsoft itself. In fact they indicate that for the most part you should always favor the generic or concurrent collections, and only use the original collections when you are dealing with legacy .NET code. Because these collections are out of vogue, let's just briefly mention each original collection and its generic equivalent:

- ArrayList: a dynamic, contiguous collection of objects. Favor the generic collection List<T> instead.
- Hashtable: associative, unordered collection of key-value pairs of objects. Favor the generic collection Dictionary<TKey,TValue> instead.
- Queue: first-in-first-out (FIFO) collection of objects. Favor the generic collection Queue<T> instead.
- SortedList: associative, ordered collection of key-value pairs of objects. Favor the generic collection SortedList<T> instead.
- Stack: last-in-first-out (LIFO) collection of objects. Favor the generic collection Stack<T> instead.

In general, the older collections are non-type-safe and in some cases less performant than their generic counterparts. Once again, the only reason you should fall back on these older collections is for backward compatibility with legacy code and libraries.

The Concurrent Collections: System.Collections.Concurrent namespace

The concurrent collections are new as of .NET 4.0 and are included in the System.Collections.Concurrent namespace. These collections are optimized for use in situations where multi-threaded read and write access of a collection is desired. The concurrent queue, stack, and dictionary work much as you'd expect. The bag and blocking collection are more unique. Below is a summary of each, with a link to a blog post I did on each of them.

- ConcurrentQueue: thread-safe version of a queue (FIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
- ConcurrentStack: thread-safe version of a stack (LIFO). For more information see: C#/.NET Little Wonders: The ConcurrentStack and ConcurrentQueue
- ConcurrentBag: thread-safe unordered collection of objects. Optimized for situations where a thread may be both reader and writer. For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection
- ConcurrentDictionary: thread-safe version of a dictionary. Optimized for multiple readers (allows multiple readers under the same lock). For more information see: C#/.NET Little Wonders: The ConcurrentDictionary
- BlockingCollection: wrapper collection that implements the producer/consumer paradigm. Readers can block until items are available to read; writers can block until space is available to write (if bounded). For more information see: C#/.NET Little Wonders: The ConcurrentBag and BlockingCollection

Summary

The .NET BCL has lots of collections built in to help you store and manipulate collections of data. Understanding how these collections work and knowing in which situations each container is best is one of the key skills necessary to build more performant code. Choosing the wrong collection for the job can make your code much slower or even harder to maintain if you choose one that doesn't perform as well or otherwise doesn't exactly fit the situation. Remember to avoid the original collections and stick with the generic collections. If you need concurrent access, you can use the generic collections if the data is read-only, or consider the concurrent collections for mixed-access if you are running on .NET 4.0 or higher.
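
To make the two lookup scenarios from the article concrete outside of C#, here is the closest TypeScript analogue (Map for keyed lookups like Dictionary<TKey,TValue>, Set for membership tests like HashSet<T>); the order and vendor names are illustrative:

```typescript
// Keyed lookup (Dictionary-like) and membership test (HashSet-like), both amortized O(1).
interface Order { id: number; vendorCode: string; amount: number; }

const ordersById = new Map<number, Order>();
ordersById.set(42, { id: 42, vendorCode: 'ACME', amount: 99.5 });

// Look up an order by its id without scanning the whole collection:
const order = ordersById.get(42);

const handledVendors = new Set<string>(['ACME', 'GLOBEX', 'INITECH']);

// Check membership when an order arrives:
const canHandle = order !== undefined && handledVendors.has(order.vendorCode);
```

Neither built-in keeps items sorted by key, which is exactly the gap the SortedDictionary/SortedSet discussion above addresses on the .NET side.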

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
This is probably the longest blog post I’ve written in a LONG time. And if you’re used to coming here for the Oracle stuff, this post is not about that. It’s about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down | How I Moved to WP Engine | How WP Engine ‘Hooked’ Me | Why WP Engine?

I started thatJeffSmith.com on May 28th, 2010. I had already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with Bluehost. Bluehost makes setting up a WordPress site very, very easy. And they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year’s worth of service and my domain name registration – a very good value. Then last year I paid $107.40 for another year’s service. And when that year expired I paid another $190.80 for an additional two years’ service in advance. Up to that point, I had been getting my money’s worth. And then, just a few weeks ago…

My Site Slowed to a Crawl

[Traffic chart – that spike was from an April Fool's Day post, I think]

Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their ‘Chat now’ technology – much nicer than message-boards-only or pay-to-talk phone support. In the past 6 months, however, I noticed a couple of things: daily traffic was increasing – woohoo! – and my service was experiencing severe CPU throttling – doh!

To be honest, I wasn’t aware the throttling was occurring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren’t you blogging on Blogger? Ugh. I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify’s Grido (thanks to @SQLRockstar for the heads-up), I installed a couple of WP caching plugins, and I read every WP optimization blog post I could get my greedy little eyes on.

However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result I had, at one point, about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called on every page load meant that the more people clicked on my site, the more CPU I needed. I’m not stupid, so I eventually figured out that maybe fewer plugins was better, and was able to go down to just 20. But still, the site was running like a dog.

[Chart – CPU throttling makes MySQL wait to run a query]

Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) 
other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA ‘throttling.’ This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn’t. I noticed in the last week that for every minute of service, I was being throttled for between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them was being held in check. Blog visitors noticed this as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn’t offer dedicated server hosting, so there was no real upgrade path for me to follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved ThatJeffSmith.com.

The first 2 months are free, and after that it’s about $29/month to run my site on their system. My math tells me that’s a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog even though I talk about work stuff here. I don’t get paid for blogging, I don’t sell ads, and I don’t expense the service fees – this is my personal passion.

So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic, as Google penalizes you in search results if your site is too slow, and of course some folks won’t even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won’t have a good reason not to write.

How I Moved to WP Engine

I signed up for the service and registered my domain. I then took a full export of my ‘old’ site by doing an FTP GET of all my files, then did a MySQL database backup, exported my WordPress theme settings to a .zip file, and then finally used the WordPress ‘Export’ feature. I then used the WordPress ‘Import’ on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP’d the ‘wp-content’ directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!). Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I’ll detail the challenges and issues of fixing the content next – but then it was time to ‘flip the switch.’ I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated and it was time to see the new site!

But It Was ‘Broken’

I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I needed to set the CNAME (Alias) ‘www’ record to point to a different URL than the ‘www.thatjeffsmith.com’ entry I had set. Once corrected, the site was up and running in less than a minute. 
Then It Was Only Mostly Broken

Many of my plugins weren’t working. Apparently just FTP’ing the wp-content directory up wasn’t the proper way to re-install the plugins. I suspect the file permissions or file ownership weren’t right. Some plug-ins were working, many had their settings wiped back to the defaults, and a few just never worked again. I had to delete the plug-in’s directory manually via SFTP, and then use the WP Dashboard to install it from scratch. And here was my first ‘lesson’ – don’t switch the DNS records until you’ve completely tested your new site. I wasn’t able to navigate the old WP console to review my plug-in settings. Thankfully I was able to use the Wayback Machine to reverse engineer some things, and of course most plug-ins aren’t that complicated to set up to begin with. An example of one that I had to redo from scratch is the ‘Twitter @Anywhere Plus’ plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story.

How WP Engine ‘Hooked’ Me

I actually signed up with another provider first. They ranked highly in Google searches and a few Tweeps recommended them to me. But hours after signing up I still didn’t have a server ready, and I was ready to give up on them. They offered no chat or phone support – only mail and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email as their website ‘cancel’ button was non-existent.

Within minutes of activating my WP Engine account I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding. I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evident in their 60-day money-back guarantee. And if I understand it correctly, they don’t even take your money until after that 60-day period is over.

After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It’s not just an account that’s set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. ‘Partner’ – that’s a lot nicer word than just ‘service provider,’ isn’t it? Oh, and they offered me a t-shirt. Don’t ever doubt the power of a ‘free’ t-shirt!

[Screenshot of the welcome e-mail – how awesome is this, from a customer perspective?]

I wasn’t really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty.

But Jeff, You Skipped a Piece Here – Why WP Engine?

I found them on one of those ‘Top 10′ list posts, and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that’s it. Their servers are tuned specifically for running WordPress. They had, in bolded text, things like ‘INSANELY FAST. 
INFINITELY SCALABLE.’ and ‘LIGHTNING SPEED.’ And then they offered insurance against hackers, and they took care of automatic backups and restores.

The only drawbacks I have noticed so far relate to plugins I used that have been ‘blacklisted.’ In order to guarantee that ‘lightning’ speed, they have banned the use of the CPU-suckiest plugins. One of those is the ‘Related Posts’ plugin. So if you are a subscriber and are reading this in your email, you’ll notice there are no links back to my blog to continue reading other related stories. Since that referral traffic is very small – single digits – for my site, I decided I’m OK with that. I’d rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road.

In 50+ days I will need to decide if WP Engine is a permanent solution. I’ll be sure to update this post when that time comes and let y’all know how it turns out.

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed and the distribution of data in the database. Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

Connect bug filed here

Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition

© 2012 Paul White – All Rights Reserved

Twitter: @SQL_Kiwi
Email: [email protected]

    Read the article

  • CodePlex Daily Summary for Thursday, March 17, 2011

    CodePlex Daily Summary for Thursday, March 17, 2011Popular ReleasesLeage of Legends Masteries Tool: LoLMasterSave_v1.6.1.274: -Addresses resent LoL update that interfered with the way MasterSave sets / reads masteries - Removed Shift windows since some people experiencing issues If your interested in this function i can provide it as small separate tool.ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): ASP.NET Server Push Samples: This package contains 14 sample projects.LogExpert: 1.4 build 4092: TabControl: Tooltip on dropdown list shows full path names now New menu item "Lock instance" in Options menu. Only available when "Allow only one instance" is disabled in the settings. "Lock instance" will temporary enable the single instance mode. The locked instance will receive all new launched files Some NullPtrExceptions fixed (e.g. in the settings dialog) Note: The debug build is identical to the release build. But the debug version writes a log file. It also contains line numbers ...Facebook C# SDK: 5.0.6 (BETA): This is seventh BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. New in this release: Version 5.0.6 is almost completely backward compatible with 4.2.1 and 5.0.3 (BETA) Bug fixes and helpers to simplify many common scenarios For more information about this release see the following blog posts: F...SQLCE Code Generator: Build 1.0.3: New beta of the SQLCE Code Generator. New features: - Generates an IDataRepository interface that contains the generated repository interfaces that represents each table - Visual Studio 2010 Custom Tool Support Custom Tool: The custom tool is called SQLCECodeGenerator. Write this in the Custom Tool field in the Properties Window of an SDF file included in your project, this should create a code-behind file for the generated data access codeDotNetNuke® Community Edition: 06.00.00 CTP: CTP 1 (Build 155) is firmly focused around our conversion to C#. As many people have noted, this is a significant change to the platform and affects all areas of the product. This is one of the driving factors in why we felt it was important to get this release into your hands as soon as possible. We have already done quite a bit of testing on this feature internally and have fixed a number of issues in this area. We also recognize that there are probably still some more bugs to be found ...Kooboo CMS: Kooboo 3.0 RC: Bug fixes Inline editing toolbar positioning for websites with complicate CSS. Inline editing is turned on by default now for the samplesite template. MongoDB version content query for multiple filters. . Add a new 404 page to guide users to login and create first website. Naming validation for page name and datarule name. Files in this download kooboo_CMS.zip: The Kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB ...SQL Monitor - tracking sql server activities: SQL Monitor 3.2: 1. introduce sql color syntax highlighting with http://www.codeproject.com/KB/edit/FastColoredTextBox_.aspxUmbraco CMS: Umbraco 4.7.0: Service release fixing 50+ issues! 
Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're available from: Introduction for webmasters: http://umbraco.tv/help-and-support/video-tutorials/getting-started Understand the Umbraco concepts: http://umbraco.tv/help-and-support...ProDinner - ASP.NET MVC EF4 Code First DDD jQuery Sample App: first release: ProDinner is an ASP.NET MVC sample application, it uses DDD, EF4 Code First for Data Access, jQuery and MvcProjectAwesome for Web UI, it has Multi-language User Interface Features: CRUD and search operations for entities Multi-Language User Interface upload and crop Images (make thumbnail) for meals pagination using "more results" button very rich and responsive UI (using Mvc Project Awesome) Multiple UI themes (using jQuery UI themes)BEPUphysics: BEPUphysics v0.15.1: Latest binary release. Version HistoryIronRuby: 1.1.3: IronRuby 1.1.3 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. The main purpose of this release is to sync with IronPython 2.7 release, i.e. to keep the Dynamic Language Runtime that both these languages build on top shareable. This release also fixes a few bugs: 5763 Use...SQL Server PowerShell Extensions: 2.3.2.1 Production: Release 2.3.2.1 implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 13 modules with 163 advanced functions, 2 cmdlets and 7 scripts for working with ADO.NET, SMO, Agent, RMO, SSIS, SQL script files, PBM, Performance Counters, SQLProfiler, Oracle and MySQL and using Powershell ISE as a SQL and Oracle query tool. In addition optional backend databases and SQL Server Reporting Services 2008 reports are provided with SQLServer and PBM modules. See readme file for details.IronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...XML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS style Error List, double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...Mobile Device Detection and Redirection: 1.0.0.0: Stable Release 51 Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We’re now highly confident in the product and have designated this release as stable. We recommend all users update to this version. 
New Capabilities MappingsTo improve compatibility with other libraries some new .NET capabilities are now populated with wurfl data: “maximumRenderedPageSize” populated with “max_deck_size” “rendersBreaksAfterWmlAnchor” populated ...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added interactive search for the lookupWPF Inspector: WPF Inspector 0.9.7: New Features in Version 0.9.7 - Support for .NET 3.5 and 4.0 - Multi-inspection of the same process - Property-Filtering for multiple keywords e.g. "Height Width" - Smart Element Selection - Select Controls by clicking CTRL, - Select Template-Parts by clicking CTRL+SHIFT - Possibility to hide the element adorner (over the context menu on the visual tree) - Many bugfixes??????????: All-In-One Code Framework ??? 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??,????。??????????All-In-One Code Framework ???,??20?Sample!!????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ASP.NET ??: CSASPNETBingMaps VBASPNETRemoteUploadAndDownload CS/VBASPNETSerializeJsonString CSASPNETIPtoLocation CSASPNETExcelLikeGridView ....... Winform??: FTPDownload FTPUpload MultiThreadedWebDownloader...Rawr: Rawr 4.1.0: Note: This release may say 4.0.21 in the version bar. This is a typo and the version is actually 4.1.0, not to be confused with 4.0.10 which was released a while back. Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a Relea...New ProjectsAMWC: AMWCAny Repository Membership Provider: Any Repository Membership is about easing the writing of Custom Membership Providers for ASP.NET. With Any Repository Membership you will simply need Implement an interface with few methods to configure a Data Store. Conectors for SQL Server CE, Azure and MySql Planned.Award Citation Helper: This simple Silverlight application demonstrates key concepts including using isolated storage, Silverlight to HTML communications, observable collections and context menus. It is also useful as a tool for creating award citations for your own use or on the BrainyAwards.com site.Azure Storage Backup: Provides a backup and restore function for a Windows Azure storage account, including Table Storage and Blob Storage.Babil: Babil is aimed to be an open source web based portal for software localization. It will support importing and exporting different formats such as .resx files and GNU gettext. Another goal is to have some basic project management features for managing translators and changes. BuildExplorer: Analyze a TFS2010 build whether it is finished, hanging or in progress.DHM17438: DHM17438Flowmarks Events: Data entry tool for simple time series.Go to Browser - Visual Studio 2010 Extension: This is a visual studio 2010 extension, that goes to a web browser on your repository. In order to specify the url format, you have to go solution context menu > "Go to Browser...". It is also available through Visual Studio Gallery. 
(Comming soon)Hierarchical State Machine Compiler: This projects helps you to easily create and generate hierarchical state machine in .NET. State, transitions, conditions, action, state inheritance Silverlight generation MVVM support iLBC.NET: A port of the iLBC (internet Low Bitrate Codec) speech codec for .NET platform.JAMBE: Simple test project for JA-BulgariaSAY IT WITH CSS! Slide Show implemented as pure CSS3 / HTML 5 solution: The solution demonstrates CSS3 coding technique, which allows to implement online slide show with "darkbox" (or "lightbox") effects using pure CSS3 and HTML5; it does not require any Javascript/jQuery coding.SharePoint Test Data Tool: This project is basically a small window based application to be able to quickly populate items in a list/library. This can be helpful while testing your code bits ( WP's etc ) Shiros: Una idea que empezó en un Japo de Seattle.TwitterOnSQLAzure: Twitter Parser used for 24 Hours of SQL Pass presentation March 2011. Pulls and parses tweet stream, based on parameters, builds database (locally). All features compliant with SQL Azure. Use SQL Azure migration wizard to move data / schema to cloud - or chg conn stringVisio Forward Engineer Addin: Somehow Microsoft decided not to include this feature in 2010 version of Visio. Visio Forward Engineer Addin for Visio 2010, adds the ability to generate the database scripts directly from the database model defined in Visio 2010. It is developed in C#.Winwise Surface & Tablet AR Drone: This is the Surface & Tablet client to pilot a Parrot AR Drone.WNY: Online gaming system that support multiple usersWow Project: Wow Project is a tool for wow users to easy manage quests and items. It's writen in VB.NET or C#

    Read the article

< Previous Page | 428 429 430 431 432 433 434 435 436 437 438 439  | Next Page >