Search Results

Search found 615 results on 25 pages for 'eventual consistency'.


  • properly setting email subject line delivered from email form

    - by DC1
    I have an email form for my website, but here is the issue: when I receive an email, the subject line in my inbox shows whatever the user entered as the subject in the form. I'd like to override that so that whenever an email comes in, the subject in the email header is always "An inquiry from your website". I don't mind including their specific subject in the message body, but I'd like consistency in my inbox. This is the current code:

        <?php
        session_start();
        if (isset($_POST['Submit'])) {
            if ($_SESSION['chapcha_code'] == $_POST['chapcha_code'] && !empty($_SESSION['chapcha_code'])) {
                $youremail = 'xxxxxxxxx';
                $fromsubject = 'An inquiry from your website';
                $title = $_POST['title'];
                $fname = $_POST['fname'];
                $lname = $_POST['lname'];
                $mail = $_POST['mail'];
                $address = $_POST['address'];
                $city = $_POST['city'];
                $zip = $_POST['zip'];
                $country = $_POST['country'];
                $phone = $_POST['phone'];
                $subject = $_POST['subject'];
                $message = $_POST['message'];
                $headers = "From: $nome <$mail>\r\n";
                $to = $youremail;
                $mailsubject = 'Message received from '.$fromsubject.' Contact Page';
                $body = $fromsubject.' The person that contacted you is '.$fname.' '.$lname.
                        ' Address: '.$address.' '.$city.', '.$zip.', '.$country.
                        ' Phone Number: '.$phone.' E-mail: '.$mail.
                        ' Subject: '.$subject.' Message: '.$message.
                        ' |---------END MESSAGE----------|';
                echo "Thank you for inquiring. We will contact you shortly.<br/>You may return to our <a href='/index.html'>Home Page</a>";
                mail($to, $subject, $body, $headers);
                unset($_SESSION['chapcha_code']);
            } else {
                echo 'Sorry, you have provided an invalid security code';
            }
        } else {
            echo "You must write a message. </br> Please visit our <a href='/contact.html'>Contact Page</a> and try again.";
        }
        ?>
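    One way to get the fixed subject, sketched here rather than taken from an accepted answer: pass the constant string to mail() and keep the user's subject only in the body. (Note that the original code also interpolates $nome into the From header, but $nome is never defined; the submitted name fields could be used instead.)

        // Sketch only: force a constant subject line; the user's own subject stays in the body.
        $fromsubject = 'An inquiry from your website';
        $headers = "From: $fname $lname <$mail>\r\n";    // use the submitted name rather than the undefined $nome
        $body = 'The person that contacted you is '.$fname.' '.$lname.
                ' Their subject: '.$subject.
                ' Message: '.$message;
        mail($to, $fromsubject, $body, $headers);        // fixed subject, not $_POST['subject']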

    Read the article

  • Can simple javascript inheritance be simplified even further?

    - by Will
    John Resig (of jQuery fame) provides a concise and elegant way to allow simple JavaScript inheritance. It was so short and sweet, in fact, that it inspired me to try and simplify it even further (see code below). I've modified his original function such that it still passes all his tests and has the potential advantages of: readability (50% less code), simplicity (you don't have to be a ninja to understand it), performance (no extra wrappers around super/base method calls), and consistency with C#'s base keyword. Because this seems almost too good to be true, I want to make sure my logic doesn't have any fundamental flaws/holes/bugs, or if anyone has additional suggestions to improve or refute the code (perhaps even John Resig could chime in here!). Does anyone see anything wrong with my approach (below) vs. John Resig's original approach?

        if (!window.Class) {
            window.Class = function() {};
            window.Class.extend = function(members) {
                var prototype = new this();
                for (var i in members) prototype[i] = members[i];
                prototype.base = this.prototype;
                function object() {
                    if (object.caller == null && this.initialize)
                        this.initialize.apply(this, arguments);
                }
                object.constructor = object;
                object.prototype = prototype;
                object.extend = arguments.callee;
                return object;
            };
        }

    And the tests (below) are nearly identical to the original ones except for the syntax around base/super method calls (for the reason enumerated above):

        var Person = Class.extend({
            initialize: function(isDancing) { this.dancing = isDancing; },
            dance: function() { return this.dancing; }
        });

        var Ninja = Person.extend({
            initialize: function() { this.base.initialize(false); },
            dance: function() { return this.base.dance(); },
            swingSword: function() { return true; }
        });

        var p = new Person(true);
        alert("true? " + p.dance());      // => true

        var n = new Ninja();
        alert("false? " + n.dance());     // => false
        alert("true? " + n.swingSword()); // => true
        alert("true? " + (p instanceof Person && p instanceof Class &&
                          n instanceof Ninja && n instanceof Person && n instanceof Class));

    Read the article

  • Copy android.R.layout to my project

    - by eric
    Good advice from CommonWare and Steve H, but it's not as easy for me as I first thought. Based on their advice I'm trying to copy android.R.layout to my project to ensure consistency. How do you do this? I looked in Eclipse's Package Explorer, drilled down through Android 1.5 > android.jar > android > R.class > R > layout, and found R$layout.class. Do I copy the code out of there into my own class? From my very limited knowledge of Java, the following code doesn't make much sense:

        public static final class android.R$layout {
            public static final int activity_list_item = 17367040;               // Field descriptor #8 I
            public static final int browser_link_context_header = 17367054;      // Field descriptor #8 I
            public static final int expandable_list_content = 17367041;          // Field descriptor #8 I
            public static final int preference_category = 17367042;              // Field descriptor #8 I
            public static final int select_dialog_item = 17367057;               // Field descriptor #8 I
            public static final int select_dialog_multichoice = 17367059;        // Field descriptor #8 I
            public static final int select_dialog_singlechoice = 17367058;       // Field descriptor #8 I
            public static final int simple_dropdown_item_1line = 17367050;       // Field descriptor #8 I
            public static final int simple_expandable_list_item_1 = 17367046;    // Field descriptor #8 I
            public static final int simple_expandable_list_item_2 = 17367047;    // Field descriptor #8 I
            public static final int simple_gallery_item = 17367051;              // Field descriptor #8 I
            public static final int simple_list_item_1 = 17367043;               // Field descriptor #8 I
            public static final int simple_list_item_2 = 17367044;               // Field descriptor #8 I
            public static final int simple_list_item_checked = 17367045;         // Field descriptor #8 I
            public static final int simple_list_item_multiple_choice = 17367056; // Field descriptor #8 I
            public static final int simple_list_item_single_choice = 17367055;   // Field descriptor #8 I
            public static final int simple_spinner_dropdown_item = 17367049;     // Field descriptor #8 I
            public static final int simple_spinner_item = 17367048;              // Field descriptor #8 I
            public static final int test_list_item = 17367052;                   // Field descriptor #8 I
            public static final int two_line_list_item = 17367053;               // Field descriptor #8 I

            // Method descriptor #50 ()V
            // Stack: 3, Locals: 1
            public R$layout();
              0   aload_0 [this]
              1   invokespecial java.lang.Object() [1]
              4   new java.lang.RuntimeException [2]
              7   dup
              8   ldc <String "Stub!"> [3]
              10  invokespecial java.lang.RuntimeException(java.lang.String) [4]
              13  athrow
                  Line numbers: [pc: 0, line: 899]
                  Local variable table: [pc: 0, pc: 14] local: this index: 0 type: android.R.layout

            Inner classes: [inner class info: #5 android/R$layout, outer class info: #64 android/R inner name: #55 layout, accessflags: 25 public static final]
        }
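    The constants in android.R.layout are just integer resource IDs for stock layouts that ship with the platform, so they can usually be referenced directly rather than copied into your project. A minimal sketch (hypothetical list data and view ID) of using one of these built-in layouts from an Activity:

        // Inside an Activity's onCreate(), after setContentView(); items and R.id.my_list are hypothetical.
        String[] items = { "alpha", "beta", "gamma" };
        ArrayAdapter<String> adapter = new ArrayAdapter<String>(
                this,                                   // the Activity supplies the Context
                android.R.layout.simple_list_item_1,    // stock single-line row layout, no copying required
                items);
        ListView list = (ListView) findViewById(R.id.my_list);
        list.setAdapter(adapter);

    If the goal is to tweak one of the stock layouts rather than reference it, the usual route is to copy the layout XML itself (from the SDK's platform resources) into res/layout and adjust it there; the R constants above are only generated handles to those files.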

    Read the article

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows:

        <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project -->
        <ItemGroup>
          <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files -->
          <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />    <!-- Folder 2: 2 files -->
          <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />       <!-- Folder 3: 1 file -->
        </ItemGroup>
        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" PropertyName="deleted" />
          </Delete>
          <Message Text="DELETED FILES: $(deleted)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>

    The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then adding Folder 2 and 3, it starts removing all the files as I want. Then, seemingly at random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where it should be) but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
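    One possible explanation (an assumption, not something confirmed in the question): the DeleteAfterBuild wildcards in a top-level ItemGroup are expanded when the project is evaluated, before the output folder is fully populated, so the item list can be empty or stale by the time AfterBuild runs. Declaring the items inside the target defers the globbing until the target executes; a sketch of that variant:

        <Target Name="AfterBuild">
          <!-- Evaluate the wildcards when the target runs, not at project load time (requires MSBuild 3.5+) -->
          <ItemGroup>
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)">
            <!-- DeletedFiles is an item list, so capture it with ItemName rather than PropertyName -->
            <Output TaskParameter="DeletedFiles" ItemName="FilesDeleted" />
          </Delete>
          <Message Text="DELETED FILES: @(FilesDeleted)" Importance="high" />
        </Target>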

    Read the article

  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and would like to see how. Can we prove that comment true, for C, too? NB: this doesn't mean hex to ASCII binary - specifically the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore white space.
    edit (by Brian Campbell): May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think these are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful.
    - The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout).
    - The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX.
    - The program must compile or run without any special options passed to the compiler or interpreter (so, 'gcc myprog.c' or 'python myprog.py' or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count).
    - The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with most significant nibble first.
    - The behavior of the program on invalid input (characters besides [0-9a-fA-F \t\r\n], spaces separating the two characters in an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, and treating a single character as the value of one byte are all OK).
    - The program may write no additional bytes to output.
    - Code is scored by fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on lowest number of lines of code; I would impose an 80 character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line.)
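    Not a competition entry, just a sketch showing that the task fits in a handful of lines of standard C under the rules above (whitespace is skipped, each pair of hex digits becomes one output byte, behavior on invalid input is left undefined):

        #include <stdio.h>
        #include <ctype.h>

        int main(void) {
            int c, hi = -1;
            while ((c = getchar()) != EOF) {
                if (isspace(c)) continue;                      /* whitespace may separate byte pairs */
                int v = isdigit(c) ? c - '0' : tolower(c) - 'a' + 10;
                if (hi < 0) hi = v;                            /* first (most significant) nibble */
                else { putchar((hi << 4) | v); hi = -1; }      /* second nibble completes the byte */
            }
            return 0;
        }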

    Read the article

  • Why does std::map operator[] create an object if the key doesn't exist?

    - by n1ck
    Hi, I'm pretty sure I already saw this question somewhere (comp.lang.c++? Google doesn't seem to find it there either), but a quick search here doesn't seem to find it, so here it is: why does the std::map operator[] create an object if the key doesn't exist? I don't know, but to me this seems counter-intuitive if you compare it to most other operator[] implementations (like std::vector), where if you use it you must be sure that the index exists. I'm wondering what the rationale is for implementing this behavior in std::map. Like I said, wouldn't it be more intuitive to act like an index into a vector and crash (well, undefined behavior I guess) when accessed with an invalid key?
    Refining my question after seeing the answers: OK, so far I have got a lot of answers saying basically "it's cheap, so why not" or similar. I totally agree with that, but why not use a dedicated function for it (I think one of the comments said that in Java there is no operator[] and the function is called put)? My point is, why doesn't map's operator[] work like a vector's? If I use operator[] on an out-of-range index on a vector, I wouldn't like it to insert an element even if it were cheap, because that probably means an error in my code. My point is, why isn't it the same thing with map? For me, using operator[] on a map would mean: I know this key already exists (for whatever reason: I just inserted it, I have redundancy somewhere, whatever). I think it would be more intuitive that way.
    That said, what is the advantage of the current operator[] behavior (and only for operator[]; I agree that a function with the current behavior should exist, just not operator[])? Maybe it gives clearer code that way? I don't know. Another answer was that it already existed that way, so why not keep it, but then, when the people before the STL chose to implement it that way, they presumably found it provided some advantage? So my question is basically: why choose to implement it that way, meaning a certain lack of consistency with other operator[] implementations? What benefit does it give? Thanks
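    For readers weighing the alternatives, a small sketch of how the three common lookup styles behave: operator[] inserts a value-initialized element when the key is absent, at() (C++11) throws instead, and find() neither inserts nor throws.

        #include <iostream>
        #include <map>
        #include <stdexcept>
        #include <string>

        int main() {
            std::map<std::string, int> m;
            m["a"] = 1;                    // operator[] inserts "a" (value-initialized to 0), then assigns 1

            int x = m["missing"];          // reading through operator[] also inserts: "missing" -> 0
            std::cout << x << " size=" << m.size() << "\n";   // prints: 0 size=2

            try {
                m.at("absent");            // at() throws instead of inserting
            } catch (const std::out_of_range&) {
                std::cout << "at(): no such key\n";
            }

            if (m.find("absent") == m.end())                  // find() neither inserts nor throws
                std::cout << "find(): not found, size still " << m.size() << "\n";
        }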

    Read the article

  • Using jQuery to gather all text nodes from a wrapped set, separated by spaces

    - by Bungle
    I'm looking for a way to gather all of the text in a jQuery wrapped set, but I need to create spaces between sibling nodes that have no text nodes between them. For example, consider this HTML:

        <div>
          <ul>
            <li>List item #1.</li><li>List item #2.</li><li>List item #3.</li>
          </ul>
        </div>

    If I simply use jQuery's text() method to gather the text content of the <div>, like such:

        var $div = $('div'),
            text = $div.text().trim();
        alert(text);

    that produces the following text:

        List item #1.List item #2.List item #3.

    because there is no whitespace between each <li> element. What I'm actually looking for is this (note the single space between each sentence):

        List item #1. List item #2. List item #3.

    This suggests to me that I need to traverse the DOM nodes in the wrapped set, appending the text for each to a string, followed by a space. I tried the following code:

        var $div = $('div'),
            text = '';
        $div.find('*').each(function() {
            text += $(this).text().trim() + ' ';
        });
        alert(text);

    but this produced the following text:

        List item #1.List item #2.List item #3. List item #1. List item #2. List item #3.

    I assume this is because I'm iterating through every descendant of <div> and appending the text, so I'm getting the text nodes within both <ul> and each of its <li> children, leading to duplicated text. I think I could probably find/write a plain JavaScript function to recursively walk the DOM of the wrapped set, gathering and appending text nodes - but is there a simpler way to do this using jQuery? Cross-browser consistency is very important. Thanks for any help!
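    A possible approach, sketched here rather than quoted from an accepted answer: walk the wrapped set recursively with .contents(), keep only the text nodes, and join the pieces with single spaces so adjacent siblings don't run together.

        // Minimal sketch: collect text nodes depth-first and join them with spaces.
        function gatherText($set) {
            var parts = [];
            $set.contents().each(function () {
                if (this.nodeType === 3) {              // text node
                    var t = $.trim(this.nodeValue);
                    if (t) parts.push(t);
                } else if (this.nodeType === 1) {       // element node: recurse into it
                    var inner = gatherText($(this));
                    if (inner) parts.push(inner);
                }
            });
            return parts.join(' ');
        }

        alert(gatherText($('div')));

    For the example markup above this yields "List item #1. List item #2. List item #3." with a single space between the items.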

    Read the article

  • Sorted sets and comparators

    - by Jack
    Hello, I'm working with a TreeSet that is meant to store pathfinding locations used during the execution of an A* algorithm. Basically, as long as there are "open" elements (still to be exhaustively visited), the neighbours of every open element are taken into consideration and added to a SortedSet that keeps them ordered by their cost and heuristic cost. This means that I have a class like:

        public class PathTileInfo implements Comparable<PathTileInfo> {
            int cost;
            int hCost;
            final int x, y;

            @Override
            public int compareTo(PathTileInfo t2) {
                int c = cost + hCost;
                int c2 = t2.cost + t2.hCost;
                int costComp = c < c2 ? -1 : (c > c2 ? 1 : 0);
                return costComp != 0 ? costComp : (x < t2.x || y < t2.y ? -1 : (x > t2.x || y > t2.y ? 1 : 0));
            }

            @Override
            public boolean equals(Object o2) {
                if (o2 instanceof PathTileInfo) {
                    PathTileInfo i = (PathTileInfo) o2;
                    return i.cost + i.hCost == cost + hCost && x == i.x && y == i.y;
                }
                return false;
            }
        }

    In this way first the total cost is considered, then, since a total ordering is needed (consistency with equals), an ordering according to the x,y coordinates is taken into account. This should work, but it simply doesn't: if I iterate over the TreeSet during the algorithm execution like in

        for (PathTileInfo t : openSet)
            System.out.print("(" + t.x + "," + t.y + "," + (t.cost + t.hCost) + ") ");

    I get results in which the right ordering is not kept, e.g.:

        (7,7,6) (7,6,7) (6,8,6) (6,6,7) (5,8,7) (5,7,7) (6,7,6) (6,6,7) (6,5,7) (5,7,7) (5,5,8) (4,7,7) (4,6,8) (4,5,8)

    is there something subtle I am missing? Thanks!
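    For comparison, a sketch (a drop-in for the class above) of a tie-break that is a total order: the posted version can return -1 from both a.compareTo(b) and b.compareTo(a) when, say, a has the smaller x but the larger y, which breaks the antisymmetry a TreeSet relies on. Comparing cost, then x, then y strictly in sequence avoids that (Integer.compare is Java 7+; explicit comparisons work on older JDKs):

        @Override
        public int compareTo(PathTileInfo t2) {
            int byCost = Integer.compare(cost + hCost, t2.cost + t2.hCost);
            if (byCost != 0) return byCost;
            int byX = Integer.compare(x, t2.x);   // only fall through to y when x is equal
            if (byX != 0) return byX;
            return Integer.compare(y, t2.y);
        }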

    Read the article

  • .Net Custom Components "disappear" after file save

    - by EatATaco
    I might have a hard time explaining this because I am at a total loss for what is happening, so I am just looking for some guidance. I might be a bit wordy because I don't know exactly what the relevant information is.
    I am developing a GUI for a project that I am working on using .NET (C#). Part of the interface mimics, exactly, what we do in another product. For consistency reasons, my boss wants me to make it look the same way. So I got the other software and basically copied and pasted the components into my new GUI. This required me to introduce a component library (the now defunct Graphics Server GSNet, so I can't go to them for help) so I could implement some simple graphs and temperature/pressure "widgets." The components show up fine, and when I compile, everything seems to work fine.
    However, at some point during my programming it just breaks. Sometimes the tab that these components are on starts throwing exceptions when I view the designer page (a missing method exception) so it won't display. Sometimes JUST those components from the GSNet library don't show up. Sometimes, if I try to run it, I get a not-instantiated exception on one of their lines of code in the designer code file. Sometimes I can't view the designer at all. No matter what I do I can't reverse it. Even if I undo what I just did it won't fix it. If it happens, I have to revert to a backup and start over again.
    So I started to back up at pretty much every step. I compile it and it works. I comment out a line of code, save it, and then uncomment that same line of code (so I am working with the exact same code) and the components all disappear. It doesn't matter what line of code I actually comment out, as long as it is in the same project where these components are being used. I pretty much have to use the components. . . so does anyone have any suggestions, or ideas about where I can look to debug this?

    Read the article

  • Not sure what happens to my apps objects when using NSURLSession in background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code.
    I have a database which holds objects - such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task because the app can be killed at any point. For instance, a simple profile picture object:

        @interface ProfilePicture : NSObject
        @property int userId;
        @property UIImage *profilePicture;
        @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our server - this could also be a property that is queryable, but let's assume this is attached to this object
        @end

    Now let's say I want to upload the profile picture to a remote server - I could do something like:

        @implementation ProfilePictureUploader

        - (void)uploadProfilePicture:(ProfilePicture *)profilePicture completion:(void (^)(BOOL successInUploading))completion {
            NSURLSession *uploadImageSession = ..... // code to set up uploading the image - and calling the completion handler;
            [uploadImageSession resume];
        }

        @end

    Now somewhere else in my code I want to upload the profile picture - and if it was successful, update the UI and the database that this action happened:

        ProfilePicture *aNewProfilePicture = ...;
        aNewProfilePicture.profilePicture = aImage;
        aNewProfilePicture.userId = 123;
        aNewProfilePicture.successfullyUploaded = NO;

        // write the change to disk
        [MyDatabase write:aNewProfilePicture];

        // upload the image to the server
        ProfilePictureUploader *uploader = [ProfilePictureUploader ....];
        [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
            if (successInUploading) {
                // persist the change to my db
                aNewProfilePicture.successfullyUploaded = YES;
                [MyDatabase update:aNewProfilePicture]; // persist the change
            }
        }];

    Now obviously if my app is running then this "ProfilePicture" object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and what not. All callbacks that may exist are maintained and the app state is straightforward. But I'm not clear what happens if the app "dies" at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process. Therefore the upload will continue and my app will be awakened at some point in the future to handle completion. But the object "aNewProfilePicture" is non-existent at that point and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the "successfullyUploaded" property for that user)? Do I need to re-work everything touching the DB or UI to correspond with the new API and work in a context-free environment?
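    A rough sketch of the background-session pattern the documentation describes, with hypothetical identifiers and database calls: recreate the session with the same identifier after a relaunch, and let a session delegate map each finished task back to a record through something durable (here task.taskDescription) instead of relying on objects captured in blocks.

        #import <Foundation/Foundation.h>

        @interface ProfilePictureUploader : NSObject <NSURLSessionTaskDelegate>
        @property (strong, nonatomic) NSURLSession *session;
        @end

        @implementation ProfilePictureUploader

        - (instancetype)init {
            if ((self = [super init])) {
                // Same identifier every launch, so a relaunched app re-attaches to the same background session.
                NSURLSessionConfiguration *config = [NSURLSessionConfiguration
                    backgroundSessionConfigurationWithIdentifier:@"com.example.profile-uploads"];
                _session = [NSURLSession sessionWithConfiguration:config delegate:self delegateQueue:nil];
            }
            return self;
        }

        - (void)uploadProfilePicture:(ProfilePicture *)picture request:(NSURLRequest *)request fileURL:(NSURL *)fileURL {
            NSURLSessionUploadTask *task = [self.session uploadTaskWithRequest:request fromFile:fileURL];
            task.taskDescription = [NSString stringWithFormat:@"%d", picture.userId]; // durable key that survives a relaunch
            [task resume];
        }

        // Fires for both live and relaunched sessions; update the DB from the task, not from captured objects.
        - (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error {
            if (error == nil) {
                [MyDatabase markProfilePictureUploadedForUserId:[task.taskDescription intValue]]; // hypothetical DB call
            }
        }

        @end

    In the app delegate, application:handleEventsForBackgroundURLSession:completionHandler: would recreate this uploader (and therefore a session with the same identifier) so the delegate method above still fires for uploads that finished while the app was dead.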

    Read the article

  • Troubleshooting PHP email sending?

    - by darkAsPitch
    I created a website that occasionally emails users when they register/change their password/etc. However, every other person cannot or does not receive the emails. They are telling me that the emails are not even hitting their spam folders. I don't know a ton about MX records or email sending, but when I "Edit DNS Zone" for this domain in particular, there is one MX record listed there. How do you go about troubleshooting botched PHP mail actions?
    UPDATE: Here is my super-simple PHP mailing code:

        $subject = "Subject Here";
        $message = "Emails Message";
        $to = $verified_user_data["email_address"];
        $headers = "From: [email protected]\r\n" .
                   "Reply-To: [email protected]\r\n" .
                   "X-Mailer: PHP/" . phpversion();
        // returns true on success, false on failure
        $email_result = mail($to, $subject, $message, $headers);

    re: "are you saying that some do and some do not?" @ Jacob
    Yes, basically. I send the emails containing the user's login username/password using similar code to the above. And I sell to fairly tech-savvy people. About 50% of the time, my customers claim they cannot find their welcome emails in their inbox OR in their spam box. It's as if it never arrived. I have the largest problem with Yahoo email addresses accepting my emails, or so it seems.
    re: "The MX record at your end doesn't factor in, although the SPF record (or lack of it) will. How much access and control do you have on the server itself?" @ John Gardeniers
    I rent a dedicated server from Codero, running CentOS 5 with WHM + cPanel. I have full root access to the entire thing. I don't know much about MX records and/or SPF records. I just want the PHP mail function to work. It doesn't say much about that on the PHP mail function's help page.
    re: "What are you using for the SMTP server?" @ JonLim
    No idea. I use the code above when I need to fire off an email to a loyal customer, and that's it. Do I need to be worrying about SMTP servers?
    re: "Could be many, many things. Can you describe how you're sending mail in your code? i.e. are you relaying off of another mail server somewhere, using the local sendmail or postfix? Any consistency in domains that can/cannot receive email? Do you have a PTR record setup from the IP address that you're sending mail out as? What about SPF records?" @ gravyface
    I just described my simple code above! I believe I have been having the most trouble with Yahoo domains; however, "independent" domains (probably running SpamAssassin), e.g. [email protected] as opposed to [email protected], seem to give a lot of trouble as well. I do not know if I have a PTR record set up for the IP address I'm sending my mail from. It's probably the same IP address that I set up my domain on, because I didn't do anything extra special. No idea about SPF records either; where can I go to create one?
    Side note: It's a crying shame what havoc the spammers have brought upon our beloved email system.
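    Since the question asks about SPF: an SPF policy is just a TXT record in the domain's DNS zone listing which hosts may send mail for the domain. A hypothetical example (placeholder domain and documentation-range IP, not values taken from the question):

        ; allow mail from the domain's A record, its MX hosts, and one explicit server address;
        ; "~all" soft-fails everything else
        example.com.   IN TXT   "v=spf1 a mx ip4:203.0.113.10 ~all"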

    Read the article

  • Troubleshooting inconsistent ODBC connectivity

    - by Chris
    I'm attempting to integrate UPS WorldShip with a SQL Server 2008 R2 database but the connection is very inconsistent. UPS claims this is a DSN/Windows problem and I have not been able to convince them otherwise. The integration is quite simple: my shipping guy clicks a button which opens a form where he enters an order #. After pressing enter the shipping information will be pulled from the database for that order #. The problem is that WorldShip often times thinks the DSN does not exist. However, I am able to open WorldShip's customization tool and browse all the tables and fields in the database my DSN is connected to which means at the very least my DSN does, in fact, exist. The reason this has been so difficult to troubleshoot is because there is no consistency to the problem and I'm not able to reliably repeat any behavior. That is to say that rebooting the PC doesn't cause the connection to break and opening the integration tool and viewing the tables and fields doesn't cause the integration button to work. Is there some way for me to monitor this connection from the SQL server or get any clues as to why it fails? As requested by TallTed here is a sample of the trace file I created. After a mere 5 hours the trace file was over 130MB so there's no way I could provide it in its entirety. WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 DIAG [IM006] [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed (0) WorldShipTD d94-690 ENTER SQLAllocHandle SQLSMALLINT 3 <SQL_HANDLE_STMT> SQLHANDLE 0x0C662FC0 SQLHANDLE * 0x03EBCE38 WorldShipTD d94-690 EXIT SQLAllocHandle with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 <SQL_HANDLE_STMT> SQLHANDLE 0x0C662FC0 SQLHANDLE * 0x03EBCE38 ( 0x0C6632A0) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 0 <SQL_ATTR_QUERY_TIMEOUT> SQLPOINTER 30 SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 0 <SQL_ATTR_QUERY_TIMEOUT> SQLPOINTER 30 SQLINTEGER -5 DIAG [HYC00] [Microsoft][ODBC Microsoft Access Driver]Optional feature not implemented (106) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10) WorldShipTD d94-690 ENTER SQLGetInfoW HDBC 0x0C662FC0 UWORD 77 <SQL_DRIVER_ODBC_VER> PTR 0x03EBCEDC SWORD 100 SWORD * 0x0028E290 WorldShipTD d94-690 EXIT SQLGetInfoW with return code 0 (SQL_SUCCESS) HDBC 0x0C662FC0 UWORD 77 <SQL_DRIVER_ODBC_VER> PTR 0x03EBCEDC [ 10] "03.51" SWORD 100 SWORD * 0x0028E290 (10) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 1228 <unknown> SQLPOINTER [Unknown attribute 1228] SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1228 <unknown> SQLPOINTER [Unknown attribute 1228] SQLINTEGER -5 DIAG [HY092] [Microsoft][ODBC Microsoft Access Driver]Invalid attribute/option identifier (86) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 
(SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10) WorldShipTD d94-690 ENTER SQLSetStmtAttrW SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 WorldShipTD d94-690 EXIT SQLSetStmtAttrW with return code -1 (SQL_ERROR) SQLHSTMT 0x0C6632A0 SQLINTEGER 1227 <unknown> SQLPOINTER [Unknown attribute 1227] SQLINTEGER -5 DIAG [HY092] [Microsoft][ODBC Microsoft Access Driver]Invalid attribute/option identifier (86) WorldShipTD d94-690 ENTER SQLGetDiagFieldW SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 WorldShipTD d94-690 EXIT SQLGetDiagFieldW with return code 0 (SQL_SUCCESS) SQLSMALLINT 3 SQLHANDLE 0x0C6632A0 SQLSMALLINT 1 SQLSMALLINT 4 SQLPOINTER 0x00520708 SQLSMALLINT 12 SQLSMALLINT * 0x0028E2A8 (10)

    Read the article

  • reiserfsck on lvm

    - by DaDaDom
    It seems like my filesystem got corrupted somehow during the last reboot of my server. I can't fsck some logical volumes anymore. The setup: root@rescue ~ # cat /mnt/rescue/etc/fstab proc /proc proc defaults 0 0 /dev/md0 /boot ext3 defaults 0 2 /dev/md1 / ext3 defaults,errors=remount-ro 0 1 /dev/systemlvm/home /home reiserfs defaults 0 0 /dev/systemlvm/usr /usr reiserfs defaults 0 0 /dev/systemlvm/var /var reiserfs defaults 0 0 /dev/systemlvm/tmp /tmp reiserfs noexec,nosuid 0 2 /dev/sda5 none swap defaults,pri=1 0 0 /dev/sdb5 none swap defaults,pri=1 0 0 [UPDATE] First question: what "part" should I check for bad blocks? The logical volume, the underlying /dev/md or the /dev/sdx below that? Is doing what I am doing the right way to go? [/UPDATE] The errormessage when checking /dev/systemlvm/usr: root@rescue ~ # reiserfsck /dev/systemlvm/usr reiserfsck 3.6.19 (2003 www.namesys.com) [...] Will read-only check consistency of the filesystem on /dev/systemlvm/usr Will put log info to 'stdout' Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes ########### reiserfsck --check started at Wed Feb 3 07:10:55 2010 ########### Replaying journal.. Reiserfs journal '/dev/systemlvm/usr' in blocks [18..8211]: 0 transactions replayed Checking internal tree.. Bad root block 0. (--rebuild-tree did not complete) Aborted Well so far, let's try --rebuild-tree: root@rescue ~ # reiserfsck --rebuild-tree /dev/systemlvm/usr reiserfsck 3.6.19 (2003 www.namesys.com) [...] Will rebuild the filesystem (/dev/systemlvm/usr) tree Will put log info to 'stdout' Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes Replaying journal.. Reiserfs journal '/dev/systemlvm/usr' in blocks [18..8211]: 0 transactions replayed ########### reiserfsck --rebuild-tree started at Wed Feb 3 07:12:27 2010 ########### Pass 0: ####### Pass 0 ####### Loading on-disk bitmap .. ok, 269716 blocks marked used Skipping 8250 blocks (super block, journal, bitmaps) 261466 blocks will be read 0%....20%....40%....60%....80%....100% left 0, 11368 /sec 52919 directory entries were hashed with "r5" hash. "r5" hash is selected Flushing..finished Read blocks (but not data blocks) 261466 Leaves among those 13086 Objectids found 53697 Pass 1 (will try to insert 13086 leaves): ####### Pass 1 ####### Looking for allocable blocks .. finished 0% left 12675, 0 /sec The problem has occurred looks like a hardware problem (perhaps memory). Send us the bug report only if the second run dies at the same place with the same block number. mark_block_used: (39508) used already Aborted Bad. But let's do it again as mentioned: [...] Flushing..finished Read blocks (but not data blocks) 261466 Leaves among those 13085 Objectids found 54305 Pass 1 (will try to insert 13085 leaves): ####### Pass 1 ####### Looking for allocable blocks .. finished 0%... left 12127, 958 /sec The problem has occurred looks like a hardware problem (perhaps memory). Send us the bug report only if the second run dies at the same place with the same block number. build_the_tree: Nothing but leaves are expected. Block 196736 - internal Aborted Same happens every time, only the actual error message changes. Sometimes I get mark_block_used: (somenumber) used already, other times the block number changes. Seems like something is REALLY broken. Are there any chances I can somehow get the partitions to work again? It's a server to which I don't have physical access directly (hosted server). Thanks in advance!

    Read the article

  • Why does my computer run slowly and freeze sometimes?

    - by Brae
    EDIT: I disabled sound in the BIOS, rebooted, and it hung. I removed the (previously) faulty HDD, rebooted, and it hung. I have managed to get my Realtek audio manager open again after its mysterious disappearance. Subsequently my microphone is now working again; to fix it I had to uninstall the audio drivers, disable audio in the BIOS, install the audio drivers, and enable audio in the BIOS. Access via the network (with the faulty HDD in) seems not to be triggering hangs at the moment. I think with the sound problem fixed it might play a little nicer, but I think it's still going to hang. If it does, then I'm fairly sure it's been narrowed down to the mobo.
    EDIT: I'm pretty convinced my motherboard is the culprit, because nothing else seems to have any obvious problems (bar the hard drive, and the PC still hangs without it being plugged in). So thank you all for helping; once I get more rep I'll upvote a few of the answers.
    My PC is doing some weird things... Sometimes when I open up a program, let's say Adobe Photoshop, it will load everything normally, nice and quick, no problems, and I can use it fine. Other times it's a little odd, and it loads the program as if it's only using half of the CPU. It's pretty obvious when it does it: normally the loading screen skims past, but when it does this weird load you see it slowly tick through each thing, and the program itself becomes incredibly slow. Even Google Chrome does it sometimes. Yet when I exit and reopen the program without doing anything else, it typically opens fine without lagging. I think this problem is probably a symptom of something bigger, because of other problems I'm having.
    Random hangs, no matter what I have open or what I'm doing: my PC will sort of freeze up for a few seconds. If music is playing it will either loop the last second or two, or it will buzz. This only lasts for a few seconds, then it returns to normal without having to restart. During this time all programs lock up and freeze, and the mouse and keyboard are useless.
    I am also having a weird issue with my audio jacks: without touching my PC at all, sometimes I will see a popup saying that I have unplugged something or plugged something in, neither of which has actually happened. I'm pretty sure this is caused by the motherboard. I recently had a 'Pink Screen of Death' (yes, pink) which pointed to my audio drivers.
    The lockups seem to happen with some consistency when someone is accessing my shared files via my home LAN, which leads me to believe either one of my hard drives is acting up or, more likely, the controller. One of my drives had a bit of a crash before; I used SpinRite and managed to recover my stuff, and now the drive appears to be working okay. This is possibly adding to the problems. My best guess is that something has gone wrong with my motherboard, possibly a power issue or a chip has died; I really don't know.
    So what I would like to know is: have I missed some obvious diagnostic to help figure out what it is, or should I just bite the bullet, assume it's my motherboard acting up, and buy a new system?
    dxdiag [64-bit] - http://pastebin.com/G30kb2TL
    PSOD (minidump) - http://pastebin.com/aZsv0H56
    HWiNFO64 (system info / specs) - http://pastebin.com/X6h3K8g6

    Read the article

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they had realised this the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails. The SQL Server backup task, Copy Database, and the Export Data Wizard all fail as well. It seems all the data can still be read from the tables (i.e. using bcp etc.). Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?)
    What would be the best plan of attack to get the database to a state where backups are working and the data is safe?
    - Take the database offline and use the MDF/LDF to somehow create the database on another SQL server?
    - Export the data from the database using bcp, create the database on another SQL server (using the Generate Scripts function on the corrupt db to create the schema on the new db), and use bcp again to import the data?
    - Some other option that is the right course of action in this situation?
    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not sure, so I insisted that we start exporting the data each night as a failsafe (using bcp for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions on how SQL Server operates. I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
    Results of DBCC CHECKDB:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 7928, Level 16, State 1, Line 1
        The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
        Msg 5030, Level 16, State 12, Line 1
        The database could not be exclusively locked to perform the operation.
        Msg 7926, Level 16, State 1, Line 1
        Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.
        Msg 823, Level 24, State 3, Line 1
        The operating system returned error 1(error not found) to SQL Server during a write at offset 0x00000674706000 in file 'G:\AX40_Dynamics_Live.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
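    For the bcp route listed above, a sketch of the per-table export/import commands (hypothetical server, database, and table names; -T uses Windows authentication, -n keeps native format, -E preserves identity values on import):

        rem export one table from the damaged database
        bcp ProdDB.dbo.Orders out Orders.dat -S PRODSERVER -T -n

        rem import it into a freshly scripted, empty schema on another server
        bcp RecoveredDB.dbo.Orders in Orders.dat -S RECOVERYSERVER -T -n -E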

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and it's design, inspired the next generation of Non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was, "reliability at massive scale" so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database; either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, choose Berkeley DB Java Edition for it's node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage similar type of storage as Berkeley DB except that it is based on a hash table which must reside entirely in-memory rather than a btree which can live in-memory or on disk. Berkeley DB evolved too, we added high availability (HA) and a replication manager that makes it easy to setup replica groups. Berkeley DB's replication doesn't partitioned the data, every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales-out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node, Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM; the distributed, sometimes-consistent, partitioned, scale-out storage that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing, we could jump into this No SQL market withing a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • CodePlex Daily Summary for Thursday, February 10, 2011

    CodePlex Daily Summary for Thursday, February 10, 2011Popular ReleasesSnoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! Work Item Description 5149 Dan Hanan: You can now use the mouse wheel t...RIBA - Rich Internet Business Application for Silverlight: Preview of MVVM Framework Source + Tutorials: This is a first public release of the MVVM Framework which is part of the final RIBA application. The complete RIBA example LOB application has yet to be published. Further Documentation on the MVVM part can be found on the Blog, http://www.SilverlightBlog.Net and in the downloadable source ( mvvm/doc/ ). Please post all issues and suggestions in the issue tracker.SharePoint Learning Kit: 1.5: SharePoint Learning Kit 1.5 has the following new functionality: *Support for SharePoint 2010 *E-Learning Actions can be localised *Two New Document Library Edit Options *Automatically add the Assignment List Web Part to the Web Part Gallery *Various Bug Fixes for the Drop Box There are 2 downloads for this release SLK-1.5-2010.zip for SharePoint 2010 SLK-1.5-2007.zip for SharePoint 2007 (WSS3 & MOSS 2007)Facebook C# SDK: 5.0.3 (BETA): This is fourth BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. For more information about this release see the following blog posts: Facebook C# SDK - Writing your first Facebook Application Facebook C# SDK v5 Beta Internals Facebook C# SDK V5.0.0 (BETA) Released We have spend time trying ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.161: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a new Twitter List network importer, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file...WatchersNET.TagCloud: WatchersNET.TagCloud 01.09.03: Whats NewAdded New Skin TagTastic http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-TagTastic-Skin.jpg Added New Skin RoundedButton http://www.watchersnet.de/Portals/0/screenshots/dnn/TagCloud-RoundedButton-Skin.jpg changes Tag Count fixed on Tag Source Referrals Fixed Tag Count when multiple Tag Sources are usedFinestra Virtual Desktops: 1.1: This release adds a few more performance and graphical enhancements to 1.0. Switching desktops is now about as fast as you can blink. Desktop switching optimizations New welcome wizard for Vista/7 Fixed a few minor bugs Added a few more options to the options dialog (including ability to disable the taskbar switching)WCF Data Services Toolkit: WCF Data Services Toolkit: The source code and binary releases of the WCF Data Services Toolkit. 
For simplicity, the source code download doesn't include any of the MSTest files. If you want those, you can pull the code down via MercurialyoutubeFisher: youtubeFisher 3.0 [beta]: What's new: Video capturing improved Supports YouTube's new layout (january 2011) Internal refactoringNearforums - ASP.NET MVC forum engine: Nearforums v5.0: Version 5.0 of the ASP.NET MVC Forum Engine, containing the following improvements: .NET 4.0 as target framework using ASP.NET MVC 3. All views migrated to Razor for cleaner markup. Alternate template (Layout file) for mobile devices 4 Bug Fixes since Version 4.1 Visit the project Roadmap for more details.fuv: 1.0 release, codename Chopper Joe: features: search/replace :o to open file :s to save file :q to quitASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager html generation optimized new features for the lookup (add additional search data ) live demo went aeroEnhSim: EnhSim 2.3.6 BETA: 2.3.6 BETAThis release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 ...TestApi - a library of Test APIs: TestApi v0.6: TestApi v0.6 comes with the following changes: TestApi code development has been moved to Codeplex: Moved TestApi soluton to VS 2010; Moved all source code to Codeplex. All development work is done there now. Fault Injection API: Integrated the unmanaged FaultInjectionEngine.dll COM component in the build; Cleaned up FaultInjectionEngine.dll to build at warning level 4; Implemented “FaultScope” which allows for in-process fault injection; Added automation scripts & sample program; ...AutoLoL: AutoLoL v1.5.5: AutoChat now allows up to 6 items. Items with nr. 7-0 will be removed! News page url's are now opened in the default browser Added a context menu to the system tray icon (thanks to Alex Banagos) AutoChat now allows configuring the Chat Keys and the Modifier Key The recent files list now supports compact and full mode Fix: Swapped mouse buttons are now properly detected Fix: Sometimes the Play button was pressed while still greyed out Champion: Karma Note: You can also run the u...mojoPortal: 2.3.6.2: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2362-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.ReSharper Settings Manager: RSM v4.0 (For ReSharper 5.1): Donate ChangesChanged the mechanism of locating shared settings files (discussion, issue): You can now create a set of "global" settings files and configure how R# settings will get loaded (settings inheritance and overrides). 
All solutions located at the same folder and\or subfolders where the global settings file is located will load that file automatically. Improved "Manage Settings" dialog to support new settings sharing features. Bug fixes: 228048 16590 16584Rawr: Rawr 4.0.19 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Trac...IronRuby: 1.1.2: IronRuby 1.1.2 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. In this release we fixed several major issues: - problems that blocked Gem installation in certain cases - regex syntax: the parser was replaced with a new one that is much more compatible with Ruby 1.9.2 - cras...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (4): There was a small issue with the previous release that caused errors when installing the templates in VS10 Express. This release corrects the error. Only use this if you encountered issues when installing the previous release. No changes in the binaries.New ProjectsAJAX Map DataConnector: AJAX Map DataConnector is an Open Source + Open Data project focused on connecting the power of Bing Maps AJAX Control, Version 7.0 to the spatial query capabilities of SQL Server 2008. The examples provided here represent a starting point, showing some ways to harness SQL ServerASP.NET Mvc Cdn Management: Cdn Management loads different resources (like styles, scripts etc) from configuration file. It reads resource location (url) from a configuration file and renders it into a page.Azure Content Provider for Telerik File Browser: Windows Azure Storage FileBrowserContentProvider makes it easier for web developers to Connect the Telerik RadFileExplorer control to the Windows Azure Storage. It's developed in VB.NET. BeerForge: BeerForge is an open source application for the home brewer to ease the creation of beer recipes and provide handy calculation tools.budget-manage: ???????????,??????????。Command Line Parsing, C++ and C#: This is a simple project that does command line arguments parsing. C# and C++ projects are supplied, together with unit tests.DeleteAfterRunning: Application allowing you to convert any file to self-deleting executable. You have a picture, audio file or a document you want to share, but don’t want anyone to keep a copy? Simply create a self-deleting package using DeleteAfterRunning that can be opened only once.DotNetNuke Contest: A module for contest voting in DotNetNuke.Foundry: Experiments in language design and data modeling.GenAttributeLib: Libreria per la gestione di attributi generici di un oggetto.Graduation Project Management System: This is our graduation project. It uses Asp.net Mvc 3,Jquery,Ado.net Entity Framework,and so on. by Veiller hu,ZSPiKnow - A tiny wiki: iKnow is a (really) tiny wiki. 
Up and running in 5 minutes, easy to customize and of course open source and free to use.internationalOffice: This will contain all the code and bugs for a student project.Iroo Package Manager: Orchard package management UI (auto-update)Iroo Version Manager: Version management for Content ItemsJavApi: JavApi provides a collection of .NET classes in the form of the Java API. It thus allows you to use an identical API to develop for both platforms.jsfcore @ Personal Repository: This project contains s wmultiple sampleith various snippets and projects from blog posts, user group talks, and conference sessions. M3 CMS: M3 Cms is a lightweight content management system built on the ASP.NET MVC 4.0 framework. Uses SQLCE database and nHibernate+ActiveRecord framework.Market-Basket Synthetic Data Generator: An open-source C# market-basket synthetic data generator, capable of creating transactions, sequences and taxonomies, based on the IBM Quest version. Written to address the maintainability and portability problems of the original, feedback, fixes and extensions are encouraged!MicroLinq : Libraries for .NET Micro Framework: MicroLinq is a project to bring a small subset of the power of Linq to the .NET Micro Framework. Over time (hours) the project has expanded to include other helpful libraries and proof of concept code others might find useful.MJPEG Decoder for WPF, WinForms, WP7 and XNA: Library to decode MJPEG streams for Silverlight, Windows Phone 7, XNA 4.0, WinForms, and WPF. Sample code showing usage is included with the distribution. For more information, see the full article at Coding4Fun.modSIC: Modulo's Open Distributed SCAP Infrastructure Collector, or modSIC, makes it easier for security analysts to scan an environment vulnerabilities/compliance based on OVAL-Definitions file. It's an open source service specializing in distributed assessment on a network.MoneyTracker: MoneyTracker is used to keep track of transactions, create your own categories and view reports.PixelsCMS: ASP.NET CMS, PixelsCMS, MVC 2rainTwitter: A twitter module/skin for the rainmeter Windows desktop customization platform. (IN DEVELOPMENT)Reactor.ServiceBus: Reactor Service Bus is a light weight .Net service bus built upon the Apache NMS abstraction library. It provides a slim and easy to use interface that supports all the underlying brokers NMS supports.RsMenu: RsMenu es un MenuStrip que permite tener los Informes de Reporting Services en un elegante menu en nuestra aplicacion winform.Top Protocols Expert for Network Monitor: A Network Monitor Expert which shows you the usage frequency of protocols in a trace. This expert plugs into the Network Monitor UI so you run it directly from the Expert menu. trollr.net - remotely configure & control your .Net applications: trollr.net provides a remote configuration and control network to the .Net apps in your enterprise. App configuration becomes centralised and live/runtime changes are pushed to each application - no restart required to pick up the change. Cache control commands can also be sent!Tweet 4 ME: Tweet 4 ME is a Java Micro Edition (MIDP 2.0 CLDC 1.0) based Twitter client built with a custom GUI framework. The application is part of a college project.UntitledGameProject: Untitled Game Project, Logic and Design for eventual port to Android, IOS and WP7web-framework: web??????????。C#??,framework2.0,???????。WebMatrix helpers from old school teachings: Remember when the web was fun? Well it's back with WebMatrix. 
We are providing just a few minor helper extensions with a sample app that should help Neo on his quest to be one with the Matrix.

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
    Summary (aka TL/DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 – available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build) In this blog, I will attempt to discuss the design rationale, implementation, configuration and steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release. Pace of Innovation It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion Queries per Minute, with 70x higher cross-shard JOIN performance, Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access lab release at today's MySQL Innovation Day event demonstrates the continued pace in Cluster development, and provides an opportunity for the community to evaluate and feedback on new features they want to see. What’s the Plan for MySQL Cluster 7.3? Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to usual safe harbor statement below*), including: - New NoSQL APIs; - Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud; - Performance and scalability enhancements; - Taking advantage of features in the latest MySQL 5.x Server GA. Design Rationale MySQL Cluster is designed as a “Not-Only-SQL” database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including: Concurrent NoSQL and SQL access to the database; Auto-sharding with simple scale-out across commodity hardware; Multi-master replication with failover and recovery both within and across data centers; Shared-nothing architecture with no single point of failure; Online scaling and schema changes; ACID compliance and support for complex queries, across shards. Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use-cases, including: - Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support. - In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables. 
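The design rationale above comes down to letting the data nodes enforce referential integrity that applications previously had to police themselves. As a rough illustration (the hostname, schema and credentials below are placeholders, and MySQL Connector/J is assumed on the classpath), this JDBC sketch creates a parent/child pair of NDB tables with a standard FOREIGN KEY clause and lets the cluster cascade a parent delete:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ClusterForeignKeyDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical SQL node, schema and credentials; adjust to your cluster.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://sqlnode1:3306/test", "app_user", "app_password");
                 Statement st = conn.createStatement()) {

                // Parent and child tables stored in the data nodes (ENGINE=NDB),
                // linked by a standard FOREIGN KEY constraint.
                st.executeUpdate("CREATE TABLE customer ("
                        + " id INT NOT NULL PRIMARY KEY,"
                        + " name VARCHAR(64)) ENGINE=NDB");
                st.executeUpdate("CREATE TABLE orders ("
                        + " id INT NOT NULL PRIMARY KEY,"
                        + " customer_id INT NOT NULL,"
                        + " FOREIGN KEY (customer_id) REFERENCES customer(id)"
                        + " ON DELETE CASCADE) ENGINE=NDB");

                st.executeUpdate("INSERT INTO customer VALUES (1, 'Alice')");
                st.executeUpdate("INSERT INTO orders VALUES (100, 1)");

                // This would violate the constraint and be rejected by the data nodes:
                // st.executeUpdate("INSERT INTO orders VALUES (101, 42)");

                // Deleting the parent cascades to the dependent child row.
                st.executeUpdate("DELETE FROM customer WHERE id = 1");
            }
        }
    }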
Implementation The Foreign Key functionality is implemented directly within MySQL Cluster’s data nodes, allowing any client API accessing the cluster to benefit from them – whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST.) The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation. An important difference to note with the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the Data Nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless using CASCADE action, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that when using InnoDB "NO ACTION" is identical to "RESTRICT". In the case of MySQL Cluster “NO ACTION” means “deferred check”, i.e. the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints. Configuration There is nothing special you have to do here – Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe: 1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ENGINE=NDB in the correct sequence to ensure constraints are enforced 2. Alternatively drop the Foreign Key constraints prior to the import process and then recreate when complete. Getting Started Read this blog for a demonstration of using Foreign Keys with MySQL Cluster.  You can download MySQL Cluster 7.3 Labs Release with Foreign Keys today - (select the mysql-cluster-7.3-labs-June-2012 build) If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a singe host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3) Post any questions to the MySQL Cluster forum where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu) And if you have any feedback, please post them to the Comments section of this blog. Summary MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and other features that you want to see in future releases. * Safe Harbor Statement This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. 
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
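Circling back to the Configuration notes above: when migrating existing InnoDB tables, the article suggests either ordering the ALTER TABLE ... ENGINE=NDB statements to respect the Foreign Key graph, or dropping the constraints before the move and recreating them afterwards. A hedged sketch of the second approach (table and constraint names are hypothetical; connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class InnoDbToNdbMigration {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://sqlnode1:3306/shop", "app_user", "app_password");
                 Statement st = conn.createStatement()) {

                // 1. Drop the constraint so the tables can be moved independently.
                st.executeUpdate("ALTER TABLE orders DROP FOREIGN KEY fk_orders_customer");

                // 2. Move both tables from InnoDB to the cluster's NDB engine.
                st.executeUpdate("ALTER TABLE customer ENGINE=NDB");
                st.executeUpdate("ALTER TABLE orders ENGINE=NDB");

                // 3. Recreate the constraint so the data nodes enforce it from now on.
                st.executeUpdate("ALTER TABLE orders ADD CONSTRAINT fk_orders_customer"
                        + " FOREIGN KEY (customer_id) REFERENCES customer(id)");
            }
        }
    }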

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.  The real complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple different development teams, multiple business teams, and a highly complex business model (and processes to go along with it).  While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for the complex applications.  Why?  It's all about time and money.  If you are unable to manage your complex applications in an efficient manner, the cost of managing it will increase dramatically as will the time to get things done (time to market).  On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are. This article is intended to discuss a number of key areas to think about when designing complex applications on ATG.  Some of this can get fairly technical, so it may help to get some background first.  You can get enough of the required background information from this post.  After reading that, come back here and follow along. Application Design Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry - especially the ones which operate in multiple countries.  To get started, let's assume that we are talking about an application like that.  One that has these properties: Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies The organization is fairly loosely-coupled - single brand, but different businesses across different countries There is some common functionality across all sites in all countries There is some common functionality across different sites within the same country Sites within a single country may have some unique functionality - relative to other sites in the same country Complex product catalog (mostly in terms of bundles, eligibility, and compatibility) At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work... Code / configuration - assemble into modules When it comes to defining your modules for a complex application, there are a number of goals: Divide functionality between the modules in a way that maps to your business Group common functionality 'further down in the stack of modules' Provide a good balance between shared resources and autonomy for countries / sites Now I'll describe a high level approach to how you could accomplish those goals...  Let's start from the bottom and work our way up.  At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff.  You want to make sure that you are leveraging all the modules that make sense in order to get the most value from ATG as possible - and less stuff you'll have to write yourself.  
On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module described as follows: Sits directly on top of ATG modules Used by all applications across all countries and sites - this is the foundation for everyone Contains everything that is common across all countries / all sites Once established and settled, will change less frequently than other 'higher' modules Encapsulates as many enterprise-wide integrations as possible Will provide means of code sharing therefore less development / testing - faster time to market Contains a 'reference' web application (described below) The next layer up could be multiple modules for each country (you could replace this with region if that makes more sense).  We'll define those modules as follows: Sits on top of the corporate foundation module Contains what is unique to all sites in a given country Responsible for managing any resource bundles for this country (to handle multiple languages) Overrides / replaces corporate integration points with any country-specific ones Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site as follows: Sits on top of the country it resides in module Contains what is unique for a given site within a given country Will mostly contain configuration, but could also define some unique functionality as well Contains one or more web applications The graphic below should help to indicate how these modules fit together: Web applications As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules.  Web applications are also contained within ATG modules, however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging.  One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site.  Here's a description of the 'reference' web application: Contains minimal / sample reference styling as this will mostly be addressed at the site level web app Focus on functionality - ensure that core functionality is revealed via this web application Each individual site can use this as a starting point There may be multiple types of web apps (i.e. B2C, B2B, etc) There are some techniques to share web application assets - i.e. multiple web applications, defined in the web.xml, and it's worth investigating, but is out of scope here. Reference infrastructure In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites.  It's more likely that different countries (or regions) could have their own solution for infrastructure.  In this case, it will be advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment.  Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as it's baseline cost and the incremental cost to scale up with volume.  Having some consistency in terms of infrastructure will save time and money as new countries / sites come online.  Here are some properties of the reference infrastructure: Standardized approach to setup of hardware Type and number of servers Defines application server, operating system, database, etc... 
- including vendor and specific versions Consistent naming conventions Provides a consistent base of terminology and understanding across environments Defines which ATG services run on which servers Production Staging BCC / Preview Each site can change as required to meet scale requirements Governance / organization It should be no surprise that the complex application we're talking about is backed by an equally complex organization.  One of the more challenging aspects of efficiently managing a series of complex applications is to ensure the proper level of governance and organization.  Here are some ideas and goals to work towards: Establish a committee to make enterprise-wide decisions that affect all sites Representation should be evenly distributed Should have a clear communication procedure Focus on high level business goals Evaluation of feature / function gaps and how that relates to ATG release schedule / roadmap Determine when to upgrade & ensure value will be realized Determine how to manage various levels of modules Who is responsible for maintaining corporate / country / site layers Determine a procedure for controlling what goes in the corporate foundation module Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc only use tested / proven versions - this is something that should be centralized so that every country / site does not have to worry about compatibility between versions Create a innovation team Quickly develop new features, perform proof of concepts All teams can benefit from their findings Summary At this point, it should be clear why the topics above (design, governance, organization, etc) are critical to being able to efficiently manage a complex application.  To summarize, it's all about competitive advantage...  You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers.  You can reduce cost by reducing development time, time allocated to testing (don't have to test the corporate foundation module over and over again - do it once), and optimizing operations.  With an efficient design, you can improve your time to market and your business will be more flexible  and agile.  Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity) and this will be rewarded - you're now a leader. In addition to the above, you'll realize soft benefits as well.  Your staff will be operating in a culture based on sharing.  You'll want to reward efforts to improve and enhance the foundation as this will benefit everyone.  This culture will inspire innovation, which can only lend itself to your competitive advantage.
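The corporate / country / site layering described above is mostly a configuration exercise, but the same idea shows up in code: put shared behaviour in the foundation and override only what a country or site genuinely needs. A purely illustrative Java sketch (class names and the tax rule are hypothetical; in a real ATG application these would typically be Nucleus components wired through layered .properties files rather than plain classes):

    // Corporate Foundation module: behaviour shared by every country and site.
    public class CorporateTaxCalculator {
        public double calculateTax(double amount) {
            return 0.0; // default: no tax rule defined at the corporate layer
        }
    }

    // Country module: overrides only what is unique to that country.
    class GermanyTaxCalculator extends CorporateTaxCalculator {
        @Override
        public double calculateTax(double amount) {
            return amount * 0.19; // hypothetical country-specific VAT rate
        }
    }

    // Site module: a thin layer, mostly configuration, overriding site-specific rules only.
    class GermanyB2BSiteTaxCalculator extends GermanyTaxCalculator {
        @Override
        public double calculateTax(double amount) {
            return 0.0; // hypothetical B2B site where tax is settled at invoicing
        }
    }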

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data) · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog) The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion · Crash Safe Slaves and Binlog · Optimized Row Based Replication · Replication Event Checksums · Time Delayed Replication These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows. Environment The test environment consisted of two Linux servers: · one running the replication master · one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: - Hardware: Oracle Sun Fire X4170 M2 Server - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz. - OS: 64-bit Oracle Enterprise Linux 6.1 - Memory: 48 GB Test Procedure Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table: CREATE TABLE `sbtest` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `k` int(10) unsigned NOT NULL DEFAULT '0', `c` char(120) NOT NULL DEFAULT '', `pad` char(60) NOT NULL DEFAULT '', PRIMARY KEY (`id`), KEY `k` (`k`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
Each sysbench client executed 10,000 "update key" statements: UPDATE sbtest set k=k+1 WHERE id = <random row> In total, this generated 100,000 update statements to later replicate during the test itself. Test Methodology: The number of slave workers to test with was configured using: SET GLOBAL slave_parallel_workers=<workers> Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS). Test Reset: The test-reset cycle was implemented as follows: · the slave was stopped · the slave data directory replaced with the previous backup · the slave restarted with the slave threads replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with identical set of queries but a different number of slave worker threads, enabling a fair comparison. The Test-Reset cycle was repeated 3 times for 0-24 number of workers and the QPS metric calculated and averaged for each worker count. MySQL Configuration The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT relay-log-info-repository=TABLE master-info-repository=TABLE As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads:    - current (i.e. single threaded) sequential mode    - 1 x IO thread and 1 x SQL thread    - SQL thread both reads and executes the events 1 worker thread:    - sequential mode    - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread    - coordinator reads the event and hands it to the worker who executes 2+ worker threads:    - parallel execution    - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads    - coordinator reads events and hands them to the workers who execute them Results Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 x schemas. This result is compared to the current replication implementation which is based on a single SQL thread only (i.e. zero worker threads). Figure 1: 5x Higher Performance with Multi-Threaded Slaves The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post. Figure 2: Detailed Results As the results above show, the configuration does not scale noticably from 5 to 9 worker threads. When configured with 10 worker threads however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution. · As expected, having more workers than schemas adds no visible benefit. 
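The measurement loop described under Test Methodology above reduces to a handful of statements against the slave. A rough JDBC sketch of that loop (hostname, credentials and the binlog coordinates are placeholders; MySQL Connector/J is assumed on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MtsBenchmarkStep {
        public static void main(String[] args) throws Exception {
            // Placeholder binlog coordinates: the position noted on the master
            // before the 100,000 update statements were generated.
            String binlogFile = "mysql-bin.000003";
            long binlogPos = 123456789L;
            int updates = 100_000;

            try (Connection slave = DriverManager.getConnection(
                    "jdbc:mysql://slavehost:3306/", "bench", "bench_password");
                 Statement st = slave.createStatement()) {

                st.execute("SET GLOBAL slave_parallel_workers = 10");
                st.execute("START SLAVE SQL_THREAD");

                long start = System.nanoTime();
                // Blocks until the SQL thread has applied events up to the given position.
                try (ResultSet rs = st.executeQuery(
                        "SELECT MASTER_POS_WAIT('" + binlogFile + "', " + binlogPos + ")")) {
                    rs.next();
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("Applied %d updates in %.1f s -> %.0f QPS%n",
                        updates, seconds, updates / seconds);
            }
        }
    }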
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE master-info-repository=TABLE For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE. Conclusion As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select Development Release tab). You can learn more about MySQL 5.6 from the documentation  Please don’t hesitate to comment on this or other replication blogs with feedback and questions. Appendix – Detailed Results

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because its much harder to be best in class across such a wide a range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.  What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.  Supply Chain Management Systems in a Virtual Enterprise Historically enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.  Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.  Case in point, almost everyone has ordered from Amazon.com at one time or another. 
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • Shrinking TCP Window Size to 0 on Cisco ASA

    - by Brent
    Having an issue with any large file transfer that crosses our Cisco ASA unit come to an eventual pause. Setup Test1: Server A, FileZilla Client <- 1GBPS - Cisco ASA <- 1 GBPS - Server B, FileZilla Server TCP Window size on large transfers will drop to 0 after around 30 seconds of a large file transfer. RDP session then becomes unresponsive for a minute or two and then is sporadic. After a minute or two, the FTP transfer resumes, but at 1-2 MB/s. When the FTP transfer is over, the responsiveness of the RDP session returns to normal. Test2: Server C in same network as Server B, FileZilla Client <- local network - Server B, FileZilla Server File will transfer at 30+ MB/s. Details ASA: 5520 running 8.3(1) with ASDM 6.3(1) Windows: Server 2003 R2 SP2 with latest patches Server: VMs running on HP C3000 blade chasis FileZilla: 3.3.5.1, latest stable build Transfer: 20 GB SQL .BAK file Protocol: Active FTP over tcp/20, tcp/21 Switches: Cisco Small Business 2048 Gigabit running latest 2.0.0.8 VMware: 4.1 HP: Flex-10 3.15, latest version Notes All servers are VMs. Thoughts Pretty sure the ASA is at fault since a transfer between VMs on the same network will not show a shrinking Window size. Our ASA is pretty vanilla. No major changes made to any of the settings. It has a bunch of NAT and ACLs. Wireshark Sample No. Time Source Destination Protocol Info 234905 73.916986 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131981791 Win=65535 Len=0 234906 73.917220 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234907 73.917224 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234908 73.917231 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131984551 Win=64155 Len=0 234909 73.917463 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234910 73.917467 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234911 73.917469 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234912 73.917476 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131988691 Win=60015 Len=0 234913 73.917706 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234914 73.917710 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234915 73.917715 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131991451 Win=57255 Len=0 234916 73.917949 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234917 73.917953 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234918 73.917958 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131994211 Win=54495 Len=0 234919 73.918193 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234920 73.918197 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234921 73.918202 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131996971 Win=51735 Len=0 234922 73.918435 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234923 73.918440 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234924 73.918445 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=131999731 Win=48975 Len=0 234925 73.918679 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234926 73.918684 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234927 73.918689 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132002491 Win=46215 Len=0 234928 73.918922 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234929 73.918927 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234930 73.918932 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132005251 Win=43455 Len=0 234931 73.919165 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234932 73.919169 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234933 73.919174 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132008011 
Win=40695 Len=0 234934 73.919408 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234935 73.919413 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234936 73.919418 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132010771 Win=37935 Len=0 234937 73.919652 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234938 73.919656 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234939 73.919661 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132013531 Win=35175 Len=0 234940 73.919895 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234941 73.919899 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234942 73.919904 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132016291 Win=32415 Len=0 234943 73.920138 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234944 73.920142 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234945 73.920147 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132019051 Win=29655 Len=0 234946 73.920381 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234947 73.920386 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234948 73.920391 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132021811 Win=26895 Len=0 234949 73.920625 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234950 73.920629 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234951 73.920632 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234952 73.920638 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132025951 Win=22755 Len=0 234953 73.920868 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234954 73.920871 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234955 73.920876 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132028711 Win=19995 Len=0 234956 73.921111 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234957 73.921115 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234958 73.921120 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132031471 Win=17235 Len=0 234959 73.921356 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234960 73.921362 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234961 73.921370 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132034231 Win=14475 Len=0 234962 73.921598 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234963 73.921606 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234964 73.921613 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132036991 Win=11715 Len=0 234965 73.921841 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234966 73.921848 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234967 73.921855 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132039751 Win=8955 Len=0 234968 73.922085 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234969 73.922092 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234970 73.922099 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132042511 Win=6195 Len=0 234971 73.922328 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234972 73.922335 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234973 73.922342 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132045271 Win=3435 Len=0 234974 73.922571 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234975 73.922579 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 1380 bytes 234976 73.922586 1.1.1.1 2.2.2.2 TCP ftp-data ivecon-port [ACK] Seq=1 Ack=132048031 Win=675 Len=0 234981 75.866453 2.2.2.2 1.1.1.1 FTP-DATA FTP Data: 675 bytes 234985 76.020168 1.1.1.1 2.2.2.2 TCP [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0 234989 76.771633 2.2.2.2 1.1.1.1 TCP [TCP ZeroWindowProbe] ivecon-port ftp-data [ACK] Seq=132048706 Ack=1 Win=65535 Len=1 234990 76.771648 
1.1.1.1 2.2.2.2 TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0 234997 78.279701 2.2.2.2 1.1.1.1 TCP [TCP ZeroWindowProbe] ivecon-port ftp-data [ACK] Seq=132048706 Ack=1 Win=65535 Len=1 234998 78.279714 1.1.1.1 2.2.2.2 TCP [TCP ZeroWindowProbeAck] [TCP ZeroWindow] ftp-data ivecon-port [ACK] Seq=1 Ack=132048706 Win=0 Len=0
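One way to narrow down where the advertised window collapses in a trace like the one above is to lean on Wireshark's TCP analysis flags from the command line. A sketch that shells out to tshark (it assumes tshark is on the PATH and the capture was saved as transfer.pcap; older tshark builds take -R rather than -Y for the display filter):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class ZeroWindowScan {
        public static void main(String[] args) throws Exception {
            // The display filter keeps only the frames Wireshark marks as
            // [TCP ZeroWindow] or [TCP ZeroWindowProbe], as seen in the trace above.
            ProcessBuilder pb = new ProcessBuilder(
                    "tshark", "-r", "transfer.pcap",
                    "-Y", "tcp.analysis.zero_window || tcp.analysis.zero_window_probe");
            pb.redirectErrorStream(true);
            Process p = pb.start();
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);
                }
            }
            p.waitFor();
        }
    }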

    Read the article

  • WCF: SecurityNegotiationException when using client

    - by bradhe
    So I've been trying to set up certificate authentication for my clients and services. The eventual goal is to partition data based on the certificate a client connects with (i.e. the certificate becomes their credentials in to the greater system and their data is partitioned based on these credentials). I have been able to set it up successfully on both the client and the server side. I have created a certificate and a private key, installed them on my computer, and set up my server such that 1) it has a certificate-based service credential and 2) if a client connects without providing a certificate-based credential an exception is thrown. What I then did was create a simple client and add a certificate credential to the configuration and try to call a simple operation on the service. It looks like the client connects OK, and it looks like the certificate is accepted by the server, but I do get this: SecurityNegotiationException: "The caller was not authenticated by the service." That seems rather ambiguous to me. Note that I am using wsHttpBinding, which supposedly defaults to Windows auth for transport security...but all of these processes are being run as my user account as I'm running in my debug environment. Here is my server configuration: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="MyServiceBinding"> <security mode="Message"> <transport clientCredentialType="None"/> <message clientCredentialType="Certificate"/> </security> </binding> </wsHttpBinding> </bindings> <services> <service behaviorConfiguration="MyServiceBehavior" name="MyService"> <endpoint binding="wsHttpBinding" bindingConfiguration="MyServiceBinding" contract="IMyContract"/> <endpoint binding="mexHttpBinding" address="mex" contract="IMetadataExchange"> <identity> <dns value="localhost"/> </identity> </endpoint> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MyServiceBehavior"> <serviceMetadata httpGetEnabled="true" policyVersion="Policy15" /> <serviceDebug includeExceptionDetailInFaults="false" /> <serviceCredentials> <serviceCertificate storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" findValue="tmp123"/> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> Here is my client config -- note that I'm using the same cert for the client that I use on the service: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="WSHttpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false" transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true" allowCookies="false"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/> <security mode="Message"> <!--<transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/>--> <message clientCredentialType="Certificate" negotiateServiceCredential="true" algorithmSuite="Default"/> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint address="http://localhost:50120/UserServices.svc" binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IMyService" behaviorConfiguration="IMyService_Behavior" 
contract="UserServices.IUserServices" name="WSHttpBinding_IMyService"> <identity> <certificate encodedValue="Some RSA stuff"/> </identity> </endpoint> </client> <behaviors> <endpointBehaviors> <behavior name="IMyService_Behavior"> <clientCredentials> <clientCertificate storeLocation="CurrentUser" storeName="My" x509FindType="FindBySubjectName" findValue="tmp123"/> </clientCredentials> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> Can anyone please help provide some insight as to what might be up here? Thanks,

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when setup projects need to be built?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
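For the branching half of the question (setting the QSetup dependency problem aside), the "maintenance branch from a release tag" workflow is straightforward to script, since branches and tags in Subversion are just cheap server-side copies. A minimal sketch driving the svn command line from Java; the repository URLs, working-copy path and revision number are hypothetical:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class CreateMaintenanceBranch {
        private static void svn(String... args) throws Exception {
            List<String> cmd = new ArrayList<>();
            cmd.add("svn");
            cmd.addAll(Arrays.asList(args));
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new RuntimeException("svn " + args[0] + " failed");
            }
        }

        public static void main(String[] args) throws Exception {
            String repo = "https://svn.example.com/repo"; // hypothetical repository

            // 1. Branch the 2.0 release tag into a maintenance branch (server-side copy).
            svn("copy", repo + "/tags/release-2.0", repo + "/branches/maint-2.0",
                    "-m", "Create 2.0 maintenance branch");

            // 2. Later, cherry-pick a specific trunk fix into a working copy of the
            //    maintenance branch (revision number is illustrative), then commit it.
            // svn("merge", "-c", "1234", repo + "/trunk", "C:\\work\\maint-2.0");

            // 3. Tag a maintenance release from the branch once enough fixes accumulate.
            svn("copy", repo + "/branches/maint-2.0", repo + "/tags/release-2.0.1",
                    "-m", "Tag 2.0.1 maintenance release");
        }
    }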

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25  | Next Page >