Search Results

Search found 1011 results on 41 pages for 'dirty henry'.

  • JpegBitmapEncoder.Save() throws exception when writing image with metadata to MemoryStream

    - by Dmitry
    I am trying to set metadata on a JPG image that does not have any. You can't use the in-place writer (InPlaceBitmapMetadataWriter) in this case, because there is no room for metadata in the image. If I use a FileStream as output, everything works fine. But if I try to use a MemoryStream as output, JpegBitmapEncoder.Save() throws an exception (Exception from HRESULT: 0xC0000005). After some investigation I also found that the encoder can save the image to a memory stream if I supply null instead of the metadata. I've made a very short, simplified example that reproduces the problem:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Windows.Media.Imaging;

        namespace JpegSaveTest
        {
            class Program
            {
                public static JpegBitmapEncoder SetUpMetadataOnStream(Stream src, string title)
                {
                    uint padding = 2048;
                    BitmapDecoder original;
                    BitmapFrame framecopy, newframe;
                    BitmapMetadata metadata;
                    JpegBitmapEncoder output = new JpegBitmapEncoder();

                    src.Seek(0, SeekOrigin.Begin);
                    original = JpegBitmapDecoder.Create(src, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.OnLoad);
                    if (original.Frames[0] != null)
                    {
                        framecopy = (BitmapFrame)original.Frames[0].Clone();
                        if (original.Frames[0].Metadata != null)
                            metadata = original.Frames[0].Metadata.Clone() as BitmapMetadata;
                        else
                            metadata = new BitmapMetadata("jpeg");
                        metadata.SetQuery("/app1/ifd/PaddingSchema:Padding", padding);
                        metadata.SetQuery("/app1/ifd/exif/PaddingSchema:Padding", padding);
                        metadata.SetQuery("/xmp/PaddingSchema:Padding", padding);
                        metadata.SetQuery("System.Title", title);
                        newframe = BitmapFrame.Create(framecopy, framecopy.Thumbnail, metadata, original.Frames[0].ColorContexts);
                        output.Frames.Add(newframe);
                    }
                    else
                    {
                        throw new Exception("Image contains no frames.");
                    }
                    return output;
                }

                public static MemoryStream SetTagsInMemory(string sfname, string title)
                {
                    Stream src, dst;
                    JpegBitmapEncoder output;

                    src = File.Open(sfname, FileMode.Open, FileAccess.Read, FileShare.Read);
                    output = SetUpMetadataOnStream(src, title);
                    dst = new MemoryStream();
                    output.Save(dst); // throws here when metadata is present
                    src.Close();
                    return (MemoryStream)dst;
                }

                static void Main(string[] args)
                {
                    string filename = "Z:\\dotnet\\gnom4.jpg";
                    MemoryStream s;
                    s = SetTagsInMemory(filename, "test title");
                }
            }
        }

    It is a simple console application. To run it, replace the filename variable with a path to any .jpg file without metadata (or use mine). Of course I could just save the image to a temporary file first, close it, then open it and copy it to a MemoryStream, but that's too dirty and slow a workaround. Any ideas about getting this working are welcome :)
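
    For what it's worth, here is a minimal sketch of the temporary-file fallback mentioned above, as a stopgap while the MemoryStream path is broken. It reuses the SetUpMetadataOnStream method from the question and only assumes that saving through a FileStream works, as the poster reports:

        // Hedged workaround sketch: route the encoder through a temp file,
        // then load the finished bytes into a MemoryStream.
        public static MemoryStream SetTagsViaTempFile(string sfname, string title)
        {
            string tmp = Path.GetTempFileName();
            try
            {
                using (Stream src = File.Open(sfname, FileMode.Open, FileAccess.Read, FileShare.Read))
                using (Stream dst = File.Open(tmp, FileMode.Create, FileAccess.ReadWrite))
                {
                    JpegBitmapEncoder output = SetUpMetadataOnStream(src, title);
                    output.Save(dst); // FileStream output is the case that works
                }
                return new MemoryStream(File.ReadAllBytes(tmp));
            }
            finally
            {
                File.Delete(tmp); // don't leave temp files behind
            }
        }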

  • Crypted_password is null when using Authlogic to save a user

    - by kareem
    I'm getting a strange error on my production install when I try to create a new user using Authlogic:

        ActiveRecord::StatementInvalid: Mysql::Error: Column 'crypted_password' cannot be null: INSERT INTO users

    It's especially strange because it works as expected on my local box. Running Rails 2.3.2 and Ruby 1.8.7 on both boxes. user.rb:

        class User < ActiveRecord::Base
          before_create :set_username

          acts_as_authentic do |c|
            c.require_password_confirmation = false
            c.login_field = "email"
            c.validates_length_of_password_field_options = {:minimum => 4}
            c.validate_login_field = false # don't validate email field with additional validations
          end
        end

    Here's output from my production console:

        >> u = User.new
        => #<User id: nil, username: nil, email: nil, crypted_password: nil, password_salt: nil, persistence_token: nil, single_access_token: nil, perishable_token: nil, login_count: 0, failed_login_count: 0, last_request_at: nil, current_login_at: nil, last_login_at: nil, current_login_ip: nil, last_login_ip: nil, created_at: nil, updated_at: nil, is_admin: 0, first_name: nil, last_name: nil>
        >> u.full_name = 'john smith'
        => "john smith"
        >> u.password = 'test'
        => "test"
        >> u.email = '[email protected]'
        => "[email protected]"
        >> u.valid?
        => true
        >> u.save
        ActiveRecord::StatementInvalid: Mysql::Error: Column 'crypted_password' cannot be null: INSERT INTO `users` (`single_access_token`, `last_request_at`, `created_at`, `crypted_password`, `perishable_token`, `updated_at`, `username`, `failed_login_count`, `current_login_ip`, `password_salt`, `current_login_at`, `is_admin`, `persistence_token`, `login_count`, `last_name`, `last_login_ip`, `last_login_at`, `email`, `first_name`) VALUES('B-XSXwhO7hkbtISIOyEq', NULL, '2009-07-31 01:10:44', NULL, 'FK3mYS2Tp5Tzeq5IXE1z', '2009-07-31 01:10:44', 'john', 0, NULL, NULL, NULL, 0, '2c76b645f761eb3509353290e93874cecdb68a63caa165812ab1b126d63660757090ecf69995caef9e78f93d070b524e2542b3fec4ee050726088c2a9fdb0c9f', 0, 'smith', NULL, NULL, '[email protected]', 'john')
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:320:in `execute'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:259:in `insert_sql'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:330:in `insert_sql'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:44:in `insert_without_query_dirty'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:18:in `insert'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2902:in `create_without_timestamps'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/timestamp.rb:29:in `create_without_callbacks'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/callbacks.rb:266:in `create'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2868:in `create_or_update_without_callbacks'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/callbacks.rb:250:in `create_or_update'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2539:in `save_without_validation'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/validations.rb:1009:in `save_without_dirty'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/dirty.rb:79:in `save_without_transactions'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:229:in `send'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:229:in `with_transaction_returning_status'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:182:in `transaction'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:228:in `with_transaction_returning_status'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:196:in `save'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/transactions.rb:196:in `save'

    No idea why this is happening, and especially why this saves a new user on dev but not on production. Any help is much appreciated, thanks!

    Edit: using Apache & Passenger 2.2.4

  • jQuery sequence and function call problem

    - by Jonas
    Hi everyone, I'm very new to jQuery and programming in general, but I'm still trying to achieve something here. I use FullCalendar to allow the users of my web application to insert an event in the database. They click on a day, the view changes to agendaDay, then they click on a time of the day, and a dialog popup opens with a form. I am trying to combine validate (pre-jquery.1.4) and jquery.form to post the form without a page refresh. The script calendar.php, included in several pages, defines the FullCalendar object and displays it in a div:

        $(document).ready(function() {
            function EventLoad() {
                $("#addEvent").validate({
                    rules: {
                        calendar_title: "required",
                        calendar_url: { required: false, maxlength: 100, url: true }
                    },
                    messages: {
                        calendar_title: "Title required",
                        calendar_url: "Invalid URL format"
                    },
                    success: function() {
                        $('#addEvent').submit(function() {
                            var options = {
                                success: function() {
                                    $('#eventDialog').dialog('close');
                                    $('#calendar').fullCalendar( 'refetchEvents' );
                                }
                            };
                            // submit the form
                            $(this).ajaxSubmit(options);
                            // return false to prevent normal browser submit and page navigation
                            return false;
                        });
                    }
                });
            }

            $('#calendar').fullCalendar({
                header: {
                    left: 'prev,next today',
                    center: 'title',
                    right: 'month,agendaWeek,agendaDay'
                },
                theme: true,
                firstDay: 1,
                editable: false,
                events: "json-events.php?list=1&<?php echo $events_list; ?>",
                <?php if($_GET['page'] == 'home') echo "defaultView: 'agendaWeek',"; ?>
                eventClick: function(event) {
                    if (event.url) {
                        window.open(event.url);
                        return false;
                    }
                },
                dayClick: function(date, allDay, jsEvent, view) {
                    if (view.name == 'month') {
                        $('#calendar').fullCalendar( 'changeView', 'agendaDay').fullCalendar( 'gotoDate', date );
                    } else {
                        if (allDay) {
                            var timeStamp = $.fullCalendar.formatDate( date, 'dddd+dd+MMMM_u');
                            var $eventDialog = $('<div/>').load("json-events.php?<?php echo $events_list; ?>&new=1&all_day=1&timestamp=" + timeStamp, null, EventLoad).dialog({autoOpen:false, draggable:false, width:675, modal:true, position:['center',202], resizable:false, title:'Add an Event'});
                            $eventDialog.dialog('open').attr('id','eventDialog');
                        } else {
                            var timeStamp = $.fullCalendar.formatDate( date, 'dddd+dd+MMMM_u');
                            var $eventDialog = $('<div/>').load("json-events.php?<?php echo $events_list; ?>&new=1&all_day=0&timestamp=" + timeStamp, null, EventLoad).dialog({autoOpen:false, draggable:false, width:675, modal:true, position:['center',202], resizable:false, title:'Add an Event'});
                            $eventDialog.dialog('open').attr('id','eventDialog');
                        }
                    }
                }
            });
        });

    The script json-events.php contains the form and also the code that processes the data from the submitted form. What happens when I test the whole thing:

    - On the first user click on a day, then a time of day, the popup opens with the time and date indicated on the form. When the user submits the form, the dialog closes and the calendar refreshes its events... and the event added by the user appears several times (from 4 up to 11 times!). The form has been processed several times by the receiving PHP script?!
    - On the second user click, the popup opens; if the user submits the empty form, the form is submitted anyway (the validate function is not triggered) and the user is redirected to the empty page json-events.php (ajaxForm is not triggered either).

    Obviously, my code is wrong (and dirty as well, sorry). Why is the submitted form posted several times to the receiving script, and why is the JavaScript function EventLoad triggered only once? Thank you very much for your help. This problem is killing me!

  • Get User SID From Logon ID (Windows XP and Up)

    - by Dave Ruske
    I have a Windows service that needs to access registry hives under HKEY_USERS when users log on, either locally or via Terminal Server. I'm using a WMI query on win32_logonsession to receive events when users log on, and one of the properties I get from that query is a LogonId. To figure out which registry hive I need to access, I now need the user's SID, which is used as a registry key name beneath HKEY_USERS. In most cases, I can get this by doing a RelatedObjectQuery like so (in C#):

        RelatedObjectQuery relatedQuery = new RelatedObjectQuery(
            "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" );

    where "logonID" is the logon session ID from the session query. Running the RelatedObjectQuery will generally give me a SID property that contains exactly what I need. There are two issues I have with this. First and most importantly, the RelatedObjectQuery will not return any results for a domain user that logs in with cached credentials, disconnected from the domain. Second, I'm not pleased with the performance of this RelatedObjectQuery; it can take up to several seconds to execute. Here's a quick and dirty command-line program I threw together to experiment with the queries. Rather than setting up to receive events, this just enumerates the users on the local machine:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Management;

        namespace EnumUsersTest
        {
            class Program
            {
                static void Main( string[] args )
                {
                    ManagementScope scope = new ManagementScope( "\\\\.\\root\\cimv2" );
                    string queryString = "select * from win32_logonsession"; // for all sessions
                    //string queryString = "select * from win32_logonsession where logontype = 2"; // for local interactive sessions only
                    ManagementObjectSearcher sessionQuery = new ManagementObjectSearcher( scope, new SelectQuery( queryString ) );
                    ManagementObjectCollection logonSessions = sessionQuery.Get();
                    foreach ( ManagementObject logonSession in logonSessions )
                    {
                        string logonID = logonSession["LogonId"].ToString();
                        Console.WriteLine( "=== {0}, type {1} ===", logonID, logonSession["LogonType"].ToString() );
                        RelatedObjectQuery relatedQuery = new RelatedObjectQuery(
                            "associators of {Win32_LogonSession.LogonId='" + logonID + "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent" );
                        ManagementObjectSearcher userQuery = new ManagementObjectSearcher( scope, relatedQuery );
                        ManagementObjectCollection users = userQuery.Get();
                        foreach ( ManagementObject user in users )
                        {
                            PrintProperties( user.Properties );
                        }
                    }
                    Console.WriteLine( "\nDone! Press a key to exit..." );
                    Console.ReadKey( true );
                }

                private static void PrintProperty( PropertyData pd )
                {
                    string value = "null";
                    string valueType = "n/a";
                    if ( pd.Value != null )
                    {
                        value = pd.Value.ToString();
                        valueType = pd.Value.GetType().ToString();
                    }
                    Console.WriteLine( "  \"{0}\" = ({1}) \"{2}\"", pd.Name, valueType, value );
                }

                private static void PrintProperties( PropertyDataCollection properties )
                {
                    foreach ( PropertyData pd in properties )
                    {
                        PrintProperty( pd );
                    }
                }
            }
        }

    So... is there a way to quickly and reliably obtain the user SID given the information I retrieve from WMI, or should I be looking at using something like SENS instead?
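
    As a possible alternative for the name-to-SID step (a hedged sketch; whether it behaves with cached domain credentials while disconnected is exactly what would need verifying), .NET can translate an account name to a SID without WMI via NTAccount, which wraps the Win32 LookupAccountName path and so can resolve from the local LSA cache:

        using System;
        using System.Security.Principal;

        class SidLookupSketch
        {
            static void Main()
            {
                // Hypothetical account name; in the service this would come from
                // the Win32_LoggedOnUser antecedent (Domain and Name properties).
                NTAccount account = new NTAccount("SOMEDOMAIN", "someuser");
                SecurityIdentifier sid =
                    (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
                Console.WriteLine(sid.Value); // e.g. S-1-5-21-...
            }
        }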

  • DataGridView CheckBox events

    - by Kevin
    I'm making a DataGridView with a series of checkboxes in it, with the same labels horizontally and vertically. Where the row and column labels are the same (the diagonal), the checkboxes are inactive, and I only want one of the two "checks" for each combination to be valid. The following screenshot shows what I have: anything that's checked on the lower half, I want UN-checked on the upper. So if [quux, spam] (or [7, 8] in zero-based coordinates) is checked, I want [spam, quux] ([8, 7]) unchecked. What I have so far is the following:

        dgvSysGrid.RowHeadersWidthSizeMode = DataGridViewRowHeadersWidthSizeMode.AutoSizeToAllHeaders;
        dgvSysGrid.AutoSizeColumnsMode = DataGridViewAutoSizeColumnsMode.AllCells;

        string[] allsysNames = { "heya", "there", "lots", "of", "names", "foo", "bar", "quux", "spam", "eggs", "bacon" };

        // Add a column for each entry, and a row for each entry, and mark the "diagonals" as read-only
        for (int i = 0; i < allsysNames.Length; i++)
        {
            dgvSysGrid.Columns.Add(new DataGridViewCheckBoxColumn(false));
            dgvSysGrid.Columns[i].HeaderText = allsysNames[i];
            dgvSysGrid.Rows.Add();
            dgvSysGrid.Rows[i].HeaderCell.Value = allsysNames[i];

            // Mark all of the "diagonals" as unable to change
            DataGridViewCell curDiagonal = dgvSysGrid[i, i];
            curDiagonal.ReadOnly = true;
            curDiagonal.Style.BackColor = Color.Black;
            curDiagonal.Style.ForeColor = Color.Black;
        }

        // Hook up the event handler so that we can change the "corresponding" checkboxes as needed
        //dgvSysGrid.CurrentCellDirtyStateChanged += new EventHandler(dgvSysGrid_CurrentCellDirtyStateChanged);
        dgvSysGrid.CellValueChanged += new DataGridViewCellEventHandler(dgvSysGrid_CellValueChanged);

        void dgvSysGrid_CellValueChanged(object sender, DataGridViewCellEventArgs e)
        {
            Point cur = new Point(e.ColumnIndex, e.RowIndex);

            // Change the mirrored checkbox to the opposite state
            DataGridViewCheckBoxCell curCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.X, cur.Y];
            DataGridViewCheckBoxCell diagCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.Y, cur.X];
            if ((bool)(curCell.Value) == true)
            {
                diagCell.Value = false;
            }
            else
            {
                diagCell.Value = true;
            }
        }

        /// <summary>
        /// Change the corresponding checkbox to the opposite state of the current one
        /// </summary>
        void dgvSysGrid_CurrentCellDirtyStateChanged(object sender, EventArgs e)
        {
            Point cur = dgvSysGrid.CurrentCellAddress;

            // Change the mirrored checkbox to the opposite state
            DataGridViewCheckBoxCell curCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.X, cur.Y];
            DataGridViewCheckBoxCell diagCell = (DataGridViewCheckBoxCell)dgvSysGrid[cur.Y, cur.X];
            if ((bool)(curCell.Value) == true)
            {
                diagCell.Value = false;
            }
            else
            {
                diagCell.Value = true;
            }
        }

    The problem is that the changed cell always seems to be "one behind" where you actually click when I use the CellValueChanged event, and I'm not sure how to get the current cell while in the "dirty" state, as curCell comes in as null there (suggesting the current cell address is wrong somehow, but I didn't try to get that value out), meaning that path isn't working at all. Basically, how do I get the "right" address with the right boolean value so that my flipping algorithm will work?
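
    For what it's worth, the "one behind" symptom with checkbox columns usually comes from the fact that a checkbox edit is only committed when the cell loses focus. A common pattern (a sketch against this grid, not tested here) is to repurpose the CurrentCellDirtyStateChanged handler to commit the pending edit immediately, so that the existing CellValueChanged handler fires right away with the up-to-date value:

        // Re-enable the commented-out hookup, but use it only to commit the edit:
        dgvSysGrid.CurrentCellDirtyStateChanged += new EventHandler(dgvSysGrid_CurrentCellDirtyStateChanged);

        void dgvSysGrid_CurrentCellDirtyStateChanged(object sender, EventArgs e)
        {
            // Push the in-progress checkbox click into the cell's value now,
            // instead of waiting for the cell to lose focus; this raises
            // CellValueChanged with the current (not previous) state.
            if (dgvSysGrid.IsCurrentCellDirty)
            {
                dgvSysGrid.CommitEdit(DataGridViewDataErrorContexts.Commit);
            }
        }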

  • Should we develop a custom membership provider in this case?

    - by Allen
    I'll be adding a bounty to this, probably 200, more if you guys think it's appropriate. I won't accept an answer until I can add a bounty, so feel free to go ahead and answer now.

    Summary: Long story short, we've been tasked with gutting the authentication and authorization parts of a fairly old and bloated ASP.NET application that previously had all of these components written from scratch. Since our application isn't a typical one, and none of us have experience with ASP.NET's built-in membership provider stuff, we're not sure if we should roll our own authentication and authorization again, or if we should try to work within the ASP.NET membership provider mindset and develop our own membership provider.

    Our Application: We have a fairly old ASP.NET application that gets installed at customer locations to service clients on a LAN. Admins create users (users do not sign up), and depending on the install, we may have the software integrated with LDAP. Currently, the LDAP integration bulk-imports the users to our database, and when they log in, it authenticates against LDAP so we don't have to manage their passwords. Nothing amazing there. Admins can assign users to one group, and they can change the authorization of that group to manage access to various parts of the software. Groups are maintained by admins (web-based UI) and, as said earlier, granted or denied permissions to certain functionality within the application. All this was completely written from the ground up without using any of the built-in .NET authorization or authentication. We literally have IsLoggedIn() methods that check for login and redirect to our login page if the user isn't.

    Our Rewrite: We've been tasked to integrate more tightly with LDAP. They want us to tie groups in our application to groups (or whatever type of container LDAP uses) in LDAP, so that when a customer opts to use our LDAP integration, they don't have to manage their users in LDAP AND in our application. The new way, they will simply create users in LDAP, add them to groups in LDAP, and our application will see that they belong to the appropriate LDAP group and authenticate and authorize them. In addition, we've been given the go-ahead to completely rip out the user authentication and authorization code and re-do it.

    Our Problem: The problem is that none of us have any experience with ASP.NET membership provider functionality. The little bit of exposure I have to it makes me worry that it was not intended to be used for an application such as ours. That said, developing our own ASP.NET membership provider and role manager sounds like it would be a great experience, and most likely the appropriate thing to do. Basically, I'm looking for advice: should we be using the ASP.NET membership provider and role management API, or should we continue to roll our own? I know this decision will be influenced by our requirements, so I'm going over them below.

    Our Requirements (just a quick and dirty list):

    - Maintain the ability to have a DB of users, authenticate them, and give admins (only, not users) the ability to CRUD users.
    - Allow the site to integrate with LDAP; when this is chosen, they don't want any users stored in the DB, only the relationship between groups as they exist in our app/DB and the groups/containers as they exist in LDAP.
    - .NET 3.5 is being used (mix of ASP.NET WebForms and ASP.NET MVC).
    - Has to work in ASP.NET and ASP.NET MVC (shouldn't be a problem, I'm guessing).
    - This can't be user-centric; administrators need to be the only ones that CRUD (or import via LDAP) users and groups.
    - We have to be able to authenticate via LDAP when it's configured to do so.

    I always try to monitor my questions closely, so feel free to ask for more info. Also, as a general summary of what I'm looking for in an answer: "You should/shouldn't use xyz, here's why." Links regarding the ASP.NET membership provider and role management stuff are very welcome; most of what I'm finding is 5+ years old.

    Edit: Added some stuff to "Our Rewrite".
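
    Since the core requirement above is admin-managed, group-based authorization, here is a rough sketch (illustrative only, with hypothetical names and placeholder lookups) of what plugging into the built-in API would involve on the roles side: a custom System.Web.Security.RoleProvider that answers role questions from your own DB, or from LDAP when that integration is enabled, and simply refuses everything the application never calls. A custom MembershipProvider for the authentication side follows the same subclass-and-override pattern:

        using System;
        using System.Web.Security;

        // Hypothetical skeleton: shows the shape of a provider backing the
        // app's admin-managed groups (or LDAP groups when integration is on).
        public class GroupRoleProvider : RoleProvider
        {
            public override string ApplicationName { get; set; }

            public override string[] GetRolesForUser(string username)
            {
                // Look up the user's single group in the app DB, or query the
                // matching LDAP container when LDAP integration is configured.
                return new[] { "ExampleGroup" }; // placeholder
            }

            public override bool IsUserInRole(string username, string roleName)
            {
                return Array.IndexOf(GetRolesForUser(username), roleName) >= 0;
            }

            public override string[] GetAllRoles()
            {
                return new[] { "ExampleGroup" }; // placeholder: read from DB/LDAP
            }

            public override bool RoleExists(string roleName)
            {
                return Array.IndexOf(GetAllRoles(), roleName) >= 0;
            }

            // Admin-driven CRUD could be implemented here; anything the app
            // doesn't need can simply refuse.
            public override void CreateRole(string roleName) { throw new NotSupportedException(); }
            public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
            public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
            public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
            public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
        }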

  • Using UUIDs for cheap equals() and hashCode()

    - by Tom McIntyre
    I have an immutable class, TokenList, which consists of a list of Token objects, which are also immutable:

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList<Token>(tokens));
            }

            public List<Token> getTokens() {
                return tokens;
            }
        }

    I do several operations on these TokenLists that take multiple TokenLists as inputs and return a single TokenList as the output. There can be arbitrarily many TokenLists going in, and each can have arbitrarily many Tokens. These operations are expensive, and there is a good chance that the same operation (i.e. the same inputs) will be performed multiple times, so I would like to cache the outputs. However, performance is critical, and I am worried about the expense of performing hashCode() and equals() on these objects that may contain arbitrarily many elements (as they are immutable, hashCode could be cached, but equals will still be expensive). This led me to wonder whether I could use a UUID to provide equals() and hashCode() simply and cheaply, by making the following updates to TokenList:

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;
            private final UUID uuid;

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList<Token>(tokens));
                this.uuid = UUID.randomUUID();
            }

            public List<Token> getTokens() {
                return tokens;
            }

            public UUID getUuid() {
                return uuid;
            }
        }

    And something like this to act as a cache key:

        @Immutable
        public final class TokenListCacheKey {
            private final UUID[] uuids;

            public TokenListCacheKey(TokenList... tokenLists) {
                uuids = new UUID[tokenLists.length];
                for (int i = 0; i < uuids.length; i++) {
                    uuids[i] = tokenLists[i].getUuid();
                }
            }

            @Override
            public int hashCode() {
                return Arrays.hashCode(uuids);
            }

            @Override
            public boolean equals(Object other) {
                if (other == this) return true;
                if (other instanceof TokenListCacheKey)
                    return Arrays.equals(uuids, ((TokenListCacheKey) other).uuids);
                return false;
            }
        }

    I figure that there are 2^128 different UUIDs, and I will probably have at most around 1,000,000 TokenList objects active in the application at any time. Given this, and the fact that the UUIDs are used combinatorially in cache keys, it seems that the chances of this producing the wrong result are vanishingly small. Nevertheless, I feel uneasy about going ahead with it, as it just feels 'dirty'. Are there any reasons I should not use this system? Will the performance cost of the SecureRandom used by UUID.randomUUID() outweigh the gains (especially since I expect multiple threads to be doing this at the same time)? Are collisions going to be more likely than I think? Basically, is there anything wrong with doing it this way? Thanks.

  • Animated background image in a hidden <div> doesn't load or loads without animation

    - by Guanche
    Hello, I have spent the whole day trying to make a script which, on "submit", hides the form and shows a hidden div with an animated progress bar. The problem is that Internet Explorer doesn't show animated GIF images in hidden divs; the images are static. I visited many websites and found a script which uses:

        document.getElementById(id).style.backgroundImage = 'url(/images/load.gif)';

    Finally, my script works in Internet Explorer, Firefox and Opera but... Google Chrome doesn't display the image at all; I see only the div text. After many tests I discovered the following: the only way to see the background image in Google Chrome is to include the same image somewhere in the page (outside of the hidden div) with 1px dimensions:

        <img src="/images/load.gif" width="1" height="1" />

    This did the trick but... after this dirty solution, Internet Explorer for some reason shows the image as static again. So, my question is: is there any way to force Google Chrome to show the image? Thanks. This is my script:

        <script language="JavaScript" type="text/javascript">
        function ver (id, elementId){
            if (document.getElementById('espera').style.visibility == "visible") {
                return false;
            }else{
                var esplit = document.forms[0]['userfile'].value.split(".");
                ext = esplit[esplit.length-1];
                if (document.forms[0]['userfile'].value == '') {
                    alert('Please select a file');
                    return false;
                }else{
                    if ((ext.toLowerCase() == 'jpg')) {
                        document.getElementById(id).style.position = 'absolute';
                        document.getElementById(id).style.display = 'inline';
                        document.getElementById(id).style.visibility = "visible";
                        document.getElementById(id).style.backgroundImage = 'url(/images/load.gif)';
                        document.getElementById(id).style.height = "100px";
                        document.getElementById(id).style.backgroundColor = '#f3f3f3';
                        document.getElementById(id).style.backgroundRepeat = "no-repeat";
                        document.getElementById(id).style.backgroundPosition = "50% 50%";
                        var element;
                        if (document.all)
                            element = document.all[elementId];
                        else if (document.getElementById)
                            element = document.getElementById(elementId);
                        if (element && element.style)
                            element.style.display = 'none';
                        return true;
                    }else{
                        alert('This is not a jpg file');
                        return false;
                    }
                }
            }
        }
        </script>

        <div id="frmDiv">
        <form enctype="multipart/form-data" action="/upload.php" method="post" name="upload3" onsubmit="return ver('espera','frmDiv');">
        <input type="hidden" name="max_file_size" value="4194304" />
        <table border="0" cellspacing="1" cellpadding="2" width="100%">
        <tr bgcolor="#f5f5f5">
        <td>File (jpg)</td>
        <td><input type="file" name="userfile" class="upf" /></td></tr>
        <tr bgcolor="#f5f5f5">
        <td>&nbsp;</td>
        <td><input class="upf2" type="submit" name="add" value="Upload" /></td></tr></table></form>
        </div>

        <div id="espera" style="display:none;text-align:center;float:left;width:753px;">&nbsp;<br />&nbsp;<br />&nbsp;<br />&nbsp;<br />
        &nbsp;<br />Please wait...<br />&nbsp;
        </div>

  • How to show form in front in C#

    - by corlettk
    Folks, please does anyone know how to show a Form from an otherwise invisible application, and have it get the focus (i.e. appear on top of other windows)? I'm working in C# .NET 3.5.

    I suspect I've taken "completely the wrong approach"... I do not Application.Run(new TheForm()); instead I (new TheForm()).ShowDialog()... The Form is basically a modal dialogue with a few check-boxes, a text-box, and OK and Cancel buttons. The user ticks a checkbox and types in a description (or whatever), then presses OK; the form disappears and the process reads the user input from the Form, Disposes it, and continues processing. This works, except that when the form is shown it doesn't get the focus; instead it appears behind the "host" application until you click on it in the taskbar (or whatever). This is a most annoying behaviour, which I predict will cause many "support calls", and the existing VB6 version doesn't have this problem, so I'm going backwards in usability... and users won't accept that (and nor should they).

    So... I'm starting to think I need to rethink the whole shebang... I should show the form up front, as a "normal application", and attach the remainder of the processing to the OK-button-click event. It should work, but that will take time which I don't have (I'm already over time/budget)... so first I really need to try to make the current approach work... even by quick-and-dirty methods. So please, does anyone know how to "force" a .NET 3.5 Form (by fair means or foul) to get the focus? I'm thinking "magic" Windows API calls (I know...).

    Twilight Zone: This only appears to be an issue at work, where I'm using Visual Studio 2008 on Windows XP SP3... I've just failed to reproduce the problem with an SSCCE (see below) at home on Visual C# 2008 on Vista Ultimate... This works fine. Huh? WTF? Also, I'd swear that at work yesterday it showed the form when I ran the EXE, but not when F5'ed (or Ctrl-F5'ed) straight from the IDE (which I just put up with)... At home the form shows fine either way. Totally confusterpating!

    It may or may not be relevant, but Visual Studio crashed-and-burned this morning when the project was running in debug mode and I was editing the code "on the fly"... it got stuck in what I presumed was an endless loop of error messages. The error message was something about "can't debug this project because it is not the current project", or something... So I just killed it off with Process Explorer. It started up again fine, and even offered to recover the "lost" file, an offer which I accepted.

        using System;
        using System.Windows.Forms;

        namespace ShowFormOnTop {
            static class Program {
                [STAThread]
                static void Main() {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);
                    //Application.Run(new Form1());
                    Form1 frm = new Form1();
                    frm.ShowDialog();
                }
            }
        }

    Background: I'm porting an existing VB6 implementation to .NET... It's a "plugin" for a "client" GIS application called MapInfo. The existing client "worked invisibly", and my instructions are "to keep the new version as close as possible to the old version", which works well enough (after years of patching); it's just written in an unsupported language, so we need to port it.

    About me: I'm pretty much a noob to C# and .NET generally, though I've got a bottoms-wiping certificate; I have been a professional programmer for 10 years, so I sort of "know some stuff". Any insights would be most welcome... and thank you all for taking the time to read this far. Conciseness isn't (apparently) my forte. Cheers, Keith.
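
    For what it's worth, a hedged sketch of the usual "fair means or foul" tricks, applied to the SSCCE above: activate the form once it is first shown, and fall back on the Win32 SetForegroundWindow call. Windows deliberately restricts focus stealing, so whether this works can depend on the OS version and foreground-lock settings; Form1 is assumed to be the dialog from the post:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        namespace ShowFormOnTop {
            static class Program {
                [DllImport("user32.dll")]
                static extern bool SetForegroundWindow(IntPtr hWnd);

                [STAThread]
                static void Main() {
                    Application.EnableVisualStyles();
                    Application.SetCompatibleTextRenderingDefault(false);
                    Form1 frm = new Form1();
                    frm.TopMost = true; // start above other windows
                    frm.Shown += delegate {
                        frm.Activate();                   // ask WinForms for focus
                        SetForegroundWindow(frm.Handle);  // and ask Win32 as well
                        frm.TopMost = false;              // don't stay always-on-top
                    };
                    frm.ShowDialog();
                }
            }
        }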

  • Trac FastCGI + Python on dreamhost leaves zombie python running

    - by Katsuke
    Hello there, recently I installed Trac on one of my Dreamhost domains. I followed the instructions at http://trac.mlalonde.net/wiki/CreamyTrac and everything worked perfectly. At least I thought that was the case. After a few days, I started to notice that I was getting random 500 pages. I quickly checked the error log and found a bunch of:

        [Fri Apr 16 14:35:34 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi
        [Fri Apr 16 14:35:54 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/login
        [Fri Apr 16 16:05:58 2010] [error] [client *.*.*.*] Premature end of script headers: dispatch.fcgi, referer: http://www.trac.****.com/timeline

    The Trac instance has very low traffic, so it is possible that this might have been triggered faster if there was more use of it. So as a next step I went to look at top and was amazed by what I saw (this is NOT exactly what I saw, I just reproduced it from memory):

        23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4
        23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4
        23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4
        23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4
        23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4
        23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4

    I opened the Trac site again and that changed to this:

        23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4
        23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4
        23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4
        23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4
        23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4
        23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4
        28378 rl_inst 20 0  2608 1208 1008 S 0.0 0.0 0:00.00 dispatch.fcgi
        28382 rl_inst 20 0 44248 15m 4012 S 0.0 0.4 0:02.19 python2.4

    So dispatch.fcgi was being called correctly and was correctly spawning another python2.4 process. After the idle time passed, these were the results:

        23730 rl_inst 20 0 44516 15m 4012 S 0.0 0.4 0:00.17 python2.4
        23731 rl_inst 20 0 44616 15m 4012 S 0.0 0.4 0:03.17 python2.4
        23732 rl_inst 20 0 44116 15m 4012 S 0.0 0.4 0:01.17 python2.4
        23733 rl_inst 20 0 44826 15m 4012 S 0.0 0.4 0:04.17 python2.4
        23734 rl_inst 20 0 44216 15m 4012 S 0.0 0.4 0:02.17 python2.4
        23735 rl_inst 20 0 44416 15m 4012 S 0.0 0.4 0:01.17 python2.4
        28382 rl_inst 20 0 44248 15m 4012 S 0.0 0.4 0:02.19 python2.4

    dispatch.fcgi was gone, but the corresponding python2.4 was still there o_O. I had started with 5 sleeping Python processes, and after the idle time I ended up with 6. I repeated this and found that it kept spawning more and more python2.4 processes; this was indeed what was indirectly causing my 500s. Let me explain my findings. I am on shared hosting, so my processes get killed if there are way too many. So every time I opened Trac, 2 new processes spawned and one remained, to the point that Dreamhost was killing a random process when I reached my limit, sometimes killing the python2.4 that was actually rendering the current page. Hence the premature header ending: the Python process is gone, the .fcgi doesn't know what to do and throws a 500.

    I found a dirty solution to this: I changed my dispatch.fcgi to contain a line that kills any currently running python2.4 process and then spawns a new one. Since then I don't get any rogues whatsoever. But I don't think this is the best solution; calling killall in every fcgi just seems wrong.

    Has anyone run into this issue and found a cleaner solution? Is there anything that I have overlooked?

  • How can I get SQL Server transactions to use record-level locks?

    - by Joe White
    We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic. I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks on every record they modified during their transaction; since different users were editing different records, everyone got their own locks and everything worked. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to take a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back.

    I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are. How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table?

    Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to start the program and tell it to update ID 1; let it get its transaction and modify the record; start a second copy of the program and tell it to update ID 2; and have it able to update (and commit) while the first app's transaction is still open. Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed.

        public static void Main()
        {
            Console.Write("ID to update: ");
            var id = int.Parse(Console.ReadLine());
            Console.WriteLine("Starting transaction");
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
            {
                connection.Open();
                var command = connection.CreateCommand();
                command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = " + id;
                Console.WriteLine("Updating record");
                command.ExecuteNonQuery();
                Console.Write("Press ENTER to end transaction: ");
                Console.ReadLine();
                scope.Complete();
            }
        }

    Here are some things I've already tried, with no change in behavior:

    - Changing the transaction isolation level to "read uncommitted"
    - Specifying "WITH (ROWLOCK)" on the UPDATE statement
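
    A hedged sketch of one more avenue that may be worth testing against the sample schema before the invasive rewrite: give the UPDATE an index it can seek on (without an index on ID, the statement scans and locks every row it reads), and optionally opt the transaction into snapshot isolation so readers work on row versions instead of waiting on writers. Both setup statements below are assumptions to verify against the real schema, not a guaranteed fix:

        using System;
        using System.Data.SqlClient;
        using System.Transactions;

        public static class RowLockTest
        {
            public static void Main()
            {
                /* One-time setup in SQL (hypothetical; verify against the real schema):
                   CREATE UNIQUE INDEX IX_Values_ID ON [Values] (ID);
                   ALTER DATABASE test1 SET ALLOW_SNAPSHOT_ISOLATION ON;
                */
                var options = new TransactionOptions
                {
                    IsolationLevel = System.Transactions.IsolationLevel.Snapshot
                };
                using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
                using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
                {
                    connection.Open();
                    var command = connection.CreateCommand();
                    // Same UPDATE as the sample app, parameterized out of habit.
                    command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = @id";
                    command.Parameters.AddWithValue("@id", 1);
                    command.ExecuteNonQuery();
                    scope.Complete();
                }
            }
        }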

  • Drawing outlines around organic shapes

    - by ThunderChunky_SF
    One thing that seems particularly easy to do in the Flash IDE, but difficult to do with code, is to outline an organic shape. In the IDE you can just use the ink bottle tool to draw a stroke around something. Using nothing but code, it seems much trickier. One method I've seen is to add a glow filter to the shape in question and just mess with the strength. But what if I want to show only the outline? What I'd like to do is collect all of the points that make up the edge of the shape and then just connect the dots. I've actually gotten so far as to collect all of the points with a quick and dirty edge-detection script that I wrote. So now I have a Vector of all the points that make up my shape. How do I connect them in the proper sequence so it actually looks like the original object? For anyone who is interested, here is my edge-detection script:

        // Create a new sprite which we'll use for our outline
        var sp:Sprite = new Sprite();
        var radius:int = 50;
        sp.graphics.beginFill(0x00FF00, 1);
        sp.graphics.drawCircle(0, 0, radius);
        sp.graphics.endFill();
        sp.x = stage.stageWidth / 2;
        sp.y = stage.stageHeight / 2;

        // Create a bitmap data object to draw our vector data
        var bmd:BitmapData = new BitmapData(sp.width, sp.height, true, 0);

        // Use a transform matrix to translate the drawn clip so that none of its
        // pixels reside in negative space. The draw method will only draw starting at 0,0
        var mat:Matrix = new Matrix(1, 0, 0, 1, radius, radius);
        bmd.draw(sp, mat);

        // Pass the bitmap data to an actual bitmap
        var bmp:Bitmap = new Bitmap(bmd);

        // Add the bitmap to the stage
        addChild(bmp);

        // Grab all of the pixel data from the bitmap data object
        var pixels:Vector.<uint> = bmd.getVector(bmd.rect);

        // Setup a vector to hold our stroke points
        var points:Vector.<Point> = new Vector.<Point>;

        // Loop through all of the pixels of the bitmap data object and
        // create a Point instance for each pixel location that isn't transparent.
        var l:int = pixels.length;
        for(var i:int = 0; i < l; ++i)
        {
            // Check to see if the pixel is transparent
            if(pixels[i] != 0)
            {
                var pt:Point;

                // Check to see if the pixel is on the first or last row.
                // We'll grab everything from these rows to close the outline
                if(i <= bmp.width || i >= (bmp.width * bmp.height) - bmp.width)
                {
                    pt = new Point();
                    pt.x = int(i % bmp.width);
                    pt.y = int(i / bmp.width);
                    points.push(pt);
                    continue;
                }

                // Check to see if the current pixel is on either extreme edge
                if(int(i % bmp.width) == 0 || int(i % bmp.width) == bmp.width - 1)
                {
                    pt = new Point();
                    pt.x = int(i % bmp.width);
                    pt.y = int(i / bmp.width);
                    points.push(pt);
                    continue;
                }

                // Check to see if the previous or next pixel is transparent;
                // if so, save the current one.
                if(i > 0 && i < bmp.width * bmp.height)
                {
                    if(pixels[i - 1] == 0 || pixels[i + 1] == 0)
                    {
                        pt = new Point();
                        pt.x = int(i % bmp.width);
                        pt.y = int(i / bmp.width);
                        points.push(pt);
                    }
                }
            }
        }

  • How to convert an ORM to its subclass using Hibernate?

    - by Gaaston
    Hi everybody, for example, I have two classes: Person and Employee (Employee is a subclass of Person). Person has a lastname and a firstname; Employee also has a salary. On the client side, I have a single HTML form where I can fill in the person information (like lastname and firstname). I also have a "switch" between "Person" and "Employee", and if the switch is on Employee I can fill in the salary field. On the server side, servlets receive the information from the client and use the Hibernate framework to create/update data to/from the database. The mapping I'm using is a single table for persons and employees, with a discriminator.

    I don't know how to convert a Person into an Employee. I first tried to:

    - load the Person p from the database
    - create an empty Employee e object
    - copy values from p into e
    - set the salary value
    - save e into the database

    But I couldn't, as I also copy the ID, and so Hibernate told me there were two instantiated ORM objects with the same id. And I can't cast a Person into an Employee directly, as Person is Employee's superclass. There seems to be a dirty way (delete the person, and create an employee with the same information), but I don't really like it. So I'd appreciate any help on that :)

    Some precisions. The Person class:

        public class Person {
            protected int id;
            protected String firstName;
            protected String lastName;
            // usual getters and setters
        }

    The Employee class:

        public class Employee extends Person {
            // string for now
            protected String salary;
            // usual getters and setters
        }

    And in the servlet:

        // type is the "switch"
        if (request.getParameter("type").equals("Employee")) {
            Employee employee = daoPerson.getEmployee(Integer.valueOf(request.getParameter("ID")));
            modifyPerson(employee, request);
            employee.setSalary(request.getParameter("salary"));
            daoPerson.save(employee);
        } else {
            Person person = daoPerson.getPerson(Integer.valueOf(request.getParameter("ID")));
            modifyPerson(person, request);
            daoPerson.save(person);
        }

    And finally, the loading (in the DAO):

        public Person getPerson(int ID) {
            Session session = HibernateSessionFactory.getSession();
            Person p = (Person) session.load(Person.class, new Integer(ID));
            return p;
        }

        public Employee getEmployee(int ID) {
            Session session = HibernateSessionFactory.getSession();
            Employee p = (Employee) session.load(Employee.class, new Integer(ID));
            return p;
        }

    With this, I'm getting a ClassCastException when trying to load a Person using getEmployee. The XML Hibernate mapping:

        <class name="domain.Person" table="PERSON" discriminator-value="P">
            <id name="id" type="int">
                <column name="ID" />
                <generator class="native" />
            </id>
            <discriminator column="type" type="character"/>
            <property name="firstName" type="java.lang.String">
                <column name="FIRSTNAME" />
            </property>
            <property name="lastName" type="java.lang.String">
                <column name="LASTNAME" />
            </property>
            <subclass name="domain.Employee" discriminator-value="E">
                <property name="salary" column="SALARY" type="java.lang.String" />
            </subclass>
        </class>

    Is it clear enough? :-/

  • GetAcceptExSockaddrs returns garbage! Does anyone know why?

    - by David
    Hello, I'm trying to write a quick/dirty echo server in Delphi, but I notice that GetAcceptExSockaddrs seems to be writing to only the first 4 bytes of the structure I pass it.

        uses SysUtils;

        type
          BOOL    = LongBool;
          DWORD   = Cardinal;
          LPDWORD = ^DWORD;
          short   = SmallInt;
          ushort  = Word;
          uint16  = Word;
          uint    = Cardinal;
          ulong   = Cardinal;
          SOCKET  = uint;
          PVOID   = Pointer;
          _HANDLE = DWORD;

          _in_addr = packed record
            s_addr : ulong;
          end;

          _sockaddr_in = packed record
            sin_family : short;
            sin_port   : uint16;
            sin_addr   : _in_addr;
            sin_zero   : array[0..7] of Char;
          end;
          P_sockaddr_in = ^_sockaddr_in;

          _Overlapped = packed record
            Internal : Int64;
            Offset   : Int64;
            hEvent   : _HANDLE;
          end;
          LP_Overlapped = ^_Overlapped;

        function _AcceptEx (sListenSocket, sAcceptSocket : SOCKET;
                            lpOutputBuffer : PVOID;
                            dwReceiveDataLength, dwLocalAddressLength, dwRemoteAddressLength : DWORD;
                            lpdwBytesReceived : LPDWORD;
                            lpOverlapped : LP_OVERLAPPED) : BOOL; stdcall;
          external MSWinsock name 'AcceptEx';

        procedure _GetAcceptExSockaddrs (lpOutputBuffer : PVOID;
                                         dwReceiveDataLength, dwLocalAddressLength, dwRemoteAddressLength : DWORD;
                                         LocalSockaddr : P_Sockaddr_in; LocalSockaddrLength : LPINT;
                                         RemoteSockaddr : P_Sockaddr_in; RemoteSockaddrLength : LPINT); stdcall;
          external MSWinsock name 'GetAcceptExSockaddrs';

        const
          BufDataSize = 8192;
          BufAddrSize = SizeOf (_sockaddr_in) + 16;

        var
          ListenSock, AcceptSock : SOCKET;
          Addr, LocalAddr, RemoteAddr : _sockaddr_in;
          LocalAddrSize, RemoteAddrSize : INT;
          Buf : array[1..BufDataSize + BufAddrSize * 2] of Byte;
          BytesReceived : DWORD;
          Ov : _Overlapped;

        begin
          // WSAStartup, create listen socket, bind to port 1066 on any interface, listen
          // Create event for overlapped (autoreset, initially not signalled)
          // Create accept socket

          if _AcceptEx (ListenSock, AcceptSock, @Buf, BufDataSize, BufAddrSize, BufAddrSize,
                        @BytesReceived, @Ov) then
            WinCheck ('SetEvent', _SetEvent (Ov.hEvent))
          else if GetLastError <> ERROR_IO_PENDING then
            WinCheck ('AcceptEx', GetLastError);

          { do WaitForMultipleObjects }

          _GetAcceptExSockaddrs (@Buf, BufDataSize, BufAddrSize, BufAddrSize,
                                 @LocalAddr, @LocalAddrSize, @RemoteAddr, @RemoteAddrSize);
        end.

    So if I run this, connect to it with Telnet (on the same computer, connecting to localhost) and then type a key, WaitForMultipleObjects unblocks and GetAcceptExSockaddrs runs. But the result is garbage! RemoteAddr.sin_family = -13894, RemoteAddr.sin_port = 64, and the rest is zeroes. What gives? Thanks in advance!

  • When saving a model with has_one or has_many associations, which side of the association is saved first?

    - by SeeBees
    I have three simplified models:

        class Team < ActiveRecord::Base
          has_many :players
          has_one :coach
        end

        class Player < ActiveRecord::Base
          belongs_to :team
          validates_presence_of :team_id
        end

        class Coach < ActiveRecord::Base
          belongs_to :team
          validates_presence_of :team_id
        end

    I use the following code to test these models:

        team = Team.new
        team.coach = Coach.new
        team.save!

    team.save! returns true. But in another test:

        team = Team.new
        team.players << Player.new
        team.save!

    team.save! gives the following error:

        ActiveRecord::RecordInvalid: Validation failed: Players is invalid
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/validations.rb:1090:in `save_without_dirty!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/dirty.rb:87:in `save_without_transactions!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:182:in `transaction'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
        from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
        from (irb):14

    I figured out that when team.save! is called, it first calls player.save!. player needs to validate the presence of the id of the associated team. But at the time player.save! is called, team hasn't been saved yet, and therefore team_id doesn't yet exist for player. This fails the player's validation, so the error occurs. But on the other hand, team is saved before coach.save!; otherwise the first example would get the same error as the second one. So I've concluded that when a has_many bs, a.save! will save the bs prior to a. When a has_one b, a.save! will save a prior to b. If I am right, why is this the case? It doesn't seem logical to me. Why do has_one and has_many associations have a different order in saving? Any ideas? And is there any way I can change the order? Say I want to have the same saving order for both has_one and has_many. Thanks.

  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background: I'm maintaining a plugin for an application, using Visual C++ 2003. The plugin is composed of several DLLs. There's the main DLL, the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this:

        plugin.dll -> utilA.dll, utilB.dll
        utilA.dll  -> utilB.dll
        utilB.dll  -> utilA.dll, utilC.dll

    You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now).

    The problem: There's a new requirement - running multiple instances of the plugin within the application. The application runs each instance of a plugin in its own thread, i.e. each thread calls into the plugin. The plugin's code, however, is anything but thread-safe - lots of global variables, etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process.

    Option 1: the distinct-names approach. Create 3 copies of each DLL file, so that each file has a distinct name, e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time, meaning the DLLs need to be compiled multiple times, each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and I don't like having to compile the same files over and over.

    Option 2: the side-by-side assemblies approach. Create 3 copies of the DLL files, each group in its own directory, and define each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success: dependencies are found at load time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article - a DLL's activation context is only used at the DLL's load time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED.

    Questions:

    - Got any other options? Any quick & dirty solution will do. :-)
    - Will ISOLATION_AWARE_ENABLED even work with VS2003?

    Comments will be greatly appreciated. Thanks!

  • Configuring UCM cache to check for external Content Server changes

    - by Martin Deh
    Recently, I was involved in a customer scenario where they were modifying the Content Server's contributor data files directly through Content Server.  This operation of course is completely supported.  However, since the contributor data file was modified through the "backdoor", a running WebCenter Spaces page, which also used the same data file, would not get the updates immediately.  This was due to two reasons.  The first reason is that the Spaces page was using Content Presenter to display the contents of the data file. The second reason is that the Spaces application was using the "cached" version of the data file.  Fortunately, there is a way to configure cache so backdoor changes can be picked up more quickly and automatically. First a brief overview of Content Presenter.  The Content Presenter task flow enables WebCenter Spaces users with Page-Edit permissions to precisely customize the selection and presentation of content in a WebCenter Spaces application.  With Content Presenter, you can select a single item of content, contents under a folder, a list of items, or query for content, and then select a Content Presenter based template to render the content on a page in a Spaces application.  In addition to displaying the folders and the files in a Content Server, Content Presenter integrates with Oracle Site Studio to allow you to create, access, edit, and display Site Studio contributor data files (Content Server Document) in either a Site Studio region template or in a custom Content Presenter display template.  More information about creating Content Presenter Display Template can be found in the OFM Developers Guide for WebCenter Portal. The easiest way to configure the cache is to modify the WebCenter Spaces Content Server service connection setting through Enterprise Manager.  From here, under the Cache Details, there is a section to set the Cache Invalidation Interval.  Basically, this enables the cache to be monitored by the cache "sweeper" utility.  The cache sweeper queries for changes in the Content Server, and then "marks" the object in cache as "dirty".  This causes the application in turn to get a new copy of the document from the Content Server that replaces the cached version.  By default the initial value for the Cache Invalidation Interval is set to 0 (minutes).  This basically means that the sweeper is OFF.  To turn the sweeper ON, just set a value (in minutes).  The mininal value that can be set is 2 (minutes): Just a note.  In some instances, once the value of the Cache Invalidation Interval has been set (and saved) in the Enterprise Manager UI, it becomes "sticky" and the interval value cannot be set back to 0.  The good news is that this value can also be updated throught a WLST command.   The WLST command to run is as follows: setJCRContentServerConnection(appName, name, [socketType, url, serverHost, serverPort, keystoreLocation, keystorePassword, privateKeyAlias, privateKeyPassword, webContextRoot, clientSecurityPolicy, cacheInvalidationInterval, binaryCacheMaxEntrySize, adminUsername, adminPassword, extAppId, timeout, isPrimary, server, applicationVersion]) One way to get the required information for executing the command is to use the listJCRContentServerConnections('webcenter',verbose=true) command.  
For example, this is the sample output from the execution:

------------------ UCM ------------------
Connection Name: UCM
Connection Type: JCR
External Appliction ID:
Timeout: (not set)
CIS Socket Type: socket
CIS Server Hostname: webcenter.oracle.local
CIS Server Port: 4444
CIS Keystore Location:
CIS Private Key Alias:
CIS Web URL:
Web Server Context Root: /cs
Client Security Policy:
Admin User Name: sysadmin
Cache Invalidation Interval: 2
Binary Cache Maximum Entry Size: 1024
The Documents primary connection is "UCM"

From this information, the completed setJCRContentServerConnection would be:

setJCRContentServerConnection(appName='webcenter', name='UCM', socketType='socket', serverHost='webcenter.oracle.local', serverPort='4444', webContextRoot='/cs', cacheInvalidationInterval='0', binaryCacheMaxEntrySize='1024', adminUsername='sysadmin', isPrimary=1)

Note: The Spaces managed server must be restarted for the change to take effect. More information about using WLST for WebCenter can be found here.

Once the sweeper is turned ON, only cache objects that have been changed will be invalidated. To test this out, I will go through a simple scenario. The first thing to do is configure the Content Server so it can monitor and report on events. Log into the Content Server console application, and under the Administration menu item, select System Audit Information. Note: If your console is using the left menu display option, the Administration link will be located there.

Under the Tracing Sections Information, add only "system" and "requestaudit" to the Active Sections. Check Full Verbose Tracing, check Save, then click the Update button. Once this is done, select the View Server Output menu option. This will change the browser view to display the log. This is all that is needed to configure the Content Server.

For example, the following is the View Server Output with the cache invalidation interval set to 2 (minutes). Note the time stamps:

requestaudit/6 08.30 09:52:26.001  IdcServer-68    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.016933999955654144(secs)
requestaudit/6 08.30 09:52:26.010  IdcServer-69    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.006134999915957451(secs)
requestaudit/6 08.30 09:52:26.014  IdcServer-70    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.004271999932825565(secs)
... other trace info ...
requestaudit/6 08.30 09:54:26.002  IdcServer-71    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.020323999226093292(secs)
requestaudit/6 08.30 09:54:26.011  IdcServer-72    GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.017928000539541245(secs)
requestaudit/6 08.30 09:54:26.017  IdcServer-73    GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.010185999795794487(secs)

Now that the tracing logs are reporting correctly, the next step is to set up the Spaces app to test the sweeper. I will use 2 different pages that will use Content Presenter task flows. Each task flow will use a different custom Content Presenter display template, and will be assigned 2 different contributor data files (documents that will be in the cache). The pages at run time appear as follows:

Initially, when the Spaces pages containing the content are loaded in the browser for the first time, you can see the tracing information in the Content Server output viewer.
requestaudit/6 08.30 11:51:12.030 IdcServer-129 CLEAR_SERVER_OUTPUT [dUser=weblogic] 0.029171999543905258(secs)
requestaudit/6 08.30 11:51:12.101 IdcServer-130 GET_SERVER_OUTPUT [dUser=weblogic] 0.025721000507473946(secs)
requestaudit/6 08.30 11:51:26.592 IdcServer-131 VCR_GET_DOCUMENT_BY_NAME [dID=919][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.21525299549102783(secs)
requestaudit/6 08.30 11:51:27.117 IdcServer-132 VCR_GET_CONTENT_TYPES [dUser=sysadmin][IsJava=1] 0.5059549808502197(secs)
requestaudit/6 08.30 11:51:27.146 IdcServer-133 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.03360399976372719(secs)
requestaudit/6 08.30 11:51:27.169 IdcServer-134 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.008806000463664532(secs)
requestaudit/6 08.30 11:51:27.204 IdcServer-135 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.013265999965369701(secs)
requestaudit/6 08.30 11:51:27.384 IdcServer-136 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.18119299411773682(secs)
requestaudit/6 08.30 11:51:27.533 IdcServer-137 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.1519480049610138(secs)
requestaudit/6 08.30 11:51:27.634 IdcServer-138 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.10827399790287018(secs)
requestaudit/6 08.30 11:51:27.687 IdcServer-139 VCR_GET_CONTENT_TYPE [dUser=sysadmin][IsJava=1] 0.059702999889850616(secs)
requestaudit/6 08.30 11:51:28.271 IdcServer-140 GET_USER_PERMISSIONS [dUser=weblogic][IsJava=1] 0.006703000050038099(secs)
requestaudit/6 08.30 11:51:28.285 IdcServer-141 GET_ENVIRONMENT [dUser=sysadmin][IsJava=1] 0.010893999598920345(secs)
requestaudit/6 08.30 11:51:30.433 IdcServer-142 GET_SERVER_OUTPUT [dUser=weblogic] 0.017318999394774437(secs)
requestaudit/6 08.30 11:51:41.837 IdcServer-143 VCR_GET_DOCUMENT_BY_NAME [dID=508][dDocName=113_ES][dDocTitle=Landing Home][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.15937699377536774(secs)
requestaudit/6 08.30 11:51:42.781 IdcServer-144 GET_FILE [dID=326][dDocName=WEBCENTERORACL000315][dDocTitle=Duke][dUser=anonymous][RevisionSelectionMethod=LatestReleased][dSecurityGroup=Public][xCollectionID=0] 0.16288499534130096(secs)

The highlighted sections show where the 2 data files, DF_UCMCACHETESTER (P1 page) and 113_ES (P2 page), were called by the (Spaces) VCR connection to the Content Server. The most important line to notice is the VCR_GET_DOCUMENT_BY_NAME invocation. On subsequent refreshes of these 2 pages, you will notice (after you refresh the Content Server's View Server Output) that there are no further traces of the same VCR_GET_DOCUMENT_BY_NAME invocations. This is because the pages are getting the documents from the cache.

The next step is to go through the "backdoor" and change one of the documents through the Content Server console. This operation can be done by first locating the data file document and, from the Content Information page, selecting the Edit Data File menu option. This invokes the Site Studio Contributor, where the modifications can be made.

Refreshing the Content Server View Server Output, the tracing displays the operations performed on the document.
requestaudit/6 08.30 11:56:59.972 IdcServer-255 SS_CHECKOUT_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dUser=weblogic][dSecurityGroup=Public] 0.05558200180530548(secs)
requestaudit/6 08.30 11:57:00.065 IdcServer-256 SS_GET_CONTRIBUTOR_CONFIG [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.08632399886846542(secs)
requestaudit/6 08.30 11:57:00.470 IdcServer-259 DOC_INFO_BY_NAME [dID=922][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.02268899977207184(secs)
requestaudit/6 08.30 11:57:10.177 IdcServer-264 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007652000058442354(secs)
requestaudit/6 08.30 11:57:10.181 IdcServer-263 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.01868399977684021(secs)
requestaudit/6 08.30 11:57:10.187 IdcServer-265 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.009367000311613083(secs)
(internal)/6 08.30 11:57:26.118 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
(internal)/6 08.30 11:57:26.121 IdcServer-266 File to be removed: /oracle/app/admin/domains/webcenter/ucm/cs/vault/~temp/703253295.xml
requestaudit/6 08.30 11:57:26.122 IdcServer-266 SS_SET_ELEMENT_DATA [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0][StatusCode=0][StatusMessage=Successfully checked in content item 'DF_UCMCACHETESTER'.] 0.3765290081501007(secs)
requestaudit/6 08.30 11:57:30.710 IdcServer-267 DOC_INFO_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][dSecurityGroup=Public][xCollectionID=0] 0.07942699640989304(secs)
requestaudit/6 08.30 11:57:30.733 IdcServer-268 SS_GET_CONTRIBUTOR_STRINGS [dUser=weblogic] 0.0044570001773536205(secs)

After a few moments and refreshing the P1 page, the update has been applied. Note: The refresh time may vary, since the Cache Invalidation Interval (set to 2 minutes) is not determined by when changes happened. The sweeper just runs every 2 minutes.

Refreshing the Content Server View Server Output, the tracing displays the important information:

requestaudit/6 08.30 11:59:10.171 IdcServer-270 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.00952600035816431(secs)
requestaudit/6 08.30 11:59:10.179 IdcServer-271 GET_FOLDER_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.011118999682366848(secs)
requestaudit/6 08.30 11:59:10.182 IdcServer-272 GET_DOCUMENT_HISTORY_REPORT [dUser=sysadmin][IsJava=1] 0.007447000127285719(secs)
requestaudit/6 08.30 11:59:16.885 IdcServer-273 VCR_GET_DOCUMENT_BY_NAME [dID=923][dDocName=DF_UCMCACHETESTER][dDocTitle=DF_UCMCacheTester][dUser=weblogic][RevisionSelectionMethod=LatestReleased][IsJava=1] 0.0786449983716011(secs)

After the specified interval the sweeper is invoked, which is noted by the GET_ ... calls. Since the history has noted the change, the next call is to VCR_GET_DOCUMENT_BY_NAME to retrieve the new version of the (modified) data file. Navigating back to the P2 page and viewing the server output, there are no further VCR_GET_DOCUMENT_BY_NAME calls to retrieve that data file. This simply means that this data file was just retrieved from the cache.
Upon further review of the server output, we can see that there was only 1 request for the VCR_GET_DOCUMENT_BY_NAME:

requestaudit/6 08.30 12:08:00.021 Audit Request Monitor Request Audit Report over the last 120 Seconds for server webcenteroraclelocal16200****
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor -Num Requests 8 Errors 0 Reqs/sec. 0.06666944175958633 Avg. Latency (secs) 0.02762500010430813 Max Thread Count 2
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 1 Service VCR_GET_DOCUMENT_BY_NAME Total Elapsed Time (secs) 0.09200000017881393 Num requests 1 Num errors 0 Avg. Latency (secs) 0.09200000017881393
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 2 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.054999999701976776 Num requests 1 Num errors 0 Avg. Latency (secs) 0.054999999701976776
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 3 Service GET_FOLDER_HISTORY_REPORT Total Elapsed Time (secs) 0.028999999165534973 Num requests 2 Num errors 0 Avg. Latency (secs) 0.014499999582767487
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 4 Service GET_SERVER_OUTPUT Total Elapsed Time (secs) 0.017999999225139618 Num requests 1 Num errors 0 Avg. Latency (secs) 0.017999999225139618
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor 5 Service GET_FILE Total Elapsed Time (secs) 0.013000000268220901 Num requests 1 Num errors 0 Avg. Latency (secs) 0.013000000268220901
requestaudit/6 08.30 12:08:00.021 Audit Request Monitor ****End Audit Report*****

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem, and endless loops of explaining the issue over and over again, I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans. If you are considering using them now for an important trip - reconsider.

Short summary: We arrived early for a flight, but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)! At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia. I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours. This is after 5 hours total at the airport. If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding.

Details below (times are approximate). First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok, enough mushy - here are the dirty details.

2:30 AM - Early check-in attempt. We attempted to check in for our flight online. Some sort of technology error on the website; we were instructed to check in at the desk.

4:30 AM - Arrive at airport. We tried to check in at the kiosk and got the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia-provided itinerary does not match the Expedia-provided tickets. We are informed that when that happens, American, JetBlue, and others that use the same software cannot check you in for the flight. Why? Because the itinerary was missing a leg of our flight! Basically, we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system.

4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly, I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary, who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining.
Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales; a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon - could they please help us. "Yes, I will help you". Again I give the phone to Mary, who provides them with a callback number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones.

5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile, Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - so it no longer matters why (not to mention that we already knew why, and had known the solution, since 4:30 AM). Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor, but that will take 1.5 hours.
6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which by now is about 1 hour and 10 minutes away).

6:30 AM - I start tweeting my frustration with my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok.

8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh.

11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details". If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response. The other 4? Ads. Um - #MultiFAIL? To Expedia: You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number. You also know how upset you have made both me and my new bride by treating us with such a ... non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner. In the wise words of the great Kenan Thompson of SNL: "FIX IT!". And no, I'm NOT going away until you make this right. Period.

11:45 AM - Expedia corporate office called.
The woman I spoke to was very nice and apologetic. She listened to me tell the story again, she says she understands the problem, and she is going to work to resolve it. I don't have any details on what exactly that resolution might be; she said she will call me back in 20 minutes. She found out about the problem via Twitter. Thank you Twitter, and all of you who helped. Hopefully social media will win my wife and me our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

12:22 PM - Spoke to Fran again from the Expedia corporate office. She has a flight for us tonight. She is booking it now. We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late. She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night. Thank you everyone for your help. I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed. For now, I'm going to take a breather and go kiss my wonderful wife!

1:50 PM - Have not yet received the promised phone call. We did receive an email with a new itinerary for a flight, but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together. With the original booking I carefully selected our seats for every segment of our trip. I decided to call the phone number that Fran from the Expedia corporate office gave me. Its automated voice system identified itself as "Tier 3 Support". I am currently still on hold with them; I have not gotten through to a human yet.

1:55 PM - Fran from Expedia called me back. She confirmed us as booked. She called the airlines to confirm. Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection. It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back). In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom. Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade). Since they didn't offer, I asked, and was rejected. I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution. If this works out favorably for us, great. If not, I'm not done making noise, Expedia. You owe us, and I expect you to make it right. You haven't quite done that yet.

Thanks - Thank you to Twitter. Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help. Thanks especially to my PowerShell and Sharepoint friends, my local friends, and those connectors who encouraged me and spread my story.

5:15 PM - Love Wins - After all this, Lauren and I are exhausted. We both took a short nap, and when we woke up we talked about the last 24 hours. It was a big, amazing, story-filled 24 hours. I said that Expedia won, but Lauren said no. She pointed out how lucky we are. We are in love and married.
We have wonderful family and friends. We are both hard-working, successful people who love what we do. We get to go to an amazing exotic destination for our honeymoon, like Veligandu in The Maldives... That's a lot of good. Expedia didn't win. This was (is) a big loss for Expedia. It is a public blemish for all to see. But Lauren and I did win, big time. Expedia may not have made things right - but things are right for us.

Post in progress... I will relay any further comments (or lack thereof) from Expedia soon, as well as an update confirming their repayment of our lost resort room rates. I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies features of the various AJAX/REST APIs Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a more consistent whole. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST-style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST-compliant APIs that are focused on resource-based solutions and HTTP verbs. But RPC-style calls, which are common for AJAX callbacks in Web applications, have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX-style AJAX callbacks.

RPC vs. 'Proper' REST
RPC-style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server-side resources or 'nouns', RPC calls tend to simply map a server-side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired.

RPC might not be considered 'cool' anymore, but for the typical private AJAX backend operations of a Web site, I'd wager that a large percentage of Web API use cases will fall towards RPC-style calls rather than 'proper' REST-style APIs. Web applications that need things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case, Web API does a good job of providing both the RPC abstraction as well as the HTTP verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

Action Routing for RPC Style Calls
If you've looked at Web API demos, you've probably seen a bunch of examples of how to create HTTP verb based routing endpoints. Verb-based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs, but doesn't work so well when you have many operational methods in a single controller. HTTP verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
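As a quick aside, here is a minimal sketch of what that verb-based style looks like, assuming the default "api/{controller}/{id}" route from the Web API project template (this example is mine, not from the original article, and the names are illustrative):

    using System.Collections.Generic;
    using System.Web.Http;

    public class AlbumsController : ApiController
    {
        // GET api/albums - matched by convention because the name starts with "Get"
        public IEnumerable<string> GetAlbums()
        {
            return new[] { "PowerAge", "Dirty Deeds" };
        }

        // POST api/albums - matched by convention because the name starts with "Post"
        public string PostAlbum([FromBody] string albumName)
        {
            return "Posted " + albumName;
        }
    }

With only the verbs to discriminate on, adding a second POST-style operation to this controller already requires a different signature or a new controller, which is exactly the limitation described above.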
Thankfully Web API also supports action-based routing, which allows you to create RPC-style endpoints fairly easily:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/{action}/{title}",
    defaults: new {
        title = RouteParameter.Optional,
        controller = "AlbumApi",
        action = "GetAlbums"
    }
);

This uses traditional MVC-style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action-based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters, either on the route itself or on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

[HttpPost]
public string PostAlbum(Album album)
{
    return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
}

There are actually two ways to call this endpoint (albums/PostAlbum):

Using the Model Binder with plain POST values
In this mechanism you're sending plain URL-encoded POST values to the server, which the ModelBinder then maps onto the parameter: each property is matched to its matching POST value. This works similar to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

$.ajax({
    url: "albums/PostAlbum",
    type: "POST",
    data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
    success: function (result) { alert(result); },
    error: function (xhr, status, p3, p4) {
        var err = "Error " + " " + status + " " + p3;
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The POST data for this request is simply the URL-encoded form values (AlbumName and Entered). The model binder and its straight form-based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

Using the Web API JSON Formatter
The other option is to post data using a JSON string. The process for this is similar, except that you create a JavaScript object and serialize it to JSON first:

album = {
    AlbumName: "PowerAge",
    Entered: new Date(1977,0,1)
}
$.ajax({
    url: "albums/PostAlbum",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    success: function (result) { alert(result); }
});

Here the data is sent using a JSON object rather than form data, and the data is JSON-encoded over the wire. A trace reveals that the data is sent as plain JSON, which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date: I provided the date as a plain string rather than a JavaScript date value, and the formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

Passing multiple Parameters to a Web API Controller
Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model.
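Since both POST mechanisms above target the same endpoint, here is a minimal, self-contained sketch (mine, not code from the original post) of the complete single-parameter controller, assuming the "albums/{action}/{title}" route registered earlier:

    using System;
    using System.Web.Http;

    // Model matching the data posted in the examples above.
    public class Album
    {
        public string AlbumName { get; set; }
        public DateTime Entered { get; set; }
    }

    // Resolved as "AlbumApi" by the action route's controller default.
    public class AlbumApiController : ApiController
    {
        // POST albums/PostAlbum - accepts URL-encoded form values or a JSON body.
        [HttpPost]
        public string PostAlbum(Album album)
        {
            return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
        }
    }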
But what happens when you want to pass multiple parameters? Consider an API controller method that has a signature like the following:

[HttpPost]
public string PostAlbum(Album album, string userToken)

Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work:

Use both POST *and* QueryString Parameters in Conjunction
If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with:

/album/PostAlbum?userToken=sekkritt

but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types.

Use a single Object that wraps the two Parameters
If you go by service-based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wraps the inputs and outputs. Here's what this method might look like:

public PostAlbumResponse PostAlbum(PostAlbumRequest request)
{
    var album = request.Album;
    var userToken = request.UserToken;
    return new PostAlbumResponse()
    {
        IsSuccess = true,
        Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
    };
}

with these support types:

public class PostAlbumRequest
{
    public Album Album { get; set; }
    public User User { get; set; }
    public string UserToken { get; set; }
}

public class PostAlbumResponse
{
    public string Result { get; set; }
    public bool IsSuccess { get; set; }
    public string ErrorMessage { get; set; }
}

To call this method you now have to assemble these objects on the client and send them up as JSON:

var album = { AlbumName: "PowerAge", Entered: "1/1/1977" }
var user = { Name: "Rick" }
var userToken = "sekkritt";

$.ajax({
    url: "samples/PostAlbum",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
    success: function (result) { alert(result.Result); }
});

I assemble the individual types first and then combine them, in the data: property of the $.ajax() call, into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or user token) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method can actually use.

Use JObject to parse multiple Property Values out of an Object
If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
Rather than directly calling a service, you always passed an object which contained properties for each parameter:

{ parm1: Value, parm2: Value2 }

WCF REST/ASP.NET AJAX would then parse these top-level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer, you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API controller end:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;
    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();
    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

var album = { AlbumName: "PowerAge", Entered: "1/1/1977" }
var user = { Name: "Rick" }
var userToken = "sekkritt";

$.ajax({
    url: "samples/PostAlbum",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
    success: function (result) { alert(result); }
});

Summary
ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable, but it's good to know what options are available to simulate this behavior. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally, APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the query string. But saying that doesn't mean that occasionally you don't run into a situation where you need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now, I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
At least there are options available to make it work.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year's conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference.

First, let's travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time, as some of the more vocal .NET evangelists as well as some very high profile start-ups ( ie. Twitter ) had publicly announced their intent to use Rails.

Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially, by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it. This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer.

In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing in our ecosystem. Clearly some developers had drunk Microsoft's Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, "If it's not MVC, it's crap". Now, this is a rather ignorant position to take, as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model - it just requires diligence and discipline.

So over the past few years some horror stories have begun to bubble to the surface about software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC.
These large-scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision: that Microsoft was promoting MVC as "the future". These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in less stable, less scalable and more complicated systems - basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product, which had been under active development for more than 18 months, and revert back to WebForms.

The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war, because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another, and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel "dirty". The fragmentation has also made it difficult to attract newcomers, as the perceived barrier to entry for learning ASP.NET has become higher. As a result, many new software developers entering the market are gravitating to environments where the development model seems simpler and more intuitive ( ie. PHP or Ruby ).

At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts - both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high-level plan from Microsoft to ensure consistency and continuity across the different product lines.

The introduction of ASP.NET MVC also made some of the third party control vendors scratch their heads and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers.
And even after MVC was introduced, these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX compared to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, demand remained strong and the third party web control market continued to prosper despite the availability of MVC.

While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume, whether it's a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services, which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like jQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC.

In response to these new industry trends, Microsoft did what it always does - it immediately poured resources into developing a solution which would ensure it remains relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC web services. And since it was developed out of band, with a dependency only on ASP.NET 4.0, it can be used immediately in a variety of production scenarios.

So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the "Future of ASP.NET". In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward - perhaps the moniker of "ASP.NET Web Stack", coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back, will eventually stick. Web API is being positioned as the unification of ASP.NET - the glue that is able to pull this fragmented mess back together again. The "One ASP.NET" strategy will promote the use of all frameworks - WebForms, MVC, and Web API - even within the same web project.
Basically the message is to utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that "hybrid" applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with the client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. This means it is also relevant to application platforms like DotNetNuke and Sharepoint, and it starts to create a unified development strategy across all ASP.NET product lines once again.

And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC, but it really shouldn't, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from a higher level to see that it really makes no sense. MVC is a server-side page compositing framework, whereas Web API promotes client-side page compositing with a heavy focus on web services. Making MVC work well with Web API would require a complete rewrite of MVC, and at the end of the day there would be no upgrade path for existing MVC applications. So it really does not make much business sense.

So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a "hybrid" model which would provide compatibility for existing modules while at the same time providing a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Services Framework, which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and the Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework, but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the "One ASP.NET" strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.

    Read the article

  • SOA Community Newsletter: new edition!

    - by mseika
SOA PARTNER COMMUNITY NEWSLETTER, AUGUST 2012

Dear SOA partner community member,

Have you submitted your feedback on the SOA Partner Community Survey 2012? This is the last chance to participate in the survey. We recommend you complete the survey and help us improve our SOA Community. Thanks to all attendees and trainers for their participation in the excellent Fusion Middleware Summer Camps held in Lisbon and Munich. I would also like to thank you for the great feedback and the nice reports provided by the AMIS Technology Blog & Middleware by Link Consulting. Most of our courses were overbooked; if you did not get a chance to attend or missed one, we offer a wide range of online training and the course material. A key take-away from the advanced BPM course is to become an expert in ADF - Grant Ronald's course Learn Advanced ADF is available online. The Link Consulting Team became experts in SOA Governance with EAMS and Oracle Enterprise Repository! We always encourage our community members to share their best practices and are very keen to publish them. Please let us know if you want to share your best practices through this medium. We encourage you to make use of the Specialization benefits - this month we are giving an opportunity to Promote Your SOA & BPM Events.

Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

NEW CONTENT
Presentations & Training material: OFM Summer Camps, Promote Your SOA & BPM Events, Advanced ADF Online For Free By Grant, BPM 11g Customer Stories & Solution Catalog & Process Accelerators, Delivering SOA Governance with EAMS by Link Consulting Team, WebLogic Server Provisioning and Patching
News from our Partners & Community: Updated material by Oracle
Connect and Network: SOA Blogs, SOA on Facebook, SOA on LinkedIn, SOA on Twitter, Mix SOA Forum, SOA Workspace

PRESENTATIONS & TRAINING MATERIAL: OFM SUMMER CAMPS
Thanks to all attendees who invested their time and utilized the opportunity to attend the Summer Camps! Due to the high demand for most of our trainings, we had a long waiting list of partners keen to attend. We would like to give our special thanks to all the trainers, who delivered excellent workshops! Most of the presentations and course material have been posted on our SOA Community Workspace and WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here. You can register for the WebLogic Community here. To find out the first impressions of the event please visit our Facebook pages: www.facebook.com/WebLogicCommunity & www.facebook.com/soacommunity, or the Picasa Album. Thanks for the excellent blog posts from the AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a tweet or blog post on @soacommunity & @wlscommunity; we will be pleased to publish it in our newsletters.

BPM Course Quotes
"Its always easy, if you know, what you are doing" - Torsten Winterberg, Opitz
"The best ideas are the ideas from the best" - Filipe Sequeria, Primesoft
"Best invest in the education in the last 12 months" - Richard Schaller, IPT
"Practice best practice with the best instructor" - Graham Lamond, Capgemini
"If you have basic BPM knowledge, this is the course to really master it" - Diogo Henriques, Link Consulting
"Very good trainers, lot of work,
Lot of fun as well” - Matthias Gris Workflow Factory “If you like to accelerate in Oracle come to the training to bring it all together” - Marcel van der Glind, Amis ADF Course Quotes "Excellent training, great opportunity to network!" - Frank Houweling, Amis "Lots of fun and good ideas" - Ana Santiago, GFI "Learn ADF, worth it Fusion Apps is the future" - Miguel Delgadillo, STO Consulting "The best way to learn Fusion Middleware from the #1" Alexandro Montantes, STO Consulting "Be advanced to to be the first” - Dimitar Petrov Fadata "Great opportunity to suck all the knowledge out of some very experienced product managers” - Wilfred von der Deijl, The Future Group WebLogic Course Quotes “Oracle trainings are the best” - Pedro Neto Novobas“ "Excellent training, well organized” - Pedro Antunh, Capgemini “This course dives you into Oracle WebLogic giving you a quick start on benefiting from Fusion Apps” - Leonardo Fernandes, Outsystems Additional Quotes “Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful Info, which will definitely help to increase quality of our future projects.” - Daniel Fasko fss-group.com I didn’t get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days.” - Jeroen Bakker Ordina “Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese that knows Lisbon very good, a nice choice of places to visit. Looking forward to come again next year.” Pedro Miguel Neto, Novobase PROMOTE YOUR SOA & BPM EVENTS The Partner Event Publisher has just been made available to all SOA & BPM specialized partners in EMEA. Partners now have the opportunity to publish their events to theOracle.com/events site and spread the word on their upcoming live in-person and/or live webcast events. See the demo below and click here to read more information. ADVANCED ADF ONLINE, FOR FREE BY GRANT The second part of the advanced ADF online eCourse is Live now! This covers the advanced topics of region and region interaction as well as getting down and dirty with some of the layout features of ADF Faces, skinning and DVT components. The aim of this course is to give you a self-paced learning aid which covers the more advanced topics of ADF development. The content is developed by Product Management and our Curriculum development teams and is based on advanced training material we have been running internally for about 18 months. We will get started on the next chapter, but in the meantime, please have a look at chapters one and two. Back to top BPM 11G CUSTOMER STORIES & SOLUTION CATALOG & PROCESS ACCELERATORS Stories Everyone loves a good story on planning or implementing a BPM strategy. Everyone wants to hear how it was done before?, what worked?, what was achieved? If you have achieved success with BPM, we are very keen to hear your stories and examples of how your customers use it. We receive lots of requests from people who are thinking of using BPM to solve a specific problem or in combination with a specific technology to talk to someone who has done it before. These stories are invaluable. Drop down the details of anything you think is relevant with a bit of detail and we will follow up on it. 
As one good deed deserves another, we will do our best to give you stories if you need them to show that where you are going, others have treaded before. Send your stories to us using this e-mail link and we will share them among other like minded people. Solution Catalogue This summer, Oracle is launching a solution catalogue specifically intended for partners. If you have delivered a successful implementation in BPM and think it could be reused and applied again in a similar scenario in the same industry or in a similar environment, then we ware keen to know about it and will add it to the solution catalogue. The solution catalogue will showcase successful BPM solutions both inside and outside Oracle. Be in touch with us on this e-mail link and we will make sure to add your solution. Process AcceleratorsFinally if you have specific processes that you are expert on, you have implemented at a customer and you want to work with us on getting these productised, then we would love to know about it. The process accelerator programme is explained in the most recent SOA/BPM Community Newsletter but again feel free to contact us if you want to get involved. Good luck with BPM and let us know how we can help. Barry O'Reilly Director BPM [email protected] DELIVERING SOA GOVERNANCE WITH EAMS BY LINK CONSULTING TEAM In the last 12 years Link Consulting has been making its presence in specific areas such as Governance and Architecture, both in terms of practices and methodologies, products, know-how and technological expertise. The Enterprise Architecture Management System - Oracle Enterprise Edition (EAMS - OER Edition) is the result of this experience and combines the architecture management solution with OER in order to deliver a product specialized for SOA Governance that gathers the better of two worlds in solution that enables SOA Governance projects, initiatives and programs. Enterprise Architecture Management System Enterprise Architecture Management System (EAMS), is an automation based solution that enables the efficient management of Enterprise Architectures. The solution uses configured enterprise repositories and takes advantages of its features to provide automation capabilities to the users. EAMS provides capabilities to create/customize/analyze repository data, architectural blueprints, reports and analytic charts. Oracle Enterprise Repository Oracle Enterprise Repository (OER) is one of the major and central elements of the Oracle SOA Governance solution. Oracle Enterprise Repository provides the tools to manage and govern the metadata for any type of software asset, from business processes and services to patterns, frameworks, applications, components, and models. OER maps the relationships and inter-dependencies that connect those assets to improve impact analysis, promote and optimize their reuse, and measure their impact on the bottom line. It provides the visibility, feedback, controls, and analytics to keep your SOA on track to deliver business value. The intense focus on automation helps to overcome barriers to SOA adoption and streamline governance throughout the lifecycle. Core capabilities of the OER include: Asset Management Asset Lifecycle Management Usage Tracking Service Discovery Version Management Dependency Analysis Portfolio Management EAMS - OER Edition The solution takes the advantages and features from both products and combines them in a symbiotic tool that enhances the quality of SOA Governance Initiatives and Programs. 
EAMS is able to produce a vast number of outputs by combining its analytical engine, SOA-specific configurations and the assets in OER and other related tools, catalogs and repositories. The configurations encompass not only the extendable parametrization of the metadata but also fully configurable blueprints, PowerPoint reports, charts and queries. The SOA blueprints The solution comes with a set of predefined architectural representations that help the organization better perceive their SOA landscape. More blueprints can be easily created in order to accommodate the organizations needs in terms of detail, audience and metadata. Charts & Dashboards The solution encompasses a set of predefined charts and dashboards that promote a more agile way to control and explore the assets. Time Based Visualization All representations are time bound, and with EAMS - OER you can truly govern SOA with a complete view of the Past, Present and Future; The solution delivers Gap Analysis, a project oriented approach while taking into consideration the As-Was, As-Is an To-Be. Time based visualization differentiating factors: Extensive automation and maintenance of architectural representations Organization wide solution. Easy access and navigation to and between all architectural artifacts and representations. Flexible meta-model, customization and extensibility capabilities. Lifecycle management and enforcement of the time dimension over all the repository content. Profile based customization. Comprehensive visibility Architectural alignment Friendly and striking user interfaces For more information on EAMS visit us here. For more information on SOA visit us here. WEBLOGIC SERVER PROVISIONING AND PATCHING For access to the Oracle demo systems please visit OPN and talk to your Partner Expert.SOA Suite and BPM Suite runs on WebLogic! We are pleased to announce the availability of a WebLogic Server Management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page.Demo Highlights The demo showcases the following capabilities: Patching Oracle WebLogic Servers Standardizing WebLogic Server Patch Rollouts Creating a WebLogic Domain Provisioning Profile Cloning a WebLogic Domain from a Provisioning Profile Deploying a Java EE Application Scaling Out an Oracle WebLogic Cluster Demo Instructions Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the “Software Lifecycle Automation” section, click on the link “EM Cloud Control 12c WLS Provisioning and Patching” (tagged as “NEW”). Specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    Read the article

  • Stop Spinning Your Wheels… Sage Advice for Aspiring Developers

    - by Mark Rackley
    So… lately I’ve been tasked with helping bring some non-developers over the hump and become full-fledged, all around, SharePoint developers. Well, only time will tell if I’m successful or a complete failure. Good thing about failures though, you know what NOT to do next time! Anyway, I’ve been writing some sort of code since I was about 10 years old; so I sometimes take for granted the effort some people have to go through to learn a new technology. I guess if I had to say I was an “expert” in one thing it would be learning (and getting “stuff” done) in new technologies. Maybe that’s why I’ve embraced SharePoint and the SharePoint community. SharePoint is the first technology I haven’t been able to master or get everything done without help from other people. I KNOW I’ll never know it all and I learn something new every day. It keeps it interesting, it keeps me motivated, and keeps me involved. So, what some people may consider a downside of SharePoint, I definitely consider a plus. Crap.. I’m rambling. Where was I? Oh yeah… me trying to be helpful.

Like I said, I am able to quickly and effectively pick up new languages, technology, etc. and put it to good use. Am I just brilliant? Well, my mom thinks so.. but maybe not. Maybe I’ve just been doing it for a long time…. 25 years in some form or fashion… wow I’m old… Anyway, what I lack in depth I make up for in breadth and being the “go-to” guy wherever I work when someone needs to “get stuff done”. Let’s see if I can take some of that experience and put it to practical use to help new people get up to speed faster, learn things more effectively, and become that go-to guy. First off… make sure you…

Know The Basics

I don’t have the time to teach new developers the basics, but you gotta know them. I’ve only been “taught” two languages.. Fortran 77 and C… everything else I’ve picked up from “doing”. I HAD to know the basics though, and all new developers need to understand the very basics of development. 97.23% of all languages will have the following:

Variables
Functions
Arrays
If statements
For loops / While loops

If you think about it, most development is “if this, do this… or while this, do this…”. “This” may be some unique method to your language or something you develop, but the basics are the basics. YES there are MANY other development topics you need to understand, but you shouldn’t be scratching your head trying to figure out what a “for loop” is… (Also learn about classes and hashtables as quickly as possible). Once you have the basics down it makes it much easier to…

Learn By Doing

This may just apply to me and my warped brain. I don’t learn a new technology by reading or hearing someone speak about it. I learn by doing. It does me no good to try and learn all of the intricacies of a new language or technology inside-and-out before getting my hands dirty. Just show me how to do one thing… let me get that working… then show me how to do the next thing.. let me get that working… Now, let’s see what I can figure out on my own. Okay.. now it starts to make sense. I see how the language works, I can step through the code, and before you know it.. I’m productive in a new technology. Be careful here though…. make sure you…

Don’t Reinvent The Wheel

People have been writing code for what… 50+ years now? So, why are you trying to tackle ANYTHING without first Googling it with Bing to see what others have done first? When I was first learning C# (I had come from a Java background) I had to call a web service. Sure! No problem! I’d done this many times in Java. So, I proceeded to write an HTTP Handler, called the Web Service and it worked like a charm!!! Probably about 2.3 seconds after I got it working completely someone says to me “Why didn’t you just add a Web Reference?” Really? You can do that? oops… I just wasted a lot of time. (There’s a short sketch of that exact situation at the bottom of this post.) Before undertaking the development of any sort of utility method in a new language, make sure it’s not already handled for you… Okay… you are starting to write some code and are curious about the possibilities? Well… don’t just sit there…

Try It And See What Happens

This is actually one of my biggest pet peeves. “So… ‘x++’ works in C#, but does it also work in JavaScript?” Really? Did you just ask me that? In the time it took for you to type that email, press the send button, me receive the email, get around to reading it, and replying with “yes” you could have tested it 47 times and known the answer! Just TRY it! See what happens! You aren’t doing brain surgery. You aren’t going to kill anyone, and you BETTER not be developing in production. So, you are not going to crash any production systems!! Seriously! Get off your butt and just try it yourself. The extra added benefit is that if it doesn’t work, the absolute best way to learn is to…

Learn From Your Failures

I don’t know about you… but if I screw up and something doesn’t work, I learn A LOT more debugging my problem than if everything magically worked. It’s okay that you aren’t perfect! Not everyone can be me? In the same vein… don’t ask someone else to debug your problem until you have made a valiant attempt to do so yourself. There’s nothing quite like stepping through code line by line to see what it’s REALLY doing… and you’ll never feel more stupid sometimes than when you realize WHY it’s not working.. but you realize... you learn... and you remember. There is nothing wrong with failure as long as you learn from it. As you start writing more and more and more code make sure that you ALWAYS…

Develop for Production

You will soon learn that the “prototype” you wrote last week to show as a “proof of concept” is going to go directly into production no matter how much you beg and plead and try to explain it’s not ready to go into production… it’s going to go straight there.. and it’s like herpes.. it doesn’t go away and there’s no fixing it once it’s in there. So, why not write ALL your code like it will be put in production? It MIGHT take a little longer, but in the long run it will be easier to maintain, get help on, and you won’t be embarrassed that it’s sitting on a production server for everyone to use and see. So, now that you are getting comfortable and writing code for production it is important to remember the…

KISS Principle… Learn It… Love It… Keep It Simple Stupid

Seriously.. don’t try to show how smart you are by writing the most complicated code in history. Break your problem up into discrete steps and write each step. If it turns out you have some redundancy, you can always go back and tweak your code later. How bad is it when you write code that LOOKS cocky? I’ve seen it before… some of the most abstract and complicated classes when a class wasn’t even needed! Or the most elaborate unreadable code jammed into one really long line when it could have been written in three lines, performed just as well, and been SOOO much easier to maintain. Keep it clear and simple.. baby steps people. This will help you learn the technology, debug problems, AND it will help others help you find your problems if they don’t have to decipher the Dead Sea Scrolls just to figure out what you are trying to do…. Really.. don’t be that guy… try to curb your ego and…

Keep an Open Mind

No matter how smart you are… how fast you type… or how much you get paid, don’t let your ego get in the way. There is probably a better way to do everything you’ve ever done. Don’t become so cocky that you can’t think someone knows more than you. There’s a lot of brilliant, helpful people out there willing to show you tricks if you just give them a chance. A very super-awesome developer once told me “So what if you’ve been writing code for 10 years or more! Does your code look basically the same? Are you not growing as a developer?” Those 10 years become pretty meaningless if you just “know” that you are right and have not picked up new tips, tricks, methods, and patterns along the way. Learn from others and find out what’s new in development land (you know you don’t have to specifically use pointers anymore??). Along those same lines… If it’s not working, first assume you are doing something wrong. You have no idea how much it annoys people who are trying to help you when you first assume that the help they are trying to give you is wrong. Just MAYBE… you… the person learning is making some small mistake? Maybe you didn’t describe your problem correctly? Maybe you are using the wrong terminology? “I did exactly what you said and it didn’t work.” Oh really? Are you SURE about that? “Your solution doesn’t work.” Well… I’m pretty sure it works, I’ve used it 200 times… What are you doing differently? First try some humility and appreciation.. it will go much further, especially when it turns out YOU are the one that is wrong. When all else fails….

Try Professional Training

Some people just don’t have the mindset to go and figure stuff out. It’s a gift and not everyone has it. If everyone could do it I wouldn’t have a job and there wouldn’t be professional training available. So, if you’ve tried everything else and no light bulbs are coming on, contact the experts who specialize in training. Be careful though, there is bad training out there. Want to know the names of some good places? Just shoot me a message and I’ll let you know. I’m boycotting endorsing Andrew Connell anymore until I get that free course dangit!! So… that’s it.. that’s all I got right now. Maybe you thought all of this is common sense, maybe you think I’m smoking crack. If so, don’t just sit there, there’s a comments section for a reason. Finally, what about you? What tips do you have to help these aspiring developers learn the dark arts??
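
To make the wheel-reinvention story above concrete, here is a rough sketch of the two approaches: the hand-rolled HTTP call versus the one-liner you get after letting Visual Studio generate a proxy via “Add Web Reference”. The service URL, method, and proxy names are made up for illustration:

using System;
using System.IO;
using System.Net;

class CallingAWebService
{
    static void Main()
    {
        // The hard way: hand-rolled HTTP against a (hypothetical) .asmx endpoint.
        // You get raw XML back that you still have to parse yourself.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/WeatherService.asmx/GetForecast?city=Dallas");
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }

        // The easy way: after "Add Web Reference", Visual Studio generates a
        // typed proxy class from the service's WSDL, and the call collapses to
        // something like:
        //
        //   var service = new WeatherService();
        //   string forecast = service.GetForecast("Dallas");
    }
}

The generated proxy handles the SOAP plumbing, serialization, and parsing for you, which is exactly the wheel you don't want to reinvent.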

    Read the article

  • CodePlex Daily Summary for Saturday, October 22, 2011

    CodePlex Daily Summary for Saturday, October 22, 2011

Popular Releases

WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.12.17: Changes: Added FilePath Length Check when Uploading Files. Fixed Issue #6550. Fixed Issue #6536. Fixed Issue #6525. Fixed Issue #6500. Fixed Issue #6401. Fixed Issue #6490.

DotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????

XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...

Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...

CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVM and MVC components as well as a "Metro" and "Battleship" style.

Manejo de tags - PHP sobre apache: tagqrphp: First version. For the program to work you must first obtain a development id for the tag; Microsoft provides it when you register on the site http://tag.microsoft.com. In tagm.php, which calls the Microsoft Tag PHP Library (code for working with PHP and Tag), we fill in the form data and execute it to obtain the Microsoft tag code, which points to the URL indicated in the form. The MStag.php library (contains a lot of explanation of how it works...)

GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: GridLibre for Visual FoxPro: this tool will help users and programmers with data handling, such as filtering, multi-selection and auto-formatting of columns, including assignment of the controlsource.

Self-Tracking Entity Generator for WPF and Silverlight: Self-Tracking Entity Generator v 0.9.9: Self-Tracking Entity Generator v 0.9.9 for Entity Framework 4.0.

Umbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 Alpha 3. Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like to get a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3? This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...

Vkontakte WP: Vkontakte: source code.

Way2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed issue with sending messages due to changes to the Way2Sms site. 2. Updated the character limit to 160 from 140.

GART - Geo Augmented Reality Toolkit: 1.0.1: About Release 1.0.1: Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking Change: Please note: There is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...

Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Move the JS1284 error (Misplaced Function Declaration) so it only fires when in strict mode. I got a couple of complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.

Naked Objects: Naked Objects Release 4.0.110.0: Corresponds to the packaged version 4.0.110.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consult the file HowToBuild.txt in the release. Documentation: Please note that after ...

myCollections: Version 1.5: New in this version: Added edit type for selected elements. Added clean for selected elements. Added Amazon Italia. Added Amazon China. Added TVDB Italia. Added TVDB China. Added Turkish language. You can now manually add an artist. Added Order by Rating. Improved Add by Media. Improved Artist Detail. Upgraded Sqlite engine. View, Zoom, Grouping, Filter are now saved by category. Added group by Artist. Added CubeCover View. Bug fixing.

Facebook C# SDK: 5.3: This is a BETA release which adds new features and bug fixes to v5.2.1. Removed dependency on Code Contracts. Enabled Task Parallel Support in .NET 4.0+. Added support for an early preview of .NET 4.5. Added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress. Added new CS-WinForms-AsyncAwait.sln sample demonstrating the use of async/await, upload progress report using IProgress<T> and cancellation support. Query/QueryAsync methods use the graph api instead...

IronPython: 2.7.1 RC: This is the first release candidate of IronPython 2.7.1. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. If there are no showstopping issues, this will be the only release candidate for 2.7.1, so please speak up if you run into any roadblocks. The highlights of 2.7.1 are: Updated the standard library to match CPython 2.7.2. Added the ast, csv, and unicodedata modules. Fixed several bugs. IronPython To...

Rawr: Rawr 4.2.6: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon: We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...

Home Access Plus+: v7.5: Change Log: New Booking System (out of Beta). New Help Desk (out of Beta). New My Files (Developer Preview). Token now saved into Cookie so the system doesn't TIMEOUT as much. File Changes: ~/bin/hap.ad.dll ~/bin/hap.web.dll ~/bin/hap.data.dll ~/bin/hap.web.configuration.dll ~/bookingsystem/admin/default.aspx ~/bookingsystem/default.aspx REMOVED ~/bookingsystem/bookingpopup.ascx REMOVED ~/bookingsystem/daylist.ascx REMOVED ~/bookingsystem/new.aspx ~/helpdesk/default.aspx ...

Visual Micro - Arduino for Visual Studio: Arduino for Visual Studio 2008 and 2010: Arduino for Visual Studio 2010 has been extended to support Visual Studio 2008. The same functionality and configuration exists between the two versions. The 2010 addin runs .NET4 and the 2008 addin runs .NET3.5; both are installed using a single msi and both share the same configuration settings. The only known issue in 2008 is that the button and menu icons are missing. Please log on to the Visual Micro forum and let us know if things are working or not. Read more about this Visual Studio ...

New Projects

#foo Core: Core functionality extensions of .NET used by all HashFoo projects.

#foo Nhib: #foo NHibernate extensions.

Aagust G: Hello all! It's a free jQuery image slider....

ACP Log Analyzer: ACP Log Analyzer provides a quick and easy mechanism for generating a report on your ACP-based astronomical observing activities. Developed in Microsoft Visual Studio 2010 using C#, the .NET Framework version 4 and WPF.

BlobShare Sample: TBD

CompletedWorkflowCleanUp: This tool, once executed on a list, deletes all completed workflow instances.

CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor is a tool for Microsoft Dynamics CRM 2011 that lets you edit CRM ribbons. This ribbon editor shows a preview of the CRM ribbon as you are editing it and allows you to add ribbon buttons and groups without needing to fully understand the ribbon XML schema.

GearMerge: Organizes Movies and TV Series files from one Hard Drive to another. I created it for myself to update external drives with movies and TV shows from my collection.

Generic Object Storage Helper for WinRT: ObjectStorageHelper<T> is a generic class that simplifies storage of data in WinRT applications.

Government Sanctioned Espionage RPG: Government Sanctioned is a modern SRD-like espionage game server. Visit http://wiki.government-sanctioned.us:8040 for game design and play information or homepage http://www.government-sanctioned.us. Government Sanctioned is an online, text-based espionage RPG (similar to a MUD/MOO) that takes place against the backdrop of a highly-secretive U.S. Government agency whose stated goals don't always match the dirty work the agents tend to find themselves in. - over 15 starting profession...

GridLibre para Visual FoxPro: GridLibre for Visual FoxPro: this tool will help users and programmers with data handling, such as filtering, multi-selection and auto-formatting of columns, including assignment of the controlsource.

HTML5 Video Web Part for SharePoint 2010: A web part for SharePoint 2010 that enables users to play video on the page using the Ribbon bar.

Jogo do Galo: TIC-TAC-TOE RULES: • The board is a matrix of three rows by three columns. • Two players choose three pieces each. • Players take turns, one piece at a time, placing on a space that is empty. • The goal is to get three identical pieces in a line, whether horizontal, v

Karol sie uczy silverlajta: he really is learning it!

Manejo de tags - PHP sobre apache: I use the Microsoft Tag PHP Library so the application can run on Apache; in the end you can create Microsoft tags from the form that was created.

Modem based SMS Gateway: An easy to use modem based SMS server that provides easier solutions for SMS marketing or SMS based services. It is highly programmable and the easy to use API interface makes SMS integration very easy. The embedded SMS processor provides customized solutions to many of your needs even without building any custom software.

Mund Publishing Feture: Mund Publishing Feature

MyTFSTest: Test

NHS HL7 CDA Document Developer: A project to demonstrate how templated HL7 CDA documents can be created by using a common API. The API is designed to be used in .NET applications. A number of examples are provided using C#.

OpenShell: OpenShell is an open source implementation of the Windows PowerShell engine. Its goal is to make integrating PowerShell into standalone console programs simple.

Powershell Script to Copy Lists in a Site Collection in MOSS 2007 and SPS 2010: Hi, this is a powershell script file that copies a list within the same site collection. This works in SharePoint 2007 and SharePoint 2010 as well. This will flash the messages before taking the input values. This will in this way provide clear ideas about the values to be

SharePoint Desktop: SharePoint Desktop is an explorer-like webpart that makes it possible to drag and drop items (documents and folders), copy and paste items and explore all SharePoint document libraries in one place.

SQL floating point compare function: Comparison of floating point values in SQL Server does not always give the expected result. With this function, comparison is only done on the first 15 significant digits. Since SQL Server only guarantees a precision of 15 digits for float datatypes, this is expected to be secure.

Stock Analyzer: Stock management software. Its main job is to store market realtime data in a database to be able to analyse it later and create automatic systems that operate directly on the stock exchange market. It will have different modules for this task: realtime data capture, realtime data analysis, historic analysis, order execution, strategy test, strategy execution. It's developed in C# and with WPF.

VB_Skype: VB_Skype uses the Skype4COM library to integrate Skype services with a Visual Basic (Windows Forms) application. The solution includes a custom control project that acts as the "wrapper" around the Skype4COM library, and a project with a usage demo. This project should be used in my session as one of the speakers at the "WPC 2011" conference, to be held in Assago (MI) on 22-24 November 2011. My session is on the agenda for the 24th...

Word Template Generator: Custom Template Generator for Microsoft Word, developed in C#.

?????OA??: ?????OA??

    Read the article

  • How do I mount my SD Card? I am using Ubuntu 10.04

    - by shobhit
    root@shobhit:/media# lsusb
Bus 002 Device 017: ID 14cd:125c Super Top
Bus 002 Device 003: ID 0c45:6421 Microdia
Bus 002 Device 002: ID 8087:0020
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 011: ID 413c:8160 Dell Computer Corp.
Bus 001 Device 006: ID 413c:8162 Dell Computer Corp.
Bus 001 Device 005: ID 413c:8161 Dell Computer Corp.
Bus 001 Device 004: ID 138a:0008 DigitalPersona, Inc
Bus 001 Device 003: ID 0a5c:4500 Broadcom Corp. BCM2046B1 USB 2.0 Hub (part of BCM2046 Bluetooth)
Bus 001 Device 002: ID 8087:0020
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

root@shobhit:/home/shobhit/scripts/internalUtilities# sudo lspci -v -nn

00:1a.0 USB Controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b3c] (rev 06) (prog-if 20) Subsystem: Dell Device [1028:0441] Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at fbc08000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port: BAR=1 offset=00a0 Capabilities: [98] PCIe advanced features <?> Kernel driver in use: ehci_hcd

00:1d.0 USB Controller [0c03]: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller [8086:3b34] (rev 06) (prog-if 20) Subsystem: Dell Device [1028:0441] Flags: bus master, medium devsel, latency 0, IRQ 23 Memory at fbc07000 (32-bit, non-prefetchable) [size=1K] Capabilities: [50] Power Management version 2 Capabilities: [58] Debug port: BAR=1 offset=00a0 Capabilities: [98] PCIe advanced features <?> Kernel driver in use: ehci_hcd

00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev a6) (prog-if 01) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=20, subordinate=20, sec-latency=32 Capabilities: [50] Subsystem: Dell Device [1028:0441]

00:1f.0 ISA bridge [0601]: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller [8086:3b0b] (rev 06) Subsystem: Dell Device [1028:0441] Flags: bus master, medium devsel, latency 0 Capabilities: [e0] Vendor Specific Information <?> Kernel modules: iTCO_wdt

00:1f.2 SATA controller [0106]: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller [8086:3b2f] (rev 06) (prog-if 01) Subsystem: Dell Device [1028:0441] Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 29 I/O ports at f070 [size=8] I/O ports at f060 [size=4] I/O ports at f050 [size=8] I/O ports at f040 [size=4] I/O ports at f020 [size=32] Memory at fbc06000 (32-bit, non-prefetchable) [size=2K] Capabilities: [80] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable+ Capabilities: [70] Power Management version 3 Capabilities: [a8] SATA HBA <?> Capabilities: [b0] PCIe advanced features <?> Kernel driver in use: ahci Kernel modules: ahci

00:1f.3 SMBus [0c05]: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller [8086:3b30] (rev 06) Subsystem: Dell Device [1028:0441] Flags: medium devsel, IRQ 3 Memory at fbc05000 (64-bit, non-prefetchable) [size=256] I/O ports at f000 [size=32] Kernel modules: i2c-i801

00:1f.6 Signal processing controller [1180]: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem [8086:3b32] (rev 06) Subsystem: Dell Device [1028:0441] Flags: bus master, fast devsel, latency 0, IRQ 3 Memory at fbc04000 (64-bit, non-prefetchable) [size=4K] Capabilities: [50] Power Management version 3 Capabilities: [80] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-

12:00.0 Network controller [0280]: Broadcom Corporation Device
[14e4:4727] (rev 01) Subsystem: Dell Device [1028:0010] Flags: bus master, fast devsel, latency 0, IRQ 17 Memory at fbb00000 (64-bit, non-prefetchable) [size=16K] Capabilities: [40] Power Management version 3 Capabilities: [58] Vendor Specific Information <?> Capabilities: [48] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable- Capabilities: [d0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting <?> Capabilities: [13c] Virtual Channel <?> Capabilities: [160] Device Serial Number cb-c0-8b-ff-ff-38-00-00 Capabilities: [16c] Power Budgeting <?> Kernel driver in use: wl Kernel modules: wl 13:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 03) Subsystem: Dell Device [1028:0441] Flags: bus master, fast devsel, latency 0, IRQ 28 I/O ports at e000 [size=256] Memory at d0b04000 (64-bit, prefetchable) [size=4K] Memory at d0b00000 (64-bit, prefetchable) [size=16K] Expansion ROM at fba00000 [disabled] [size=128K] Capabilities: [40] Power Management version 3 Capabilities: [50] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+ Capabilities: [70] Express Endpoint, MSI 01 Capabilities: [ac] MSI-X: Enable- Mask- TabSize=4 Capabilities: [cc] Vital Product Data <?> Capabilities: [100] Advanced Error Reporting <?> Capabilities: [140] Virtual Channel <?> Capabilities: [160] Device Serial Number 00-e0-4c-68-00-00-00-03 Kernel driver in use: r8169 Kernel modules: r8169 root@shobhit:/home/shobhit/scripts/internalUtilities# sudo lshw shobhit description: Portable Computer product: Vostro 3500 vendor: Dell Inc. version: A10 serial: FV1L3N1 width: 32 bits capabilities: smbios-2.6 dmi-2.6 smp-1.4 smp configuration: boot=normal chassis=portable cpus=2 uuid=44454C4C-5600-1031-804C-C6C04F334E31 *-core description: Motherboard product: 0G2R51 vendor: Dell Inc. physical id: 0 version: A10 serial: .FV1L3N1.CN7016612H00PW. slot: To Be Filled By O.E.M. *-cpu:0 description: CPU product: Intel(R) Core(TM) i5 CPU M 480 @ 2.67GHz vendor: Intel Corp. 
physical id: 4 bus info: cpu@0 version: 6.5.5 serial: 0002-0655-0000-0000-0000-0000 slot: CPU 1 size: 1197MHz capacity: 2926MHz width: 64 bits clock: 533MHz capabilities: boot fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp x86-64 constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: id=4 *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 64KiB capacity: 64KiB capabilities: internal write-back unified *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 512KiB capacity: 512KiB capabilities: internal varies unified *-cache:2 description: L3 cache physical id: 7 slot: L3-Cache size: 3MiB capacity: 3MiB capabilities: internal varies unified *-logicalcpu:0 description: Logical CPU physical id: 4.1 width: 64 bits capabilities: logical *-logicalcpu:1 description: Logical CPU physical id: 4.2 width: 64 bits capabilities: logical *-logicalcpu:2 description: Logical CPU physical id: 4.3 width: 64 bits capabilities: logical *-logicalcpu:3 description: Logical CPU physical id: 4.4 width: 64 bits capabilities: logical *-logicalcpu:4 description: Logical CPU physical id: 4.5 width: 64 bits capabilities: logical *-logicalcpu:5 description: Logical CPU physical id: 4.6 width: 64 bits capabilities: logical *-logicalcpu:6 description: Logical CPU physical id: 4.7 width: 64 bits capabilities: logical *-logicalcpu:7 description: Logical CPU physical id: 4.8 width: 64 bits capabilities: logical *-logicalcpu:8 description: Logical CPU physical id: 4.9 width: 64 bits capabilities: logical *-logicalcpu:9 description: Logical CPU physical id: 4.a width: 64 bits capabilities: logical *-logicalcpu:10 description: Logical CPU physical id: 4.b width: 64 bits capabilities: logical *-logicalcpu:11 description: Logical CPU physical id: 4.c width: 64 bits capabilities: logical *-logicalcpu:12 description: Logical CPU physical id: 4.d width: 64 bits capabilities: logical *-logicalcpu:13 description: Logical CPU physical id: 4.e width: 64 bits capabilities: logical *-logicalcpu:14 description: Logical CPU physical id: 4.f width: 64 bits capabilities: logical *-logicalcpu:15 description: Logical CPU physical id: 4.10 width: 64 bits capabilities: logical *-memory description: System Memory physical id: 1d slot: System board or motherboard size: 3GiB *-bank:0 description: DIMM Synchronous 1333 MHz (0.8 ns) product: HMT112S6TFR8C-H9 vendor: AD80 physical id: 0 serial: 5525C935 slot: DIMM_A size: 1GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: DIMM Synchronous 1333 MHz (0.8 ns) product: HMT125S6TFR8C-H9 vendor: AD80 physical id: 1 serial: 3441D6CA slot: DIMM_B size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-firmware description: BIOS vendor: Dell Inc. 
physical id: 0 version: A10 (10/25/2010) size: 64KiB capacity: 1984KiB capabilities: mca pci upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb zipboot biosbootspecification *-cpu:1 physical id: 1 bus info: cpu@1 version: 6.5.5 serial: 0002-0655-0000-0000-0000-0000 size: 1197MHz capacity: 1197MHz capabilities: vmx ht cpufreq configuration: id=4 *-logicalcpu:0 description: Logical CPU physical id: 4.1 capabilities: logical *-logicalcpu:1 description: Logical CPU physical id: 4.2 capabilities: logical *-logicalcpu:2 description: Logical CPU physical id: 4.3 capabilities: logical *-logicalcpu:3 description: Logical CPU physical id: 4.4 capabilities: logical *-logicalcpu:4 description: Logical CPU physical id: 4.5 capabilities: logical *-logicalcpu:5 description: Logical CPU physical id: 4.6 capabilities: logical *-logicalcpu:6 description: Logical CPU physical id: 4.7 capabilities: logical *-logicalcpu:7 description: Logical CPU physical id: 4.8 capabilities: logical *-logicalcpu:8 description: Logical CPU physical id: 4.9 capabilities: logical *-logicalcpu:9 description: Logical CPU physical id: 4.a capabilities: logical *-logicalcpu:10 description: Logical CPU physical id: 4.b capabilities: logical *-logicalcpu:11 description: Logical CPU physical id: 4.c capabilities: logical *-logicalcpu:12 description: Logical CPU physical id: 4.d capabilities: logical *-logicalcpu:13 description: Logical CPU physical id: 4.e capabilities: logical *-logicalcpu:14 description: Logical CPU physical id: 4.f capabilities: logical *-logicalcpu:15 description: Logical CPU physical id: 4.10 capabilities: logical *-pci description: Host bridge product: Core Processor DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 18 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0 *-display description: VGA compatible controller product: Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 18 width: 64 bits clock: 33MHz capabilities: msi pm bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:30 memory:fac00000-faffffff memory:c0000000-cfffffff(prefetchable) ioport:f080(size=8) *-communication UNCLAIMED description: Communication controller product: 5 Series/3400 Series Chipset HECI Controller vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 06 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: latency=0 resources: memory:fbc09000-fbc0900f *-usb:0 description: USB Controller product: 5 Series/3400 Series Chipset USB2 Enhanced Host Controller vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 06 width: 32 bits clock: 33MHz capabilities: pm debug bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:16 memory:fbc08000-fbc083ff *-multimedia description: Audio device product: 5 Series/3400 Series Chipset High Definition Audio vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 06 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:22 memory:fbc00000-fbc03fff *-pci:0 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 
version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:24 ioport:2000(size=4096) memory:bc000000-bc1fffff memory:bc200000-bc3fffff(prefetchable) *-pci:1 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:25 ioport:3000(size=4096) memory:fbb00000-fbbfffff memory:bc400000-bc5fffff(prefetchable) *-network description: Wireless interface product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:12:00.0 logical name: eth1 version: 01 serial: c0:cb:38:8b:aa:d8 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.60.48.36 ip=10.0.1.50 latency=0 multicast=yes wireless=IEEE 802.11 resources: irq:17 memory:fbb00000-fbb03fff *-pci:2 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 3 vendor: Intel Corporation physical id: 1c.2 bus info: pci@0000:00:1c.2 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:26 ioport:e000(size=4096) memory:fba00000-fbafffff ioport:d0b00000(size=1048576) *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:13:00.0 logical name: eth0 version: 03 serial: 78:2b:cb:cc:0e:2a size: 10MB/s capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10MB/s resources: irq:28 ioport:e000(size=256) memory:d0b04000-d0b04fff(prefetchable) memory:d0b00000-d0b03fff(prefetchable) memory:fba00000-fba1ffff(prefetchable) *-pci:3 description: PCI bridge product: 5 Series/3400 Series Chipset PCI Express Root Port 5 vendor: Intel Corporation physical id: 1c.4 bus info: pci@0000:00:1c.4 version: 06 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm bus_master cap_list configuration: driver=pcieport resources: irq:27 ioport:d000(size=4096) memory:fb000000-fb9fffff ioport:d0000000(size=10485760) *-usb:1 description: USB Controller product: 5 Series/3400 Series Chipset USB2 Enhanced Host Controller vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 06 width: 32 bits clock: 33MHz capabilities: pm debug bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:fbc07000-fbc073ff *-pci:4 description: PCI bridge product: 82801 Mobile PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: a6 width: 32 bits clock: 33MHz capabilities: pci bus_master cap_list *-isa description: ISA bridge product: Mobile 5 Series Chipset LPC Interface Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 06 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0 *-storage description: SATA controller product: 5 Series/3400 Series Chipset 6 port SATA AHCI 
Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 logical name: scsi0 logical name: scsi1 version: 06 width: 32 bits clock: 66MHz capabilities: storage msi pm bus_master cap_list emulated configuration: driver=ahci latency=0 resources: irq:29 ioport:f070(size=8) ioport:f060(size=4) ioport:f050(size=8) ioport:f040(size=4) ioport:f020(size=32) memory:fbc06000-fbc067ff *-disk description: ATA Disk product: WDC WD3200BEKT-7 vendor: Western Digital physical id: 0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 01.0 serial: WD-WX21AC0W1945 size: 298GiB (320GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=77e3ed41 *-volume:0 description: Windows NTFS volume physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 version: 3.1 serial: aa69-51c0 size: 98MiB capacity: 100MiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2012-04-03 02:00:15 filesystem=ntfs label=System Reserved state=clean *-volume:1 description: Windows NTFS volume physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 version: 3.1 serial: 9854ff5c-1dea-a147-84a6-624e758f44b8 size: 48GiB capacity: 48GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2012-04-10 13:55:31 filesystem=ntfs modified_by_chkdsk=true mounted_on_nt4=true resize_log_file=true state=dirty upgrade_on_mount=true *-volume:2 description: Extended partition physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 size: 48GiB capacity: 48GiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume:0 description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 1952MiB capabilities: nofs *-logicalvolume:1 description: Linux filesystem partition physical id: 6 logical name: /dev/sda6 logical name: / capacity: 46GiB configuration: mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,barrier=1,data=ordered state=mounted *-volume:3 description: Windows NTFS volume physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 logical name: /media/56AA8094AA807273 version: 3.1 serial: 22a29e8d-56c7-9a4a-adea-528103948f6d size: 200GiB capacity: 200GiB capabilities: primary ntfs initialized configuration: clustersize=4096 created=2012-04-02 20:17:15 filesystem=ntfs modified_by_chkdsk=true mount.fstype=fuseblk mount.options=rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 mounted_on_nt4=true resize_log_file=true state=mounted upgrade_on_mount=true *-cdrom description: DVD-RAM writer product: DVD+-RW TS-L633J vendor: TSSTcorp physical id: 1 bus info: scsi@1:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/scd0 logical name: /dev/sr0 version: D200 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc *-serial UNCLAIMED description: SMBus product: 5 Series/3400 Series Chipset SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 06 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:fbc05000-fbc050ff ioport:f000(size=32) *-generic UNCLAIMED description: Signal processing controller product: 5 Series/3400 Series Chipset Thermal Subsystem vendor: Intel Corporation physical id: 1f.6 bus info: pci@0000:00:1f.6 version: 06 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: latency=0 resources: 
memory:fbc04000-fbc04fff
*-scsi physical id: 2 bus info: usb@2:1.1 logical name: scsi15 capabilities: emulated scsi-host configuration: driver=usb-storage
*-disk description: SCSI Disk physical id: 0.0.0 bus info: scsi@15:0.0.0 logical name: /dev/sdb

I have tried all options like fdisk /dev/sdb and pmount /dev/sdb but nothing is working. Please guide me.

    Read the article
