Search Results

Search found 14936 results on 598 pages for 'format conversion'.


  • SharePoint (Active Directory account creation mode) - Using STSADM

    - by vivek m
    This question is regarding using STSADM command to create new site collection in Active Directory Account creation mode. My setup is like this- I have 2 virtual PCs in a Windows XP Pro SP3 host. Both VPCs are Windows Server 2003 R2. One VPC acts as the DC, DNS Server, DHCP server, has Active Directory installed and is also the Database Server. The other VPC is the domain member and it is the IIS web server, POP/SMTP server and it has WSS 3.0 installed. I created a new site using the GUI in Central Admin page. For creating a site collection under the newly created site, I needed to use the STSADM command line tool since it cannot be done from Central Admin page in Active Directory Account creation mode. Thats where i got into a problem- stsadm.exe -o createsite -url http://vivek-c5ba48dca:1111/sites/Sales -owneremail [email protected] -sitetemplate STS#1 The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC) The following is the output from the SHarepoint LOG- * stsadm: Running createsite 9e7d Medium Initializing the configuration database connection. 95kp High Creating site http://vivek-c5ba48dca:1111/sites/Sales in content database WSS_Content_Sharepoint_1111 95kq High Creating top level site at http://vivek-c5ba48dca:1111/sites/Sales 72jz Medium Creating site: URL "/sites/Sales" 72e1 High Unable to get domain DNS or forest DNS for domain sharepointsvc.com. ErrorCode=1212 8jvc Warning #1e0046: Adding user "spsalespadmin" to OU "sharepoint_ou" in domain "sharepointsvc.com" FAILED with HRESULT -2147023684. 72k1 High Cannot create site: "http://vivek-c5ba48dca:1111/sites/Sales" for owner "@\@", Error: , 0x800704bc 8e2s Medium Unknown SPRequest error occurred. More information: 0x800704bc 95ks Critical The site /sites/Sales could not be created. The following exception occured: The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC). 72ju High stsadm: The format of the specified domain name is invalid. (Exception from HRESULT: 0x800704BC) Callstack: at Microsoft.SharePoint.Library.SPRequest.CreateSite(Guid gApplicationId, String bstrUrl, Int32 lZone, Guid gSiteId, Guid gDatabaseId, String bstrDatabaseServer, String bstrDatabaseName, String bstrDatabaseUsername, String bstrDatabasePassword, String bstrTitle, String bstrDescription, UInt32 nLCID, String bstrWebTemplate, String bstrOwnerLogin, String bstrOwnerUserKey, String bstrOwnerName, String bstrOwnerEmail, String bstrSecondaryContactLogin, String bstrSecondaryContactUserKey, String bstrSecondaryContactName, String bstrSecondaryContactEmail, Boolean bADAccountMode, Boolean bHostHeaderIsSiteName) at Microsoft.SharePoint.Administration.SPSiteCollection.Add(SPContentDataba... 72ju High ...se database, String siteUrl, String title, String description, UInt32 nLCID, String webTemplate, String ownerLogin, String ownerName, String ownerEmail, String secondaryContactLogin, String secondaryContactName, String secondaryContactEmail, String quotaTemplate, String sscRootWebUrl, Boolean useHostHeaderAsSiteName) at Microsoft.SharePoint.Administration.SPSiteCollection.Add(String siteUrl, String title, String description, UInt32 nLCID, String webTemplate, String ownerLogin, String ownerName, String ownerEmail, String secondaryContactLogin, String secondaryContactName, String secondaryContactEmail, Boolean useHostHeaderAsSiteName) at Microsoft.SharePoint.StsAdmin.SPCreateSite.Run(StringDictionary keyValues) at Microsoft.SharePoint.StsAdmin.SPStsAdmin.RunOperation(SPGlobalAdmi... 
72ju High ...n globalAdmin, String strOperation, StringDictionary keyValues, SPParamCollection pars) 8wsw High Now terminating ULS (STSADM.EXE, onetnative.dll) * It seems to me that the trouble started with this: "Unable to get domain DNS or forest DNS for domain sharepointsvc.com. ErrorCode=1212". The network connection to the sharepointsvc.com domain seems to be fine:
C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o getproperty -pn ADAccountDomain
<Property Exist="Yes" Value="sharepointsvc.com" />
C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>stsadm -o getproperty -pn ADAccountOU
<Property Exist="Yes" Value="sharepoint_ou" />
C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN>nslookup sharepointsvc.com
Server: vm-winsrvr2003.sharepointsvc.com
Address: 192.168.0.5
Name: sharepointsvc.com
Addresses: 192.168.0.21, 192.168.0.5
Is there any way of checking the domain connection from within SharePoint (for example, using some getproperty operation of the STSADM tool)? Does anyone have any clue about this? (Any pointers would be very helpful.) Thanks.

    Read the article

  • Marking multi-level nested forms as "dirty" in Rails

    - by Charles Kihe
    I have a three-level multi-nested form in Rails. The setup is like this: Projects have many Milestones, and Milestones have many Notes. The goal is to have everything editable within the page with JavaScript, where we can add multiple new Milestones to a Project within the page, and add new Notes to new and existing Milestones. Everything works as expected, except that when I add new notes to an existing Milestone (new Milestones work fine when adding notes to them), the new notes won't save unless I edit any of the fields that actually belong to the Milestone to mark the form "dirty"/edited. Is there a way to flag the Milestone so that the new Notes that have been added will save? Edit: sorry, it's hard to paste in all of the code because there's so many parts, but here goes: Models class Project < ActiveRecord::Base has_many :notes, :dependent => :destroy has_many :milestones, :dependent => :destroy accepts_nested_attributes_for :milestones, :allow_destroy => true accepts_nested_attributes_for :notes, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Milestone < ActiveRecord::Base belongs_to :project has_many :notes, :dependent => :destroy accepts_nested_attributes_for :notes, :allow_destroy => true, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Note < ActiveRecord::Base belongs_to :milestone belongs_to :project scope :newest, lambda { |*args| order('created_at DESC').limit(*args.first || 3) } end I'm using an jQuery-based, unobtrusive version of Ryan Bates' combo helper/JS code to get this done. Application Helper def add_fields_for_association(f, association, partial) new_object = f.object.class.reflect_on_association(association).klass.new fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder| render(partial, :f => builder) end end I render the form for the association in a hidden div, and then use the following JavaScript to find it and add it as needed. JavaScript function addFields(link, association, content, func) { var newID = new Date().getTime(); var regexp = new RegExp("new_" + association, "g"); var form = content.replace(regexp, newID); var link = $(link).parent().next().before(form).prev(); if (func) { func.call(); } return link; } I'm guessing the only other relevant piece of code that I can think of would be the create method in the NotesController: def create respond_with(@note = @owner.notes.create(params[:note])) do |format| format.js { render :json => @owner.notes.newest(3).all.to_json } format.html { redirect_to((@milestone ? [@project, @milestone, @note] : [@project, @note]), :notice => 'Note was successfully created.') } end end The @owner ivar is created in the following before filter: def load_milestone @milestone = @project.milestones.find(params[:milestone_id]) if params[:milestone_id] end def determine_owner @owner = load_milestone @owner ||= @project end Thing is, all this seems to work fine, except when I'm adding new notes to existing milestones. The milestone has to be "touched" in order for new notes to save, or else Rails won't pay attention.

    Read the article

  • jQuery small refactoring, JSON call

    - by Alexander Corotchi
    Hi everybody, I need you suggestion to make some refactoring in jquery code because now it looks terrible for me. I have 4 json calls but the difference it is just the URL call. EX: var userId = MyuserID; var perPage = '45'; var showOnPage = '45'; var tag = 'tag1'; var tag1 = 'tag2'; var tag2 = 'tagn'; $.getJSON('http://api.flickr.com/services/rest/?format=json&method='+ 'flickr.photos.search&api_key=' + apiKey + '&user_id=' + userId + '&tags=' + tag + '&per_page=' + perPage + '&jsoncallback=?', function(data){ var classShown = 'class="lightbox"'; var classHidden = 'class="lightbox hidden"'; $.each(data.photos.photo, function(i, rPhoto){ var basePhotoURL = 'http://farm' + rPhoto.farm + '.static.flickr.com/' + rPhoto.server + '/' + rPhoto.id + '_' + rPhoto.secret; var thumbPhotoURL = basePhotoURL + '_s.jpg'; var mediumPhotoURL = basePhotoURL + '.jpg'; var photoStringStart = '<li><a '; var photoStringEnd = 'title="' + rPhoto.title + '" href="'+ mediumPhotoURL +'"><img src="' + thumbPhotoURL + '" alt="' + rPhoto.title + '"/></a><span>'+rPhoto.title+'</span></li>;' var photoString = (i < showOnPage) ? photoStringStart + classShown + photoStringEnd : photoStringStart + classHidden + photoStringEnd; $(photoString).appendTo("#flickr ul"); }); $("#flickr a").fancybox(); }); $.getJSON('http://api.flickr.com/services/rest/?format=json&method='+ 'flickr.photos.search&api_key=' + apiKey + '&user_id=' + userId + '&tags=' + tag1 + '&per_page=' + perPage + '&jsoncallback=?', function(data){ var classShown = 'class="lightbox"'; var classHidden = 'class="lightbox hidden"'; $.each(data.photos.photo, function(i, rPhoto){ var basePhotoURL = 'http://farm' + rPhoto.farm + '.static.flickr.com/' + rPhoto.server + '/' + rPhoto.id + '_' + rPhoto.secret; var thumbPhotoURL = basePhotoURL + '_s.jpg'; var mediumPhotoURL = basePhotoURL + '.jpg'; var photoStringStart = '<li><a '; var photoStringEnd = 'title="' + rPhoto.title + '" href="'+ mediumPhotoURL +'"><img src="' + thumbPhotoURL + '" alt="' + rPhoto.title + '"/></a><span>'+rPhoto.title+'</span></li>;' var photoString = (i < showOnPage) ? photoStringStart + classShown + photoStringEnd : photoStringStart + classHidden + photoStringEnd; $(photoString).appendTo(".SetPinos1 ul"); }); $(".Sets a").fancybox(); }); $.getJSON('http://api.flickr.com/services/rest/?format=json&method='+ 'flickr.photos.search&api_key=' + apiKey + '&user_id=' + userId + '&tags=' + tagn + '&per_page=' + perPage + '&jsoncallback=?', function(data){ var classShown = 'class="lightbox"'; var classHidden = 'class="lightbox hidden"'; $.each(data.photos.photo, function(i, rPhoto){ var basePhotoURL = 'http://farm' + rPhoto.farm + '.static.flickr.com/' + rPhoto.server + '/' + rPhoto.id + '_' + rPhoto.secret; var thumbPhotoURL = basePhotoURL + '_s.jpg'; var mediumPhotoURL = basePhotoURL + '.jpg'; var photoStringStart = '<li><a '; var photoStringEnd = 'title="' + rPhoto.title + '" href="'+ mediumPhotoURL +'"><img src="' + thumbPhotoURL + '" alt="' + rPhoto.title + '"/></a><span>'+rPhoto.title+'</span></li>;' var photoString = (i < showOnPage) ? photoStringStart + classShown + photoStringEnd : photoStringStart + classHidden + photoStringEnd; $(photoString).appendTo(".SetPinos ul"); }); $(".Sets a").fancybox(); }); var tag is only one difference in this url : Can somebody help me not to repeat all this stuff ?? Sorry by so long garbage :(

    Read the article

  • How to create a Facebook-App style notifications custom table?

    - by Tim Büthe
    I want to add a custom table to my iPhone app, that should look and work like the one used in the facebook app showing the notifications. It should contain rows with links in it. The text should be black, while the tap-able parts should appear blue. As a already figured out, labels only have one font, color and so on and you can't mix styles within it, I would have to use different components. The tab-able text parts maybe UIButtons with custom style without any border. I tried different approches to build the custom cell. I used Interface Builder, to layout components, but since the different parts have variable lengths they overlaid each other. I also tried to add instances of UIButton and UILabel as a subview to the cell's contentView as descibed in Apple's tutorial. The problem with this is that: The position and size of the components is fix and set when they get created using "initWithFrame:...". My code in "tableView:cellForRowAtIndexPath:" looks like this: UIButton *usernameButton; UILabel *mainLabel, *secondLabel; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { // cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; cell = [[[UITableViewCell alloc] initWithFrame:CGRectMake(0.0, 0.0, 220.0, 60.0) reuseIdentifier:CellIdentifier] autorelease]; [cell setBackgroundColor: [UIColor yellowColor]]; usernameButton = [[[UIButton alloc] initWithFrame:CGRectMake(0.0, 0.0, 220.0, 15.0)] autorelease]; usernameButton.tag = USERNAME_BUTTON_TAG; // ... format, font etc. usernameButton.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleHeight; [cell.contentView addSubview:usernameButton]; mainLabel = [[[UILabel alloc] initWithFrame:CGRectMake(0.0, 20.0, 220.0, 15.0)] autorelease]; mainLabel.tag = MAINLABEL_TAG; // ... format, font etc. mainLabel.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleHeight; [cell.contentView addSubview:mainLabel]; secondLabel = [[[UILabel alloc] initWithFrame:CGRectMake(0.0, 40.0, 220.0, 15.0)] autorelease]; secondLabel.tag = SECONDLABEL_TAG; // ... format, font etc. secondLabel.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleHeight; [cell.contentView addSubview:secondLabel]; } else { usernameButton = (UIButton *) [cell.contentView viewWithTag:USERNAME_BUTTON_TAG]; mainLabel = (UILabel *)[cell.contentView viewWithTag:MAINLABEL_TAG]; secondLabel = (UILabel *)[cell.contentView viewWithTag:SECONDLABEL_TAG]; } usernameButton.titleLabel.text = @"Peter"; mainLabel.text = @"sent you a message: ..."; secondLabel.text = @"11 minutes ago"; return cell; In a nutshell, I want to add the labels and buttons without a fix position or size, the buttons clickable and the cell height should automatically set according to the content. Here is an example of the cell's content: Peter sent you a message: "Hello! What are you doing?" Peter should be click/tab-able and his message, which has a variable length, should be italic and wrapped if necessary.

    Read the article

  • Export GridView to Excel

    - by nCdy
    using Matt's util code (a bit edited for Unicode text) public class GridViewExportUtil { /// <param name="fileName"></param> /// <param name="gv"></param> public static void Export(string fileName, GridView gv) { HttpContext.Current.Response.Clear(); HttpContext.Current.Response.ContentType = "application/ms-excel"; HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache); HttpContext.Current.Response.Charset = System.Text.Encoding.Unicode.EncodingName; HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.Unicode; HttpContext.Current.Response.BinaryWrite(System.Text.Encoding.Unicode.GetPreamble()); HttpContext.Current.Response.AddHeader( "content-disposition", string.Format(//"content-disposition", "attachment; filename=Report.xml"));//, fileName)); // Need .XLS file using (StringWriter sw = new StringWriter()) { using (HtmlTextWriter htw = new HtmlTextWriter(sw)) { // Create a form to contain the grid Table table = new Table(); // add the header row to the table if (gv.HeaderRow != null) { GridViewExportUtil.PrepareControlForExport(gv.HeaderRow); table.Rows.Add(gv.HeaderRow); } // add each of the data rows to the table foreach (GridViewRow row in gv.Rows) { GridViewExportUtil.PrepareControlForExport(row); table.Rows.Add(row); } // add the footer row to the table if (gv.FooterRow != null) { GridViewExportUtil.PrepareControlForExport(gv.FooterRow); table.Rows.Add(gv.FooterRow); } // render the table into the htmlwriter table.RenderControl(htw); // render the htmlwriter into the response HttpContext.Current.Response.Write(sw.ToString()); HttpContext.Current.Response.End(); } } } /// <summary> /// Replace any of the contained controls with literals /// </summary> /// <param name="control"></param> private static void PrepareControlForExport(Control control) { for (int i = 0; i < control.Controls.Count; i++) { Control current = control.Controls[i]; if (current is LinkButton) { control.Controls.Remove(current); control.Controls.AddAt(i, new LiteralControl((current as LinkButton).Text)); } else if (current is ImageButton) { control.Controls.Remove(current); control.Controls.AddAt(i, new LiteralControl((current as ImageButton).AlternateText)); } else if (current is HyperLink) { control.Controls.Remove(current); control.Controls.AddAt(i, new LiteralControl((current as HyperLink).Text)); } else if (current is DropDownList) { control.Controls.Remove(current); control.Controls.AddAt(i, new LiteralControl((current as DropDownList).SelectedItem.Text)); } else if (current is CheckBox) { control.Controls.Remove(current); control.Controls.AddAt(i, new LiteralControl((current as CheckBox).Checked ? "True" : "False")); } if (current.HasControls()) { GridViewExportUtil.PrepareControlForExport(current); } } } Question : How to make downloaded file editable (not Read only) And ... XLS wont opens with Unicode format. When I changing format to UTF8 I can't see Russian words :S Second question : How to make Unicode for .xls Third question : How can I save table lines ? Thank you.
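    A likely culprit for both the garbled Russian text and the filename problem above is that the headers and the bytes being written disagree: the content-disposition advertises "Report.xml" while the body is Unicode-encoded HTML served as application/ms-excel. Below is a minimal sketch of a response setup that keeps everything consistent; the helper name and the .xls filename are illustrative assumptions rather than the original utility's API, and the read-only prompt is usually a side effect of the browser opening the attachment from its temporary files rather than anything this code controls.

        using System.Text;
        using System.Web;

        public static class ExcelResponseHelper
        {
            // Sketch only: prepare the response so the rendered GridView HTML opens
            // in Excel with readable non-Latin (e.g. Cyrillic) characters.
            public static void PrepareResponse(HttpResponse response, string fileName)
            {
                response.Clear();
                response.ContentType = "application/ms-excel";
                // Advertise the same name and extension as the content actually sent.
                response.AddHeader("content-disposition",
                    string.Format("attachment; filename={0}.xls", fileName));
                // UTF-8 plus a byte-order mark lets Excel detect the encoding;
                // the charset, the encoding and the preamble must all agree.
                response.ContentEncoding = Encoding.UTF8;
                response.Charset = "utf-8";
                response.BinaryWrite(Encoding.UTF8.GetPreamble());
                // ...render the table into a StringWriter/HtmlTextWriter as in the
                // existing code, then response.Write(sw.ToString()) and response.End().
            }
        }

    For a genuine .xls file (with guaranteed editable cells and preserved gridlines), writing real spreadsheet markup or using a spreadsheet library would be more robust than HTML labelled as Excel, but the header/encoding consistency above is the first thing to rule out.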

    Read the article

  • PHP text formatting: Detecting several symbols in a row

    - by ilnur777
    I have a kind of strange thing that I really nead for my text formating. Don't ask me please why I did this strange thing! ;-) So, my PHP script replaces all line foldings "\n" with one of the speacial symbol like "|". When I insert text data to database, the PHP script replaces all line foldings with the symbol "|" and when the script reads text data from the database, it replaces all special symbols "|" with line folding "\n". I want to restrict text format in the way that it will cut line foldings if there are more than 2 line foldings used in each separating texts. Here is the example of the text I want the script to format: this is text... this is text... this is text...this is text...this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... I want to restict format like: this is text... this is text... this is text...this is text...this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... this is text... So, at the first example there is only one line folding between 2 texts and on the second example there are 3 line foldings between 2 texts. How it can be possible to replace more than 2 line foldings symbols "|" if they are detected on the text? This is a kind of example I want the script to do: $text = str_replace("|||", "||", $text); $text = str_replace("||||", "||", $text); $text = str_replace("|||||", "||", $text); $text = str_replace("||||||", "||", $text); $text = str_replace("|||||||", "||", $text); ... $text = str_replace("||||||||||", "||", $text); $text = str_replace("|", "<br>", $text);

    Read the article

  • Linq2XML: Get the count of elements where candidate has won2

    - by user287798
    I had an XML Document in the format below <Pronvice_Data> <Pronvice>PronviceA</Pronvice> <Registered_Voters>115852</Registered_Voters> <Sam_Kea>100</Sam_Kea> <Jeje>500</Jeje> <John_Doe>400</John_Doe> </Pronvice_Data> <Pronvice_Data> <Pronvice>PronviceA</Pronvice> <Registered_Voters>25852</Registered_Voters> <Sam_Kea>200</Sam_Kea> <Jeje>100</Jeje> <John_Doe>300</John_Doe> </Pronvice_Data> <Pronvice_Data> <Pronvice>PronviceC</Pronvice> <Registered_Voters>317684</Registered_Voters> <Sam_Kea>1000</Sam_Kea> <Jeje>1200</Jeje> <John_Doe>190</John_Doe> </Pronvice_Data> As suggested by help here,I used the code below to give me this format below C# Code: void Main() { XDocument xmlVectors2 = XDocument.Load(@"C:\\Province_Data.xml"); var elemList= from elem in xmlVectors2.Descendants("Province_Data") where elem.Elements().Count() > 1 select elem; foreach(XElement element in elemList) { XElement elem1= new XElement("Candidates", new XElement(element.Element("Sam_Kea").Name), new XElement(element.Element("Jeje").Name), new XElement(element.Element("John_Doe").Name) ); XElement elem2 = new XElement("Provinces", new XAttribute("Province", (string)element.Element("Province").Value), new XAttribute("Registered_Voters",element.Element("Registered_Voters").Value), new XElement ("Candidate", new XAttribute("Name",element.Element("Sam_Kea").Name), new XAttribute("Votes",element.Element("Sam_Kea").Value) ), new XElement ("Candidate", new XAttribute("Name",element.Element("Jeje").Name), new XAttribute("Votes",element.Element("Jeje").Value) ), new XElement ("Candidate", new XAttribute("Name",element.Element("John_Doe").Name), new XAttribute("Votes",element.Element("John_Doe").Value) )); } } New Format: <root> <Candidates> <Sam_Kea/> <Jeje/> <John_Doe/> </Candidates> <Provinces> <Province name='ProvinceA' Registered_Voters='115852'> <Candidate name='Sam_Kea' votes='100'/> <Candidate name='Jeje' votes='500'/> <Candidate name='John_Doe' votes='400'/> </Province> <Province name='ProvinceB' Registered_Voters='25852'> <Candidate name='Sam_Kea' votes='200'/> <Candidate name='Jeje' votes='100'/> <Candidate name='John_Doe' votes='300'/> </Province> <Province name='ProvinceC' Registered_Voters='317684'> <Candidate name='Sam_Kea' votes='1000'/> <Candidate name='Jeje' votes='1200'/> <Candidate name='John_Doe' votes='190'/> </Province> </Provinces> </root> How can i use the "foreach" in in my code to get the result in this link. http://stackoverflow.com/questions/2396203/get-the-count-of-elements-where-candidate-has-won
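    For the linked follow-up question (counting, per candidate, the provinces in which that candidate got the most votes), a grouping query over the transformed <Province> elements can replace manual foreach bookkeeping. The sketch below assumes the element and attribute names produced by the code above (Province, Candidate, Name, Votes) and a file path chosen purely for illustration:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class WinnerCounts
        {
            static void Main()
            {
                // Assumes the transformed XML: <Province ...><Candidate Name="..." Votes="..."/>...</Province>
                XDocument doc = XDocument.Load(@"C:\Province_Data_transformed.xml");

                var winners =
                    from province in doc.Descendants("Province")
                    let top = province.Elements("Candidate")
                                      .OrderByDescending(c => (int)c.Attribute("Votes"))
                                      .First()
                    group province by (string)top.Attribute("Name") into g
                    select new { Candidate = g.Key, ProvincesWon = g.Count() };

                foreach (var winner in winners)
                    Console.WriteLine("{0} won {1} province(s)", winner.Candidate, winner.ProvincesWon);
            }
        }

    Ties are not handled here; OrderByDescending(...).First() simply picks one of the tied candidates, so a real tally would need a rule for that case.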

    Read the article

  • Unable to add separator in list view

    - by Suru
    This is my code @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.email_list_main); emailResults = new ArrayList<GetEmailFromDatabase>(); //int[] colors = {0,0xFFFF0000,0}; //getListView().setDivider(new GradientDrawable(Orientation.RIGHT_LEFT, colors)); //getListView().setDividerHeight(2); emailListFeedAdapter = new EmailListFeedAdapter(this, R.layout.email_listview_row, emailResults); setListAdapter(this.emailListFeedAdapter); getResults(); if(emailResults != null && emailResults.size() > -1){ emailListFeedAdapter.notifyDataSetChanged(); for(int i=0;i< emailResults.size();i++){ try { Here I getting email Sent date emailListFeedAdapter.add( emailResults.get(i)); datetime_text1 = emailResults.get(i).getDate(); formatter1 = new SimpleDateFormat(); formatter1 = DateFormat.getDateInstance((DateFormat.MEDIUM)); Calendar currentDate1 = Calendar.getInstance(); Item_Date1 = formatter1.parse(datetime_text1); current_Date1 = formatter1.format(currentDate1.getTime()); current_System_Date1 = formatter1.parse(current_Date1); currentDate1.add(Calendar.DATE, -1); yesterdaydate = formatter1.format(currentDate1.getTime()); yeaterday_Date = formatter1.parse(yesterdaydate); currentDate1.add(Calendar.DATE, -2); threeDaysback = formatter1.format(currentDate1.getTime()); Three_Days_Back = formatter1.parse(threeDaysback); Here I am comparing current date with list view item date, and here is my problem, dates are matching but it is not entering in if condition I tried in so many ways but nothing worked the code for separator is bellow. if(Item_Date.compareTo(current_System_Date)==0){ if(index1){ emailListFeedAdapter.addSeparatorItem("SEPARATOR"); //i--; index1=false; } } else if(yeaterday_Date.compareTo(Item_Date1)==0){ if(index2){ emailListFeedAdapter.addSeparatorItem("SEPARATOR"); //i--; index2 = false; } } else if(Item_Date1.compareTo(Three_Days_Back)==0){ if(index3){ emailListFeedAdapter.addSeparatorItem("SEPARATOR"); //i--; index3 = false; } } } catch (ParseException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } } In EmailListFeedAdapter private TreeSet<Integer> mSeparatorsSet = new TreeSet<Integer>(); public void addSeparatorItem(final String item) { //itemss.add(emailResults.get(0)); // save separator position mSeparatorsSet.add(itemss.size() - 1); notifyDataSetChanged(); } @Override public int getItemViewType(int position) { return mSeparatorsSet.contains(position) ? TYPE_SEPARATOR : TYPE_ITEM; } holder = new ViewHolder(); switch (type) { case TYPE_ITEM: emailView= inflater.inflate(R.layout.email_listview_row, null); break; case TYPE_SEPARATOR: emailView= inflater.inflate(R.layout.item2, null); holder.textView = (TextView)emailView.findViewById(R.id.textSeparator); emailView.setTag(holder); holder.textView.setText("SEPARATOR"); break; } Here is ViewHolder class public static class ViewHolder { public TextView textView; } if anybody knows then please tell me where I am doing wrong. Thanx

    Read the article

  • Ruby on Rails is complaining about a method that doesn't exist that is built into Active Record. Wha

    - by grg-n-sox
    This will probably just be a simple problem and I am just blind or an idiot but I could use some help. So I am going over some basic guides in Rails, reviewing the basics and such for an upcoming exam. One of the guides included was the sort-of-standard getting started guide over at guide.rubyonrails.org. Here is the link if you need it. Also all my code is for my app is from there, so I have no problem releasing any of my code since it should be the same as shown there. I didn't do a copy paste, but I basically was typing with Vim in one half of my screen and the web page in the other half, typing what I see. http://guides.rubyonrails.org/getting_started.html So like I said, I am going along the guide when I noticed past a certain point in the tutorial, I was always getting an error on the site. To find the section of code, just hit Ctrl+f on the page (or whatever you have search/find set to) and enter "accepts_". This should immediately direct you to this chunk of code. class Post < ActiveRecord::Base validates_presence_of :name, :title validates_length_of :title, :minimum => 5 has_many :comments has_many :tags accepts_nested_attributes_for :tags, :allow_destroy => :true , :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } } end So I tried putting this in my code. It is in ~/Rails/blog/app/models/post.rb in case you are wondering. However, even after all the other code I put in past that in the guide, hoping I was just missing some line of code that would come up later in the guide. But nothing, same error every time. This is what I get. NoMethodError in PostsController#index undefined method `accepts_nested_attributes_for' for #<Class:0xb7109f98> /usr/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/base.rb:1833:in `method_missing' app/models/post.rb:7 app/controllers/posts_controller.rb:9:in `index' Request Parameters: None Response Headers: {"Content-Type"=>"", "cookie"=>[], "Cache-Control"=>"no-cache"} Now, I copied the above code from the guide. The two code sections I edited mentioned in the error message I will paste as is below. class PostsController < ApplicationController # GET /posts # GET /posts.xml before_filter :find_post, :only => [:show, :edit, :update, :destroy] def index @posts = Post.find(:all) # <= the line 9 referred to in error message respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } end end class Post < ActiveRecord::Base validates_presence_of :name, :title validates_length_of :title, :minimum => 5 has_many :comments has_many :tags accepts_nested_attributes_for :tags, :allow_destroy => :true , # <= problem :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } } end Also here is gem local gem list. I do note that they are a bit out of date, but the default Rails install any of the school machines (an environment likely for my exam) is basically 'gem install rails --version 2.2.2' and since they are windows machines, they come with all the normal windows ruby gems that comes with the ruby installer. However, I am running this off a Debian virtual machine of mine, but trying to set it up similarly and I figured the windows ruby gems wouldn't change anything in Rails. *** LOCAL GEMS *** actionmailer (2.2.2) actionpack (2.2.2) activerecord (2.2.2) activeresource (2.2.2) activesupport (2.2.2) gem_plugin (0.2.3) hpricot (0.8.2) linecache (0.43) log4r (1.1.7) ptools (1.1.9) rack (1.1.0) rails (2.2.2) rake (0.8.7) sqlite3-ruby (1.2.3) So any ideas on what the problem is? Thanks in advanced.

    Read the article

  • How to copy this portion of a text file out and put it into a hash using Rails? (VATsim data file)

    - by Rusty Broderick
    Hi I'm trying to work out how i can cut out the section between !CLIENTS and the '; ;' and then to parse it into a hash in order to make an xml file. Honestly have no idea how to do it. The file is as follows: vatsim-data.txt original file here ; Created at 30/12/2010 01:29:14 UTC by Data Server V4.0 ; ; Data is the property of VATSIM.net and is not to be used for commercial purposes without the express written permission of the VATSIM.net Founders or their designated agent(s ). ; ; Sections are: ; !GENERAL contains general settings ; !CLIENTS contains informations about all connected clients ; !PREFILE contains informations about all prefiled flight plans ; !SERVERS contains a list of all FSD running servers to which clients can connect ; !VOICE SERVERS contains a list of all running voice servers that clients can use ; ; Data formats of various sections are: ; !GENERAL section - VERSION is this data format version ; RELOAD is time in minutes this file will be updated ; UPDATE is the last date and time this file has been updated. Format is yyyymmddhhnnss ; ATIS ALLOW MIN is time in minutes to wait before allowing manual Atis refresh by way of web page interface ; CONNECTED CLIENTS is the number of clients currently connected ; !CLIENTS section - callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb: ; !PREFILE section - callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb: ; !SERVERS section - ident:hostname_or_IP:location:name:clients_connection_allowed: ; !VOICE SERVERS section - hostname_or_IP:location:name:clients_connection_allowed:type_of_voice_server: ; ; Field separator is : character ; ; !GENERAL: VERSION = 8 RELOAD = 2 UPDATE = 20101230012914 ATIS ALLOW MIN = 5 CONNECTED CLIENTS = 515 ; ; !VOICE SERVERS: voice2.vacc-sag.org:Nurnberg:Europe-CW:1:R: voice.vatsim.fi:Finland - Sponsored by Verkkokauppa.com and NBL Solutions:Finland:1:R: rw.liveatc.net:USA, California:Liveatc:1:R: rw1.vatpac.org:Melbourne, Australia:Oceania:1:R: spain.vatsim.net:Spain:Vatsim Spain Server:1:R: voice.nyartcc.org:Sponsored by NY ARTCC:NY-ARTCC:1:R: voice.zhuartcc.net:Sponsored by Houston ARTCC:ZHU-ARTCC:1:R: ; ; !CLIENTS: 01PD:1090811:prentis gibbs KJFK:PILOT::40.64841:-73.81030:15:0::0::::USA-E:100:1:1200::::::::::::::::::::20101230010851:28:30.1:1019: 4X-BRH:1074589:george sandoval LLJR:PILOT::50.05618:-125.84429:10819:206:C337/G:150:CYAL:FL120:CCI9:EUROPE-C2:100:1:6043:::2:I:110:110:1:26:2:59:: 
/T/:DCT:0:0:0:0:::20101230005323:129:29.76:1007: 50125:1109107:Dave Frew KEDU:PILOT::46.52736:-121.95317:23877:471:B/B744/F:530:KTCM:30000:KLSV:USA-E:100:1:7723:::1:I:0:116:0:0:0:0:::GPS DIRECT.:0:0:0:0:::20101230012346:164:29.769:1008: 85013:1126003:Dmitry Abramov UWWW:PILOT::76.53819:71.54782:33444:423:T/ZZZZ/G:500:UUDD:FL330:ULAA:EUROPE-C2:100:1:2200:::2:I:0:2139:0:0:0:0:ULLI::BITSA DCT WM/N0485S1010 DCT KS DCT NE R22 ULWW B153 LAPEK B210 SU G476 OLATA:0:0:0:0:::20101229215815:62:53.264:1803: ; ; !SERVERS: EUROPE-C2:88.198.19.202:Europe:Center Europe Server Two:1: ; ; END I want to format the html with the tags with client being the parent and the nested tags as follows: callsign:cid:realname:clienttype:frequency:latitude:longitude:altitude:groundspeed:planned_aircraft:planned_tascruise:planned_depairport:planned_altitude:planned_destairport:server:protrevision:rating:transponder:facilitytype:visualrange:planned_revision:planned_flighttype:planned_deptime:planned_actdeptime:planned_hrsenroute:planned_minenroute:planned_hrsfuel:planned_minfuel:planned_altairport:planned_remarks:planned_route:planned_depairport_lat:planned_depairport_lon:planned_destairport_lat:planned_destairport_lon:atis_message:time_last_atis_received:time_logon:heading:QNH_iHg:QNH_Mb: Any help in solving this would be much appreciated!

    Read the article

  • Advanced Regex: Smart auto detect and replace URLs with anchor tags

    - by Robert Koritnik
    I've written a regular expression that automatically detects URLs in free text that users enter. This is not such a simple task as it may seem at first. Jeff Atwood writes about it in his post. His regular expression works, but needs extra code after detection is done. I've managed to write a regular expression that does everything in a single go. This is how it looks like (I've broken it down into separate lines to make it more understandable what it does): 1 (?<outer>\()? 2 (?<scheme>http(?<secure>s)?://)? 3 (?<url> 4 (?(scheme) 5 (?:www\.)? 6 | 7 www\. 8 ) 9 [a-z0-9] 10 (?(outer) 11 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\)) 12 | 13 [-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+ 14 ) 15 ) 16 (?<ending>(?(outer)\))) As you may see, I'm using named capture groups (used later in Regex.Replace()) and I've also included some local characters (cšžcd), that allow our localised URLs to be parsed as well. You can easily omit them if you'd like. Anyway. Here's what it does (referring to line numbers): 1 - detects if URL starts with open braces (is contained inside braces) and stores it in "outer" named capture group 2 - checks if it starts with URL scheme also detecting whether scheme is SSL or not 3 - start parsing URL itself (will store it in "url" named capture group) 4-8 - if statement that says: if "sheme" was present then www. part is optional, otherwise mandatory for a string to be a link (so this regular expression detects all strings that start with either http or www) 9 - first character after http:// or www. should be either a letter or a number (this can be extended if you'd like to cover even more links, but I've decided not to because I can't think of a link that would start with some obscure character) 10-14 - if statement that says: if "outer" (braces) was present capture everything up to the last closing braces otherwise capture all 15 - closes the named capture group for URL 16 - if open braces were present, capture closing braces as well and store it in "ending" named capture group First and last line used to have \s* in them as well, so user could also write open braces and put a space inside before pasting link. Anyway. My code that does link replacement with actual anchor HTML elements looks exactly like this: value = Regex.Replace( value, @"(?<outer>\()?(?<scheme>http(?<secure>s)?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[a-z0-9](?(outer)[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+(?=\))|[-a-z0-9/+&@#/%?=~_()|!:,.;cšžcd]+))(?<ending>(?(outer)\)))", "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}", RegexOptions.Compiled | RegexOptions.CultureInvariant | RegexOptions.IgnoreCase); As you can see I'm using named capture groups to replace link with an Anchor tag: "${outer}<a href=\"http${secure}://${url}\">http${secure}://${url}</a>${ending}" I could as well omit the http(s) part in anchor display to make links look friendlier, but for now I decided not to. Question I would like my links to be replaced with shortenings as well. So when user copies a very long link (for instance if they would copy a link from google maps that usually generates long links) I would like to shorten the visible part of the anchor tag. Link would work, but visible part of an anchor tag would be shortened to some number of characters. I could as well append ellipsis at the end of at all possible (and make things even more perfect). Does Regex.Replace() method support replacement notations so that I can still use a single call? 
Something similar to what the string.Format() method does when you'd like to format values inside a string (decimals, dates, etc.).
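    The substitution tokens in a replacement string can only re-insert captured text, not transform it, so the visible part cannot be shortened with ${url} alone. Regex.Replace does, however, have overloads that take a MatchEvaluator delegate, which keeps this a single call while letting the replacement be computed. A rough sketch follows; the simplified pattern, the helper name and the 50-character cut-off are assumptions for illustration, not the full expression above:

        using System;
        using System.Text.RegularExpressions;

        static class LinkFormatter
        {
            // Sketch: the href keeps the full URL, the visible text is truncated.
            public static string Linkify(string value, int maxDisplayLength)
            {
                var pattern = new Regex(
                    @"(?<scheme>https?://)?(?<url>(?(scheme)(?:www\.)?|www\.)[-a-z0-9/+&@#%?=~_()|!:,.;]+)",
                    RegexOptions.IgnoreCase | RegexOptions.CultureInvariant);

                return pattern.Replace(value, match =>
                {
                    bool secure = match.Value.StartsWith("https", StringComparison.OrdinalIgnoreCase);
                    string full = "http" + (secure ? "s" : "") + "://" + match.Groups["url"].Value;
                    string display = full.Length > maxDisplayLength
                        ? full.Substring(0, maxDisplayLength) + "..."
                        : full;
                    return string.Format("<a href=\"{0}\">{1}</a>", full, display);
                });
            }
        }

    Called as Linkify(text, 50), a long Google Maps link would still point at the full address in the href but display at most 50 characters followed by an ellipsis.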

    Read the article

  • Ambiguous constructor question

    - by Crystal
    I'm trying to create a simple date class, but I get an error on my main file that says, "call of overloaded Date() is ambiguous." I'm not sure why since I thought as long as I had different parameters for my constructor, I was ok. Here is my code: header file: #ifndef DATE_H #define DATE_H using std::string; class Date { public: static const int monthsPerYear = 12; // num of months in a yr Date(int = 1, int = 1, int = 1900); // default constructor Date(); // uses system time to create object void print() const; // print date in month/day/year format ~Date(); // provided to confirm destruction order string getMonth(int month) const; // gets month in text format private: int month; // 1 - 12 int day; // 1 - 31 int year; // any year int checkDay(int) const; }; #endif .cpp file #include <iostream> #include <iomanip> #include <string> #include <ctime> #include "Date.h" using namespace std; Date::Date() { time_t seconds = time(NULL); struct tm* t = localtime(&seconds); month = t->tm_mon; day = t->tm_mday; year = t->tm_year; } Date::Date(int mn, int dy, int yr) { if (mn > 0 && mn <= monthsPerYear) month = mn; else { month = 1; // invalid month set to 1 cout << "Invalid month (" << mn << ") set to 1.\n"; } year = yr; // could validate yr day = checkDay(dy); // validate the day // output Date object to show when its constructor is called cout << "Date object constructor for date "; print(); cout << endl; } void Date::print() const { string str; cout << month << '/' << day << '/' << year << '\n'; // new code for HW2 cout << setfill('0') << setw(3) << day; // prints in ddd cout << " " << year << '\n'; // yyyy format str = getMonth(month); // prints in month (full word), day, year cout << str << " " << day << ", " << year << '\n'; } and my main.cpp #include <iostream> #include "Date.h" using std::cout; int main() { Date date1(4, 30, 1980); date1.print(); cout << '\n'; Date date2; date2.print(); }

    Read the article

  • Destroying a record via RJS TemplateError (Called ID for nil...)

    - by bgadoci
    I am trying to destroy a record in my table via RJS and having some trouble. I have successfully implemented this before so can't quite understand what is not working here. Here is the setup: I am trying to allow a user of my app to select an answer from another user as the 'winning' answer to their question. Much like StackOverflow does. I am calling this selected answer 'winner'. class Winner < ActiveRecord::Base belongs_to :site belongs_to :user belongs_to :question validates_uniqueness_of :user_id, :scope => [:question_id] end I'll spare you the reverse has_many associations but I believe they are correct (I am using has_many with the validation as I might want to allow for multiple later). Also, think of site like an answer to the question. My link calling the destroy action of the WinnersController is located in the /views/winners/_winner.html.erb and has the following code: <% div_for winner do %> Selected <br/> <%=link_to_remote "Destroy", :url => winner, :method => :delete %> <% end %> This partial is being called by another partial `/views/sites/_site.html.erb and is located in this code block: <% if site.winners.blank? %> <% remote_form_for [site, Winner.new] do |f| %> <%= f.hidden_field :question_id, :value => @question.id %> <%= f.hidden_field :winner, :value => "1" %> <%= submit_tag "Select This Answer" %> Make sure you unselect any previously selected answers. <% end %> <% else %> <div id="winner_<%= site.id %>" class="votes"> <%= render :partial => site.winners%> </div> <% end %> <div id="winner_<%= site.id %>" class="votes"> </div> And the /views/sites/_site.html.erb partial is being called in the /views/questions/show.html.erb file. My WinnersController#destroy action is the following: def destroy @winner = Winner.find(params[:id]) @winner.destroy respond_to do |format| format.html { redirect_to Question.find(params[:post_id]) } format.js end end And my /views/winners/destroy.js.rjs code is the following: page[dom_id(@winner)].visual_effect :fade I am getting the following error and not really sure where I am going wrong: Processing WinnersController#destroy (for 127.0.0.1 at 2010-05-30 16:05:48) [DELETE] Parameters: {"authenticity_token"=>"nn1Wwr2PZiS2jLgCZQDLidkntwbGzayEoHWwR087AfE=", "id"=>"24", "_"=>""} Rendering winners/destroy ActionView::TemplateError (Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id) on line #1 of app/views/winners/destroy.js.rjs: 1: page[dom_id(@winner)].visual_effect :fade app/views/winners/destroy.js.rjs:1:in `_run_rjs_app47views47winners47destroy46js46rjs' app/views/winners/destroy.js.rjs:1:in `_run_rjs_app47views47winners47destroy46js46rjs' Rendered rescues/_trace (137.1ms) Rendered rescues/_request_and_response (0.3ms) Rendering rescues/layout (internal_server_error)

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some Code headers = {} headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0' headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' headers['Accept-Language'] = 'en-gb,en;q=0.5' #headers['Accept-Encoding'] = 'gzip, deflate' request = urllib.request.Request(sURL, headers = headers) try: response = urllib.request.urlopen(request) except error.HTTPError as e: print('The server couldn\'t fulfill the request.') print('Error code: {0}'.format(e.code)) except error.URLError as e: print('We failed to reach a server.') print('Reason: {0}'.format(e.reason)) else: f = open('output/{0}.html'.format(sFileName),'w') f.write(response.read().decode('utf-8')) A url http://groupon.cl/descuentos/santiago-centro The situation Here's what I did: enable javascript in browser open url above and keep an eye on the console disable javascript repeat step 2 use urllib2 to grab the webpage and save it to a file enable javascript open the file with browser and observe console repeat 7 with javascript off results In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax. So the HTML that arrived was a sort of skeleton and ajax was used to fill in the gaps. This is fine and not at all surprising Since the page should be seo friendly it should work fine without js. in step 4 nothing happens in the console and the skeleton page loads pre-populated rendering the ajax unnecessary. This is also completely not confusing in step 7 the ajax calls are made but fail. this is also ok since the urls they are using are not local, the calls are thus broken. The page looks like the skeleton. This is also great and expected. in step 8: no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like in step 4 question What I want to do is use urllib2 to grab the html from step 4 but I cant figure out how. What am I missing and how could I pull this off? To paraphrase If I was writing a spider I would want to be able to grab plain ol' HTML (as in that which resulted in step 4). I dont want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The seo friendly site wants me to get what I want because that's what seo is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off js, navigate to the page and copy the html. I want to automate this. stuff I've tried I used wireshark to look at packet headers and the GETs sent off from my pc in steps 2 and 4 have the same headers. Reading about SEO stuff makes me think that this is pretty normal otherwise techniques such as hijax wouldn't be used. Here are the headers my browser sends: Host: groupon.cl User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-gb,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Here are the headers my script sends: Accept-Encoding: identity Host: groupon.cl Accept-Language: en-gb,en;q=0.5 Connection: close Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 The differences are: my script has Connection = close instead of keep-alive. 
I can't see how this would cause a problem. My script also has Accept-Encoding = identity; this might be the cause of the problem, although I can't really see why the host would use this field to determine the user agent. If I change the encoding to match the browser request headers, then I have trouble decoding the response. I'm working on this now... watch this space, and I'll update the question as new info comes up.

    Read the article

  • How to play audio in a Java application

    - by user577829
    I'm making a java application and I need to play audio. I'm playing mainly small sound files of my cannon firing (its a cannon shooting game) and the projectiles exploding, though I plan on having looping background music. I have found two different methods to accomplish this, but both don't work how I want. The first method is literally a method: public void playSoundFile(File file) {//http://java.ittoolbox.com/groups/technical-functional/java-l/sound-in-an-application-90681 try { //get an AudioInputStream AudioInputStream ais = AudioSystem.getAudioInputStream(file); //get the AudioFormat for the AudioInputStream AudioFormat audioformat = ais.getFormat(); System.out.println("Format: " + audioformat.toString()); System.out.println("Encoding: " + audioformat.getEncoding()); System.out.println("SampleRate:" + audioformat.getSampleRate()); System.out.println("SampleSizeInBits: " + audioformat.getSampleSizeInBits()); System.out.println("Channels: " + audioformat.getChannels()); System.out.println("FrameSize: " + audioformat.getFrameSize()); System.out.println("FrameRate: " + audioformat.getFrameRate()); System.out.println("BigEndian: " + audioformat.isBigEndian()); //ULAW format to PCM format conversion if ((audioformat.getEncoding() == AudioFormat.Encoding.ULAW) || (audioformat.getEncoding() == AudioFormat.Encoding.ALAW)) { AudioFormat newformat = new AudioFormat(AudioFormat.Encoding.PCM_SIGNED, audioformat.getSampleRate(), audioformat.getSampleSizeInBits() * 2, audioformat.getChannels(), audioformat.getFrameSize() * 2, audioformat.getFrameRate(), true); ais = AudioSystem.getAudioInputStream(newformat, ais); audioformat = newformat; } //checking for a supported output line DataLine.Info datalineinfo = new DataLine.Info(SourceDataLine.class, audioformat); if (!AudioSystem.isLineSupported(datalineinfo)) { //System.out.println("Line matching " + datalineinfo + " is not supported."); } else { //System.out.println("Line matching " + datalineinfo + " is supported."); //opening the sound output line SourceDataLine sourcedataline = (SourceDataLine) AudioSystem.getLine(datalineinfo); sourcedataline.open(audioformat); sourcedataline.start(); //Copy data from the input stream to the output data line int framesizeinbytes = audioformat.getFrameSize(); int bufferlengthinframes = sourcedataline.getBufferSize() / 8; int bufferlengthinbytes = bufferlengthinframes * framesizeinbytes; byte[] sounddata = new byte[bufferlengthinbytes]; int numberofbytesread = 0; while ((numberofbytesread = ais.read(sounddata)) != -1) { int numberofbytesremaining = numberofbytesread; sourcedataline.write(sounddata, 0, numberofbytesread); } } } catch (Exception e) { e.printStackTrace(); } } The problem with this is that my entire program stops until the sound file is finished, or at least nearly finished. The second method is this: File file = new File("Launch1.wav"); AudioClip clip; try { clip = JApplet.newAudioClip(file.toURL()); clip.play(); } catch (Exception e) { e.getMessage(); } The problem I have here is that every time the sound file ends early or doesn't play at all depending on where I place the code. Is their any way to play sound without the above mentioned problems? Am I doing something wrong? Any help is greatly appreciated.

    Read the article

  • MVC: returning multiple results on stream connection to implement HTML5 SSE

    - by eddo
    I am trying to set up a lightweight HTML5 Server-Sent Event implementation on my MVC 4 Web, without using one of the libraries available to implement sockets and similars. The lightweight approach I am trying is: Client side: EventSource (or jquery.eventsource for IE) Server side: long polling with AsynchController (sorry for dropping here the raw test code but just to give an idea) public class HTML5testAsyncController : AsyncController { private static int curIdx = 0; private static BlockingCollection<string> _data = new BlockingCollection<string>(); static HTML5testAsyncController() { addItems(10); } //adds some test messages static void addItems(int howMany) { _data.Add("started"); for (int i = 0; i < howMany; i++) { _data.Add("HTML5 item" + (curIdx++).ToString()); } _data.Add("ended"); } // here comes the async action, 'Simple' public void SimpleAsync() { AsyncManager.OutstandingOperations.Increment(); Task.Factory.StartNew(() => { var result = string.Empty; var sb = new StringBuilder(); string serializedObject = null; //wait up to 40 secs that a message arrives if (_data.TryTake(out result, TimeSpan.FromMilliseconds(40000))) { JavaScriptSerializer ser = new JavaScriptSerializer(); serializedObject = ser.Serialize(new { item = result, message = "MSG content" }); sb.AppendFormat("data: {0}\n\n", serializedObject); } AsyncManager.Parameters["serializedObject"] = serializedObject; AsyncManager.OutstandingOperations.Decrement(); }); } // callback which returns the results on the stream public ActionResult SimpleCompleted(string serializedObject) { ServerSentEventResult sar = new ServerSentEventResult(); sar.Content = () => { return serializedObject; }; return sar; } //pushes the data on the stream in a format conforming HTML5 SSE public class ServerSentEventResult : ActionResult { public ServerSentEventResult() { } public delegate string GetContent(); public GetContent Content { get; set; } public int Version { get; set; } public override void ExecuteResult(ControllerContext context) { if (context == null) { throw new ArgumentNullException("context"); } if (this.Content != null) { HttpResponseBase response = context.HttpContext.Response; // this is the content type required by chrome 6 for server sent events response.ContentType = "text/event-stream"; response.BufferOutput = false; // this is important because chrome fails with a "failed to load resource" error if the server attempts to put the char set after the content type response.Charset = null; string[] newStrings = context.HttpContext.Request.Headers.GetValues("Last-Event-ID"); if (newStrings == null || newStrings[0] != this.Version.ToString()) { string value = this.Content(); response.Write(string.Format("data:{0}\n\n", value)); //response.Write(string.Format("id:{0}\n", this.Version)); } else { response.Write(""); } } } } } The problem is on the server side as there is still a big gap between the expected result and what's actually going on. 
Expected result: EventSource opens a stream connection to the server, the server keeps it open for a safe time (say, 2 minutes) so that I am protected from thread leaking from dead clients, as new message events are received by the server (and enqueued to a thread safe collection such as BlockingCollection) they are pushed in the open stream to the client: message 1 received at T+0ms, pushed to the client at T+x message 2 received at T+200ms, pushed to the client at T+x+200ms Actual behaviour: EventSource opens a stream connection to the server, the server keeps it open until a message event arrives (thanks to long polling) once a message is received, MVC pushes the message and closes the connection. EventSource has to reopen the connection and this happens after a couple of seconds. message 1 received at T+0ms, pushed to the client at T+x message 2 received at T+200ms, pushed to the client at T+x+3200ms This is not OK as it defeats the purpose of using SSE as the clients start again reconnecting as in normal polling and message delivery gets delayed. Now, the question: is there a native way to keep the connection open after sending the first message and sending further messages on the same connection?
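    One way to close that gap is to stop completing the action once per message and instead keep writing to the open response in a loop, flushing after each event, until the safe window expires or the client goes away. The sketch below is an assumption about how that could look with the same BlockingCollection as above; it trades the async controller for a worker thread held per connected client, so it is a starting point rather than a drop-in fix:

        using System;
        using System.Collections.Concurrent;
        using System.Web.Mvc;

        public class Html5TestStreamController : Controller
        {
            private static readonly BlockingCollection<string> _data = new BlockingCollection<string>();

            // One long-lived request, many "data:" frames.
            public void Simple()
            {
                Response.ContentType = "text/event-stream";
                Response.BufferOutput = false;
                Response.Charset = null;

                DateTime deadline = DateTime.UtcNow.AddMinutes(2); // protect against dead clients

                while (DateTime.UtcNow < deadline && Response.IsClientConnected)
                {
                    string item;
                    if (_data.TryTake(out item, TimeSpan.FromSeconds(15)))
                    {
                        Response.Write(string.Format("data: {0}\n\n", item));
                        Response.Flush(); // push this event now, keep the connection open
                    }
                    else
                    {
                        Response.Write(": keep-alive\n\n"); // SSE comment line, ignored by EventSource
                        Response.Flush();
                    }
                }
            }
        }

    With this shape, message 2 goes out roughly 200 ms after message 1 on the same connection, and EventSource only reconnects when the two-minute window closes.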

    Read the article

  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc) on a custom sequence type I wrote? 1. Huge data files I have big files in the following format: A long (8 bytes) containing the number n of entries n entries, each one being composed of 3 longs (ie, 24 bytes) 2. Custom sequence I want to have a sequence on these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access on it, I wrote a class similar to the following: (deftype DataSeq [id ^long cnt ^long i cached-seq] clojure.lang.IndexedSeq (index [_] i) (count [_] (- cnt i)) (seq [this] this) (first [_] (first cached-seq)) (more [this] (if-let [s (next this)] s '())) (next [_] (if (not= (inc i) cnt) (if (next cached-seq) (DataSeq. id cnt (inc i) (next cached-seq)) (DataSeq. id cnt (inc i) (with-open [f (open-data-file id)] ; open a memory mapped byte array on the file ; seek to the exact position to begin reading ; decide on an optimal amount of data to read ; eagerly read and return that amount of data )))))) The main idea is to read ahead a bunch of entries in a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file in a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like: (defn ^DataSeq load-data [id] (next (DataSeq. id (count-entries id) -1 []))) ; count-entries is a trivial "open file and read a long" memoized As you can see, the format of the data allowed me to implement count in very simply and efficiently. 3. drop could be O(1) In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows: if dropping less then the remaining cached items, just drop the same amount from the cache and done; if dropping more than cnt, then just return the empty list. otherwise, just figure out the position in the data file, jump right into that position, and read data from there. My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check if the instance of the sequence it is being called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop? 4. A workaround (I dislike) While writing this question, I've just figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, that would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, all the subsequent calls would return data from this cache. There would be a similar logic on next: if there's a cache, just next it; otherwise, don't bother populating it -- it will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal -- it is still O(n), and it could easily be O(1). 
Anyway, I don't like this workaround, and my question is still open. Any thoughts? Thanks.

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post of building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based in a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid the duplicate attributes since they now leave in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
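One small addition that is not in the original post: if you would rather confirm the stamped numbers programmatically than squint at the file-properties dialog, the standard reflection and FileVersionInfo APIs read back both values the activity wrote. A quick sketch (the assembly path is a placeholder):

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class VersionCheck
{
    static void Main()
    {
        // Placeholder path to one of the assemblies produced by the build.
        const string path = @"C:\Builds\MyApplication\MyApplication.dll";

        // AssemblyVersion (e.g. "1.1.0.0") - the number the CLR uses for binding.
        Version assemblyVersion = AssemblyName.GetAssemblyName(path).Version;

        // AssemblyFileVersion (e.g. "1.1.317.0") - the per-build number stamped by the
        // activity and shown on the file's Details tab in Windows Explorer.
        string fileVersion = FileVersionInfo.GetVersionInfo(path).FileVersion;

        Console.WriteLine("AssemblyVersion:     {0}", assemblyVersion);
        Console.WriteLine("AssemblyFileVersion: {0}", fileVersion);
    }
}
```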
The new features in TFS 2010 make these types of customizations to your build process quite easy once you get over the initial learning curve.

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and its 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress. Preamble These templates are not actually that new, I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager(RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates, the templates are actually XSL that is created from the the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there ie not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS they have now made their way to the standlone release and are willing to share their Excel goodness. You get everything you have with hte Excel Analyzer Excel templates plus so much more. Therein lies a question, what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Exce 2007/10 is also in the mix too so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates you can create sheets of data that have master-detail n relationships. Although the analyzer templates can do this, you have to get into macros whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting'. You can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course, still get all the native Excel functionality. Pre-reqs You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch upa BIP instance running with OBIEE, no problem You need Excel 2000 or above to build the templates Some patience - there is no Excel template builder for these new templates. So its all going to have to be done by hand. Its not that tough but can get a little 'fiddly'. You can not test the template from Excel , it has to be deployed and then run. Limitations The new templates are definitely superior to the Analyzer templates but there are a few limitations. Re-grouping is not supported. You can only follow a data hierarchy not bend it to your will unless you want to get into macros. No support for BIP functions. The templates support native XSL functions only. No template builder Getting Started The templates make the use of named cells and groups of cells to allow BIP to find the insertion point for data points. It also uses a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Becasue I wanted to show the master-detail output we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time, I wrote a post a while back starting from the simple to more complex. They generate ideal data sets for these templates. 
Im working with the following data set: <EMPLOYEES> <LIST_G_DEPT> <G_DEPT> <DEPARTMENT_ID>10</DEPARTMENT_ID> <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME> <LIST_G_EMP> <G_EMP> <EMPLOYEE_ID>200</EMPLOYEE_ID> <EMP_NAME>Jennifer Whalen</EMP_NAME> <EMAIL>JWHALEN</EMAIL> <PHONE_NUMBER>515.123.4444</PHONE_NUMBER> <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE> <SALARY>4400</SALARY> </G_EMP> </LIST_G_EMP> <TOTAL_EMPS>1</TOTAL_EMPS> <TOTAL_SALARY>4400</TOTAL_SALARY> <AVG_SALARY>4400</AVG_SALARY> <MAX_SALARY>4400</MAX_SALARY> <MIN_SALARY>4400</MIN_SALARY> </G_DEPT> ... <LIST_G_DEPT> <EMPLOYEES> Simple enough to follow and bread and butter stuff for an RTF template. Building the Template For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. Its all dummy data, nothing marked up yet with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later. Marking Up Cells Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: For data grouping, XDO_GROUP_?group_name? For data elements, XDO_?element_name? Notice the question mark delimter, the group_name and element_name are case sensitive. The next step is to find how to name cells; the easiest method is to highlight the cell and then type in the name. You can also find the Name Manager dialog. I use 2007 and its available on the ribbon under the Formulas section Go thorugh the process of naming all the cells for the element values you have. Using my data set from above.You should end up with something like this in your 'Name Manager' dialog. You can update any mistakes you might have made through this dialog. Creating Groups In the image above you can see there are a couple of named group cells. To create these its a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name, XDO_GROUP?G_EMP? Notice the 10,000 total is outside of the G_EMP group. Its actually named, XDO_?TOTAL_SALARY?, a query calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it, XDO_GROUP?G_DEPT? Notice, the 10,000 total is included in the G_DEPT group. This will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet but, it must be present. Easy enough to hide it. Here's what I have: The only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet. Deploying & Running Templates We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template. Set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting, thats just some conditional formatting I added to the template using Excel functionality. 
Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell: =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"") Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI. If you have BIP running locally or you can access the reports repository, once you have loaded the template the first time. Just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the xml data file, EmpbyDeptExcelData.xml into 'demo files' folder and you should be good to go. Thats the basics, next we'll start using some XSL functions in the template and move onto the 'bursting' across sheets.

    Read the article

  • Dependency Injection in ASP.NET MVC NerdDinner App using Unity 2.0

    - by shiju
    In my previous post Dependency Injection in ASP.NET MVC NerdDinner App using Ninject, we did dependency injection in NerdDinner application using Ninject. In this post, I demonstrate how to apply Dependency Injection in ASP.NET MVC NerdDinner App using Microsoft Unity Application Block (Unity) v 2.0.Unity 2.0Unity 2.0 is available on Codeplex at http://unity.codeplex.com . In earlier versions of Unity, the ObjectBuilder generic dependency injection mechanism, was distributed as a separate assembly, is now integrated with Unity core assembly. So you no longer need to reference the ObjectBuilder assembly in your applications. Two additional Built-In Lifetime Managers - HierarchicalifetimeManager and PerResolveLifetimeManager have been added to Unity 2.0.Dependency Injection in NerdDinner using UnityIn my Ninject post on NerdDinner, we have discussed the interfaces and concrete types of NerdDinner application and how to inject dependencies controller constructors. The following steps will configure Unity 2.0 to apply controller injection in NerdDinner application. Step 1 – Add reference for Unity Application BlockOpen the NerdDinner solution and add  reference to Microsoft.Practices.Unity.dll and Microsoft.Practices.Unity.Configuration.dllYou can download Unity from at http://unity.codeplex.com .Step 2 – Controller Factory for Unity The controller factory is responsible for creating controller instances.We extend the built in default controller factory with our own factory for working Unity with ASP.NET MVC. public class UnityControllerFactory : DefaultControllerFactory {     protected override IController GetControllerInstance(RequestContext reqContext, Type controllerType)     {         IController controller;         if (controllerType == null)             throw new HttpException(                     404, String.Format(                         "The controller for path '{0}' could not be found" +         "or it does not implement IController.",                     reqContext.HttpContext.Request.Path));           if (!typeof(IController).IsAssignableFrom(controllerType))             throw new ArgumentException(                     string.Format(                         "Type requested is not a controller: {0}",                         controllerType.Name),                         "controllerType");         try         {             controller = MvcUnityContainer.Container.Resolve(controllerType)                             as IController;         }         catch (Exception ex)         {             throw new InvalidOperationException(String.Format(                                     "Error resolving controller {0}",                                     controllerType.Name), ex);         }         return controller;     }   }   public static class MvcUnityContainer {     public static IUnityContainer Container { get; set; } }  Step 3 – Register Types and Set Controller Factory private void ConfigureUnity() {     //Create UnityContainer               IUnityContainer container = new UnityContainer()     .RegisterType<IFormsAuthentication, FormsAuthenticationService>()     .RegisterType<IMembershipService, AccountMembershipService>()     .RegisterInstance<MembershipProvider>(Membership.Provider)     .RegisterType<IDinnerRepository, DinnerRepository>();     //Set container for Controller Factory     MvcUnityContainer.Container = container;     //Set Controller Factory as UnityControllerFactory     ControllerBuilder.Current.SetControllerFactory(                         
typeof(UnityControllerFactory));            } Unity 2.0 provides a fluent interface for type configuration. Now you can call all the methods in a single statement.The above Unity configuration specified in the ConfigureUnity method tells that, to inject instance of DinnerRepositiry when there is a request for IDinnerRepositiry and  inject instance of FormsAuthenticationService when there is a request for IFormsAuthentication and inject instance of AccountMembershipService when there is a request for IMembershipService. The AccountMembershipService class has a dependency with ASP.NET Membership provider. So we configure that inject the instance of Membership Provider.After the registering the types, we set UnityControllerFactory as the current controller factory. //Set container for Controller Factory MvcUnityContainer.Container = container; //Set Controller Factory as UnityControllerFactory ControllerBuilder.Current.SetControllerFactory(                     typeof(UnityControllerFactory)); When you register a type  by using the RegisterType method, the default behavior is for the container to use a transient lifetime manager. It creates a new instance of the registered, mapped, or requested type each time you call the Resolve or ResolveAll method or when the dependency mechanism injects instances into other classes. The following are the LifetimeManagers provided by Unity 2.0ContainerControlledLifetimeManager - Implements a singleton behavior for objects. The object is disposed of when you dispose of the container.ExternallyControlledLifetimeManager - Implements a singleton behavior but the container doesn't hold a reference to object which will be disposed of when out of scope.HierarchicalifetimeManager - Implements a singleton behavior for objects. However, child containers don't share instances with parents.PerResolveLifetimeManager - Implements a behavior similar to the transient lifetime manager except that instances are reused across build-ups of the object graph.PerThreadLifetimeManager - Implements a singleton behavior for objects but limited to the current thread.TransientLifetimeManager - Returns a new instance of the requested type for each call. (default behavior)We can also create custome lifetime manager for Unity container. The following code creating a custom lifetime manager to store container in the current HttpContext. public class HttpContextLifetimeManager<T> : LifetimeManager, IDisposable {     public override object GetValue()     {         return HttpContext.Current.Items[typeof(T).AssemblyQualifiedName];     }     public override void RemoveValue()     {         HttpContext.Current.Items.Remove(typeof(T).AssemblyQualifiedName);     }     public override void SetValue(object newValue)     {         HttpContext.Current.Items[typeof(T).AssemblyQualifiedName]             = newValue;     }     public void Dispose()     {         RemoveValue();     } }  Step 4 – Modify Global.asax.cs for configure Unity container In the Application_Start event, we call the ConfigureUnity method for configuring the Unity container and set controller factory as UnityControllerFactory void Application_Start() {     RegisterRoutes(RouteTable.Routes);       ViewEngines.Engines.Clear();     ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine());     ConfigureUnity(); }Download CodeYou can download the modified NerdDinner code from http://nerddinneraddons.codeplex.com
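One loose end worth noting: the post defines HttpContextLifetimeManager<T> but never shows it wired up. Registering a type with it is just a matter of passing an instance to RegisterType. Below is a hedged variation of the ConfigureUnity method, dropped into the same Global.asax.cs as the original; whether the repository should really be per-request rather than transient is a design choice, not something the original post prescribes:

```csharp
private void ConfigureUnity()
{
    // Same registrations as before, except the repository now uses the custom
    // HttpContextLifetimeManager, so one DinnerRepository instance is reused
    // for the duration of each HTTP request.
    IUnityContainer container = new UnityContainer()
        .RegisterType<IFormsAuthentication, FormsAuthenticationService>()
        .RegisterType<IMembershipService, AccountMembershipService>()
        .RegisterInstance<MembershipProvider>(Membership.Provider)
        .RegisterType<IDinnerRepository, DinnerRepository>(
            new HttpContextLifetimeManager<IDinnerRepository>());

    MvcUnityContainer.Container = container;
    ControllerBuilder.Current.SetControllerFactory(typeof(UnityControllerFactory));
}
```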

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspxIf we are using SignalR, the connection lifecycle was handled by itself very well. For example when we connect to SignalR service from browser through SignalR JavaScript Client the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL then the connection will be closed automatically. This information had been well documented here. In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method. But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks browser's address change event, based on the route table user defined, launch proper view and controller. Hence in AngularJS we address was changed but the web page still there. All changes of the page content are triggered by Ajax. So there's no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when works with AngularJS. If we dig into the source code of SignalR JavaScript Client source code we will find something below. It monitors the browser page "unload" and "beforeunload" event and send the "stop" message to server to terminate connection. But in AngularJS page change events were hijacked, so SignalR will not receive them and will not stop the connection. 1: // wire the stop handler for when the user leaves the page 2: _pageWindow.bind("unload", function () { 3: connection.log("Window unloading, stopping the connection."); 4:  5: connection.stop(asyncAbort); 6: }); 7:  8: if (isFirefox11OrGreater) { 9: // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close. 10: // #2400 11: _pageWindow.bind("beforeunload", function () { 12: // If connection.stop() runs runs in beforeunload and fails, it will also fail 13: // in unload unless connection.stop() runs after a timeout. 14: window.setTimeout(function () { 15: connection.stop(asyncAbort); 16: }, 0); 17: }); 18: }   Problem Reproduce In the codes below I created a very simple example to demonstrate this issue. Here is the SignalR server side code. 1: public class GreetingHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId)); 6: return base.OnConnected(); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId)); 12: return base.OnDisconnected(); 13: } 14:  15: public void Hello(string user) 16: { 17: Clients.All.hello(string.Format("Hello, {0}!", user)); 18: } 19: } Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express. 
1: public class Startup 2: { 3: public void Configuration(IAppBuilder app) 4: { 5: app.Map("/signalr", map => 6: { 7: map.UseCors(CorsOptions.AllowAll); 8: map.RunSignalR(new HubConfiguration() 9: { 10: EnableJavaScriptProxies = false 11: }); 12: }); 13: } 14: } Since we will host AngularJS application in Node.js in another process and port, the SignalR connection will be cross domain. So I need to enable CORS above. In client side I have a Node.js file to host AngularJS application as a web server. You can use any web server you like such as IIS, Apache, etc.. Below is the "index.html" page which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS, SignalR JavaScript Client Library as well as my AngularJS entry source file "app.js". 1: <html data-ng-app="demo"> 2: <head> 3: <script type="text/javascript" src="jquery-2.1.0.js"></script> 1:  2: <script type="text/javascript" src="angular.js"> 1: </script> 2: <script type="text/javascript" src="angular-ui-router.js"> 1: </script> 2: <script type="text/javascript" src="jquery.signalR-2.0.3.js"> 1: </script> 2: <script type="text/javascript" src="app.js"></script> 4: </head> 5: <body> 6: <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1> 7: <div> 8: <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 9: <a href="javascript:void(0)" data-ui-sref="view2">View 2</a> 10: </div> 11: <div data-ui-view></div> 12: </body> 13: </html> Below is the "app.js". My SignalR logic was in the "View1" page and it will connect to server once the controller was executed. User can specify a user name and send to server, all clients that located in this page will receive the server side greeting message through SignalR. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15:  16: $locationProvider.html5Mode(true); 17: }]); 18:  19: app.value('$', $); 20: app.value('endpoint', 'http://localhost:60448'); 21: app.value('hub', 'GreetingHub'); 22:  23: app.controller('View1Ctrl', function ($scope, $, endpoint, hub) { 24: $scope.user = ''; 25: $scope.response = ''; 26:  27: $scope.greeting = function () { 28: proxy.invoke('Hello', $scope.user) 29: .done(function () {}) 30: .fail(function (error) { 31: console.log(error); 32: }); 33: }; 34:  35: var connection = $.hubConnection(endpoint); 36: var proxy = connection.createHubProxy(hub); 37: proxy.on('hello', function (response) { 38: $scope.$apply(function () { 39: $scope.response = response; 40: }); 41: }); 42: connection.start() 43: .done(function () { 44: console.log('signlar connection established'); 45: }) 46: .fail(function (error) { 47: console.log(error); 48: }); 49: }); 50:  51: app.controller('View2Ctrl', function ($scope, $) { 52: }); When we went to View1 the server side "OnConnect" method will be invoked as below. And in any page we send the message to server, all clients will got the response. If we close one of the client, the server side "OnDisconnect" method will be invoked which is correct. But is we click "View 2" link in the page "OnDisconnect" method will not be invoked even though the content and browser address had been changed. 
This might cause many SignalR connections remain between the client and server. Below is what happened after I clicked "View 1" and "View 2" links four times. As you can see there are 4 live connections.   Solution Since the reason of this issue is because, AngularJS hijacks the page event that SignalR need to stop the connection, we can handle AngularJS route or state change event and stop SignalR connect manually. In the code below I moved the "connection" variant to global scope, added a handler to "$stateChangeStart" and invoked "stop" method of "connection" if its state was not "disconnected". 1: var connection; 2: app.run(['$rootScope', function ($rootScope) { 3: $rootScope.$on('$stateChangeStart', function () { 4: if (connection && connection.state && connection.state !== 4 /* disconnected */) { 5: console.log('signlar connection abort'); 6: connection.stop(); 7: } 8: }); 9: }]); Now if we refresh the page and navigated to View 1, the connection will be opened. At this state if we clicked "View 2" link the content will be changed and the SignalR connection will be closed automatically.   Summary In this post I demonstrated an issue when we are using SignalR with AngularJS. The connection cannot be closed automatically when we navigate to other page/state in AngularJS. And the solution I mentioned below is to move the SignalR connection as a global variant and close it manually when AngularJS route/state changed. You can download the full sample code here. Moving the SignalR connection as a global variant might not be a best solution. It's just for easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since AngularJS factory is a singleton object, we can safely put the connection variant in the factory function scope.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
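If you would rather watch the leak (and confirm the fix) from the server side than count connections by hand, a little bookkeeping in the hub makes the number of live connections explicit. A minimal sketch extending the GreetingHub from the post; the static dictionary is illustrative and not part of the original sample:

```csharp
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class GreetingHub : Hub
{
    // Illustrative only: track live connection ids so the leak is easy to observe.
    private static readonly ConcurrentDictionary<string, byte> Connections =
        new ConcurrentDictionary<string, byte>();

    public override Task OnConnected()
    {
        Connections.TryAdd(Context.ConnectionId, 0);
        Debug.WriteLine(string.Format("Connected: {0} (live: {1})",
            Context.ConnectionId, Connections.Count));
        return base.OnConnected();
    }

    public override Task OnDisconnected()
    {
        byte ignored;
        Connections.TryRemove(Context.ConnectionId, out ignored);
        Debug.WriteLine(string.Format("Disconnected: {0} (live: {1})",
            Context.ConnectionId, Connections.Count));
        return base.OnDisconnected();
    }

    public void Hello(string user)
    {
        Clients.All.hello(string.Format("Hello, {0}!", user));
    }
}
```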

    Read the article

  • CodePlex Daily Summary for Sunday, September 30, 2012

    CodePlex Daily Summary for Sunday, September 30, 2012Popular ReleasesCAPTCHA Solver: Initial Release: This is the initial Release :) Still very much a WIP.MCEBuddy 2.x: MCEBuddy 2.2.17: Reccomended update to 2.2.16 Changelog for 2.2.17 (32bit and 64bit) 1. Fixed bugs around thread synchronization with new remote model (fixes cause the app to crash or hang) 2. Updated UPnP code base, faster and more reliable now 3. Now you can get audio/video properties for multiple files on main page. Selected multiple files and right click, all selected files properties will be shown. 4. Fix a bug, not able to enter a conversion task name in the GUIAggravation: Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Experssion Blend 4 installed.Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes Built against KeePass 2.20Windows 8 Toolkit - Charts and More: Beta 1.0: The First Compiled Version of my LibraryPDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 ????,????Web??????。 PDF.NET Ver4.5 Open Source Code,include a sample Web application project.D3 Loot Tracker: 1.4: Session name is displayed in the UI. Changes data directory for clickonce deployment so that sessions files are persisted between versions. Added a delete button in the sessions list window. Allow opening of the sessions local folder from the session list widow. Display the session name in the main window Ability to select which diablo process to hook up to when pressing new () function BUT only if multi-process support is selected in the generals settings tab menu. Session picker...CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor 1.1 Beta: Visual Ribbon Editor 1.1 Beta What's New: Fixed scrolling issue in UnHide dialog Added support for connecting via ADFS / IFD Added support for more than one action for a button Added support for empty StringParameter for Javascript functions Fixed bug in rule CrmClientTypeRule when selecting Outlook option Extended Prefix field in New Button dialogVisual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1 Contains the following improvements: Better support for detecting the installed languages The extract & inject commands won’t run if Visual Studio is running You may now run in extract or inject mode The p/invoke code was cleaned up based on Code Analysis recommendations When a p/invoke method fails the Win32 error message is now displayed Error messages use red text Status messages use green textZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2393 of the java version improved api better Unity support Windows RT binaries Windows CE binaries new Windows Service demo new WPF demo WindowsCE Hotfix: Fixes an error with ISO8859-1 encoding and scannning of QR-Codes. The hotfix is only needed for the WindowsCE platform.C.B.R. : Comic Book Reader: CBR 0.7: Synthesis since 0.6 : ePUB : Complete refactoring Add a new dedicated feed viewer for opds stream PDF conversion : improved with image merge Make all backstage panel scrollable Integrate the new AvalonDock 2 library. Support multi-document. 
Library explorer and Table of content are now toolboxes Designer for dynamic books is now mvvm and much better New BrowserForControl Customized xps viewer to suppress toolbars and bind it to cbr commands Add quick start manual and button ...menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).Rawr: Rawr 5.0.0: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also add a project management module.VidCoder: 1.4.1 Beta: Updated to HandBrake 4971. This should fix some issues with stuck PGS subtitles. Fixed build break which prevented pre-compiled XML serializers from showing up. Fixed problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.Snake!: Snake 1.0: Version 1 StablePaging SharePoint ListItems using listitems position: Paginglistitems V1.0: This is a console application which has two methods both on CSOM and SOM to display the listitems in a paged manner.SharePoint Move Discussion Threads: SharePoint Move Discussion Threads ver 0.1: ver 0.1NTCPMSG: V1.1.1.0: increase the performance. Support .net framework 4.0.BlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2 (1)InetTest?? (2)HTTP?????????????????100???????????New Projects2D Sprite Editor: This is a 2d sprite editor. Import your sprite sheet, trace your animations frame and export the coordinates points in a simple txt file, ready to import.caifenweb1: test project.CatchThatException: This is a small logging library We created at developerpath.com to help us log exceptions. It write it to a text file and you can easilay open that txt.FsxWs - WebServices for Microsoft FSX: WebServices for MS Flight Simulator. Get flights data as JSON, KML. !! Still in SetUp phase - be patient !!GetTPB: Some training in downloading and parsing web pages, with multithreading too.JSON-RPC Client Generator (for XBMC): The goal of this project is to provide a .Net client for the XBMC JSONRPC API. The main part is not XBMC dependent and may be used for any JSON-RPC client.matlab-silhouette-pose-wtf: Whatevermfp: this is random codeMVC Grid: MVC Grid ExampleMyWebSocketTry: sssssssssssssssssssssssssssssssssssssssNetduino Console: Netduino Console is an interface with built in messaging layers that allows you as a developer to dynamically create plugins following a provided interface to iSharePoint ASP.NET Verifier: Project will allow to verify SharePoint 2010 components using ASP.NET web applicationSharepoint Custom Upload: This is a SharePoint solution that allows an administrator to customize the upload page individually for each document library in a site.. It allows you to makeWinWeb Browser Deluxe: WinWeb Browser Deluxe es un navegador web de código abierto basado en Internet Explorer hecho en Visual Basic .NET. 
Download it now! writethatoutput: This is the official release page for WriteThatOutPut from developerpath.com

    Read the article

  • Stream Media and Live TV Across the Internet with Orb

    - by DigitalGeekery
    Looking for a way to stream your media collection across the Internet? Or perhaps watch and record TV remotely? Today we are going to look at how to do all that and more with Orb. Requirements Windows XP / Vista / 7 or Intel based Mac w/ OS X 10.5 or later. 1 GB RAM or more Pentium 4 2.4 GHz or higher / AMD Athlon 3200+ Broadband connections TV Tuner for streaming and recording live TV (optional) Note: Slower internet connections may result in stuttering during playback. Installation and Setup Download and install Orb on your home computer. (Download link below) You’ll want to take the defaults for the initial portion of the install. When we get to the Orb Account setup portion of the install is when we will have to enter information and make some decisions. Choose your language and click Next. We’ll need to create and user account and password. A valid email address is required as we’ll need to confirm the account later. Click Next.   Now you’ll want to choose your media sources. Orb will automatically look for folders that may contain media files. You can add or remove folders click on the (+) or (-) buttons. To remove a folder, click on it once to select it from the list and then click the minus (-) button. To add a folder, click the plus (+) button and browse for the folder. You can add local folders as well as shared folders from networked computers and USB attached storage. Note: Both the host computer running Orb and the networked computer will need to be running to access shared network folders remotely. When you’ve selected all your media files, click Next. Orb will proceed to index your media files… When the indexing is complete, click Next. Orb TV Setup Note: Streaming Live TV to Macs is not currently supported. If you have a TV tuner card connected to your PC, you can opt to configure Orb to stream live or recorded TV. Click Next  to configure TV. Or, choose Skip if you don’t wish to configure Orb for TV.   If you have a Digital tuner card, type in your Zip Code and click Get List to pull your channel listings. Select a TV provider from the list and click Next. If not, click Skip.   You can select or deselect any channels by checking or un-checking the box to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.   Next choose an analog provider, if necessary, and click Next.   Select “Yes” or “No” for a set top box and click Next. Just as we did with the Digital tuner, select or deselect any channels by checking or un-checking the box to each channel. Select Auto Scan to let Orb find more channels or disable the ones with no reception. Click Next when finished.   Now we’re finished with the setup. Click Close. Accessing your Media Remotely Media files are accessed through a web-based interface. Before we go any further, however, we’ll need to confirm our username and password. Check your inbox for an email from Orb Networks. Click the enclosed confirmation link. You’ll be prompted to enter the username and password you selected in your browser then click Next.   Your account will be confirmed. Now, we’re ready to enjoy our media remotely. To get started, point your browser to the MyCast website from your remote computer. (See link below) Enter your credentials and click Log In. Once logged in, you’ll be presented with the MyCast Home screen. By default you’ll see a handful of “channels” such as a TV program guide, random audio and photos, video favorites, and weather. 
You can add, remove, or customize channels. To add additional channels, click on Add Channels at the top right…   …and select from the dropdown list. To access your full media libraries, click Open Application at the top left and select from one of the options. Live and Recorded TV If you have a TV tuner card you configured for Orb, you’ll see your program guide on the TV / Webcams screen. To watch or record a show, click on the program listing to bring up a detail box. Then click the red button to record, or the green button to play. When recording a show, you’ll see a pulsating red icon at the top right of the listing in the program guide. If you want to watch Live TV, you may be prompted to choose your media player, depending on your browser and settings. Playback should begin shortly.   Note for Windows Media Center Users If you try to stream live TV in Orb while Windows Media Center is running on your PC, you’ll get an error message. Click the Stop MediaCenter button and then try again.   Audio On the Audio screen, you’ll find your music files indexed by genre, artist, and album. You can play a selection by clicking once and then clicking the green play button, or by simply double-clicking.   Playback will begin in the default media player for the streaming format.   Video Video works essentially the same as audio. Click on a selection and press the green play button, or double-click on the video title. Video playback will begin in the default media player for the streaming format.   Streaming Formats You can change the default streaming format in the control panel settings. To access the Control Panel, click on Open Applications  and select Control Panel. You can also click Settings at the top right.   Select General from the drop down list and then click on the Streaming Formats tab. You are provided four options. Flash, Windows Media, .SDP, and .PLS.   Creating Playlists To create playlists, drag and drop your media title to the playlist work area on the right, or click Add to playlist on the top menu. Click Save when finished.    Sharing your Media Orb allows you to share media playlists across the Internet with friends and family. There are a few ways to accomplish this. We’ll start by click the Share button at the bottom of the playlist work area after you’ve compiled your playlist. You’ll be prompted to choose a method by which to share your playlist. You’ll have the option to share your playlist publicly or privately. You can share publically through links, blogs, or on your Orb public profile.  By choosing the Public Profile option, Orb will automatically create a profile page for you with a URL like http://public.orb.com/username that anyone can easily access on the Internet. The private sharing option allows you to invite friends by email and requires recipients to register with Orb. You can also give your playlist a custom name, or accept the auto-generated title. Click OK when finished. Users who visit your public profile will be able to view and stream any of your shared playlists to their computer or supported device.   Portable Media Devices and Smartphones Orb can stream media to many portable devices and 3G phones. Streaming audio is supported on the iPhone and iPod Touch through the Safari browser. However, video and live TV streaming requires the Orb Live iPhone App.  Orb Live is available in the App store for $9.99. To stream media to your portable device, go to the MyCast website in your mobile browser and login. Browse for your media or playlist. 
Make a selection and play the media. Playback will begin. We found streaming music to both the Droid and the iPhone to work quite nicely. Video playback on the Droid, however, left a bit to be desired. The video looked good, but the audio tended to be out of sync. System Tray Control Panel By default Orb runs in the system tray on start up. To access the System Tray Control Panel, right-click on the Orb icon in the system tray and select Control Panel. Login with your Orb username and  password and click OK.   From here you can add or remove media sources, add manage accounts, change your password, and more. If you’d rather not run Orb on Startup, click the General icon.   Unselect the checkbox next to Start Orb when the system starts. Conclusion It may seem like a lot of steps, but getting Orb up and running isn’t terribly difficult. Orb is available for both Windows and Intel based Macs. It also supports streaming to many Game Consoles such as the Wii, PS3, and XBox 360. If you are running Windows 7 on multiple computers, you may want to check out our write-up on how to stream music and video over the Internet with Windows Media Player 12. Downloads Download Orb Logon to MyCast Similar Articles Productive Geek Tips Stream Music and Video Over the Internet with Windows Media Player 12Enable Media Streaming in Windows Home Server to Windows Media PlayerStream Media from Windows 7 to XP with VLC Media PlayerShare Digital Media With Other Computers on a Home Network with Windows 7Automatically Start Windows 7 Media Center in Live TV Mode TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips DVDFab 6 Revo Uninstaller Pro Registry Mechanic 9 for Windows PC Tools Internet Security Suite 2010 Looking for Good Windows Media Player 12 Plug-ins? Find Out the Celebrity You Resemble With FaceDouble Whoa ! Use Printflush to Solve Printing Problems Icelandic Volcano Webcams Open Multiple Links At One Go

    Read the article

  • Preserving Permalinks

    - by Daniel Moth
    One of the things that gets me on a rant is websites that break permalinks. If you have posted something somewhere and there is a public URL pointing to it, that URL should never ever return a 404. You are breaking all websites that ever linked to you and you are breaking all search engine links to your content (that others will try and follow). It is a pet peeve of mine. So when I had to move my blog, obviously I would preserve the root URL (www.danielmoth.com/Blog/), but I also wanted to preserve every URL my blog has generated over the years. To be clear, our focus here is on the URL formatting, not the content migration which I'll talk about in my next post. In this post, I'll describe my solution first and then what it solves. 1. The IIS7 Rewrite Module and web.config There are a few ways you can map an old URL to a new one (so when requests to the old URL come in, they get redirected to the new one). The new blog engine I use (dasBlog) has built-in functionality to do that (Scott refers to it here). Instead, the way I chose to address the issue was to use the IIS7 rewrite module. The IIS7 rewrite module allows redirecting URLs based on pattern matching, regular expressions and, of course, hardcoded full URLs for things that don't fall into any pattern. You can configure it visually from IIS Manager using a handy dialog that allows testing patterns against input URLs. Here is what mine looked like after configuring a few rules: To learn more about this technology check out this video, the reference page and this overview blog post; all 3 pages have a collection of related resources at the bottom worth checking out too. All the visual configuration ends up in a web.config file at the root folder of your website. If you are on a shared hosting service, probably the only way you can use the Rewrite Module is by directly editing the web.config file. Next, I'll describe the URLs I had to map and how that manifested itself in the web.config file. What I did was create the rules locally using the GUI, and then took the generated web.config file and uploaded it to my live site. You can view my web.config here. 2. Monthly Archives Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004_07_01_mothblog_archive.html dasBlog: /Blog/default,month,2004-07.aspx In my web.config file, the rule that deals with this is the one named "monthlyarchive_redirect". 3. Categories Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/labels/Personal.html dasBlog: /Blog/CategoryView,category,Personal.aspx In my web.config file the rule that deals with this is the one named "category_redirect". 4. Posts Observe the difference between the way the two blog engines generate this type of URL Blogger: /Blog/2004/07/hello-world.html dasBlog: /Blog/Hello-World.aspx In my web.config file the rule that deals with this is the one named "post_redirect". Note: The decision is taken to use dasBlog URLs that do not include the date info (see the description of my Appearance settings). If we included the date info then it would have to include the day part, which blogger did not generate. This makes it impossible to redirect correctly and to have a single permalink for blog posts moving forward. An implication of this decision, is that no two blog posts can have the same title. The tool I will describe in my next post (inelegantly) deals with duplicates, but not with triplicates or higher. 5. 
Unhandled by a generic rule Unfortunately, the two blog engines use different rules for generating URLs for blog posts. Most of the time the conversion is as simple as the example of the previous section where a post titled "Hello World" generates a URL with the words separated by a hyphen. Some times that is not the case, for example: /Blog/2006/05/medc-wrap-up.html /Blog/MEDC-Wrapup.aspx or /Blog/2005/01/best-of-moth-2004.html /Blog/Best-Of-The-Moth-2004.aspx or /Blog/2004/11/more-windows-mobile-2005-details.html /Blog/More-Windows-Mobile-2005-Details-Emerge.aspx In short, blogger does not add words to the title beyond ~39 characters, it drops some words from the title generation (e.g. a, an, on, the), and it preserve hyphens that appear in the title. For this reason, we need to detect these and explicitly list them for redirects (no regular expression can help here because the full set of rules is not listed anywhere). In my web.config file the rule that deals with this is the one named "Redirect rule1 for FullRedirects" combined with the rewriteMap named "StaticRedirects". Note: The tool I describe in my next post will detect all the URLs that need to be explicitly redirected and will list them in a file ready for you to copy them to your web.config rewriteMap. 6. C# code doing the same as the web.config I wrote some naive code that does the same thing as the web.config: given a string it will return a new string converted according to the 3 rules above. It does not take into account the 4th case where an explicit hard-coded conversion is needed (the tool I present in the next post does take that into account). static string REGEX_post_redirect = "[0-9]{4}/[0-9]{2}/([0-9a-z-]+).html"; static string REGEX_category_redirect = "labels/([_0-9a-z-% ]+).html"; static string REGEX_monthlyarchive_redirect = "([0-9]{4})_([0-9]{2})_[0-9]{2}_mothblog_archive.html"; static string Redirect(string oldUrl) { GroupCollection g; if (RunRegExOnIt(oldUrl, REGEX_post_redirect, 2, out g)) return string.Concat(g[1].Value, ".aspx"); if (RunRegExOnIt(oldUrl, REGEX_category_redirect, 2, out g)) return string.Concat("CategoryView,category,", g[1].Value, ".aspx"); if (RunRegExOnIt(oldUrl, REGEX_monthlyarchive_redirect, 3, out g)) return string.Concat("default,month,", g[1].Value, "-", g[2], ".aspx"); return string.Empty; } static bool RunRegExOnIt(string toRegEx, string pattern, int groupCount, out GroupCollection g) { if (pattern.Length == 0) { g = null; return false; } g = new Regex(pattern, RegexOptions.IgnoreCase | RegexOptions.Compiled).Match(toRegEx).Groups; return (g.Count == groupCount); } Comments about this post welcome at the original blog.
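To sanity-check the naive C# above against the URL shapes listed in the post, a tiny console harness is enough. This assumes the three REGEX_* fields and the Redirect/RunRegExOnIt methods are pasted into the same class (plus using System; and using System.Text.RegularExpressions;); the expected outputs are noted in the comments:

```csharp
static void Main()
{
    string[] oldUrls =
    {
        "/Blog/2004/07/hello-world.html",           // post rule
        "/Blog/labels/Personal.html",               // category rule
        "/Blog/2004_07_01_mothblog_archive.html",   // monthly archive rule
        "/Blog/2006/05/medc-wrap-up.html"           // post rule fires, but yields
                                                    // "medc-wrap-up.aspx" - one of the
                                                    // cases that needs the static map
    };

    foreach (var url in oldUrls)
    {
        // Expected: hello-world.aspx
        //           CategoryView,category,Personal.aspx
        //           default,month,2004-07.aspx
        //           medc-wrap-up.aspx
        Console.WriteLine("{0} -> {1}", url, Redirect(url));
    }
}
```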

    Read the article

< Previous Page | 219 220 221 222 223 224 225 226 227 228 229 230  | Next Page >