Search Results

Search found 1816 results on 73 pages for 'attach'.

Page 50 of 73

  • Do you care about your Oracle System Support experience?

    - by user12244613
    It has been a while since I blogged about Systems Support within Oracle. I want to take this opportunity to raise awareness of how Oracle is communicating with its systems customers. Previously, every item to be communicated was sent independently via an email message; however, not all messages appear to be getting the attention they require. In an effort to ensure Oracle is reaching all of our Sun and Oracle System customers, we have created the Oracle Systems Support Newsletter. This monthly newsletter will summarize customer-support-relevant information for you to use and will cover topics that impact your support experience. For example:
    1. Did you know that sending explorer content to email addresses with @sun.com is going away soon? For more information, review Document 1362484.1.
    2. Are you an Auto Service Request (ASR) user? If yes, here are the latest changes:
       - ASR Manager accepts My Oracle Support user name (email address) and password. [Doc ID 1345484.1]
       - The ASR IP address for secure file transfer has changed. [Doc ID 1338575.1]
       - ASR No Heartbeat Status: find out how to resolve it. [Doc ID 1346328.1]
    3. Did you notice we have changed the Service Request options for Hardware and introduced a new problem category called "Automated Diagnosis"? This service streamlines the data you send in and then automatically provides an update of known issues found in your My Oracle Support Service Request. This feature also fast-tracks hardware failures by sending parts as soon as the data is analyzed. Have you used this new feature? If yes, tell us about it – take the 5-minute survey.
    4. Are you being proactive, or are you still 'fire fighting' in reactive mode? If you are being proactive for your Oracle System products, you might have used Oracle Sun System Analysis. Did you find it helpful? Can we improve it? You tell us; take the 5-minute survey.
    5. Are you aware that attaching files to your Service Request enables the support engineer to start work straight away? For a summary of products and files, review the Newsletter.
    6. Are you struggling to find patches, firmware, or product downloads? If yes, these types of issues are all addressed in the Newsletter.
    If this is the type of information you want to know about each month, then take time to read the Newsletter, and bookmark it in My Oracle Support so you can stay informed. Thanks for your time.

    Read the article

  • Identify memory leak in Java app

    - by Vincent Ma
    One important advantage of Java is that programmers don't have to manage memory themselves; the GC handles it well. Maybe this is one reason why Java is so popular. But as a Java programmer, can you really ignore it? Once you hit an OutOfMemoryError you will realize that's not true. Java GC and memory are a big topic, and you can get more information here. Today let me just show how to identify a memory leak quickly. Let's quickly review the demo Java code. It shows one common kind of memory leak in our code: a static collection to which objects are continually added.

        import java.util.ArrayList;
        import java.util.List;

        public class MemoryTest {
            public static void main(String[] args) {
                new Thread(new MemoryLeak(), "MemoryLeak").start();
            }
        }

        class MemoryLeak implements Runnable {
            // Static list that is never cleared: everything added here stays reachable forever.
            public static List<Integer> leakList = new ArrayList<Integer>();

            public void run() {
                int num = 0;
                while (true) {
                    try {
                        Thread.sleep(1);
                    } catch (InterruptedException e) {
                    }
                    num++;
                    Integer i = new Integer(num);
                    leakList.add(i);
                }
            }
        }

    Run it with:

        java -verbose:gc -XX:+PrintGCDetails -Xmx60m -XX:MaxPermSize=160m MemoryTest

    After a few minutes you will get:

        Exception in thread "MemoryLeak" java.lang.OutOfMemoryError: Java heap space
            at java.util.Arrays.copyOf(Arrays.java:2760)
            at java.util.Arrays.copyOf(Arrays.java:2734)
            at java.util.ArrayList.ensureCapacity(ArrayList.java:167)
            at java.util.ArrayList.add(ArrayList.java:351)
            at MemoryLeak.run(MemoryTest.java:25)
            at java.lang.Thread.run(Thread.java:619)
        Heap
         def new generation   total 18432K, used 3703K [0x045e0000, 0x059e0000, 0x059e0000)
          eden space 16384K,  22% used [0x045e0000, 0x0497dde0, 0x055e0000)
          from space  2048K,   0% used [0x055e0000, 0x055e0000, 0x057e0000)
          to   space  2048K,   0% used [0x057e0000, 0x057e0000, 0x059e0000)
         tenured generation   total 40960K, used 40959K [0x059e0000, 0x081e0000, 0x081e0000)
           the space 40960K,  99% used [0x059e0000, 0x081dfff8, 0x081e0000, 0x081e0000)
         compacting perm gen  total 12288K, used 2083K [0x081e0000, 0x08de0000, 0x10de0000)
           the space 12288K,  16% used [0x081e0000, 0x083e8c50, 0x083e8e00, 0x08de0000)
        No shared spaces configured.

    OK, let's quickly identify the leak using JProfiler (download JProfiler here). Run JProfiler and attach it to MemoryTest. In the Memory View, find the object type with the largest size; here it is Integer. Select Integer and go to the Heap Walker, then get the GC graph for this object. That leads you straight to the code that raises the issue. That's it. Enjoy.

    Read the article

  • How to create Custom ListForm WebPart

    - by DipeshBhanani
    Almost everyone who works extensively with SharePoint (including me) dislikes using the out-of-the-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as the interface. These OOB list forms tie developers' hands when it comes to customization. It is a headache just to add one postback event for a dropdown field and populate other fields in NewForm.aspx or EditForm.aspx, and clients ask for exactly that kind of thing all the time. So here I am going to give you a flying start into the world of SharePoint customization. In this blog I will explain how to create a custom ListForm WebPart. In my next blogs I will explain easy deployment of list forms through features and, last, give guidance on using SharePoint web controls.

    1. First, create a class library project in Visual Studio and inherit your class from the WebPart class.

        public class CustomListForm : WebPart

    2. Declare the public variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.

        #region "Variable Declaration"

        Table spTableCntl;
        FormToolBar formToolBar;
        Literal ltAlertMessage;
        Guid SiteId;
        Guid ListId;
        int ItemId;
        string ListName;

        #endregion

        #region "Properties"

        SPControlMode _ControlMode = SPControlMode.New;
        [Personalizable(PersonalizationScope.Shared),
         WebBrowsable(true),
         WebDisplayName("Control Mode"),
         WebDescription("Set Control Mode"),
         DefaultValue(""),
         Category("Miscellaneous")]
        public SPControlMode ControlMode
        {
            get { return _ControlMode; }
            set { _ControlMode = value; }
        }

        #endregion

    The property "ControlMode" is used to identify the mode of the list form. It is of type SPControlMode, an enum with the values Display, Edit, New and Invalid. When we add this WebPart to DispForm.aspx, EditForm.aspx and NewForm.aspx, we will set the WebPart property "ControlMode" to Display, Edit and New respectively.

    3. Now we need to override the CreateChildControls method and write code to manually add the SharePoint web controls related to each list field as well as the toolbar controls.

        protected override void CreateChildControls()
        {
            base.CreateChildControls();

            try
            {
                SiteId = SPContext.Current.Site.ID;
                ListId = SPContext.Current.ListId;
                ListName = SPContext.Current.List.Title;

                if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)
                    ItemId = SPContext.Current.ItemId;

                SPSecurity.RunWithElevatedPrivileges(delegate()
                {
                    using (SPSite site = new SPSite(SiteId))
                    {
                        //creating a new SPSite with credentials of System Account
                        using (SPWeb web = site.OpenWeb())
                        {
                            //<Custom Code for creating form controls>
                        }
                    }
                });
            }
            catch (Exception ex)
            {
                ShowError(ex, "CreateChildControls");
            }
        }

    Here we are assuming that we are developing this WebPart to plug into list forms, hence we get the List Id and List Name from the current context.
    We can have an Item Id only in the case of display and edit mode. We are putting our code inside "RunWithElevatedPrivileges" to elevate privileges to the System Account. Now, let's dig into the main code and expand "//<Custom Code for creating form controls>". Before initiating any SharePoint control, we need to set the context of the SharePoint web controls explicitly so that they will be instantiated with the elevated System Account user. The following line does the job:

        //To create SharePoint controls with new web object and System Account credentials
        SPControl.SetContextWeb(Context, web);

    First, let's add the main table as a container for all controls.

        //Table to render webpart
        Table spTableMain = new Table();
        spTableMain.CellPadding = 0;
        spTableMain.CellSpacing = 0;
        spTableMain.Width = new Unit(100, UnitType.Percentage);
        this.Controls.Add(spTableMain);

    Now we need to add the top toolbar with the Save and Cancel buttons, as you see in the screenshot below.

        // Add Row and Cell for Top ToolBar
        TableRow spRowTopToolBar = new TableRow();
        spTableMain.Rows.Add(spRowTopToolBar);
        TableCell spCellTopToolBar = new TableCell();
        spRowTopToolBar.Cells.Add(spCellTopToolBar);
        spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);

        ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
        toolBarTop.CssClass = "ms-formtoolbar";
        toolBarTop.ID = "toolBarTbltop";
        toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

        if (_ControlMode != SPControlMode.Display)
        {
            SaveButton btnSave = new SaveButton();
            btnSave.ControlMode = _ControlMode;
            btnSave.ListId = ListId;

            if (_ControlMode == SPControlMode.New)
                btnSave.RenderContext = SPContext.GetContext(web);
            else
            {
                btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
                btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
                btnSave.ItemId = ItemId;
            }
            toolBarTop.RightButtons.Controls.Add(btnSave);
        }

        GoBackButton goBackButtonTop = new GoBackButton();
        toolBarTop.RightButtons.Controls.Add(goBackButtonTop);
        goBackButtonTop.ControlMode = SPControlMode.Display;

        spCellTopToolBar.Controls.Add(toolBarTop);

    Here we have used "SaveButton" and "GoBackButton", which are internal SharePoint web controls for the save and cancel functionality. I have set some of the properties of the Save button inside an if-else condition because we will not have an Item Id in new mode. The Item Id property is used to identify which SharePoint list item needs to be saved. Now, add the Form Toolbar to the page, which contains the "Attach File", "Delete Item" etc. buttons.
        // Add Row and Cell for FormToolBar
        TableRow spRowFormToolBar = new TableRow();
        spTableMain.Rows.Add(spRowFormToolBar);
        TableCell spCellFormToolBar = new TableCell();
        spRowFormToolBar.Cells.Add(spCellFormToolBar);
        spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);

        FormToolBar formToolBar = new FormToolBar();
        formToolBar.ID = "formToolBar";
        formToolBar.ListId = ListId;
        if (_ControlMode == SPControlMode.New)
            formToolBar.RenderContext = SPContext.GetContext(web);
        else
        {
            formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
            formToolBar.ItemId = ItemId;
        }
        formToolBar.ControlMode = _ControlMode;
        formToolBar.EnableViewState = true;

        spCellFormToolBar.Controls.Add(formToolBar);

    The ControlMode property takes care of which buttons are displayed on the toolbar, e.g. "Attach File" and "Delete Item" in new/edit forms, and "New Item", "Edit Item", "Delete Item", "Manage Permissions" etc. in display forms. Now add the main section, which contains the form field controls.

        //Add a Blank Row with height of 5px to render space between ToolBar table and Control table
        TableRow spRowLine1 = new TableRow();
        spTableMain.Rows.Add(spRowLine1);
        TableCell spCellLine1 = new TableCell();
        spRowLine1.Cells.Add(spCellLine1);
        spCellLine1.Height = new Unit(5, UnitType.Pixel);
        spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

        //Add Row and Cell for Form Controls Section
        TableRow spRowCntl = new TableRow();
        spTableMain.Rows.Add(spRowCntl);
        TableCell spCellCntl = new TableCell();

        //Create Form Field controls and add them in Table "spCellCntl"
        CreateFieldControls(web);
        //Add public variable "spCellCntl" containing all form controls to the page
        spRowCntl.Cells.Add(spCellCntl);
        spCellCntl.Width = new Unit(100, UnitType.Percentage);
        spCellCntl.Controls.Add(spTableCntl);

        TableRow spRowLine2 = new TableRow();
        spTableMain.Rows.Add(spRowLine2);
        TableCell spCellLine2 = new TableCell();
        spRowLine2.Cells.Add(spCellLine2);
        spCellLine2.CssClass = "ms-formline";
        spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

        // Add Blank row with height of 5 pixel
        TableRow spRowLine3 = new TableRow();
        spTableMain.Rows.Add(spRowLine3);
        TableCell spCellLine3 = new TableCell();
        spRowLine3.Cells.Add(spCellLine3);
        spCellLine3.Height = new Unit(5, UnitType.Pixel);
        spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    You can also add a bottom toolbar to get the same look and feel as the OOB forms; I am not adding it here as the blog would get much too lengthy. At last, you need to write the following lines to allow unsafe updates for the Save and Delete buttons.
        // Allow unsafe update on web for save button and delete button
        if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null
            && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")
            || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))
        {
            SPContext.Current.Web.AllowUnsafeUpdates = true;
        }

    So that's all; we have finished writing the custom code for adding the field controls. But the most important piece has been skipped: in the code above I called the function "CreateFieldControls(web)" to add the SharePoint field controls to the page. Let's see the implementation of that function:

        private void CreateFieldControls(SPWeb pWeb)
        {
            SPList listMain = pWeb.Lists[ListId];
            SPFieldCollection fields = listMain.Fields;

            //Main Table to render all fields
            spTableCntl = new Table();
            spTableCntl.BorderWidth = new Unit(0);
            spTableCntl.CellPadding = 0;
            spTableCntl.CellSpacing = 0;
            spTableCntl.Width = new Unit(100, UnitType.Percentage);
            spTableCntl.CssClass = "ms-formtable";

            SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);

            foreach (SPField listField in fields)
            {
                string fieldDisplayName = listField.Title;
                string fieldInternalName = listField.InternalName;

                //Skip if the field is system field or hidden
                if (listField.Hidden || listField.ShowInVersionHistory == false)
                    continue;

                //Skip read-only fields when the control mode is not display
                if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                    continue;

                FieldLabel fieldLabel = new FieldLabel();
                fieldLabel.FieldName = listField.InternalName;
                fieldLabel.ListId = ListId;

                BaseFieldControl fieldControl = listField.FieldRenderingControl;
                fieldControl.ListId = ListId;
                //Assign unique id using Field Internal Name
                fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
                fieldControl.EnableViewState = true;

                //Assign control mode
                fieldLabel.ControlMode = _ControlMode;
                fieldControl.ControlMode = _ControlMode;
                switch (_ControlMode)
                {
                    case SPControlMode.New:
                        fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                        fieldControl.RenderContext = SPContext.GetContext(pWeb);
                        break;
                    case SPControlMode.Edit:
                    case SPControlMode.Display:
                        fieldLabel.RenderContext = controlContext;
                        fieldLabel.ItemContext = controlContext;
                        fieldLabel.ItemId = ItemId;

                        fieldControl.RenderContext = controlContext;
                        fieldControl.ItemContext = controlContext;
                        fieldControl.ItemId = ItemId;
                        break;
                }

                //Add row to display a field row
                TableRow spCntlRow = new TableRow();
                spTableCntl.Rows.Add(spCntlRow);

                //Add the cells for containing field label and control
                TableCell spCellLabel = new TableCell();
                spCellLabel.Width = new Unit(30, UnitType.Percentage);
                spCellLabel.CssClass = "ms-formlabel";
                spCntlRow.Cells.Add(spCellLabel);
                TableCell spCellControl = new TableCell();
                spCellControl.Width = new Unit(70, UnitType.Percentage);
                spCellControl.CssClass = "ms-formbody";
                spCntlRow.Cells.Add(spCellControl);

                //Add the control to the table cells
                spCellLabel.Controls.Add(fieldLabel);
                spCellControl.Controls.Add(fieldControl);

                //Add description if there is any in case of New and Edit Mode
                if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)
                {
                    FieldDescription fieldDesc = new FieldDescription();
                    fieldDesc.FieldName = fieldInternalName;
                    fieldDesc.ListId = ListId;
                    spCellControl.Controls.Add(fieldDesc);
                }

                //Disable Name(Title) in Edit Mode
                if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
                {
                    TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                    txtTitlefield.Enabled = false;
                }
            }
            fields = null;
        }

    First of all, I have declared the list object and fetched the list fields into a field collection object called "fields". Then I added a table as the container for all controls and assigned it the CSS class "ms-formtable" so that it gets the consistent SharePoint look and feel. Now it's time to iterate through all the fields and add them where required. We don't need to add hidden or system fields, and we also don't want to display read-only fields in the new and edit forms. The following lines do this job:

        //Skip if the field is system field or hidden
        if (listField.Hidden || listField.ShowInVersionHistory == false)
            continue;

        //Skip read-only fields when the control mode is not display
        if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
            continue;

    Let's move to the next lines of code.

        FieldLabel fieldLabel = new FieldLabel();
        fieldLabel.FieldName = listField.InternalName;
        fieldLabel.ListId = ListId;

        BaseFieldControl fieldControl = listField.FieldRenderingControl;
        fieldControl.ListId = ListId;
        //Assign unique id using Field Internal Name
        fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
        fieldControl.EnableViewState = true;

        //Assign control mode
        fieldLabel.ControlMode = _ControlMode;
        fieldControl.ControlMode = _ControlMode;

    We have used the "FieldLabel" control for displaying the field title. The advantage of using FieldLabel is that SharePoint automatically adds a red star beside the label of any mandatory field. Here is the most important part to understand: the "BaseFieldControl". It renders the appropriate web control according to the type of the field. For example, for a single line of text it renders a textbox, and for a lookup it renders a dropdown. Additionally, the "ControlMode" property tells the control which mode (display/edit/new) it needs to be rendered in. In display mode it renders a label with the field value; in edit mode it renders the respective control with the item's value; and in new mode it renders the respective control with an empty value. Please note that a dropdown is not always what gets rendered for a Lookup or Choice field; you need to understand which controls are rendered for which list fields.
    I am planning to write a separate blog on that, which I hope to publish very soon. Moreover, we also need to assign list-field-specific properties like List Id, Field Name etc. to identify which SharePoint list field is attached to the control.

        switch (_ControlMode)
        {
            case SPControlMode.New:
                fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                fieldControl.RenderContext = SPContext.GetContext(pWeb);
                break;
            case SPControlMode.Edit:
            case SPControlMode.Display:
                fieldLabel.RenderContext = controlContext;
                fieldLabel.ItemContext = controlContext;
                fieldLabel.ItemId = ItemId;

                fieldControl.RenderContext = controlContext;
                fieldControl.ItemContext = controlContext;
                fieldControl.ItemId = ItemId;
                break;
        }

    Here I have separate code for new mode and edit/display mode, because we will not have an Item Id to assign in new mode. We also need to set the CSS classes for the cells containing the label and the control so that they get rendered with the SharePoint theme.

        spCellLabel.CssClass = "ms-formlabel";
        spCellControl.CssClass = "ms-formbody";

    The "FieldDescription" control is used to add the field description, if there is any. Now it's time to add some more customization:

        //Disable Name(Title) in Edit Mode
        if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
        {
            TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
            txtTitlefield.Enabled = false;
        }

    The above code disables the title field in edit mode. You can add more code here to achieve further customization according to your requirements. Some examples are as follows:

        //Adding post back event on UserField to auto populate some other dependent field
        //in new mode and disable it in edit mode
        if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")
        {
            if (fieldControl.Controls[0].FindControl("UserField") != null)
            {
                PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");
                if (_ControlMode == SPControlMode.New)
                    pplEditor.AutoPostBack = true;
                else
                    pplEditor.Enabled = false;
            }
        }

        //Add JavaScript Event on Dropdown field. Don't forget to add the JavaScript function on the page.
        if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")
        {
            DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];
            ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));
        }

    Following are the screenshots of my custom ListForm WebPart (DispForm.aspx, EditForm.aspx and NewForm.aspx). Let's play a game: check out your OOB list forms in SharePoint, compare them with these screens and spot the differences. Enjoy the SharePoint soup!

    Read the article

  • Keybindings for individual letter keys (not modifier-combinations) on a GtkTextView widget (Gtk3 and PyGI)

    - by monotasker
    I've been able to set several keyboard shortcuts for a GtkTextView and a GtkTextEntry using the new CSS provider system. I'm finding, though, that I only seem to be able to establish keybindings for combinations that include a modifier key. The widget doesn't respond to any bindings I set up that use:
    - the delete key
    - the escape key
    - individual letter or punctuation keys alone
    Here's the code where I set up the CSS provider for the keybindings:

        # set up the css provider for the keybindings
        keys = Gtk.CssProvider()
        keys.load_from_path(os.path.join(data_path, 'keybindings.css'))

        # set up style contexts and css providers
        widgets = {'window': self.window,
                   'vbox': self.vbox,
                   'toolbar': self.toolbar,
                   'search_entry': self.search_entry,
                   'paned': self.paned,
                   'notelist_treeview': self.notelist_treeview,
                   'notelist_window': self.notelist_window,
                   'notetext_window': self.notetext_window,
                   'editor': self.editor,
                   'statusbar': self.statusbar
                   }
        for l, w in widgets.iteritems():
            w.get_style_context().add_provider(keys, Gtk.STYLE_PROVIDER_PRIORITY_USER)

    Then in keybindings.css this is an example of what works:

        @binding-set gtk-vi-text-view {
            bind "<ctrl>b" { "move-cursor" (display-lines, -5, 0) }; /* 5 lines up */
            bind "<ctrl>k" { "move-cursor" (display-lines, -1, 0) }; /* up */
            bind "<ctrl>j" { "move-cursor" (display-lines, 1, 0) };  /* down */
        }

    Part of what I'm trying to do is just add proper delete-key behaviour to the text widgets (right now the delete key does nothing at all). So if I add a binding like one of these, nothing happens:

        bind "Delete" { "delete-selection" () };
        bind "Delete" { "delete-from-cursor" (chars, 1) };

    The other part of what I want to do is more elaborate. I want to set up something like Vim's command and visual modes. So at the moment I'm just playing around with (a) setting the widget to editable=false by hitting the esc key, and (b) using home-row letters to move the cursor (as a proof-of-concept exercise). So far there's no response from the escape key or from the letter keys, even though the bindings work when I apply them to modifier-key combinations. For example, I do this in the CSS for the text widget:

        bind "j" { "move-cursor" (display-lines, 1, 0) };      /* down */
        bind "k" { "move-cursor" (display-lines, -1, 0) };     /* up */
        bind "l" { "move-cursor" (logical-positions, 1, 0) };  /* right */
        bind "h" { "move-cursor" (logical-positions, -1, 0) }; /* left */

    but none of these bindings does anything, even if other bindings in the same set are respected. What's especially odd is that the vim-like movement bindings above are respected when I attach them to a GtkTreeView widget for navigating the tree-view options:

        @binding-set gtk-vi-tree-view {
            bind "j" { "move-cursor" (display-lines, 1) };  /* selection down */
            bind "k" { "move-cursor" (display-lines, -1) }; /* selection up */
        }

    So it seems like there are limitations or overrides of some kind on keybindings for the TextView widget (and for the del key?), but I can't find documentation of anything like that. Are these just things that can't be done with the CSS providers? If so, what are my alternatives for non-modified keybindings? Thanks.

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit; we call these "quantities" (big duh). Quantities and units are unique to a dimension. You can't attach kilogram to a length, for example. Math on quantities does automatic unit conversion to SI, and the type is dimension-safe (you can't assign a weight to a pressure, for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single custom case implemented within the system.
    There's a set of related quantities that our target audience apparently uses interchangeably. These quantities are used in special units that embed the conversion factors for the related quantity dimensions... in other words, when using these units, converting from one to another simply involves multiplying the value by 1 to the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio.
    I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit... BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement special overrides for our UI elements and such. That, of course, is oftentimes forgotten, and worse... after a couple of months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing and disabling the unit.
    I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes the most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort because it's just downright confusing as hell. However, I can't count the number of times someone has said, "Why is this being done this way? It makes no sense at all," and the team goes off the deep end trying to figure it out.
    What would you do? Currently I'm still attempting to convince them that even if they use the terms interchangeably, we at the very least can't do that within the product discussion. I don't have high hopes, though.
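    To make the kind of type being described concrete, here is a minimal sketch of a dimension-safe quantity type. The names, the dimension set and the SI-normalised internal representation are assumptions for illustration, not the poster's actual implementation (which may well enforce safety at compile time rather than at runtime).

        using System;

        // Illustrative only: a tiny dimension-safe quantity type.
        public enum Dimension { Length, Mass, Pressure, Dimensionless }

        public sealed class Unit
        {
            public string Symbol;
            public Dimension Dimension;
            public double ToSiFactor;   // factor that converts a value in this unit to SI

            public Quantity Of(double value)
            {
                return new Quantity(value * ToSiFactor, Dimension);
            }
        }

        public struct Quantity
        {
            public readonly double SiValue;       // always stored normalised to SI
            public readonly Dimension Dimension;

            public Quantity(double siValue, Dimension dimension)
            {
                SiValue = siValue;
                Dimension = dimension;
            }

            public static Quantity operator +(Quantity a, Quantity b)
            {
                // Dimension safety: refuse to mix, e.g., a mass with a pressure.
                if (a.Dimension != b.Dimension)
                    throw new InvalidOperationException("Dimension mismatch");
                return new Quantity(a.SiValue + b.SiValue, a.Dimension);
            }

            public double In(Unit unit)
            {
                if (unit.Dimension != Dimension)
                    throw new InvalidOperationException("Dimension mismatch");
                return SiValue / unit.ToSiFactor;
            }
        }

    A sketch like this makes the source of the disagreement visible: the "ratio" quantity the customer wants to treat like the others would carry Dimension.Dimensionless, so mixing it freely with the unit-bearing dimension forces exactly the special-case overrides described above.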

    Read the article

  • how to update child records when updating the Master table using Linq [closed]

    - by user20358
    I currently use a general repository class that can update only a single table, like so:

        public abstract class MyRepository<T> : IRepository<T> where T : class
        {
            protected IObjectSet<T> _objectSet;
            protected ObjectContext _context;

            public MyRepository(ObjectContext Context)
            {
                _objectSet = Context.CreateObjectSet<T>();
                _context = Context;
            }

            public IQueryable<T> GetAll()
            {
                return _objectSet.AsQueryable();
            }

            public IQueryable<T> Find(Expression<Func<T, bool>> filter)
            {
                return _objectSet.Where(filter);
            }

            public void Add(T entity)
            {
                _objectSet.AddObject(entity);
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Added);
                _context.SaveChanges();
            }

            public void Update(T entity)
            {
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Modified);
                _context.SaveChanges();
            }

            public void Delete(T entity)
            {
                _objectSet.Attach(entity);
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Deleted);
                _objectSet.DeleteObject(entity);
                _context.SaveChanges();
            }
        }

    For every table class generated by my EDMX designer I create another class like this:

        public class CustomerRepo : MyRepository<Customer>
        {
            public CustomerRepo(ObjectContext context) : base(context) { }
        }

    For any updates that I need to make to a particular table I do this:

        Customer CustomerObj = new Customer();
        CustomerObj.Prop1 = ...
        CustomerObj.Prop2 = ...
        CustomerObj.Prop3 = ...
        CustomerRepo.Update(CustomerObj);

    This works perfectly well when I am updating just the specific table called Customer. Now, if I also need to update each row of another table called Orders, which is a child of Customer, what changes do I need to make to the MyRepository class? The Orders table will have multiple records for a Customer record, and multiple fields too, say Field1, Field2, Field3. So my questions are:
    1. If I only need to update Field1 of the Orders table for some rows based on a condition, and Field2 for some other rows based on a different condition, what changes do I need to make?
    2. If there is no such condition and all child rows need to be updated with the same value for all rows, what changes do I need to make?
    Thanks for taking the time. Look forward to your inputs...
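    Purely as an illustrative sketch of one possible direction (not a verified answer): entities loaded through the same ObjectContext are change-tracked, so the child rows can be queried, modified according to the per-row conditions, and then persisted with a single SaveChanges call. The Order type, its CustomerId foreign key and the Field1/Field2/Field3 conditions below are hypothetical placeholders, not part of the original question's model.

        // Sketch only: assumes a hypothetical Order entity with CustomerId and Field1..Field3.
        // Requires: using System.Linq; using System.Data.Objects;
        public void UpdateCustomerOrders(ObjectContext context, int customerId)
        {
            var orders = context.CreateObjectSet<Order>();

            foreach (var order in orders.Where(o => o.CustomerId == customerId))
            {
                if (order.Field3 == "overdue")        // condition A: touch Field1
                    order.Field1 = "flagged";
                else if (order.Field3 == "open")      // condition B: touch Field2
                    order.Field2 = "review";

                // For question 2 (no condition), simply assign the same value
                // to every row instead of branching.
            }

            // Entities materialized by the context are tracked, so a single SaveChanges
            // persists all modified rows; no explicit ChangeObjectState call should be needed.
            context.SaveChanges();
        }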

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible duplicate: How could I prevent my mail from being recognized as spam?
    I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the spam folder in Yahoo and Gmail mailboxes, but not in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):

        Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90)
          by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
        Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de)
          by sXXXXXXXXX.online.de with esmtp (Exim 4.72)
          (envelope-from <[email protected]>)
          id 1SfT5k-00068r-Q8
          for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
        Received: from 83.136.130.41 (IP may be forged by CGI script)
          by infongdXXXXX.rtr.kundenserver.de with HTTP
          id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
        From: SENDER <[email protected]>
        To: "[email protected]" <[email protected]>
        Date: Fri, 15 Jun 2012 11:45:20 +0200
        Subject: This is the subject of the newsletter
        Thread-Topic: This is the subject of the newsletter
        Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
        List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
        X-MS-Has-Attach:
        X-Auto-Response-Suppress: All
        X-MS-TNEF-Correlator:
        x-originating-ip: [12.345.678.90]
        authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
        errors-to: "SENDER" <[email protected]>
        received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
        x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
        x-sender-info: <[email protected]>
        content-length: 13762
        Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
        MIME-Version: 1.0

    I cannot see any direct spam-filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After searching a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions:
    1. Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam?
    2. If so, does anyone know how I can get rid of it in Drupal?
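    Note that the header dump above also shows "received-spf: none" and "dkim=neutral (no sig)", i.e. the receiving servers could not verify the sender at all; publishing sender-authentication DNS records is a commonly suggested remedy that is separate from the forged-IP banner. Purely as an illustration of the syntax (the mechanisms and the IP below are placeholders, not 1und1's actual values), an SPF TXT record for the sending domain might look like:

        example.com.  IN  TXT  "v=spf1 a mx ip4:198.51.100.7 ~all"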

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100 MB/day) and a bunch of console and TCL/TK tools to view it. I want to turn it into a web app that I can build and others can maintain.
    In long: my group runs simulations yielding hundreds of MB of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old-school 1990s-style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories, etc. I want to replace (or at least supplement) the X Windows / console-style UI with a web-based one, so I need the following properties:
    - pleasant to program
    - can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything)
    - as I port logic from the existing scripts, I can create a modularised and pleasant codebase to replace it
    - I can attach a web UI to navigate between views; each view is likely to contain keys which might make sense to view in another
    I am new to building systems that have logic on both the back-end and the front-end of a web server. From that point of view, they do this:
    - backend: wraps old-school executables, constructs calls into them, then takes the output, wraps it up, niceifies it and delivers it to the web client. For instance, a tool might generate a number of indexed images (per invocation) which I might deliver all at once or on demand. It may (probably) need to do heavy stats on some sources.
    - frontend: provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. It will probably have some views with a lot of interactivity.
    Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!", "beans!", "Scala!", but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). My policy is to use the best/closest-matched language for a project, but most of my team are extremely low-level (i.e. pipeline stages and CDyn), so I don't have the peer group to know where to start.
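    Whatever stack ends up being chosen, the core "wrap a console tool behind a web view" step is small. Purely as a rough sketch of that idea (using ASP.NET Core minimal APIs for illustration only; the tool names and paths are made up, and a real design would add streaming, caching and input validation):

        using System.Collections.Generic;
        using System.Diagnostics;

        var app = WebApplication.CreateBuilder(args).Build();

        // Whitelist of wrapped console tools; never pass raw user input to a shell.
        var tools = new Dictionary<string, string>
        {
            ["summary"] = "/opt/simtools/summarize",    // hypothetical paths
            ["events"]  = "/opt/simtools/event_dump"
        };

        app.MapGet("/view/{tool}", (string tool) =>
        {
            if (!tools.TryGetValue(tool, out var exe))
                return Results.NotFound();

            var psi = new ProcessStartInfo(exe)
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            using var proc = Process.Start(psi)!;
            var output = proc.StandardOutput.ReadToEnd();   // backend "niceifying" would go here
            proc.WaitForExit();

            return Results.Text(output);
        });

        app.Run();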

    Read the article

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256 GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.
    - /dev/sda1 (NTFS), 66.92 MB out of 200.00 MB used. I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (Edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus it also appears to be a major pain to remove anyway.)
    - /dev/sda2 (NTFS), 116.35 GB out of 339.06 GB used (boot). This partition is the C: drive on my Windows installation. I don't use it on my Ubuntu installation, except that it is the boot partition and thus has GRUB on it.
    - /dev/sda4 (extended), containing:
      - /dev/sda5 (ext4), 14.49 GB out of 91.34 GB used
      - /dev/sda6 (linux-swap), 5.92 GB
      These are my Ubuntu partitions. /dev/sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). /dev/sda6 is, of course, the swap partition, which I only need for hibernation (6 GB of RAM).
    - /dev/sda3 (NTFS), 9.89 GB out of 14.75 GB used. This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKey Recovery for a quick factory recovery if absolutely necessary (not sure if that'll work on an SSD). It also contains not-so-important files for bloatware installation.
    In total, my HDD only has about 150 GB of files on it, so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD, and I'm not quite sure how to do this. I've seen Clonezilla referenced before, but I'm not all too experienced and the documentation for it quite frankly seems a bit like a foreign language to me. So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process, due to the lack of two hard drive slots in the machine.

    Read the article

  • How to Add a Business Card Image to a Signature in Outlook 2013 Without the vCard (.vcf) File

    - by Lori Kaufman
    When you add a business card to a signature, an image of the business card is inserted into the signature and the vCard (.vcf) file is attached. If you don't want to attach the vCard file, you can insert the image only into your signature.
    To insert only the image of your business card without the .vcf file, click People on the Navigation Bar at the bottom of the Outlook window.
    To get a business card image we can use, we must view the contacts in any form other than People, so we can open the full contact editing window. To do this, click on a different view in the Current View section of the Home tab. We chose to view our contacts in the Business Card format.
    Double-click on your contact in the current view. The full contact editing window displays with an image of the business card on the right. Right-click on the business card image and select Copy Image from the popup menu.
    To close the contact editing window, click the File tab and click Close in the menu list on the left. NOTE: You can also click the X in the upper, right corner of the contact editing window to close it.
    To open the signature editor, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section.
    NOTE: You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. For more information, see our article about assigning a default signature.
    In the signature editor, right-click and select Paste from the popup menu. The image is inserted into the signature. You can also use this method to copy a business card image for use in other documents and programs.
    It's also possible to insert the vCard (.vcf) file into a signature without the image. We'll cover that topic tomorrow.

    Read the article

  • Oops, I left my kernel zone configuration behind!

    - by mgerdts
    Most people use boot environments to move in one direction. A system starts with an initial installation and from time to time new boot environments are created - typically as a result of pkg update - and then the new BE is booted. This post is of little interest to those people, as no hackery is needed. This post is about some mild hackery.
    During development, I commonly test different scenarios across multiple boot environments. Many times those tests aren't related to the act of configuring or installing zones, and so it's kinda handy to avoid the effort involved in zone configuration and installation. A somewhat common order of operations is like the following:

        # beadm create -e golden -a test1
        # reboot

    Once the system is running in the test1 BE, I install a kernel zone.

        # zonecfg -z a178 create -t SYSsolaris-kz
        # zoneadm -z a178 install

    Time passes, and I do all kinds of stuff to the test1 boot environment and want to test other scenarios in a clean boot environment. So then I create a new one from my golden BE and reboot into it.

        # beadm create -e golden -a test2
        # reboot

    Since the test2 BE was created from the golden BE, it doesn't have the configuration for the kernel zone that I configured and installed. Getting that zone over to the test2 BE is pretty easy. My test1 BE is really known as s11fixes-2.

        root@vzl-212:~# beadm mount s11fixes-2 /mnt
        root@vzl-212:~# zonecfg -R /mnt -z a178 export | zonecfg -z a178 -f -
        root@vzl-212:~# beadm unmount s11fixes-2
        root@vzl-212:~# zoneadm -z a178 attach
        root@vzl-212:~# zoneadm -z a178 boot

    On the face of it, it would seem as though it would have been easier to just use zonecfg -z a178 create -t SYSsolaris-kz within the test2 BE to get the new configuration over. That would almost work, but it would have left behind the encryption key required for access to host data and any suspend image. See solaris-kz(5) for more info on host data. I very commonly have more complex configurations that contain many storage URIs and non-default resource controls. Retyping them would be rather tedious.

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25 GB of storage on my email account; I don't even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:
    - Better review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
    - Always up to date. The attachment becomes a fork instead of an always-up-to-date document. For example, you send the email on Thursday and I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to attach the document instead of sharing a link.
    - Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link, which I have bookmarked in my browser, or in my collection of links in OneNote, or in the recent/pinned links of the Office app on my taskbar, etc.
    - Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you'd prefer not to have access to it, the location of the document can be protected with specific access control.
    - Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached; instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further action.
    - Enable discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
    - Save on storage. This doesn't apply to me given my opening statement, but if your company does have such limitations, attaching files eats up storage in all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy).
    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Specifying and applying broad changes to a program

    - by Victor Nicollet
    How do you handle incomplete feature requests, when the ones asking for the feature cannot possibly write a complete request?
    Consider an imaginary situation. You are a tech lead working on a piece of software that revolves around managing profiles (maybe they're contacts in a CRM-type application, or employees in an HR application), with many operations being directly or indirectly performed on those profiles: edit fields, add comments, attach documents, send e-mail...
    The higher-ups decide that a lock functionality should be added, whereby a profile can be locked to prevent anyone else from doing any operations on it until it's unlocked. This feature would be used by security agents to prevent anyone from touching a profile pending a security audit. Obviously, such a feature interacts with many other existing features related to profiles. For example:
    - Can one add a comment to a locked profile?
    - Can one see e-mails that were sent by the system to the owner of a locked profile?
    - Can one see who recently edited a locked profile?
    - If an e-mail was in the process of being sent when the lock happened, is the e-mail sending canceled, delayed or performed as if nothing happened?
    - If I just changed a profile and click the "cancel" link on the confirmation, does the lock prevent the cancel or does it still go through?
    - In all of these cases, how do I tell the user that a lock is in place?
    Depending on the software, there could be hundreds of such interactions, and each interaction requires a decision: is the lock going to apply, and if it does, how will it be displayed to the user? And the higher-ups asking for the feature probably only see a small fraction of these, so you will probably have a lot of questions coming up while you are working on the feature. How would you and your team handle this?
    - Would you expect the higher-ups to come up with a complete description of all cases where the lock should apply (and how), and treat all other cases as if the lock did not exist?
    - Would you try to determine all potential interactions based on existing specifications and code, list them, and ask the higher-ups to make a decision on all those where the decision is not obvious?
    - Would you just start working and ask questions as they come up?
    - Would you try to change their minds and settle on a more easily described feature with similar effects?
    The information about existing features is, as I understand it, in the code. How do you bridge the gap between the decision-makers and that information they cannot access?

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call to perform a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes" to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time, simplify the log to only include the last applicable change for each object, attach the serialized values for the affected objects, and return this all to the client.
    My question then is, how do I unit-test this entry-point method? Obviously, each class will have thorough unit tests to ensure that its functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not send in each of these internal classes IoC-style, because they're small and are only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:
    - Create a "private" interface header for the tests with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out, and just test that the methods are called with the right arguments.
    - Write a series of fatter tests for the entry point without mocking out anything, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.
    - Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making the construction of the service annoying, and it leaks internal details to the client.
    - Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer call through to this one. This would ensure that client code still sees the easy/pretty initializer and no internals are leaked.
    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit-testing best practices? Especially considering I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.

    Read the article

  • How are OpenGL ES 1 framebuffers and textures sized?

    - by jens
    I am trying to draw to a texture using a framebuffer with OpenGL ES 1.1 on Android (Java). Afterwards I want to overlay this texture full-screen over my game. In theory this works like a charm, but somehow the coordinates are off. For testing I drew something at (0,0) with width and height 200, and it is partly off-screen. This is how I create the framebuffer:

        fb = new int[1];
        depthRb = new int[1];
        renderTex = new int[1];

        gl11ep.glGenFramebuffersOES(1, fb, 0);
        gl11ep.glGenRenderbuffersOES(1, depthRb, 0); // the depth buffer

        gl.glGenTextures(1, renderTex, 0); // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);

        texBuffer = ByteBuffer.allocateDirect(buf.length * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_LUMINANCE, texW, texH, 0, GL10.GL_LUMINANCE, GL10.GL_UNSIGNED_BYTE, texBuffer);

        gl11ep.glBindRenderbufferOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);
        gl11ep.glRenderbufferStorageOES(GL11ExtensionPack.GL_RENDERBUFFER_OES, GL11ExtensionPack.GL_DEPTH_COMPONENT16, texW, texH);

    Before I draw, I do this:

        gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        gl.glClearColor(0f, 0f, 0f, 0f);

        // specify texture as color attachment
        gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);

        // attach render buffer as depth buffer
        gl11ep.glFramebufferRenderbufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_DEPTH_ATTACHMENT_OES, GL11ExtensionPack.GL_RENDERBUFFER_OES, depthRb[0]);

    I set texW = 1024 and texH = 512. When rendering this texture full-screen, I place a lightmask (size 200x200) at (0, 0) and at (texW/2, texH/2). You can see that the coordinate system doesn't seem to start at (0,0), as that light overlaps the screen edge, and the images are not drawn as squares (my light-cone texture is a circle, not an ellipse). So, how is the coordinate system of this offscreen-drawn texture defined? Thanks

    Read the article

  • Are we queueing and serializing properly?

    - by insta
    We process messages through a variety of services (one message will touch probably 9 services before it's done, each doing a specific IO-related function). Right now we have a combination of the worst-case (XML data contract serialization) and best-case (in-memory MSMQ) for performance. The nature of the message means that our serialized data ends up about 12-15 kilobytes, and we process about 4 million messages per week. Persistent messages in MSMQ were too slow for us, and as the data grows we are feeling the pressure from MSMQ's memory-mapped files. The server is at 16GB of memory usage and growing, just for queueing. Performance also suffers when the memory usage is high, as the machine starts swapping. We're already doing the MSMQ self-cleanup behavior. I feel like there's a part we're doing wrong here. I tried using RavenDB to persist the messages and just queueing an identifier, but the performance there was very slow (1000 messages per minute, at best). I'm not sure if that's a result of using the development version or what, but we definitely need a higher throughput[1]. The concept worked very well in theory but performance was not up to the task. The usage pattern has one service acting as a router, which does all reads. The other services will attach information based on their 3rd party hook, and forward back to the router. Most objects are touched 9-12 times, although about 10% are forced to loop around in this system for awhile until the 3rd parties respond appropriately. The services right now account for this and have appropriate sleeping behaviors, as we utilize the priority field of the message for this reason. So, my question, is what is an ideal stack for message passing between discrete-but-LAN'ed machines in a C#/Windows environment? I would normally start with BinaryFormatter instead of XML serialization, but that's a rabbit hole if a better way is to offload serialization to a document store. Hence, my question. [1]: The nature of our business means the sooner we process messages, the more money we make. We've empirically proven that processing a message later in the week means we are less likely to make that money. While performance of "1000 per minute" sounds plenty fast, we really need that number upwards of 10k/minute. Just because I'm giving numbers in messages per week doesn't mean we have a whole week to process those messages.

    Read the article

  • What are good design practices when working with Entity Framework

    - by AD
    This will apply mostly for an asp.net application where the data is not accessed via soa. Meaning that you get access to the objects loaded from the framework, not Transfer Objects, although some recommendation still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0 shipped with Visual Studio 2008 sp1. Why pick EF in the first place? Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF and life is not bad.make you think. EF and inheritance The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in 2 ways: table per class and table the hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to table per class model as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or many objects that are part of an inheritance tree: the generated sql is incredibly awful, takes a long time to get parsed by the EF and takes a long time to execute as well. This is a real show stopper. Enough that EF should probably not be used with inheritance or as little as possible. Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the Base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you are trying to return some Associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once. Ok, this is bad, EF concept is to allow you to create your object structure without (or with as little as possible) consideration on the actual database implementation of your table. It completely fails at this. So, recommendations? Avoid inheritance if you can, the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified sql-generation tool for querying, but there are still advantages to using it. And ways to implement mechanism that are similar to inheritance. Bypassing inheritance with Interfaces First thing to know with trying to get some kind of inheritance going with EF is that you cannot assign a non-EF-modeled class a base class. Don't even try it, it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example here is a IEntity interface that allow you to define Associations between EF entities where you don't know at design time what the type of the entity would be. public enum EntityTypes{ Unknown = -1, Dog = 0, Cat } public interface IEntity { int EntityID { get; } string Name { get; } Type EntityType { get; } } public partial class Dog : IEntity { // implement EntityID and Name which could actually be fields // from your EF model Type EntityType{ get{ return EntityTypes.Dog; } } } Using this IEntity, you can then work with undefined associations in other classes // lets take a class that you defined in your model. 
// that class has a mapping to the columns: PetID, PetType public partial class Person { public IEntity GetPet() { return IEntityController.Get(PetID,PetType); } } which makes use of some extension functions: public class IEntityController { static public IEntity Get(int id, EntityTypes type) { switch (type) { case EntityTypes.Dog: return Dog.Get(id); case EntityTypes.Cat: return Cat.Get(id); default: throw new Exception("Invalid EntityType"); } } } Not as neat as having plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many, many-to-many relationship, but with creative uses of 'Union' it could be made to work. Finally, it creates the side effet of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regards. Compiled Queries Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2SQL. There are ways to improve it however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql. What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So the next run, you will save the time it takes to parse the tree. Do not discount that as it is a very costly operation that gets even worse with more complex queries. There are 2 ways to compile a query: creating an ObjectQuery with EntitySQL and using CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached). An aside here in case you don't know what EntitySQL is. It is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1]. Ok, so here is how to do it using ObjectQuery< string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In that example EnablePlanCaching = true, which is unnecessary since that is the default option. The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate: static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => ctx.DogSet.FirstOrDefault(it => it.ID == id)); or using linq static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault()); to call the query: query_GetDog.Invoke( YourContext, id ); The advantage of CompiledQuery is that the syntax of your query is checked at compile time, where as EntitySQL is not. However, there are other consideration... 
Includes Lets say you want to have the data for the dog owner to be returned by the query to avoid making 2 calls to the database. Easy to do, right? EntitySQL string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)).Include("Owner"); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); CompiledQuery static readonly Func<Entities, int, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, Dog>((ctx, id) => (from dog in ctx.DogSet.Include("Owner") where dog.ID == id select dog).FirstOrDefault()); Now, what if you want to have the Include parametrized? What I mean is that you want to have a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavotireToy and so on. Basicly, you want to tell the query which associations to load. It is easy to do with EntitySQL public Dog Get(int id, string include) { string query = "select value dog " + "from Entities.DogSet as dog " + "where dog.ID = @ID"; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance)) .IncludeMany(include); oQuery.Parameters.Add(new ObjectParameter("ID", id)); oQuery.EnablePlanCaching = true; return oQuery.FirstOrDefault(); } The include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (that accepts only a single path) with an IncludeMany(string) that will let you pass a string of comma-separated associations to load. Look further in the extension section for this function. If we try to do it with CompiledQuery however, we run into numerous problems: The obvious static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.Include(include) where dog.ID == id select dog).FirstOrDefault()); will choke when called with: query_GetDog.Invoke( YourContext, id, "Owner,FavoriteFood" ); Because, as mentionned above, Include() only wants to see a single path in the string and here we are giving it 2: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!). Then, let's use IncludeMany(), which is an extension function static readonly Func<Entities, int, string, Dog> query_GetDog = CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) => (from dog in ctx.DogSet.IncludeMany(include) where dog.ID == id select dog).FirstOrDefault()); Wrong again, this time it is because the EF cannot parse IncludeMany because it is not part of the functions that is recognizes: it is an extension. Ok, so you want to pass an arbitrary number of paths to your function and Includes() only takes a single one. What to do? You could decide that you will never ever need more than, say 20 Includes, and pass each separated strings in a struct to CompiledQuery. But now the query looks like this: from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3) .Include(include4).Include(include5).Include(include6) .[...].Include(include19).Include(include20) where dog.ID == id select dog which is awful as well. Ok, then, but wait a minute. Can't we return an ObjectQuery< with CompiledQuery? Then set the includes on that? 
Well, that what I would have thought so as well: static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, ObjectQuery<Dog>>((ctx, id) => (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog)); public Dog GetDog( int id, string include ) { ObjectQuery<Dog> oQuery = query_GetDog(id); oQuery = oQuery.IncludeMany(include); return oQuery.FirstOrDefault; } That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query because it is an entirely new one now! So, the expression tree needs to be reparsed and you get that performance hit again. So what is the solution? You simply cannot use CompiledQueries with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQueries. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used because the syntax is checked at compile time, but due to limitation, that's not possible. An example of use would be: you may want to have a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required. Passing more than 3 parameters to a CompiledQuery Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pitance, but it can be improved on very easily. public struct MyParams { public string param1; public int param2; public DateTime param3; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where dog.Age == myParams.param2 && dog.Name == myParams.param1 and dog.BirthDate > myParams.param3 select dog); public List<Dog> GetSomeDogs( int age, string Name, DateTime birthDate ) { MyParams myParams = new MyParams(); myParams.param1 = name; myParams.param2 = age; myParams.param3 = birthDate; return query_GetDog(YourContext,myParams).ToList(); } Return Types (this does not apply to EntitySQL queries as they aren't compiled at the same time during execution as the CompiledQuery method) Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other functions downstream wants to change the query in some way: static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public IEnumerable<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name); } public void DataBindStuff() { IEnumerable<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), it will invalidate the compiled query and be force to re-parse. So, the rule of thumb is to return a List< of objects instead. 
static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) => from dog in ctx.DogSet where dog.Age == age && dog.Name == name select dog); public List<Dog> GetSomeDogs( int age, string name ) { return query_GetDog(YourContext,age,name).ToList(); //<== change here } public void DataBindStuff() { List<Dog> dogs = GetSomeDogs(4,"Bud"); // but I want the dogs ordered by BirthDate gridView.DataSource = dogs.OrderBy( it => it.BirthDate ); } When you call ToList(), the query gets executed as per the compiled query and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mis-handling the ObjectQuery and invalidating the compiled query plan. Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it. Performance What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read inherirante), I have seen upwards to 10 seconds. So, the first time a pre-compiled query gets called, you get a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query. Practically the same as Linq2Sql When you load a page with pre-compiled queries the first time you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled queries will end up being called), while subsequent loads will take less than 300ms. Dramatic difference, and it is up to you to decide if it is ok for your first user to take a hit or you want a script to call your pages to force a compilation of the queries. Can this query be cached? { Dog dog = from dog in YourContext.DogSet where dog.ID == id select dog; } No, ad-hoc Linq queries are not cached and you will incur the cost of generating the tree every single time you call it. Parametrized Queries Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lamba expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which one you want to use: public struct MyParams { public string name; public bool checkName; public int age; public bool checkAge; } static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog = CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) => from dog in ctx.DogSet where (myParams.checkAge == true && dog.Age == myParams.age) && (myParams.checkName == true && dog.Name == myParams.name ) select dog); protected List<Dog> GetSomeDogs() { MyParams myParams = new MyParams(); myParams.name = "Bud"; myParams.checkName = true; myParams.age = 0; myParams.checkAge = false; return query_GetDog(YourContext,myParams).ToList(); } The advantage here is that you get all the benifits of a pre-compiled quert. 
The disadvantages are that you most likely will end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in). Another way is to build an EntitySQL query piece by piece, like we all did with SQL. protected List<Dog> GetSomeDogs( string name, int age) { string query = "select value dog from Entities.DogSet where 1 = 1 "; if( !String.IsNullOrEmpty(name) ) query = query + " and dog.Name == @Name "; if( age > 0 ) query = query + " and dog.Age == @Age "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); if( !String.IsNullOrEmpty(name) ) oQuery.Parameters.Add( new ObjectParameter( "Name", name ) ); if( age > 0 ) oQuery.Parameters.Add( new ObjectParameter( "Age", age ) ); return oQuery.ToList(); } Here the problems are: - there is no syntax checking during compilation - each different combination of parameters generates a different query which will need to be pre-compiled when it is first run. In this case, there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal real-world search. - No one likes to concatenate strings! Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory, which is a single pre-compiled query, and then refine the results: protected List<Dog> GetSomeDogs( string name, int age, string city) { string query = "select value dog from Entities.DogSet where dog.Owner.Address.City == @City "; ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>( query, YourContext ); oQuery.Parameters.Add( new ObjectParameter( "City", city ) ); List<Dog> dogs = oQuery.ToList(); if( !String.IsNullOrEmpty(name) ) dogs = dogs.Where( it => it.Name == name ).ToList(); if( age > 0 ) dogs = dogs.Where( it => it.Age == age ).ToList(); return dogs; } It is particularly useful when you start by displaying all the data and then allow filtering. Problems: - Could lead to serious data transfer if you are not careful about your subset. - You can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name. So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem: - Use lambda-based query building when you don't care about pre-compiling your queries. - Use fully-defined pre-compiled Linq queries when your object structure is not too complex. - Use EntitySQL/string concatenation when the structure could be complex and when the possible number of different resulting queries is small (which means fewer pre-compilation hits). - Use in-memory filtering when you are working with a smallish subset of the data or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db). 
Singleton access The best way to deal with your context and entities accross all your pages is to use the singleton pattern: public sealed class YourContext { private const string instanceKey = "On3GoModelKey"; YourContext(){} public static YourEntities Instance { get { HttpContext context = HttpContext.Current; if( context == null ) return Nested.instance; if (context.Items[instanceKey] == null) { On3GoEntities entity = new On3GoEntities(); context.Items[instanceKey] = entity; } return (YourEntities)context.Items[instanceKey]; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly YourEntities instance = new YourEntities(); } } NoTracking, is it worth it? When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? Created? Deleted?) and will also link objects together, when further queries are made from the database, which is what is of interest here. For example, lets assume that Dog with ID == 2 has an owner which ID == 10. Dog dog = (from dog in YourContext.DogSet where dog.ID == 2 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Person owner = (from o in YourContext.PersonSet where o.ID == 10 select dog).FirstOrDefault(); //dog.OwnerReference.IsLoaded == true; If we were to do the same with no tracking, the result would be different. ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog = oDogQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>) (from o in YourContext.PersonSet where o.ID == 10 select o); oPersonQuery.MergeOption = MergeOption.NoTracking; Owner owner = oPersonQuery.FirstOrDefault(); //dog.OwnerReference.IsLoaded == false; Tracking is very useful and in a perfect world without performance issue, it would always be on. But in this world, there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data your query with NoTracking can be used to make update/insert/delete in the database? If so, don't use NoTracking because associations are not tracked and will causes exceptions to be thrown. In a page where there are absolutly no updates to the database, you can use NoTracking. Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix then you risk having the framework trying to Attach() a NoTracking object to the context where another copy of the same object exist with tracking on. Basicly, what I am saying is that Dog dog1 = (from dog in YourContext.DogSet where dog.ID == 2).FirstOrDefault(); ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>) (from dog in YourContext.DogSet where dog.ID == 2 select dog); oDogQuery.MergeOption = MergeOption.NoTracking; Dog dog2 = oDogQuery.FirstOrDefault(); dog1 and dog2 are 2 different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I do already have an object here with the same database key. Fail". 
And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful. How much faster is it with NoTracking? It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps. So I should use NoTracking everywhere then? Not exactly. There are some advantages to tracking objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntity object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that). What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible of course, but it does happen. Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them: ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog); oDogQuery.MergeOption = MergeOption.NoTracking; List<Dog> dogs = oDogQuery.ToList(); ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o); oPersonQuery.MergeOption = MergeOption.NoTracking; List<Person> owners = oPersonQuery.ToList(); In this case, no dog will have its .Owner property set. Some things to keep in mind when you are trying to optimize the performance. No lazy loading, what am I to do? This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call if( !ObjectReference.IsLoaded ) ObjectReference.Load(); if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object: public class Dog { public Dog Get(int id) { return YourContext.DogSet.FirstOrDefault(it => it.ID == id ); } } This is the type of function you work with all the time. It gets called from all over the place and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want to have access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc. Of course, you could call Load() for each reference you need anytime you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object: static public Dog Get(int id) { return GetDog(entity,"");} static public Dog Get(int id, string includePath) { string query = "select value o " + " from YourEntities.DogSet as o " +

    Read the article

  • Motherboard HDDPWR1 connector

    - by Eric Leschinski
    I need help identifying the name of a connector. I have a Gateway DX4870-UB318 computer, I opened the case and wanted to attach another hard drive, but to my surprise one existing SATA hard drive was connected to the motherboard with this connector: And here is the spot on the Motherboard where the power was supplied. What is the name of this adapter and where can I get another one? Clues: This computer was bought new October 2013 from best buy, box number: DX4870-UB318. The gateway folks won't divulge the type of motherboard it has nor give specs on it. On the wire itself is an identification code: H.35090NJ01-000 Next to the connector on the motherboard it says: HDDPWR1 and the second one says HDDPWR2. This cable has two SATA power connectors and one mystery connector. The power supply has no molex power cables and no SATA power connectors! This is the most bizarre hard drive power system I've seen. I guess the motherboard folks are trying to remove the burden for desktop power supplies to provide adapters (molex, SATA, other) to CD's and hard drives. Can someone put a name on that white flat 6 pin HDD Power Connector? My Solution I can buy a "SATA Power Y Splitter Cable" to provide more spaces to power sata devices.

    Read the article

  • Email with extra '.com' behind sender email address

    - by CHT
    Currently I had a situation where I sent an email to [email protected], but when I receive mail from [email protected], it showed as [email protected], with extra '.com' behind the email address, this just happen within this week. Before this, I didn't change any setting, currently I am using Outlook 2010. When I checked the email in webmail, it also showed it as [email protected]. It seem that it has nothing to do with Outlook. However, I also tried on Thunderbird 16.0.1, but still the problem is the same. Has anyone experienced this before? Is the problem caused by the sender or receiver? Header Message as below: Return-Path: [email protected] Received: from colo4.roaringpenguin.com (not-assigned.privatedns.com [174.142.115.36] (may be forged)) by pioneerpos.com (8.12.11/8.12.11) with ESMTP id q9V6OsKU032650 for [email protected]; Wed, 31 Oct 2012 01:24:55 -0500 Received: from mail.pointsoft.com.tw (pointsoft.com.tw [59.124.242.126]) by colo4.roaringpenguin.com (8.14.3/8.14.3/Debian-9.4) with ESMTP id q9V6OmN0026374 for [email protected]; Wed, 31 Oct 2012 02:24:50 -0400 X-MimeOLE: Produced By Microsoft Exchange V6.5 Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CDB730.6B3D5A51" Subject: =?big5?B?scTByrPmLblzpfM=?= Date: Wed, 31 Oct 2012 14:25:16 +0800 Message-ID: X-MS-Has-Attach: yes X-MS-TNEF-Correlator: thread-topic: =?big5?B?scTByrPmLblzpfM=?= thread-index: Ac23MH3YpZuLx2ejTYqR5PfoZ+IoBw== X-Priority: 1 Priority: Urgent Importance: high From: "Alice" [email protected] To: "Bob" [email protected] X-Spam-Score: undef - pointsoft.com.tw is whitelisted. X-CanIt-Geo: ip=59.124.242.126; country=TW; region=03; city=Taipei; latitude=25.0392; longitude=121.5250; http://maps.google.com/maps?q=25.0392,121.5250&z=6 X-CanItPRO-Stream: pioneerpos-com:default (inherits from rp-customers:default,base:default) X-Canit-Stats-ID: 02IhGoMJb - 2e7fa924443e - 20121031 X-CanIt-Archive-Cluster: irqpXI7aJGyo4Ewta7qVH399FOg X-Scanned-By: CanIt (www . roaringpenguin . com) on 174.142.115.36

    Read the article

  • Firebird 2.1: gfix -online returns "database shutdown"

    - by darvids0n
    Hey all. Googling this one hasn't made a bit of difference, unfortunately, as most results specify the syntax for onlining a database after using gfix -shut -force 30 (or any other number of seconds) as gfix -online dbname, and I have run gfix -online dbname with and without login credentials for the DB in question. The message that I get is: database dbname shutdown Which is fine, except that I want to bring it online now. It's out of the question to close fbserver.exe (running on a Windows box, afaik it's Classic Server 2.1.1 but it may be Super) since we have other databases running off of that which need almost 24/7 uptime. The message from doing another gfix -shut -force or -attach or -tran is invalid shutdown mode for dbname which appears to match with the documentation of what happens if the database is already fully shut down. Ideas and input greatly appreciated, especially since at the moment time is a factor for me. Thanks! EDIT: The whole reason I shut down the DB is to clear out "active" transactions which were linked to a specific IP address, and that computer is my dev terminal (actually a virtual machine where I develop frontends for the database software) but I had no processes connecting to the database at the time. They looked like orphaned transactions to me, and they weren't in limbo afaik. Running a manual sweep didn't clear them out, deleting the rows from MON$STATEMENTS didn't work even though Firebird 2.1 supposedly supports cancelling queries that way. My last resort was to "restart" the database, hence the above issue.
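
    For what it is worth, in Firebird 2.x the gfix -shut and -online switches accept a mode argument, and a database placed in full shutdown generally has to be brought back with an explicit online mode. The two lines below are only an illustrative sketch; the path, the credentials and the assumption that the database is in full shutdown are placeholders rather than details taken from the question.
    gfix -shut full -force 30 /data/dbname.fdb -user SYSDBA -password masterkey
    gfix -online normal /data/dbname.fdb -user SYSDBA -password masterkey
    If a bare gfix -online keeps reporting the database as shut down, explicitly requesting normal mode as above is worth trying before anything more drastic.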

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up so that if one of my storage units fails, the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer-based VMs on iSCSI): all NFS-based VMs remain up and working perfectly through the failover and after, while all VMs running on iSCSI eventually report an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes RO. To get the VMs working again I have to do the following: force shutdown of the "broken" VMs, detach the iSCSI SR, re-attach the iSCSI SR, and boot the VM on a different server (5 in my pool). If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up at step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
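
    As a point of reference only: on a plain open-iscsi initiator, the takeover window is usually ridden out with the timeout settings below in /etc/iscsid.conf. The values shown are placeholders rather than a recommendation, and XenServer may manage or override this file per SR, so treat the snippet as an assumption to verify rather than a fix.
    # keep queueing I/O while the portal IP moves instead of failing it back up to the VM
    node.session.timeo.replacement_timeout = 120
    # NOP-Out ping interval and how long to wait before declaring the connection dead
    node.conn[0].timeo.noop_out_interval = 10
    node.conn[0].timeo.noop_out_timeout = 30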

    Read the article

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and the SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU activity for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details: - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directory on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.) - I have given the SQL Server user group full access to these directories. - The server can create a new empty test DB in these directories just fine, and can backup and restore the test DB. - I have tried both restoring of a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases. - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes. - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI. - The backup contains two full text indexes. - I have to use "WITH MOVE" for most of the files in the backup. - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in the Amazon EC2, but not Windows 2008/SQL 2008. - The DB is NOT marked as "Restoring" in the SQL Management Studio - it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.) Any ideas?

    Read the article

  • Error 0x80300001 Installing Windows Server 2008 R2 64bit on FastTrak TX4660 RAID volume

    - by Konstantin Boyandin
    I am trying to install Windows Server 2008 R2 Enterprise 64bit on the following hardware: an Intel DBS1200BTL motherboard, a Promise FastTrak TX4660 RAID controller, and 4 disks set up in two RAID1 arrays (handled by the FastTrak). I want Windows to boot from a RAID1 volume created with the FastTrak controller. The installation goes as in the manual: I insert the disk with the driver, select 'Browse' and specify the correct driver, and it finds all the RAID arrays, but it notifies me that error 0x80300001 happened and that Windows can't be installed on the mentioned RAID volumes, since they may not be bootable (even though the target RAID volume is first in the boot options list). If I proceed with the installation, Windows copies and unpacks itself and performs the other standard actions after that. After the computer is restarted, it won't boot (Windows Boot Manager appears in the boot devices list; however, neither it nor the RAID volume itself boots). Is this a known problem? I can attach the boot disks to the motherboard and use its RAID capabilities instead, but I'd prefer the FastTrak ones. The driver version is 1.3.0.4. Thanks.

    Read the article

  • Move database from SQL Server 2012 to 2008

    - by Rich
    I have a database on a SQL Server 2012 instance which I would like to copy to a 2008 server. The 2008 server cannot restore backups created by a 2012 server (I have tried). I cannot find any options in 2012 to create a 2008-compatible backup. Am I missing something? Is there an easy way to export the schema and data to a version-agnostic format which I can then import into 2008? The database does not use any 2012-specific features. It contains tables, data and stored procedures. Here is what I have tried so far: I tried "Tasks" - "Generate Scripts" on the 2012 server, and I was able to generate the schema (including stored procedures) as a SQL script. This didn't include any of the data, though. After creating that schema on my 2008 machine, I was able to open the "Export Data" wizard on the 2012 machine, and after configuring the 2012 instance as the source and the 2008 instance as the target, I was presented with a list of tables which I could copy. I selected all my tables (300+), and clicked through the wizard. Unfortunately it spends ages generating its scripts, then fails with errors like "Failure inserting into the read-only column 'FOO_ID'". I also tried the "Copy Database Wizard", which claimed to be able to copy "from 2000 or later to 2005 or later". It has two modes: 1) "detach and attach", which failed with the error: Message: Index was outside the bounds of the array. StackTrace: at Microsoft.SqlServer.Management.Smo.PropertyBag.SetValue(Int32 index, Object value) ... at Microsoft.SqlServer.Management.Smo.DataFile.get_FileName() 2) the SQL Management Object method, which failed with the error "Cannot read property IsFileStream.This property is not available on SQL Server 7.0."

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >