Search Results

Search found 32512 results on 1301 pages for 'object oriented analysis'.


  • MS Project publishing to TFS web portal display

    - by denis bastarache
    So, when we initially created our MPP schedule, I made use of indents/subordinates to break down the project by the various stages of the lifecycle, which is fine... no issues there. But now that I'm trying to publish this over to the TFS display, it will only pick up the actual "action items/sub-tasks", seeing as I have resource allocation specified. So, for example, I have an "Analysis" phase with a few items underneath, and a "System Requirements" phase with the same items. When I publish these to TFS, it won't display the "parent" distinction between items, so both "Tasks" instances are published in TFS under the exact same name. If I can't do this automatically, I'll likely have to edit each task by hand as "Analysis - Item 1", "Analysis - Item 2", "SRD - Item 1", "SRD - Item 2"... Is there a way to do this automatically (a sketch follows below), or will I have to go the manual route?
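
    One possible automatic route, sketched under assumptions: the calls are from the Office Project interop object model, but the file path and the "Phase - Item" naming scheme are illustrative, not from the original post.

        using Microsoft.Office.Interop.MSProject;

        class PrefixTasksWithPhase
        {
            static void Main()
            {
                var app = new Application();
                app.FileOpen("C:\\schedules\\MyProject.mpp"); // hypothetical path

                foreach (Task t in app.ActiveProject.Tasks)
                {
                    // Skip blank rows and the summary (phase) rows themselves.
                    if (t == null || t.Summary) continue;

                    // Prefix leaf tasks with their phase, e.g. "Analysis - Item 1".
                    if (t.OutlineLevel > 1)
                        t.Name = t.OutlineParent.Name + " - " + t.Name;
                }

                app.FileSave();
                app.Quit();
            }
        }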

    Read the article

  • HTTP, TCP, UDP and connectionless

    - by user132199
    I am a bit confused about HTTP lately. As I understand it, a transport can operate connection-oriented or connectionless: TCP is connection-oriented, while UDP is connectionless and is used when the whole payload fits into a single message. Question: if HTTP uses TCP, and TCP provides reliable connections for multi-message exchange, and HTTP is said to be connectionless, then how is this possible? TCP is connection-oriented, so how can HTTP be connectionless?
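
    One way to see the distinction: "connectionless" for HTTP is usually meant at the application level. Each request/response pair is self-contained and the server keeps no conversation state between requests, even though every exchange rides on a TCP connection. A minimal C# sketch of one such self-contained exchange over a raw TCP socket (the host name is just an example):

        using System;
        using System.IO;
        using System.Net.Sockets;

        class HttpOverTcp
        {
            static void Main()
            {
                // One TCP connection carries exactly one HTTP/1.0 exchange here;
                // a second request would open a brand-new connection.
                using (var client = new TcpClient("example.com", 80))
                using (NetworkStream stream = client.GetStream())
                {
                    var writer = new StreamWriter(stream) { AutoFlush = true };
                    writer.Write("GET / HTTP/1.0\r\nHost: example.com\r\nConnection: close\r\n\r\n");

                    var reader = new StreamReader(stream);
                    Console.WriteLine(reader.ReadToEnd()); // the server closes the connection when done
                }
            }
        }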

    Read the article

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload takes about 3 seconds to complete at 100% CPU, which caps our capacity at roughly 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, all connecting to a common DB. The problem is that the analysis outputs files to local disk. A load balancer would increase capacity, but the files wouldn't be shared between servers, so subsequent client requests might fail. We are hosted on Rackspace; is there a way to configure some sort of "common" storage for all servers, without having to rewrite our file-persistence code? The current code relies on simple fopen calls, etc. What are our options?

    Read the article

  • Prevent ListBox from focusing but leave ListBoxItem(s) focusable (wpf)

    - by modosansreves
    Here is what happens: I have a ListBox with items. The ListBox has focus. Some item (say, the 5th) is selected (has a blue background), but has no 'border'. When I press the 'Down' key, the focus moves from the ListBox to the first ListBoxItem. (What I want is to make the 6th item selected, regardless of the 'border'.) When I navigate using 'Tab', the ListBox never receives the focus again. But when the collection is emptied and filled again, the ListBox itself gets focus, and pressing 'Down' moves the focus to the item. How do I prevent the ListBox from gaining focus? P.S. listBox1.SelectedItem is my own class; I don't know how to get the ListBoxItem for it so I can .Focus() it. EDIT: the code. Xaml:

        <UserControl.Resources>
            <me:BooleanToVisibilityConverter x:Key="visibilityConverter"/>
            <me:BooleanToItalicsConverter x:Key="italicsConverter"/>
        </UserControl.Resources>
        <ListBox x:Name="lbItems">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <Grid>
                        <ProgressBar HorizontalAlignment="Stretch" VerticalAlignment="Bottom"
                                     Visibility="{Binding Path=ShowProgress, Converter={StaticResource visibilityConverter}}"
                                     Maximum="1" Margin="4,0,0,0" Value="{Binding Progress}" />
                        <TextBlock Text="{Binding Path=VisualName}"
                                   FontStyle="{Binding Path=IsFinished, Converter={StaticResource italicsConverter}}"
                                   Margin="4" />
                    </Grid>
                </DataTemplate>
            </ListBox.ItemTemplate>
            <me:OuterItem Name="Regular Folder" IsFinished="True" Exists="True" IsFolder="True"/>
            <me:OuterItem Name="Regular Item" IsFinished="True" Exists="True"/>
            <me:OuterItem Name="Yet to be created" IsFinished="False" Exists="False"/>
            <me:OuterItem Name="Just created" IsFinished="False" Exists="True"/>
            <me:OuterItem Name="In progress" IsFinished="False" Exists="True" Progress="0.7"/>
        </ListBox>

    where OuterItem is:

        public class OuterItem : IOuterItem
        {
            public Guid Id { get; set; }
            public string Name { get; set; }
            public bool IsFolder { get; set; }
            public bool IsFinished { get; set; }
            public bool Exists { get; set; }
            public double Progress { get; set; }

            // Code below is of lesser importance, but anyway:
            #region Visualization helper properties
            public bool ShowProgress { get { return !IsFinished && Exists; } }
            public string VisualName { get { return IsFolder ? "[ " + Name + " ]" : Name; } }
            #endregion

            public override string ToString()
            {
                if (IsFinished) return Name;
                if (!Exists) return " ??? " + Name;
                return Progress.ToString("0.000 ") + Name;
            }

            public static OuterItem Get(IOuterItem item)
            {
                return new OuterItem()
                {
                    Id = item.Id,
                    Name = item.Name,
                    IsFolder = item.IsFolder,
                    IsFinished = item.IsFinished,
                    Exists = item.Exists,
                    Progress = item.Progress
                };
            }
        }

    Converters are (of lesser importance too, but useful if you copy-paste to get it working):

        public class BooleanToItalicsConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                bool normal = (bool)value;
                return normal ? FontStyles.Normal : FontStyles.Italic;
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }

        public class BooleanToVisibilityConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                bool exists = (bool)value;
                return exists ? Visibility.Visible : Visibility.Collapsed;
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }

    But most important, UserControl.Loaded() has:

        lbItems.Items.Clear();
        lbItems.ItemsSource = fsItems;

    where fsItems is an ObservableCollection<OuterItem>. The usability problem I describe takes place when I Clear() that collection (fsItems) and fill it with new items.
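
    One hedged idea for the .Focus() part, untested against this exact layout: WPF can hand back the generated ListBoxItem container for a data item, so something along these lines may let the selected item take focus instead of the ListBox itself:

        // Ask the ListBox for the container it generated for the selected data item.
        var container = lbItems.ItemContainerGenerator.ContainerFromItem(lbItems.SelectedItem)
                        as ListBoxItem;
        if (container != null)
        {
            container.Focus(); // focus the item itself, not the ListBox
        }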

    Read the article

  • Marshal a C# struct to C++ VARIANT

    - by jortan
    To start with, I'm not very familiar with COM technology; this is the first time I'm working with it, so bear with me. I'm trying to call a COM object function from C#. This is the interface in the IDL file:

        [id(6), helpstring("vConnectInfo=ConnectInfoType")]
        HRESULT ConnectTarget([in,out] VARIANT* vConnectInfo);

    This is the interop interface I got after running tlbimp:

        void ConnectTarget(ref object vConnectInfo);

    The C++ code in the COM object for the target function:

        STDMETHODIMP PCommunication::ConnectTarget(VARIANT* vConnectInfo)
        {
            if (!((vConnectInfo->vt & VT_ARRAY) && (vConnectInfo->vt & VT_BYREF)))
            {
                return E_INVALIDARG;
            }
            ConnectInfoType *pConnectInfo = (ConnectInfoType *)((*vConnectInfo->pparray)->pvData);
            ...
        }

    This COM object is running in another process; it is not in a DLL. I can add that the COM object is also used from another program, written in C++. In that case there is no problem, because in C++ a VARIANT is created, pparray->pvData is set to the connInfo data structure, and then the COM object is called with the VARIANT as the parameter. In C#, as I understand it, my struct should be marshalled as a VARIANT automatically. These are two methods I've been using (or actually I've tried a lot more...) to call this method from C#:

        private void method1_Click(object sender, EventArgs e)
        {
            pcom.PCom PCom = new pcom.PCom();
            pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom;
            m_ci = new ConnectInfoType();
            fillConnInfo(ref m_ci);
            mgmt.ConnectTarget(m_ci);
        }

    In the above case the struct gets marshalled as VT_UNKNOWN. This is a simple case and works if the parameter is not a struct (e.g. it works for int).

        private void method4_Click(object sender, EventArgs e)
        {
            ConnectInfoType ci = new ConnectInfoType();
            fillConnInfo(ref ci);
            pcom.PCom PCom = new pcom.PCom();
            pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom;

            ParameterModifier[] pms = new ParameterModifier[1];
            ParameterModifier pm = new ParameterModifier(1);
            pm[0] = true;
            pms[0] = pm;

            object[] param = new object[1];
            param[0] = ci;
            object[] args = new object[1];
            args[0] = param;

            mgmt.GetType().InvokeMember("ConnectTarget", BindingFlags.InvokeMethod,
                null, mgmt, args, pms, null, null);
        }

    In this case it gets marshalled as VT_ARRAY | VT_BYREF | VT_VARIANT. The problem is that when debugging the "target function" ConnectTarget, I cannot find the data I sent in the SAFEARRAY struct (or in any other place in memory, either). What do I do with a VT_VARIANT? Any ideas on how to get my struct data across? Update: the ConnectInfoType struct:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public class ConnectInfoType
        {
            public short network;
            public short nodeNumber;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 51)]
            public string connTargPassWord;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
            public string sConnectId;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)]
            public string sConnectPassword;
            public EnuConnectType eConnectType;
            public int hConnectHandle;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
            public string sAccessPassword;
        };

    And the corresponding struct in C++:

        typedef struct ConnectInfoType
        {
            short network;
            short nodeNumber;
            char connTargPassWord[51];
            char sConnectId[8];
            char sConnectPassword[16];
            EnuConnectType eConnectType;
            int hConnectHandle;
            char sAccessPassword[8];
        } ConnectInfoType;
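
    One hedged avenue (an assumption on my part, not a confirmed fix): since the server side only dereferences pvData, it may be possible to copy the struct into a byte[] and pass that array by reference; a byte array passed byref should arrive as VT_ARRAY | VT_BYREF | VT_UI1, with pvData pointing at the raw bytes. A sketch:

        // Requires: using System.Runtime.InteropServices;
        // Serialize the managed struct into its unmanaged byte layout.
        static byte[] StructToBytes(ConnectInfoType ci)
        {
            int size = Marshal.SizeOf(typeof(ConnectInfoType));
            IntPtr buffer = Marshal.AllocHGlobal(size);
            try
            {
                Marshal.StructureToPtr(ci, buffer, false);
                byte[] bytes = new byte[size];
                Marshal.Copy(buffer, bytes, 0, size);
                return bytes;
            }
            finally
            {
                Marshal.FreeHGlobal(buffer);
            }
        }

        // Usage sketch:
        object param = StructToBytes(ci);
        mgmt.ConnectTarget(ref param);
        // If the server writes back into the array, copying the returned bytes to
        // unmanaged memory and calling Marshal.PtrToStructure would convert them
        // back to a ConnectInfoType.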

    Read the article

  • How do I get jqGrid to work using ASP.NET + JSON on the backend?

    - by briandus
    Hi friends, OK, I'm back. I totally simplified my problem to just three simple fields, and I'm still stuck on the same line using the addJSONData method. I've been stuck on this for days, and no matter how I rework the ajax call, the JSON string, and so on, I can NOT get this to work! I can't even get it to work as a function when adding one row of data manually. Can anyone PLEASE post a working sample of jqGrid that works with ASP.NET and JSON? Please include 2-3 fields (string, integer, and date, preferably). I would be happy to see a working sample of jqGrid and just the manual addition of a JSON object using the addJSONData method. Thanks SO MUCH!! If I ever get this working, I will post a full code sample for all the other ASP.NET/JSON users stuck on this as well. Again, THANKS!!

        tbl.addJSONData(objGridData); // err: tbl.addJSONData is not a function!!

    Here is what Firebug is showing when I receive this message:

        objGridData: Object total=1 page=1 records=5 rows=[5]
            total: "1", page: "1", records: "5"
            rows: [Object ID=1 PartnerID=BCN, Object ID=2 PartnerID=BCN, Object ID=3 PartnerID=BCN, 2 more...]
            rows[0]: ID=1, PartnerID="BCN", DateTimeInserted=Thu May 29 2008 12:08:45 GMT-0700 (Pacific Daylight Time)
        (There are three more rows.)

    Here is the value of the variable tbl:

        <TABLE cellspacing="0" cellpadding="0" border="0" style="width: 245px;" class="scroll grid_htable">
            <THEAD><TR>
                <TH class="grid_sort grid_resize" style="width: 55px;"><SPAN> </SPAN>
                    <DIV id="jqgh_ID" style="cursor: pointer;">ID <IMG src="http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images/sort_desc.gif"/></DIV></TH>
                <TH class="grid_resize" style="width: 90px;"><SPAN> </SPAN>
                    <DIV id="jqgh_PartnerID" style="cursor: pointer;">PartnerID </DIV></TH>
                <TH class="grid_resize" style="width: 100px;"><SPAN> </SPAN>
                    <DIV id="jqgh_DateTimeInserted" style="cursor: pointer;">DateTimeInserted </DIV></TH>
            </TR></THEAD>
        </TABLE>

    Here is the complete function:

        $('table.scroll').jqGrid({
            mtype: "POST",
            datatype: function(postdata) {
                $.ajax({
                    url: 'EDI.asmx/GetTestJSONString',
                    type: "POST",
                    contentType: "application/json; charset=utf-8",
                    data: "{}",
                    dataType: "text", // not json; let me try to parse it myself
                    success: function(msg, st) {
                        if (st == "success") {
                            // strip off the "d:" wrapper
                            var gridData;
                            var result = JSON.parse(msg);
                            for (var property in result) {
                                gridData = result[property];
                                break;
                            }
                            // creates an object with visible data and structure
                            var objGridData = eval("(" + gridData + ")");
                            var tbl = jQuery('table.scroll')[0];
                            alert(objGridData.rows[0].PartnerID); // displays the correct data
                            //tbl.addJSONData(objGridData); // error received: addJSONData not a function
                            // Also fails with eval, as shown in the documentation:
                            //tbl.addJSONData(eval("(" + objGridData + ")"));
                            // The line below evaluates fine, creating an object with visible data and structure:
                            //var objGridData = eval("(" + gridData + ")");
                            // BUT the same thing will not work here:
                            //tbl.addJSONData(eval("(" + gridData + ")"));
                            // FIREBUG SHOWS THIS AS THE VALUE OF gridData:
                            // "{"total":"1","page":"1","records":"5","rows":[
                            //   {"ID":1,"PartnerID":"BCN","DateTimeInserted":new Date(1214412777787)},
                            //   {"ID":2,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125000)},
                            //   {"ID":3,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125547)},
                            //   {"ID":4,"PartnerID":"EHG","DateTimeInserted":new Date(1235603192033)},
                            //   {"ID":5,"PartnerID":"EMDEON","DateTimeInserted":new Date(1235603192000)}]}"
                        }
                    }
                });
            },
            jsonReader: {
                root: "rows",       // array containing the actual data
                page: "page",       // current page
                total: "total",     // total pages for the query
                records: "records", // total number of records
                repeatitems: false,
                id: "ID"            // index of the column with the PK in it
            },
            colNames: ['ID', 'PartnerID', 'DateTimeInserted'],
            colModel: [
                { name: 'ID', index: 'ID', width: 55 },
                { name: 'PartnerID', index: 'PartnerID', width: 90 },
                { name: 'DateTimeInserted', index: 'DateTimeInserted', width: 100 }
            ],
            rowNum: 10,
            rowList: [10, 20, 30],
            imgpath: 'http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images',
            pager: jQuery('#pager'),
            sortname: 'ID',
            viewrecords: true,
            sortorder: "desc",
            caption: "TEST Example"
        });
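
    For the server side, a hedged sketch of what the ASMX method might look like; the class and method names are guessed from the URL 'EDI.asmx/GetTestJSONString' above, and the payload is illustrative. Returning dates as ISO-8601 strings instead of "new Date(...)" expressions keeps the payload valid JSON, so the client can use JSON.parse without eval:

        using System.Web.Services;
        using System.Web.Script.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService] // lets ASP.NET AJAX wrap the result in the {"d": ...} envelope
        public class EDI : WebService
        {
            [WebMethod]
            public string GetTestJSONString()
            {
                // Hand-built sample payload; a real implementation would serialize
                // query results. Dates are ISO-8601 strings, which are plain JSON.
                return "{\"total\":\"1\",\"page\":\"1\",\"records\":\"2\",\"rows\":["
                     + "{\"ID\":1,\"PartnerID\":\"BCN\",\"DateTimeInserted\":\"2008-05-29T12:08:45\"},"
                     + "{\"ID\":2,\"PartnerID\":\"EHG\",\"DateTimeInserted\":\"2009-02-25T14:26:32\"}]}";
            }
        }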

    Read the article

  • Merge DataGrid ColumnHeaders

    - by Vishal
    I would like to merge two column headers. Before you mark this question as a duplicate, please read further: I don't want a super-header, I just want to merge two column headers. Take a look at the image below. Can you see the two columns with headers Mobile Number 1 and Mobile Number 2? I want to show only one column header there: Mobile Numbers. Here is the XAML used for creating the DataGrid mentioned above:

        <DataGrid Grid.Row="1" Margin="0,10,0,0" ItemsSource="{Binding Ledgers}" IsReadOnly="True" AutoGenerateColumns="False">
            <DataGrid.Columns>
                <DataGridTextColumn Header="Customer Name" Binding="{Binding LedgerName}" />
                <DataGridTextColumn Header="City" Binding="{Binding City}" />
                <DataGridTextColumn Header="Mobile Number 1" Binding="{Binding MobileNo1}" />
                <DataGridTextColumn Header="Mobile Number 2" Binding="{Binding MobileNo2}" />
                <DataGridTextColumn Header="Opening Balance" Binding="{Binding OpeningBalance}" />
            </DataGrid.Columns>
        </DataGrid>

    Update 1:

    Update 2: I have created a converter as follows:

        public class MobileNumberFormatConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                if (value != null && value != DependencyProperty.UnsetValue)
                {
                    if (value.ToString().Length <= 15)
                    {
                        int spacesToAdd = 15 - value.ToString().Length;
                        string s = value.ToString().PadRight(value.ToString().Length + spacesToAdd);
                        return s;
                    }
                    return value.ToString().Substring(0, value.ToString().Length - 3) + "...";
                }
                return "";
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }

    I have used it in XAML as follows:

        <DataGridTextColumn Header="Mobile Numbers">
            <DataGridTextColumn.Binding>
                <MultiBinding StringFormat=" {0} {1}">
                    <Binding Path="MobileNo1" Converter="{StaticResource mobileNumberFormatConverter}"/>
                    <Binding Path="MobileNo2" Converter="{StaticResource mobileNumberFormatConverter}"/>
                </MultiBinding>
            </DataGridTextColumn.Binding>
        </DataGridTextColumn>

    The output I got:

    Update 3: At last I got the desired output. Here is the code for the converter:

        public class MobileNumberFormatConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                if (value != null && value != DependencyProperty.UnsetValue)
                {
                    if (parameter.ToString().ToUpper() == "N")
                    {
                        if (value.ToString().Length <= 15)
                        {
                            return value.ToString();
                        }
                        else
                        {
                            return value.ToString().Substring(0, 12);
                        }
                    }
                    else if (parameter.ToString().ToUpper() == "S")
                    {
                        if (value.ToString().Length <= 15)
                        {
                            int spacesToAdd = 15 - value.ToString().Length;
                            string spaces = "";
                            return spaces.PadRight(spacesToAdd);
                        }
                        else
                        {
                            return "...";
                        }
                    }
                }
                return "";
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }

    Here is my XAML:

        <DataGridTemplateColumn Header="Mobile Numbers">
            <DataGridTemplateColumn.CellTemplate>
                <DataTemplate>
                    <TextBlock>
                        <Run Text="{Binding MobileNo1, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=N}" />
                        <Run Text="{Binding MobileNo1, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=S}" FontFamily="Consolas"/>
                        <Run Text=" " FontFamily="Consolas"/>
                        <Run Text="{Binding MobileNo2, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=N}" />
                        <Run Text="{Binding MobileNo2, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=S}" FontFamily="Consolas"/>
                    </TextBlock>
                </DataTemplate>
            </DataGridTemplateColumn.CellTemplate>
        </DataGridTemplateColumn>

    Output:

    Read the article

  • SQL SERVER – Microsoft SQL Server Migration Assistant V6.0 Released

    - by Pinal Dave
    Every company makes its own decision about a database platform when it starts out, but as organizations mature they revisit that decision based on their experience and the best interests of the business. Quite a few organizations start out on a database such as Sybase, MySQL, Oracle, or Access, and as time passes they decide they want to move to a different platform. Microsoft makes this easy for SQL Server professionals by releasing various Migration Assistant tools. Last week, Microsoft released Microsoft SQL Server Migration Assistant v6.0. Here are the different tools released last week to migrate various products to SQL Server.

    Microsoft SQL Server Migration Assistant v6.0 for Sybase
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Sybase Adaptive Server Enterprise (ASE) to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for MySQL
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from MySQL to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Oracle
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Oracle to SQL Server and Azure SQL DB. SSMA automates all aspects of migration, including migration assessment analysis, schema and SQL statement conversion, data migration, and migration testing.

    Microsoft SQL Server Migration Assistant v6.0 for Access
    SQL Server Migration Assistant (SSMA) is a free supported tool from Microsoft that simplifies the database migration process from Access to SQL Server. SSMA for Access automates the conversion of Microsoft Access database objects to SQL Server database objects, loads the objects into SQL Server and Azure SQL DB, and then migrates data from Microsoft Access to SQL Server and Azure SQL DB.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SQL Migration

    Read the article

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice JavaScript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call produced previously.

        // Creates associative array (object) of query params
        var QueryParameters = (function() {
            var result = {};
            if (window.location.search)
            {
                // split up the query string and store it in an associative array
                var params = window.location.search.slice(1).split("&");
                for (var i = 0; i < params.length; i++)
                {
                    var tmp = params[i].split("=");
                    result[tmp[0]] = unescape(tmp[1]);
                }
            }
            return result;
        }());

    Now all you have to do to get the query parameters is reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it.

        var debug = (QueryParameters.debug === "true");

    or

        if (QueryParameters["debug"]) doSomeDebugging();

    or loop through all of the parameters:

        for (var param in QueryParameters)
            var value = QueryParameters[param];

    Hope you find this object useful.

    Read the article

  • Take Control of Workflow with Workflow Analyzer!

    - by user793553
    Immediate analysis and output of your EBS Workflow environment: the EBS Workflow Analyzer is a script that reviews the current Workflow footprint, analyzes configurations and environment, and provides feedback and recommendations on best practices and areas of concern. Go to Doc ID 1369938.1 for more details, the script download, and a short overview video.

    Proactive benefits:

    - Immediate analysis and output of the Workflow environment
    - Identifies aged records
    - Identifies Workflow errors and volumes
    - Identifies looping Workflow items and stuck activities
    - Identifies Workflow system setup and configurations
    - Identifies and recommends Workflow best practices
    - Easy-to-add tool for regular Workflow maintenance
    - Execute the analysis anytime to compare trending against past outputs

    The Workflow Analyzer presents key details in an easy-to-review graphical manner. See the examples below.

    Workflow Runtime Data Table Gauge: the gauge shows critical (red), bad (yellow), and good (green) status depending on the number of workflow items (WF_ITEMS).

    Workflow Error Notifications Pie Chart: a pie chart shows the workflow error notification types.

    Workflow Runtime Table Footprint Bar Chart: a bar chart shows the workflow runtime table footprint.

    The analyzer also gives detailed listings of setups and configurations. As an example, the workflow services are listed along with their status for review. The analyzer draws attention to key details with yellow and red boxes highlighting areas for review. You can extend any query by reviewing the SQL script and then running it on your own, or modifying it for your own needs.

    Find more details in these notes:

    - Doc ID 1369938.1 - Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance
    - Doc ID 1425053.1 - How to run the EBS Workflow Analyzer Tool as a Concurrent Request

    Or visit the My Oracle Support EBS - Core Workflow Community.

    Read the article

  • Android SDK having trouble with ADB

    - by MowDownJoe
    So, I installed the Android SDK, Eclipse, and the ADT. Upon firing up Eclipse for the first time after setting up the ADT, this error popped up:

        [2012-05-29 12:11:06 - adb] /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
        [2012-05-29 12:11:06 - adb] 'adb version' failed! /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
        [2012-05-29 12:11:06 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/drsmith/Downloads/android-sdk-linux/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

    (The same three messages then repeat.) I'm not quite sure why this is; it feels weird that a library is missing there. I'm using Ubuntu 12.04. Having no adb is a pretty big blow for an Android developer. How do I fix this?

    Read the article

  • Duck checker in Python: does one exist?

    - by elliot42
    Python uses duck-typing, rather than static type checking. But many of the same concerns ultimately apply: does an object have the desired methods and attributes? Do those attributes have valid, in-range values? Whether you're writing constraints in code, or writing test cases, or validating user input, or just debugging, inevitably somewhere you'll need to verify that an object is still in a proper state: that it still "looks like a duck" and "quacks like a duck." In statically typed languages you can simply declare "int x", and any time you create or mutate x, it will always be a valid int. It seems feasible to decorate a Python object to ensure that it is valid under certain constraints, and that every time that object is mutated it is still valid under those constraints. Ideally there would be a simple declarative syntax to express "hasattr length and length is non-negative" (not in those words; not unlike Rails validators, but less human-language and more programming-language). You could think of this as an ad-hoc interface/type system, or you could think of it as an ever-present object-level unit test. Does such a library exist to declare and validate constraint/duck-checking on Python objects? Is this an unreasonable tool to want? :) (Thanks!) Contrived example:

        rectangle = {'length': 5, 'width': 10}

        # We live in a fictional universe where multiplication is super expensive.
        # Therefore any time we multiply, we need to cache the results.
        def area(rect):
            if 'area' in rect:
                return rect['area']
            rect['area'] = rect['length'] * rect['width']
            return rect['area']

        print area(rectangle)
        rectangle['length'] = 15
        print area(rectangle)  # compare expected vs. actual output!
        # imagine the same thing with object attributes rather than dictionary keys.

    Read the article

  • Is there a pattern or best practice for passing a reference type to multiple classes vs a static class?

    - by Dave
    My .NET application creates HTML files, and as such, the structure looks like:

        variable myData
        BuildHomePage()
            variable graph = new BuildGraphPage(myData)
            variable table = BuildTablePage(myData)

    BuildGraphPage and BuildTablePage both require access to data, the myData object. In the above example, I've passed the myData object to two constructors. This is what I'm doing now, in my current project. The myData object and its properties are all read-only. The problem is that the number of pages which require this object has grown. In the real project there are currently 4, but the new spec calls for about 20. Passing this object to the constructor of each new object and assigning it to a field is a little time consuming, but not a hardship! This poses the question of whether it's better practice to continue as I have, or to refactor and create a new static class for myData which can be referenced from anywhere in my project (a sketch of both options follows below). I guess my ability to use Google is poor, because I tried and failed to find an appropriate pattern, and I am sure this type of design must be commonplace. Is there a pattern which is suited to this, or do best practices lean towards one implementation over another?
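
    For illustration, a hedged sketch of the options being weighed; all names here are invented. Constructor injection keeps the dependency explicit and testable, a small base class absorbs the repeated wiring, and a static ambient context is less ceremony but hides the dependency and makes side-by-side runs with different data harder:

        public class ReportData { /* read-only source data */ }

        // Option 1: constructor injection. Explicit, and each builder can be
        // tested with its own data instance.
        public class GraphPageBuilder
        {
            private readonly ReportData _data;
            public GraphPageBuilder(ReportData data) { _data = data; }
            public string Build() { return "<html>...</html>"; /* render from _data */ }
        }

        // A common middle ground: a base class absorbs the constructor
        // boilerplate, so each of the ~20 page builders stays a one-line ctor.
        public abstract class PageBuilder
        {
            protected readonly ReportData Data;
            protected PageBuilder(ReportData data) { Data = data; }
            public abstract string Build();
        }

        public class TablePageBuilder : PageBuilder
        {
            public TablePageBuilder(ReportData data) : base(data) { }
            public override string Build() { return "<html>...</html>"; /* render from Data */ }
        }

        // Option 2: static ambient context. Less wiring, but the dependency is
        // invisible in signatures and global state leaks between runs.
        public static class ReportContext
        {
            public static ReportData Current { get; set; }
        }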

    Read the article

  • BizTalk HL7 Receive Pipeline Exception

    - by Paul Petrov
    If you experience the sequence of errors below with BizTalk HL7 MLLP receive ports, you may need to request a hotfix from Microsoft. The Knowledge Base article number is 2454887, but it's still not available on the KB site. The hotfix was recently released, and you may need to open a support ticket to get it. It requires three other hotfixes to be installed:

    - 970492 (DASM 3.7.502.2)
    - 973909 (additional ACK codes)
    - 981442 (Microsoft.solutions.btahl7.mllp.dll 3.7.509.2)

    If the exceptions below repeatedly appear in the event log, you would most likely be helped by the hotfix:

        Fatal error encountered in 2XDasm. Exception information is Cannot access a disposed object. Object name: 'CEventingReadStream'.

        There was a failure executing the receive pipeline: "BTAHL72XPipelines.BTAHL72XReceivePipeline, BTAHL72XPipelines, Version=1.3.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" Source: "BTAHL7 2.X Disassembler" Receive Port: "ReceivePortName" URI: "IPAddress:portNumber" Reason: Cannot access a disposed object. Object name: 'CEventingReadStream'.

        The Messaging Engine received an error from transport adapter "MLLP" when notifying the adapter with the BatchComplete event. Reason "Object reference not set to an instance of an object."

    We've been through a lot of troubleshooting with Microsoft Product Support, and they did a great job finding the issue and releasing a fix.

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number where the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro, it is an attribute, but it serves exactly the same purpose. Now we get:

    - CallerLineNumberAttribute == __LINE__
    - CallerFilePathAttribute == __FILE__
    - CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute, and you get the property name as a string inserted directly by the C# compiler at compile time.

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing, to get your hands on the method name of your caller (along with other stuff) very quickly now. All the information is added at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of these attributes:

        public static void TraceMessage(string message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string sourceFilePath = "",
            [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})", message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing, I usually want an API which allows me to:

    - trace method enter and leave, and
    - trace messages with a severity like Info, Warning, Error.

    When I print a trace message, it is very useful to print out the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerType attribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                    // trace method enter
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string type, string method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }

            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast, but hard to maintain, since you need to pass in the type and method name as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would need to figure out which arguments are our trace message and format arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy one.

    The thing I am missing much more is the not-provided CallerType attribute, but not in the way you would think of. In the API above I added some filtering based on method and type, to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to decide whether tracing for this type is enabled at all. That data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which basically allows me to attach runtime data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                    // trace method enter
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                    // trace method leave
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to describe the
            /// current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime
                public bool IsInitialized = false;

                // Enabled trace levels for this type
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which is, due to the nature of generics, a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter performance for the types which are not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always pay for the lookup, which involves hash code generation, an equality check, and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal perf overhead. But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, tracing might no longer be configured correctly. I would therefore like to decorate my type with an attribute

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {

    That would be really useful and super fast, since you do not need to pass any Type object around, yet you have full type information at hand. This change would be breaking if another non-generic type existed in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces; this would be only a variation of that issue. When you think about this further, you can add more features, like tracing the exception with which your Dispose method is left, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, e.g.:

    - Reconfigure tracing at run time.
    - Take a memory dump when a specific method is left with a specific exception.
    - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
    - Execute a passed delegate which, e.g., dumps additional state when enabled.
    - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
    - Write data to an output device.
    - ...

    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, with .NET 4 not anymore, except if you enable a compatibility flag), where you would like to have a minidump, or you have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time, in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since I do want to do something at that point. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred direct access to the MethodHandle and not the stringified version of it. But I really would like to see a CallerType attribute implemented, to fill in the generic type argument of the call site and augment the static CLR type data with runtime data.

    Read the article

  • Can anyone help me prepare the Eclipse IDE for Android development on Ubuntu 12.04?

    - by csbl
    I'm new to Linux, and in this particular case, to Ubuntu. I have a small Android project I have to finish by this Friday, and I'm still stuck with installing and preparing the development environment. The only thing I've done is install the Eclipse IDE. I'm still missing the SDK, Java, and anything else that might be needed. Can someone help me through this? It's only because I'm running out of time to develop, or else I would embark on a deeper investigation of this OS. I tried the step to install Android platforms through Eclipse > Help > Install New Software, and I got the following error messages at the end of the process:

        [2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
        [2012-06-06 17:35:56 - adb] 'adb version' failed! /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
        [2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

    (The same three messages then repeat.) Can anyone help, please?

    Read the article

  • Linux to Solaris @ Morgan Stanley

    - by mgerdts
    I came across this blog entry and the accompanying presentation by Robert Milkowski about his experience switching from Linux to Oracle Solaris 11 for a distributed OpenAFS file serving environment at Morgan Stanley.

    If you are an IT manager, the presentation will show you:

    - Running Solaris with a support contract can cost less than running Linux (even without a support contract) because of technical advantages of Solaris.
    - IT departments can benefit from hiring computer scientists into Systems Programmer or similar roles. Their computer science background should be nurtured so that they can continue to deliver value (savings and opportunity) to the business as technology advances.

    If you are a sysadmin, developer, or somewhere in between, the presentation will show you:

    - A presentation that explains your technical analysis can be very influential.
    - Learning and using the non-default options of an OS can make all the difference as to whether one OS is better suited than another. For example, see the graphs on slides 3-5. The ZFS default is to not use compression.
    - When trying to convince those that hold the purse strings that your technical direction should be taken, the financial impact can be the part that closes the deal. See slides 6, 9, and 10. Sometimes reducing rack space requirements can be the biggest impact, because it may stave off or completely eliminate the need for facilities growth.
    - DTrace can be used to shine light on performance problems that may be suspected but not diagnosed. It is quite likely that these problems have existed in OpenAFS for a decade or more. DTrace made diagnosis possible.
    - DTrace can be used to create performance analysis tools without modifying the source of the software under analysis. See slides 29-32.
    - Microstate accounting, visible in the prstat output on slide 37, can be used to quickly draw focus to problem areas that affect CPU saturation. Note that prstat without -m gives a time-decayed moving average that is not nearly as useful.
    - Instruction-level probes (slides 33-34) are a super-easy way to identify which part of a function is hot.

    Read the article

  • EBS Seed Data Comparison Reports Now Available

    - by Steven Chan (Oracle Development)
    Earlier this year we released a reporting tool that reports on the differences in E-Business Suite database objects between one release and another. That's a very useful reference, but EBS defaults are delivered as seed data within the database objects themselves. What about the differences in this seed data between one release and another? I'm pleased to announce the availability of a new tool that provides comparison reports of E-Business Suite seed data between EBS 11.5.10.2, 12.0.4, 12.0.6, 12.1.1, and 12.1.3. This new tool complements the information in the data model comparison tool. You can download the new seed data comparison tool here: EBS ATG Seed Data Comparison Report (Note 1327399.1).

    The EBS ATG Seed Data Comparison Report reports on the changes between different EBS releases, based upon the seed data changes delivered by the product data loader files (.ldt extension) and the EBS ATG loader control (.lct extension) files. You can use this new tool to report on the differences in the following types of seed data:

    - Concurrent Program definitions
    - Descriptive Flexfield entity definitions
    - Application Object Library profile option definitions
    - Application Object Library (AOL) key flexfield, function, lookup, and value set definitions
    - Application Object Library (AOL) menu and responsibility definitions
    - Application Object Library messages
    - Application Object Library request set definitions
    - Application Object Library printer styles definitions
    - Report Manager / WebADI component and integrator entity definitions
    - Business Intelligence Publisher (BI Publisher) entity definitions
    - BIS Request Set Generator entity definitions
    - ... and more

    Your feedback is welcome. This new tool was produced by our hard-working EBS Release Management team, and they're actively seeking your feedback. Please feel free to share your experiences with it by posting a comment here. You can also request enhancements to this tool via the distribution list address included in Note 1327399.1.

    Related Articles:

    - Oracle E-Business Suite Release 12.1.3 Now Available
    - New Whitepaper: Upgrading EBS 11i Forms + OA Framework Personalizations to EBS 12
    - EBS 12.0 Minimum Requirements for Extended Support Finalized
    - Five Key Resources for Upgrading to E-Business Suite Release 12
    - E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
    - New Whitepaper: Planning Your E-Business Suite Upgrade from Release 11i to 12.1

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (the WebDAV protocol plus the server portion) and a project I think I can have a lot of fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would that be for potential users? The projects are written in C# against .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this; only the unit-test libraries would. What are your thoughts on this? As an example, I have in my old (private) code base the ability to initiate a WebDAV server with just this:

        var server = new WebDAVServer();

    This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control (see the sketch below).
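
    For reference, a hedged sketch of that wrapper alternative; all names here are invented. A thin interface over HttpListener gives ordinary mocking frameworks a seam to substitute a fake, at the cost of maintaining the wrapper:

        using System;
        using System.Net;

        // Thin seam over the sealed HttpListener so tests can substitute a fake.
        public interface IHttpListener : IDisposable
        {
            void Start();
            void Stop();
            HttpListenerContext GetContext();
        }

        public sealed class HttpListenerWrapper : IHttpListener
        {
            private readonly HttpListener _inner = new HttpListener();

            public void Start() { _inner.Start(); }
            public void Stop() { _inner.Stop(); }
            public HttpListenerContext GetContext() { return _inner.GetContext(); }
            public void Dispose() { ((IDisposable)_inner).Dispose(); }
        }

        // The server takes the interface; a test hands in a mock and asserts
        // that Dispose was (or was not) called on it.
        public class WebDAVServer : IDisposable
        {
            private readonly IHttpListener _listener;
            private readonly bool _ownsListener;

            public WebDAVServer() : this(new HttpListenerWrapper())
            {
                _ownsListener = true;
            }

            public WebDAVServer(IHttpListener listener)
            {
                _listener = listener;
            }

            public void Dispose()
            {
                if (_ownsListener) _listener.Dispose();
            }
        }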

    Read the article

  • ASP.NET website deployment [on hold]

    - by Rei Brazilva
    I am getting my hands wet with ASP.NET and I have been following the tutorials. I deployed the site to Azure and it worked great. Today I started actually designing the site, and when I published, it looks as if it doesn't read any of the files I just updated, added, or modified. It works on my localhost, but not on Azure. I thought when you publish, everything goes up, including the new files. I don't have enough reputation to add a picture, so you'll forgive me. So, basically: how do I get my entire site uploaded? In case anyone does stop by, I was able to pull this out just recently:

        CA0058 Error Running Code Analysis
        CA0058 : The referenced assembly 'DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' could not be found. This assembly is required for analysis and was referenced by: C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\WebsiteManager\bin\WebsiteManager.dll, C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\packages\Microsoft.AspNet.WebPages.OAuth.2.0.20710.0\lib\net40\Microsoft.Web.WebPages.OAuth.dll. [Errors and Warnings] (Global)

        CA0001 Error Running Code Analysis
        CA0001 : The following error was encountered while reading module 'Microsoft.Web.WebPages.OAuth': Assembly reference cannot be resolved: DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246. [Errors and Warnings] (Global)

    Could this have something to do with the problem?

    Read the article

  • C++ Numerical Recipes - A New Adventure!

    - by JoshReuben
    I am about to embark on a great journey: over the next 6 weeks I plan to read through C++ Numerical Recipes, 3rd edition (http://amzn.to/YtdpkS). I'll be reading this with an eye to C++ AMP, thinking about implementing the suitable subset (non-recursive, additive, commutative) to run on the GPU. APIs supporting HPC, GPGPU, or MapReduce are all useful, providing you have the ability to choose the correct algorithm to leverage on them. I really think this is the most fascinating area of programming, a lot more exciting than LOB CRUD!!! When you think about it, everything is a function: we categorize and we extrapolate. As abstractions get higher and less leaky, sooner or later information systems programming will become a non-programmer task; you will be using WYSIWYG designers to build:

    - GUIs
    - MVVM
    - service mapping & virtualization
    - workflows
    - ORM
    - entity relations in the data source

    SharePoint / LightSwitch are not there yet, but every iteration gets closer. For information workers, managed code is a race to the bottom. As MS futures are a bit shaky right now, the provider-agnostic nature and higher barriers to entry of both C++ and numerical analysis seem like a rational choice to me. It's also fascinating, stepping outside the box. This is not the first time I've delved into numerical analysis:

    - 6 months ago I read Numerical Methods with Applications, which can be found for free online: http://nm.mathforcollege.com/
    - 2 years ago I learned the .NET Extreme Optimization library (www.extremeoptimization.com). Not bad.
    - 2.5 years ago I read Schaum's Numerical Analysis book (http://amzn.to/V5yuLI). Not an easy read, as topics jump back and forth across chapters.
    - 3 years ago I read Practical Numerical Methods with C# (http://amzn.to/V5yCL9), which is a toy learning language for this kind of stuff.
    - I also read through AI: A Modern Approach, 3rd edition, END to END (http://amzn.to/V5yQSp). This took me a few years but was the most rewarding experience.

    I'll post progress updates. See you on the other side!

    Read the article

  • JUnit Testing in a Multithreaded Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy, and you need to start early and stick with it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task of testing is not as simple as testing the class; it means testing it under an endless number of possible threading situations. To be more precise, let me tell you about the design of one of our applications. When a user makes a request, several threads are started that each analyse a part of the data to complete the analysis. These threads run for a certain time depending on the size of the chunk of data to analyse (which is open-ended and of uncertain quality), or they may fail if the data was insufficient or of lacking quality. After each completes its analysis, it calls a handler which decides, as each thread terminates, whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very big, only a certain number can be loaded into memory, and those instances are reusable; some parts because they have a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it's not very easy to test it under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not very good for quality assurance. Given this example, how would you handle testing the threading, interlocking, and synchronization?

    Read the article

  • Collision within a poly

    - by G1i1ch
    For an HTML5 engine I'm making, I'm using a path poly for speed. I'm having trouble finding a way to get collision with the walls of the poly. To make it simple, I just have a vector for the object and an array of vectors for the poly. I'm using Cartesian vectors and they're 2D. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]]; it's just a pentagon I generated. The object that will collide is object; object.pos is its position and object.vel is its velocity. They're both 2D vectors too. I've had some success getting it to find a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't return which vertices were hit or the collision point. I'd really like to be able to understand this and make my own, so I can have more meaningful collision. I'll tackle that later though. Again, the question is just: how does one find a collision with the walls of a poly, given that you know the poly vertices and the object's position + velocity? If more info is needed, please let me know. And if all anyone can do is point me in the right direction, that's great.
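
    A hedged sketch of the standard segment-vs-segment approach, written in C# for clarity (the math ports directly to JavaScript): treat the object's motion over one frame as a segment from pos to pos + vel and intersect it against each polygon edge. The edge hit at the smallest t is the wall that was struck, and the intersection point is the collision point.

        using System;

        static class PolyCollision
        {
            // 2D cross product (the z component of the 3D cross product).
            static double Cross(double ax, double ay, double bx, double by)
            {
                return ax * by - ay * bx;
            }

            // Motion p -> p + v this frame; edge a -> b of the poly.
            // Returns t in [0,1] at which the edge is crossed, or -1 if it is not.
            public static double HitEdge(double[] p, double[] v, double[] a, double[] b)
            {
                double ex = b[0] - a[0], ey = b[1] - a[1];
                double denom = Cross(v[0], v[1], ex, ey);
                if (Math.Abs(denom) < 1e-9) return -1;        // moving parallel to this edge

                double wx = a[0] - p[0], wy = a[1] - p[1];
                double t = Cross(wx, wy, ex, ey) / denom;     // fraction of the motion
                double u = Cross(wx, wy, v[0], v[1]) / denom; // fraction along the edge
                return (t >= 0 && t <= 1 && u >= 0 && u <= 1) ? t : -1;
            }

            // Scan all walls; report the first wall crossed and the exact point.
            public static bool FirstHit(double[] pos, double[] vel, double[][] poly,
                                        out int edgeIndex, out double[] point)
            {
                edgeIndex = -1;
                point = null;
                double best = double.MaxValue;
                for (int i = 0; i < poly.Length; i++)
                {
                    double[] a = poly[i], b = poly[(i + 1) % poly.Length];
                    double t = HitEdge(pos, vel, a, b);
                    if (t >= 0 && t < best)
                    {
                        best = t;
                        edgeIndex = i;
                        point = new double[] { pos[0] + vel[0] * t, pos[1] + vel[1] * t };
                    }
                }
                return edgeIndex >= 0;
            }
        }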

    Read the article

  • Ideal data structure/techniques for storing generic scheduler data in C#

    - by GraemeMiller
    I am trying to implement a generic scheduler object in C# 4 which will output a table in HTML. The basic aim is to show some object along with various attributes, and whether it was doing something in a given time period. The scheduler will output a table displaying the headers: Detail Field 1..N | Date 1..N. I want to initialise the table with a start date and an end date to create the date range (ideally it could also handle other time periods, e.g. hours, but that isn't vital). I then want to provide a generic object which will have associated events. Where an object has events within the period, I want a table cell to be marked. E.g.:

        Name  Height  Weight  1/1/2011  2/1/2011  3/1/2011 ... 31/1/2011
        Ben   5.11    75      X         X         X
        Bill  5.7     83                X         X

    So I created the scheduler with start date 1/1/2011 and end date 31/1/2011. I'd like to give it my person object (already sorted) and tell it which fields I want displayed (Name, Height, Weight). Each person has events which have a start date and an end date. Some events will start and end outside the range, but they should still be shown on the relevant dates, etc. Ideally I'd like to be able to hand it, say, a class-booking object as well, so I'm trying to keep it generic. I have seen JavaScript implementations of similar grids. What would a good data structure be for this? Any thoughts on techniques I could use to make it generic (a sketch follows below)? I am not great with generics, so any tips are appreciated.
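
    A hedged sketch of one possible shape, with all names invented: the caller registers detail columns as header/selector pairs, supplies a delegate that yields an item's events, and the renderer marks every date column that any event overlaps, including events that start before or end after the range.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        public class ScheduleEvent
        {
            public DateTime Start { get; set; }
            public DateTime End { get; set; }
        }

        public class SchedulerTable<T>
        {
            private readonly DateTime _from, _to;
            private readonly List<Tuple<string, Func<T, string>>> _columns =
                new List<Tuple<string, Func<T, string>>>();
            private readonly Func<T, IEnumerable<ScheduleEvent>> _events;

            public SchedulerTable(DateTime from, DateTime to,
                                  Func<T, IEnumerable<ScheduleEvent>> events)
            {
                _from = from.Date;
                _to = to.Date;
                _events = events;
            }

            public void AddColumn(string header, Func<T, string> selector)
            {
                _columns.Add(Tuple.Create(header, selector));
            }

            public string Render(IEnumerable<T> items)
            {
                var sb = new StringBuilder("<table><tr>");
                foreach (var c in _columns) sb.AppendFormat("<th>{0}</th>", c.Item1);
                for (var d = _from; d <= _to; d = d.AddDays(1))
                    sb.AppendFormat("<th>{0:d/M}</th>", d);
                sb.Append("</tr>");

                foreach (var item in items)
                {
                    sb.Append("<tr>");
                    foreach (var c in _columns) sb.AppendFormat("<td>{0}</td>", c.Item2(item));
                    for (var d = _from; d <= _to; d = d.AddDays(1))
                    {
                        // Mark the cell if any event overlaps this day, even one
                        // starting before the range or ending after it.
                        DateTime day = d;
                        bool busy = _events(item).Any(e => e.Start.Date <= day && e.End.Date >= day);
                        sb.Append(busy ? "<td>X</td>" : "<td></td>");
                    }
                    sb.Append("</tr>");
                }
                return sb.Append("</table>").ToString();
            }
        }

    Usage might look like: create new SchedulerTable<Person>(start, end, p => p.Events), call AddColumn("Name", p => p.Name) and so on, then call Render(people).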

    Read the article

  • What's the name of this pattern?

    - by Wes
    I see this a lot in frameworks: you have a master class which other classes register with. The master class then decides which of the registered classes to delegate the request to. An example based on the passed-in class might be something like this:

        public interface Processor {
            public boolean canHandle(Object objectToHandle);
            public void handle(Object objectToHandle);
        }

        public class EvenNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isEven(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        public class OddNumberProcessor implements Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)) {
                    return false;
                }
                return isOdd(objectToHandle);
            }

            public void handle(Object objectToHandle) {
                // Optionally call canHandle again to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        // Can optionally implement the Processor interface
        public class ProcessorDelegator {
            private List<Processor> processors;

            public void addProcessor(Processor processor) {
                processors.add(processor);
            }

            public void process(Object objectToProcess) {
                // Look up the relevant processor, either by keeping a list of what
                // they can process or by querying each one to see if it can
                // process the object.
                Processor chosenProcessor = chooseProcessor(objectToProcess);
                chosenProcessor.handle(objectToProcess);
            }
        }

    Note there are a few variations of this that I see. In one variation, the subclasses provide a list of things they can process, which the ProcessorDelegator understands. The other variation, which is listed above in fake code, is where each is queried in turn. This is similar to chain of command, but I don't think it's the same, as chain of command means that the processor needs to pass the request on to other processors. The other variation is where the ProcessorDelegator itself implements the interface, which means you can get trees of ProcessorDelegators which specialise further. In the above example you could have a numeric ProcessorDelegator which delegates to an even/odd processor, and a string ProcessorDelegator which delegates to different strings. My question is: does this pattern have a name?

    Read the article
