Search Results

Search found 251 results on 11 pages for 'preserved'.


  • Reordering methods in ComImport interfaces throws COMException (0x80041001)

    - by Ohad Schneider
    Consider the following code for COM interop with internet shortcuts: [ComImport] [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] [Guid("CABB0DA0-DA57-11CF-9974-0020AFD79762")] public interface IUniformResourceLocatorW { void SetUrl([MarshalAs(UnmanagedType.LPWStr)] string pcszUrl, int dwInFlags); void GetUrl([MarshalAs(UnmanagedType.LPWStr)] out StringBuilder ppszUrl); void InvokeCommand(IntPtr purlici); } [ComImport] [Guid("FBF23B40-E3F0-101B-8488-00AA003E56F8")] public class InternetShortcut { } The following works as expected: var ishort = new InternetShortcut(); ((IPersistFile)ishort).Load("MyLink.url", 0); ((IUniformResourceLocatorW)ishort).GetUrl(out url); However: If I comment out IUniformResourceLocatorW.SetUrl (which I am not using), IUniformResourceLocatorW.GetUrl throws a COMException (HResult 0x80041001). If I switch between IUniformResourceLocatorW.SetUrl and IUniformResourceLocatorW.GetUrl (that is place the former below the latter) the same exception is thrown If I comment out IUniformResourceLocatorW.InvokeCommand the code runs fine. It's as if the order has to be preserved "up to" the invoked method. Is this behavior by design? documented somewhere? I'm asking because some COM interfaces are composed of many methods with possibly many supporting types and I'd rather avoid defining what I don't need if possible.
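    As far as I can tell, the behavior is by design: the runtime builds the COM vtable from the managed declaration order, so every slot up to the last method actually called must stay occupied, while trailing methods can be dropped. A rough sketch of the commonly used placeholder workaround (the placeholder name and empty signature are my own; a slot that is never called is never marshalled):

```csharp
// Sketch: keep the vtable slot order, but stub out methods we never call.
using System.Runtime.InteropServices;
using System.Text;

[ComImport]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
[Guid("CABB0DA0-DA57-11CF-9974-0020AFD79762")]
public interface IUniformResourceLocatorW
{
    void SetUrl_Placeholder();   // slot 1: never called, only keeps GetUrl in slot 2
    void GetUrl([MarshalAs(UnmanagedType.LPWStr)] out StringBuilder ppszUrl);
    // InvokeCommand sits after GetUrl, so it can be omitted entirely,
    // matching the observation above.
}
```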

    Read the article

  • How to Implement Overlay blend method using opengles 1.1

    - by Cylon
    Below is the overlay algorithm. I want to use it on the iPhone, but the iPhone 3G only supports OpenGL ES 1.1 and cannot use GLSL. Can I use the blend function or texture combiners to implement it? Thank you. /////////Reference from OpenGL Shading® Language Third Edition /////////// 19.6.12 Overlay OVERLAY first computes the luminance of the base value. If the luminance value is less than 0.5, the blend and base values are multiplied together. If the luminance value is greater than 0.5, a screen operation is performed. The effect is that the base value is mixed with the blend value, rather than being replaced. This allows patterns and colors to overlay the base image, but shadows and highlights in the base image are preserved. A discontinuity occurs where luminance = 0.5. To provide a smooth transition, we actually do a linear blend of the two equations for luminance in the range [0.45,0.55]. float luminance = dot(base, lumCoeff); if (luminance < 0.45) result = 2.0 * blend * base; else if (luminance > 0.55) result = white - 2.0 * (white - blend) * (white - base); else { vec4 result1 = 2.0 * blend * base; vec4 result2 = white - 2.0 * (white - blend) * (white - base); result = mix(result1, result2, (luminance - 0.45) * 10.0); }

    Read the article

  • How can I use a single-table inheritance and single controller to make this more DRY?

    - by Angela
    I have three models, Calls, Emails, and Letters, and those are basically templates of what gets sent to individuals, modeled as Contacts. When a Call is made, a row in the ContactCalls model gets created. If an Email is sent, an entry in ContactEmails is made. Each has its own controller: contact_calls_controller.rb and contact_emails_controller.rb. I would like to create a single table inheritance called ContactEvents which has types Calls, Emails, and Letters. But I'm not clear how I pass the type information or how to consolidate the controllers. Here are the two controllers I have; as you can see, there's a lot of duplication, but some differences that need to be preserved. In the case of letters and postcards (another model), it's even more so. class ContactEmailsController < ApplicationController def new @contact_email = ContactEmail.new @contact_email.contact_id = params[:contact] @contact_email.email_id = params[:email] @contact = Contact.find(params[:contact]) @company = Company.find(@contact.company_id) contacts = @company.contacts.collect(&:full_name) contacts.each do |contact| @colleagues = contacts.reject{ |c| c == @contact.full_name } end @email = Email.find(@contact_email.email_id) @contact_email.subject = @email.subject @contact_email.body = @email.message @email.message.gsub!("{FirstName}", @contact.first_name) @email.message.gsub!("{Company}", @contact.company_name) @email.message.gsub!("{Colleagues}", @colleagues.to_sentence) @email.message.gsub!("{NextWeek}", (Date.today + 7.days).strftime("%A, %B %d")) @contact_email.status = "sent" end def create @contact_email = ContactEmail.new(params[:contact_email]) @contact = Contact.find_by_id(@contact_email.contact_id) @email = Email.find_by_id(@contact_email.email_id) if @contact_email.save flash[:notice] = "Successfully created contact email." # send email using class in outbound_mailer.rb OutboundMailer.deliver_campaign_email(@contact,@contact_email) redirect_to todo_url else render :action => 'new' end end AND: class ContactCallsController < ApplicationController def new @contact_call = ContactCall.new @contact_call.contact_id = params[:contact] @contact_call.call_id = params[:call] @contact_call.status = params[:status] @contact = Contact.find(params[:contact]) @company = Company.find(@contact.company_id) @contact = Contact.find(@contact_call.contact_id) @call = Call.find(@contact_call.call_id) @contact_call.title = @call.title contacts = @company.contacts.collect(&:full_name) contacts.each do |contact| @colleagues = contacts.reject{ |c| c == @contact.full_name } end @contact_call.script = @call.script @call.script.gsub!("{FirstName}", @contact.first_name) @call.script.gsub!("{Company}", @contact.company_name ) @call.script.gsub!("{Colleagues}", @colleagues.to_sentence) end def create @contact_call = ContactCall.new(params[:contact_call]) if @contact_call.save flash[:notice] = "Successfully created contact call." redirect_to contact_path(@contact_call.contact_id) else render :action => 'new' end end

    Read the article

  • How do I programmatically verify, create, and update SQL table structure?

    - by JYelton
    Scenario: I have an application (C#) that expects a SQL database and login, which are set by a user. Once connected, it checks for the existence of several tables and creates them if not found. I'd like to expand on this by having the program be capable of adding columns to those tables if I release a new version of the program which relies upon the new columns. Question: What is the best way to programmatically check the structure of an existing SQL table and create or update it to match an expected structure? I am planning to iterate through the list of required columns and alter the existing table whenever it does not contain the new column. I can't help but wonder if there's an approach that is different or better. Criteria: Here are some of my expectations and self-imposed rules: Newer versions of the program might no longer use certain columns, but they would be retained for data logging purposes. In other words, no columns will be removed. Existing data in the table must be preserved, so the table cannot simply be dropped and recreated. In all cases, newly added columns would allow null data, so the population of old records is taken care of by having default null values. Example: Here is a sample table (because visual examples help!): id sensor_name sensor_status x1 x2 x3 x4 1 na019 OK 0.01 0.21 1.41 1.22 Then, in a new version, I may want to add the column x5. The "x-columns" are all data-storage columns that accept null.
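    A minimal sketch of the "check the catalog, then ALTER" approach described above, assuming SQL Server and hypothetical table and column names (sensor_data, x5); real code would also want transactions and logging:

```csharp
// Sketch: add any missing nullable columns by checking INFORMATION_SCHEMA first.
using System.Collections.Generic;
using System.Data.SqlClient;

static class SchemaUpdater
{
    public static void EnsureColumns(string connectionString)
    {
        // Columns this program version expects, with their SQL definitions.
        var required = new Dictionary<string, string>
        {
            { "x5", "FLOAT NULL" },   // new in this release; NULL so old rows stay valid
        };

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            foreach (var col in required)
            {
                var check = new SqlCommand(
                    "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
                    "WHERE TABLE_NAME = 'sensor_data' AND COLUMN_NAME = @name", conn);
                check.Parameters.AddWithValue("@name", col.Key);

                if ((int)check.ExecuteScalar() == 0)
                {
                    // Column names cannot be parameterized, hence the concatenation;
                    // the names come from our own dictionary, not user input.
                    new SqlCommand(
                        "ALTER TABLE sensor_data ADD " + col.Key + " " + col.Value,
                        conn).ExecuteNonQuery();
                }
            }
        }
    }
}
```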

    Read the article

  • How to get around DnsRecordListFree error in .NET Framework 4.0?

    - by Greg Finzer
    I am doing an MxRecordLookup. I am getting an error when calling the DnsRecordListFree in the .NET Framework 4.0. I am using Windows 7. How do I get around it? Here is the error: System.MethodAccessException: Attempt by security transparent method to call native code through method. Here is my code: [DllImport("dnsapi", EntryPoint = "DnsQuery_W", CharSet = CharSet.Unicode, SetLastError = true, ExactSpelling = true)] private static extern int DnsQuery([MarshalAs(UnmanagedType.VBByRefStr)]ref string pszName, QueryTypes wType, QueryOptions options, int aipServers, ref IntPtr ppQueryResults, int pReserved); [DllImport("dnsapi", CharSet = CharSet.Auto, SetLastError = true)] private static extern void DnsRecordListFree(IntPtr pRecordList, int FreeType); public List<string> GetMXRecords(string domain) { List<string> records = new List<string>(); IntPtr ptr1 = IntPtr.Zero; IntPtr ptr2 = IntPtr.Zero; MXRecord recMx; try { int result = DnsQuery(ref domain, QueryTypes.DNS_TYPE_MX, QueryOptions.DNS_QUERY_BYPASS_CACHE, 0, ref ptr1, 0); if (result != 0) { if (result == 9003) { //No Record Exists } else { //Some other error } } for (ptr2 = ptr1; !ptr2.Equals(IntPtr.Zero); ptr2 = recMx.pNext) { recMx = (MXRecord)Marshal.PtrToStructure(ptr2, typeof(MXRecord)); if (recMx.wType == 15) { records.Add(Marshal.PtrToStringAuto(recMx.pNameExchange)); } } } finally { DnsRecordListFree(ptr1, 0); } return records; }
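    The error usually means the calling code has ended up security-transparent under the .NET 4 level-2 transparency rules (for example via an AllowPartiallyTrustedCallers or SecurityTransparent assembly attribute). A sketch of two commonly suggested ways out, using a hypothetical class name; which one is appropriate depends on how the assembly is attributed:

```csharp
// Sketch only: pick one of these options, not both.
using System.Collections.Generic;
using System.Security;

// Option 1: opt the whole assembly back into the .NET 2.0 transparency rules.
[assembly: SecurityRules(SecurityRuleSet.Level1)]

public partial class MxLookup   // hypothetical class holding the P/Invoke code above
{
    // Option 2: under level-2 rules, mark the method that calls into dnsapi
    // as safe-critical so that transparent callers may still invoke it.
    [SecuritySafeCritical]
    public List<string> GetMXRecordsSafe(string domain)
    {
        return GetMXRecords(domain);   // GetMXRecords is the method from the question
    }
}
```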

    Read the article

  • Stop CDATA tags from being output-escaped when writing to XML in C#

    - by Smallgods
    We're creating a system outputting some data to an XML schema. Some of the fields in this schema need their formatting preserved, as it will be parsed by the end system into potentially a Word doc layout. To do this we're using <![CDATA[Some formatted text]]> tags inside of the App.Config file, then putting that into an appropriate property field in an xsd.exe-generated class from our schema. Ideally the formatting wouldn't be our problem, but unfortunately that's just how the system is going. The App.Config section looks as follows: <header> <![CDATA[Some sample formatted data]]> </header> The data assignment looks as follows: HeaderSection header = ConfigurationManager.GetSection("header") as HeaderSection; report.header = "<[CDATA[" + header.Header + "]]>"; Finally, the Xml output is handled as follows: xs = new XmlSerializer(typeof(report)); fs = new FileStream (reportLocation, FileMode.Create); xs.Serialize(fs, report); fs.Flush(); fs.Close(); This should in theory produce in the final Xml a section that has information with CDATA tags around it. However, the angled brackets are being converted into &lt; and &gt;. I've looked at ways of disabling output escaping, but so far can only find references to XSLT sheets. I've also tried @"<[CDATA[" with the strings, but again no luck. Any help would be appreciated!
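    XmlSerializer will always escape markup placed inside a string property, so building the CDATA wrapper by string concatenation can't work. A common workaround (class and property names here are hypothetical, and it has not been run against the xsd.exe-generated types) is to expose the value through an XmlCDataSection-typed property so the serializer emits a real CDATA node:

```csharp
// Sketch: serialize the header as a genuine CDATA section instead of an escaped string.
using System.Xml;
using System.Xml.Serialization;

public class Report
{
    // Plain value used by the rest of the application.
    [XmlIgnore]
    public string Header { get; set; }

    // What actually gets written to the XML: <header><![CDATA[...]]></header>
    [XmlElement("header")]
    public XmlCDataSection HeaderCData
    {
        get { return new XmlDocument().CreateCDataSection(Header); }
        set { Header = value == null ? null : value.Value; }
    }
}
```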

    Read the article

  • Which open source repository or version control systems store files' original mtime, ctime and atime

    - by sampablokuper
    I want to create a personal digital archive. I want to be able to check digital files (some several years old, some recent, some not yet created) into that archive and have them preserved, along with their metadata such as ctime, atime and mtime. I want to be able to check these files out of that archive, modify their contents and commit the changes back to the archive, while keeping the earlier commits and their metadata intact. I want the archive to be very reliable and secure, and able to be backed up remotely. I want to be able to check files in and out of the archive from PCs running Linux, Mac OS X 10.5+ or Win XP+. I want to be able to check files in and out of the archive from PCs with RAM capacities lower than the size of the files. E.g. I want to be able to check in/out a 13GB file using a PC with 2GB RAM. I thought Subversion could do all this, but apparently it can't. (At least, it couldn't a couple of years ago and as far as I know it still can't; correct me if I'm wrong.) Is there a libre VCS or similar capable of all these things? Thanks for your help.

    Read the article

  • How do I tweak columns in a Flat File Destination in SSIS?

    - by theog
    I have an OLE DB Data source and a Flat File Destination in the Data Flow of my SSIS Project. The goal is simply to pump data into a text file, and it does that. Where I'm having problems is with the formatting. I need to be able to rtrim() a couple of columns to remove trailing spaces, and I have a couple more that need their leading zeros preserved. The current process is losing all the leading zeros. The rtrim() can be done by simple truncation and ignoring the truncation errors, but that's very inelegant and error-prone. I'd like to find a better way, like actually doing the rtrim() function where needed. Exploring similar SSIS questions & answers on SO, the thing to do seems to be "Use a Script Task", but that's usually just thrown out there with no details, and it's not at all an intuitive thing to set up. I don't see how to use scripting to do what I need. Do I use a Script Task on the Control Flow, or a Script Component in the Data Flow? Can I do rtrim() and pad strings where needed in a script? Anybody got an example of doing this or similar things? Many thanks in advance.
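    For per-row string clean-up like this, the usual advice is a Script Component (transformation) in the Data Flow rather than a Script Task on the Control Flow. A rough sketch of the user code, with hypothetical column names (AccountCode, BranchId); the ScriptMain/Input0Buffer skeleton is generated by the designer when the component is added:

```csharp
// Sketch of a Data Flow Script Component (transformation). Only the method body
// is hand-written; UserComponent and Input0Buffer are generated by SSIS.
public class ScriptMain : UserComponent
{
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // rtrim(): drop trailing spaces instead of relying on truncation.
        if (!Row.AccountCode_IsNull)
            Row.AccountCode = Row.AccountCode.TrimEnd();

        // Keep leading zeros by re-padding to the expected width (assumed 6 here).
        if (!Row.BranchId_IsNull)
            Row.BranchId = Row.BranchId.PadLeft(6, '0');
    }
}
```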

    Read the article

  • convert old repository to mercurial

    - by nedlud
    I've been playing around with different versioning systems to find one I'm comfortable with. I started with SVN (let's call this version of the project "f1"), then changed over to GIT. But I didn't know how to convert the old SVN repo to GIT, so I just copied the folder, deleted the .svn stuff, and turned it into a GIT repo (let's call this copied version "f2"). Now I'm playing around with Mercurial and was very pleased to find that it has a Tortoise client for Windows. I was also pleased to find how easy it was to convert the GIT repo into Mercurial, so I preserved the history (I still cloned it first, just in case. So I'm calling this hg version "f3"). But now what I'm wondering is: what do I do with the old SVN repo that still holds my history from before I played with GIT? I guess I can convert the old SVN repo to Mercurial, but can I then merge those two histories into the one repository so I have a complete set of histories in one place? In other words, can I prepend f1 to f3?

    Read the article

  • Are TestContext.Properties usable?

    - by DBJDBJ
    Using Visual Studio, generate a Unit Test class. Then comment back in the class initialization method. Inside it, add your property using the testContext argument. Upon test app startup this method is indeed called by the testing infrastructure. //Use ClassInitialize to run code before running the first test in the class [ClassInitialize()] public static void MyClassInitialize(TestContext testContext) { /* * Any user defined testContext.Properties * added here will be erased after this method exits */ testContext.Properties.Add("key", 1 ) ; // place the break point here } After leaving MyClassInitialize, any properties added by the user are lost. Only the 10 "official" ones are left. Actually, TestContext gets overwritten with the initial official one each time, before each test method is called. It is not overwritten only if the user has a test initialization method; the changes made there are passed to the test. //Use TestInitialize to run code before running each test [TestInitialize()]public void MyTestInitialize(){ this.TestContext.Properties.Add("this is preserved",1) ; } This effectively means TestContext.Properties is "mostly" read-only for users, which is not clearly documented in MSDN. It seems to me this is very messy design+implementation. Why have TestContext.Properties as a collection at all? Users can use many other solutions for class-wide initialization. Please discuss. --DBJ
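    One workaround sketch (certainly not the only one): keep class-wide state in static members set during ClassInitialize, and, if a test really needs it in TestContext.Properties, re-add it in TestInitialize, which, as noted above, does survive into each test:

```csharp
// Sketch: static state survives ClassInitialize; TestInitialize re-publishes it per test.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyTests
{
    private static int sharedKey;

    public TestContext TestContext { get; set; }

    [ClassInitialize]
    public static void ClassInit(TestContext context)
    {
        sharedKey = 1;                                  // plain static state, not wiped
    }

    [TestInitialize]
    public void TestInit()
    {
        TestContext.Properties.Add("key", sharedKey);   // visible inside each test
    }

    [TestMethod]
    public void SomeTest()
    {
        Assert.AreEqual(1, TestContext.Properties["key"]);
    }
}
```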

    Read the article

  • Ruby - Writing Hpricot data to a file

    - by John
    Hey everyone, I am currently doing some XML parsing and I've chosen to use Hpricot because of its ease of use and syntax; however, I am running into some problems. I need to write out to another file a piece of XML data that I have found. However, when I do this the format is not preserved. For example, if the content should look like this: <dict> <key>item1</key><value>12345</value> <key>item2</key><value>67890</value> <key>item3</key><value>23456</value> </dict> And assuming that there are many entries like this in the document. I am iterating through the 'dict' items by using hpricot_element = Hpricot(xml_document_body) f = File.new('some_new_file.xml') (hpricot_element/:dict).each { |dict| f.write( dict.to_original_html ) } After using the above code, I would expect the output to look exactly like the XML shown above. However, to my surprise, the output of the file looks more like this: <dict>\n", " <key>item1</key><value>12345</value>\n", " <key>item2</key><value>67890</value>\n", " <key>item3</key><value>23456</value\n", " </dict> I've tried splitting at the "\n" characters and writing to the file one line at a time, but that didn't seem to work either as it did not recognize the "\n" characters. Any help is greatly appreciated. It might be a very simple solution, but I am having trouble finding it. Thanks!

    Read the article

  • Restore and preserve UIViewController pushed from UINavigationController, no storyboard

    - by user2908112
    I'm trying to restore a simple UIViewController that I pushed from my initial view controller. The first one is preserved, but the second one just disappears when relaunched. I don't use storyboards. I implement the protocol in every view controller and add the restorationIdentifier and restorationClass to each one of them. The second view controller inherits from a third view controller and is initialized from a xib file. I'm not sure if I need to implement UIViewControllerRestoration on this third one since I don't use it directly. My code typically looks like this: - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { // Custom initialization self.restorationIdentifier = @"EditNotificationViewController"; self.restorationClass = [self class]; } return self; } -(void)encodeRestorableStateWithCoder:(NSCoder *)coder { } -(void)decodeRestorableStateWithCoder:(NSCoder *)coder { } +(UIViewController *)viewControllerWithRestorationIdentifierPath:(NSArray *)identifierComponents coder:(NSCoder *)coder { EditNotificationViewController* envc = [[EditNotificationViewController alloc] initWithNibName:@"SearchFormViewController" bundle:nil]; return envc; } Should the navigationController perhaps be subclassed so it too can conform to UIViewControllerRestoration?

    Read the article

  • How to prevent VC++ 9 linker from linking unnecessary global variables?

    - by sharptooth
    I'm playing with function-level linking in VC++. I've enabled /OPT:REF and /OPT:ICF and the linker is happy to eliminate all unused functions. Not so with variables. The following code is to demonstrate the problem only; I fully understand that actually having code structured that way is suboptimal. //A.cpp SomeType variable1; //B.cpp extern SomeType variable1; SomeType variable2; class ClassInB { //actually uses variable1 }; //C.cpp extern SomeType variable2; class ClassInC { //actually uses variable2; }; All those files are compiled into a static lib. The consumer project only uses ClassInC and links to the static library. Now comes the VC++ 9 linker. First the linker sees that C.obj references variable2 and includes B.obj. B.obj references variable1, so it includes A.obj. Then the unreferenced stuff elimination phase starts. It removes all functions in A.obj and B.obj, but not the variables. Both variable1 and variable2 are preserved together with their static initializers and deinitializers. That inflates the image size and introduces a delay for running the initializers and deinitializers. The code above is oversimplified; in the actual code I really can't move variable2 into C.cpp easily. I could put it into a separate .cpp file, but that looks really dumb. Is there any better option to resolve the problem with Visual C++ 9?

    Read the article

  • Flattening out a lib directory using jarjar

    - by voodoogiant
    I'm trying to flatten out the lib directory of a project using jarjar, where it will produce a jar file with my main code and all my required libraries inside of it. I can get the project code into the jar, but I need to find a way to get every jar in the './lib/' directory extracted to the base directory of the final output jar. I don't want it flattened in the sense that I still want the package hierarchy preserved. I could manually list every jar file using zipfileset, but I was hoping to do it dynamically. I would also like to include any *.so files flattened into the base directory of the output jar so I can extract them into a temp dir easily without having to search through the jar. For example my lib directory... ./lib/library1.jar ./lib/library2.jar ./lib/foo/library3.jar ./lib/foo/bar.so would look like this when cracked open in the output jar file... /..library1_package_hierarchy../lib1.class /..library1_package_hierarchy../lib2.class ... (and so on) /..library2_package_hierarchy../lib1.class /..library2_package_hierarchy../lib2.class ... (and so on) /..library3_package_hierarchy../lib1.class <-- foo gone /..library3_package_hierarchy../lib2.class <-- foo gone /bar.so <-- foo gone

    Read the article

  • OCR combined with font recognition?

    - by Adam
    I have a bold idea where a user could take an image like the following and in a few seconds of processing, be able to edit a document which looks roughly the same. The software would use WhatTheFont (or something similar) to recognize the fonts used, and OCR and other software to handle the font size, color, line-spacing, and of course the text content itself. In the case of the example image, there would be three separate "textboxes" produced, each starting at the upper left corner of the text, and extending as far to the bottom right as it could before running into another text box. So the user would then see something like this: (The rectangles are just used to show the boundaries of each textbox.) From here, the user would be able to edit the text in each of these boxes to create a new document. Of course there are tons of obvious uses for such an application, especially on a mobile phone with a built in camera. So my questions are the following: I doubt the answer is yes, but does anything do this already? If I'm going to try to build this, what should I write it in? Can I use Python? What would be the best OCR libraries to start with? Is there a service other than WhatTheFont for font recognition that has better API support? Anybody want to help me build it? :) etc. etc. Update: One thing I wanted to mention (but forgot) is I would also like the background to be preserved. In other words, if the example above had an image behind the text, I'd like the document to use that image with text removed. I know this complicates things a lot because that would require some image editing techniques too (something akin to Photoshop CS5' "content-aware fill"). But if we can solve diminished reality on iPhones, I think we can figure this out!

    Read the article

  • How to make a piece of WPF content take up the entire application window

    - by Bojin Li
    I'm working on an application that contains a number of content areas. I want to implement a behavior such that in response to user input, any of these content areas can be toggled to fit the entire application window, and optionally back to its original position again. I experimented with several approaches and none of them seem optimal for me. Here's what I tried to do: Use the ClipToBoundsProperty on the content I want to make "Full Screen": Doesn't work because only the CanvasPanel seems to fully respect this property. The application need to be localized so I would really like to avoid the CanvasPanel. Use a Grid and collapse the other content areas, such that only the one I want to see is visible, hence taking up the entire screen: This will probably work but doesn't seem easy to implement nor maintain. The "Full Screen" content area could be several levels deep, for example residing inside a Tabcontrol, so I would have to hide the tab headers too etc. Reconstruct the content area in a separate view and display it while hiding the rest: Seems easy enough to do with DataTemplates and my ViewModel objects, but any GUI/View only states are not preserved using this approach. Somehow "lift" the GUI/View I want to "Full Screen" into the separate view and display it while hiding the rest: I don't know how to do this or even if this is possible. Anyway if anyone knows a better approach I would love to know about it. Thanks a lot!
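    For the last option ("lifting" the live view), it is possible to detach the element from its current parent and re-attach it to a full-window overlay, which keeps its view state because the same element instance is reused. A rough sketch, assuming the original parent is a Panel and that overlayGrid is a Grid spanning the whole window (both names are hypothetical):

```csharp
// Rough sketch: move a live element into a full-window overlay and back again.
using System.Windows;
using System.Windows.Controls;

public static class ContentZoomHelper
{
    private static Panel originalParent;

    public static void ToggleFullWindow(FrameworkElement element, Grid overlayGrid)
    {
        if (originalParent == null)
        {
            originalParent = element.Parent as Panel;  // assumes a Panel parent
            if (originalParent == null) return;
            originalParent.Children.Remove(element);   // detach without recreating the view
            overlayGrid.Children.Add(element);
            overlayGrid.Visibility = Visibility.Visible;
        }
        else
        {
            overlayGrid.Children.Remove(element);
            originalParent.Children.Add(element);      // restore to the original position
            overlayGrid.Visibility = Visibility.Collapsed;
            originalParent = null;
        }
    }
}
```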

    Read the article

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time it happens that the metrics that make up your key performance indicators are not simple values from a data source. In SharePoint Server 2007 PerformancePoint Services, you could create two kinds of KPI metrics: Simple single value metrics from any supported data source or Complex multiple value metrics from a single Analysis Services data source using MDX. Now things are even easier with Performance Point Services in SharePoint 2010. Let us check what is it? PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved along with numerous enhancements and additional functionality. New PerformancePoint Services features PerformancePoint Services now can utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. New features and enhancements of SharePoint 2010 PerformancePoint Services • With PerformancePoint Services, functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You also can include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models that simplify access to report data. • The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and ultimately in dashboards. • You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks. 
• Better Time Intelligence filtering capabilities that you can use to create and use dynamic time filters that are always up to date. Other improved filters improve the ability for dashboard users to quickly focus in on information that is most relevant. • Ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web parts on the same page. • Easier to author and publish dashboard items by using Dashboard Designer. • SQL Server Analysis Services 2008 support. • Increased support for accessibility compliance in individual reports and scorecards. • The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web part can be added to PerformancePoint dashboards or any SharePoint Server page. • Create analytics reports to better understand underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting. To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn’t include this feature. There are still many choices in SharePoint family of products that include SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007 and associated free SharePoint web parts and templates.

    Read the article

  • Patch an Existing NK.BIN

    - by Kate Moss' Open Space
    As you know, we can use the MAKEIMG.EXE tool to create the OS image file, NK.BIN, or ROMIMAGE.EXE with a BIB file for more accurate control. But what if the image file is already created and needs to be patched, or you want to extract a file from NK.BIN? Platform Builder provides many useful command line utilities, and today I am going to introduce one, BINMOD.EXE. http://msdn.microsoft.com/en-us/library/ee504622.aspx is the official page for the BINMOD tool. As the page says, the BinMod Tool (binmod.exe) extracts files from a run-time image and replaces files in a run-time image, and its usage is binmod [-i imagename] [-r replacement_filename.ext | -e extraction_filename.ext] This is a simple tool and is easy to use; if we want to extract a file from nk.bin, just type binmod –i nk.bin –e filename.ext And that's it! Or you can try the -r option to replace a file inside NK.BIN. The small tool is good but there is a limitation; because the files in the MODULES section are fixed up during ROMIMAGE, the original file format is not preserved, so extracting or replacing a file in the MODULES section is impossible. So, just like this small tool, this post is supposed to end here, right? Nah... It is not that easy. Just try the above example, and you will find the tool does not work! Double check that the file is in the FILES section and the NK.BIN is good, but it just quits. Before you throw away this useless toy, we can try to fix it! Yes, the source of this tool is available in your CE6, private\winceos\COREOS\nk\tools\romimage\binmod. As it is a tool that runs on Windows, you need the Windows SDK or Visual Studio to build the code. (I am going to save you some time by skipping the details, as building a desktop console mode program is fairly trivial.) The cbinmod.cpp is the core logic for this program, and following the error message we got, the following code looks suspect.   //   // Extra sanity check...   //   if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast &&       (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= pTOCLoc->dlllast)   {     dprintf("Found pTOC  = 0x%08x\n", (DWORD)dwpTOC);     fFoundIt = true;     break;   }    else    {     dprintf("NOTICE! Record %d looked like a TOC except DLL first = 0x%08X, and DLL last = 0x%08X\r\n", i, pTOCLoc->dllfirst, pTOCLoc->dlllast);   } The logic checks if dllfirst <= dlllast, but look closer: the code only separates the high/low WORD of dllfirst and does not do the same to dlllast. Is that on purpose or a bug? The TOC is created by ROMIMAGE.EXE, so let's move to ROMIMAGE. In private\winceos\coreos\nk\tools\romimage\romimage\bin.cpp    Module::s_romhdr.dllfirst  = (HIWORD(xip_mem->dll_data_bottom) << 16) | HIWORD(xip_mem->kernel_dll_bottom);   Module::s_romhdr.dlllast   = (HIWORD(xip_mem->dll_data_top) << 16)    | HIWORD(xip_mem->kernel_dll_top); It is clear now: the high word of dllfirst is the upper 16 bits of the XIP DLL bottom and the low word is the upper 16 bits of the kernel DLL bottom; also, the high word of dlllast is the upper 16 bits of the XIP DLL top and the low word is the upper 16 bits of the kernel DLL top. Obviously, the correct statement should be if((DWORD)(HIWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(HIWORD(pTOCLoc->dlllast) << 16) &&    (DWORD)(LOWORD(pTOCLoc->dllfirst) << 16) <= (DWORD)(LOWORD(pTOCLoc->dlllast) << 16)) Updating the code like this should fix the issue, or, since it is just an extra sanity check (as the comment says), you can simply get rid of it; either way the code moves forward and everything works as advertised.
"Extracting out copies of files from the nk.bin... replacing files... etc." Since the NK.BIN can be compressed, BinMod needs compress.dll to decompress the data; the DLL can be found in C:\program files\microsoft platform builder\6.00\cepb\idevs\imgutils.

    Read the article

  • How to rotate a set of points on z = 0 plane in 3-D, preserving pairwise distances?

    - by cagirici
    I have a set of points double n[] on the plane z = 0. And I have another set of points double[] m on the plane ax + by + cz + d = 0. Length of n is equal to length of m. Also, euclidean distance between n[i] and n[j] is equal to euclidean distance between m[i] and m[j]. I want to rotate n[] in 3-D, such that for all i, n[i] = m[i] would be true. In other words, I want to turn a plane into another plane, preserving the pairwise distances. Here's my code in java. But it does not help so much: double[] rotate(double[] point, double[] currentEquation, double[] targetEquation) { double[] currentNormal = new double[]{currentEquation[0], currentEquation[1], currentEquation[2]}; double[] targetNormal = new double[]{targetEquation[0], targetEquation[1], targetEquation[2]}; targetNormal = normalize(targetNormal); double angle = angleBetween(currentNormal, targetNormal); double[] axis = cross(targetNormal, currentNormal); double[][] R = getRotationMatrix(axis, angle); return rotated; } double[][] getRotationMatrix(double[] axis, double angle) { axis = normalize(axis); double cA = (float)Math.cos(angle); double sA = (float)Math.sin(angle); Matrix I = Matrix.identity(3, 3); Matrix a = new Matrix(axis, 3); Matrix aT = a.transpose(); Matrix a2 = a.times(aT); double[][] B = { {0, axis[2], -1*axis[1]}, {-1*axis[2], 0, axis[0]}, {axis[1], -1*axis[0], 0} }; Matrix A = new Matrix(B); Matrix R = I.minus(a2); R = R.times(cA); R = R.plus(a2); R = R.plus(A.times(sA)); return R.getArray(); } This is what I get. The point set on the right side is actually part of a point set on the left side. But they are on another plane. Here's a 2-D representation of what I try to do: There are two lines. The line on the bottom is the line I have. The line on the top is the target line. The distances are preserved (a, b and c). Edit: I have tried both methods written in answers. They both fail (I guess). 
Method of Martijn Courteaux public static double[][] getRotationMatrix(double[] v0, double[] v1, double[] v2, double[] u0, double[] u1, double[] u2) { RealMatrix M1 = new Array2DRowRealMatrix(new double[][]{ {1,0,0,-1*v0[0]}, {0,1,0,-1*v0[1]}, {0,0,1,0}, {0,0,0,1} }); RealMatrix M2 = new Array2DRowRealMatrix(new double[][]{ {1,0,0,-1*u0[0]}, {0,1,0,-1*u0[1]}, {0,0,1,-1*u0[2]}, {0,0,0,1} }); Vector3D imX = new Vector3D((v0[1] - v1[1])*(u2[0] - u0[0]) - (v0[1] - v2[1])*(u1[0] - u0[0]), (v0[1] - v1[1])*(u2[1] - u0[1]) - (v0[1] - v2[1])*(u1[1] - u0[1]), (v0[1] - v1[1])*(u2[2] - u0[2]) - (v0[1] - v2[1])*(u1[2] - u0[2]) ).scalarMultiply(1/((v0[0]*v1[1])-(v0[0]*v2[1])-(v1[0]*v0[1])+(v1[0]*v2[1])+(v2[0]*v0[1])-(v2[0]*v1[1]))); Vector3D imZ = new Vector3D(findEquation(u0, u1, u2)); Vector3D imY = Vector3D.crossProduct(imZ, imX); double[] imXn = imX.normalize().toArray(); double[] imYn = imY.normalize().toArray(); double[] imZn = imZ.normalize().toArray(); RealMatrix M = new Array2DRowRealMatrix(new double[][]{ {imXn[0], imXn[1], imXn[2], 0}, {imYn[0], imYn[1], imYn[2], 0}, {imZn[0], imZn[1], imZn[2], 0}, {0, 0, 0, 1} }); RealMatrix rotationMatrix = MatrixUtils.inverse(M2).multiply(M).multiply(M1); return rotationMatrix.getData(); } Method of Sam Hocevar static double[][] makeMatrix(double[] p1, double[] p2, double[] p3) { double[] v1 = normalize(difference(p2,p1)); double[] v2 = normalize(cross(difference(p3,p1), difference(p2,p1))); double[] v3 = cross(v1, v2); double[][] M = { { v1[0], v2[0], v3[0], p1[0] }, { v1[1], v2[1], v3[1], p1[1] }, { v1[2], v2[2], v3[2], p1[2] }, { 0.0, 0.0, 0.0, 1.0 } }; return M; } static double[][] createTransform(double[] A, double[] B, double[] C, double[] P, double[] Q, double[] R) { RealMatrix c = new Array2DRowRealMatrix(makeMatrix(A,B,C)); RealMatrix t = new Array2DRowRealMatrix(makeMatrix(P,Q,R)); return MatrixUtils.inverse(c).multiply(t).getData(); } The blue points are the calculated points. The black lines indicate the offset from the real position.

    Read the article

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
    Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include: · Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season. · Interest in a promotion rises around the time advertising on TV airs. · Interest in the Sports section of a newspaper rises when there is a big football match. There are several ways of dealing with seasonality in predictions. Time Windows If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data, and therefore will be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that the time window is significantly shorter than the season changes, and that there is enough volume of data in the short time windows to produce an accurate model. An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas. Input Data If available, it is possible to include the seasonality effect in the input data for the model. For example, the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion – and by the way learn about inter-promotion cross feeding – by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates if the promotion is being advertised. For inputs to properly represent the effect in the model it is necessary that the model sees enough events with the input present. For example, by virtue of the model lifetime (or time window) being long enough to see several "seasons", or by having enough volume for the model to learn seasonality quickly. Proportional Frequency If we create a model that ignores seasonality it is possible to use that model to predict how a specific person's likelihood differs from average. If we have a divergence from average then we can transfer that divergence proportionally to the observed frequency at the time of the prediction. Definitions: Ft = trailing average frequency of the event at time "t". The average is done over a suitable period to achieve a statistically significant estimate. F = average frequency as seen by the model. L = likelihood predicted by the model for a specific person. Lt = predicted likelihood proportionally scaled for time "t". If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as: Lt = L * (Ft / F) Considering that: L = (L – F) + F Substituting we get: Lt = [(L – F) + F] * (Ft / F) Which simplifies to: (i) Lt = (L – F) * (Ft / F) + Ft This last expression can be understood as "The adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average times the proportion of frequencies". The formula above assumes a linear translation of the proportion.
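As a quick numeric check of formula (i), using made-up figures: suppose the model's average frequency is F = 5%, the model predicts L = 10% for a specific person, and the trailing average frequency at time t is Ft = 8%. Then Lt = (0.10 – 0.05) * (0.08 / 0.05) + 0.08 = 0.08 + 0.08 = 0.16, so the person stays twice as likely as the seasonal average, just as they were twice as likely as the model's overall average.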
It is possible to generalize the formula using a factor which we will call “a” as follows: (ii)                Lt = (L – F) * (Ft / F) * a  +  Ft It is also possible to use a formula that does not scale the difference, like: (iii)               Lt = (L – F) * a  +  Ft While these formulas seem reasonable, they should be taken as hypothesis to be proven with empirical data. A theoretical analysis provides the following insights: The Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihood for different customers is preserved If F is equal to Ft then the formula reverts to “L” If (Ft = 0) then Lt in (i) and (ii) is 0 It is possible for Lt to be above 1. If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as: If X is 3%, then Y is 6% If X is 11%, then Y is 22% If X is 70%, then Y is 85% - in this case we interpret “twice as likely” as “half as likely to not happen” Applying this reasoning to (i) for example we would get: If (L < F) or (Ft < (1 / ((L/F) + 1)) Then  Lt = L * (Ft / F) Else Lt = 1 – (F / L) + (Ft * F / L)  

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots.  ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS.  ZFS snapshots can be used to recover deleted files or previous versions of files and they are space efficient because unchanged data is shared between the file system and its snapshots.  Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop. Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008.  Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service.  And the nice thing is that no additional configuration is required - it "just works". On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later.  For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client. Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client.
    zfs snapshot tank/home/administrator@snap101
    zfs snapshot tank/home/administrator@snap102
    To view the snapshots on Windows, map the dataset on the client then right click on a folder or file and select Previous Versions.  Note that Windows will only display previous versions of objects that differ from the originals.  So you may have to modify files after creating a snapshot in order to see previous versions of those files. The screenshot above shows various snapshots in the Previous Versions window, created at different times.  On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share.  The .zfs setting can be toggled as desired; it makes no difference when using previous versions.  To make the .zfs folder visible:
    zfs set snapdir=visible tank/home/administrator
    To hide the .zfs folder:
    zfs set snapdir=hidden tank/home/administrator
    The following screenshot shows the Previous Versions panel when a file has been selected.  In this case the user is prompted to view, copy or restore the file from one of the available snapshots. As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted in time order from most recent to oldest.  There's nothing we can do about this, it's the way that the interface works.  Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory.  Thus the root directory creation timestamp is the time that the directory was created in the original dataset.
# ls -d% all /home/administrator
        timestamp: atime         Mar 19 15:40:23 2009
        timestamp: ctime         Mar 19 15:40:58 2009
        timestamp: mtime         Mar 19 15:40:58 2009
        timestamp: crtime        Mar 19 15:18:34 2009
# ls -d% all /home/administrator/.zfs/snapshot/snap101
        timestamp: atime         Mar 19 15:40:23 2009
        timestamp: ctime         Mar 19 15:40:58 2009
        timestamp: mtime         Mar 19 15:40:58 2009
        timestamp: crtime        Mar 19 15:18:34 2009
The snapshot creation time can be obtained using the zfs command as shown below.
# zfs get all tank/home/administrator@snap101
NAME                             PROPERTY  VALUE
tank/home/administrator@snap101  type      snapshot
tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009
In this example, the dataset was created on March 19th and the snapshot was created on March 23rd. In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots.  The Windows desktop provides an easy to use, intuitive GUI and no configuration is required to use or access previous versions of files or folders.
REFERENCES FOR MORE INFORMATION
ZFS
ZFS Learning Center
Introduction to Shadow Copies of Shared Folders
Shadow Copies for Shared Folders Client

    Read the article

  • Loading XML file containing leading zeros with SSIS preserving the zeros

    - by Compudicted
    Visiting the MSDN SQL Server Integration Services Forum, oftentimes I could see people pop up asking this question: "why am I not able to load an element from an XML file that contains zeros so the leading/trailing zeros would remain intact?". I started to suspect that such a trivial and often-required operation perhaps is being misunderstood by the developer community. I would also like to add that the whole state of affairs surrounding XML today is probably also going to be increasingly affected by a movement of people who dislike XML in general and many aspects of it, as XSD and XSLT invoke a negative reaction at best. Nevertheless, XML is in wide use today and its importance as a bridge between diverse systems is ever increasing. Therefore, I decided to write up an example of loading an arbitrary XML file that contains leading zeros in one of its elements into a table in a SQL Server database using SSIS, so that the leading zeros remain intact, keeping the goal of simplicity in mind. To start off, bring up your BIDS (running as admin) and add a new Data Flow Task (DFT). This DFT will serve as the container for our XML processing elements (besides, the XML Source is not available anywhere else other than from within the DFT). Double-click your DFT and drag and drop the XML Source component from the Tool Box's Data Flow Sources. Now, let the fun begin! Inspired by the upcoming Christmas, I created a simple XML file with one set of data that contains an imaginary SSN for Rudolph with several leading zeros, like 0000003. This file can be viewed here. To configure the XML Source it is of course quite intuitive to point it to our XML file; next, the XML Source needs either an embedded schema (XSD) or it can generate one for us. Lacking one, I opted to auto-generate it, and I ended up with an XSD that looked like: <?xml version="1.0"?> <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="XMasEvent"> <xs:complexType> <xs:sequence> <xs:element minOccurs="0" name="CaseInfo"> <xs:complexType> <xs:sequence> <xs:element minOccurs="0" name="ID" type="xs:unsignedByte" /> <xs:element minOccurs="0" name="CreatedDate" type="xs:unsignedInt" /> <xs:element minOccurs="0" name="LastName" type="xs:string" /> <xs:element minOccurs="0" name="FirstName" type="xs:string" /> <xs:element minOccurs="0" name="SSN" type="xs:unsignedByte" /> <!-- Becomes string --> <xs:element minOccurs="0" name="DOB" type="xs:unsignedInt" /> <xs:element minOccurs="0" name="Event" type="xs:string" /> <xs:element minOccurs="0" name="ClosedDate" /> </xs:sequence> </xs:complexType> </xs:element> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> As an aside on the XML file: if your XML file does not contain the outer node (<XMasEvent>) then you may end up in a situation where you see just one field in the output. Now please note that the SSN element's data type was chosen to be unsignedByte (and this is for a reason). The reason stems from the fact that all the figures in the element are digits. This is good, but not exactly what we need, because if we attempt to load the data with this XSD we are going to either get errors at the destination or, most typically, lose the leading zeros. So the next intuitive choice is to change the data type to string.
Besides, if an SSIS package was already created based on this XSD and the data type change was made thereafter, one should reset the metadata by right-clicking the XML Source and choosing "Advanced Editor", in which there is a refresh button at the bottom left that will do the trick. So far so good; are we ready to load our XML file? Well, actually yes and no: in my experience some data conversion is typically required, so depending on your data destination you may need to tweak the targeted data types. Let's add a Data Conversion Task to our DFT. Your package should look like: To make the story short I will only cover the SSN field: in my case the target SQL table has it as nchar(10) and we chose string in our XSD (yes, this is a big difference), and under such circumstances SSIS will complain. So we go and change the data type of SSN to Unicode String (DT_WSTR), "World String" per se. The conversion should look like: A peek at the metadata: We are almost there; now all we need is to configure the destination. For simplicity I chose SQL Server Destination. The mapping is a breeze; F5, and I am able to insert my data into SQL Server now! Checking the zeros – they are all intact!

    Read the article

  • Convert a PDF to a Transparent PNG with GhostScript

    - by Jonathon Wolfe
    Hi all. I am attempting, unsuccessfully, to use Ghostscript to rasterize PDF files with a transparent background to PNG files with a transparent background. I've searched high and low for questions from others attempting the same thing and none of the posted solutions, which as far as I can tell come down to specifying -sDEVICE=pngalpha, have worked with my test files. At this point I would really appreciate any advice or tips a more experienced hand could provide. My test PDF is located here: http://www.kolossus.com/files/test.pdf It could be that the issue is with this file, but I doubt it. As far as I can tell, it has no specified background, and when I open the file with a transparency-aware app like Photoshop or Illustrator, sure enough it displays with a transparent background. However, when opened with an application like Adobe Reader the file is rendered with a white background. I believe that this has more to do with the application rendering the PDF than with the PDF itself -- apps like Adobe Reader assume you want to see what a printed document will look like and therefore always show a white canvas behind the artwork -- but I can't be sure. The gs command I'm using is: gs -dNOPAUSE -dBATCH -sDEVICE=pngalpha -r72 -sOutputFile=test.png test.pdf This produces a PNG that has transparent pixels outside of the bounding box of the artwork in the file, but all pixels that are inside the artwork's bounding box are rasterized against a white background. This is a problem for me, as my artwork has drop shadows and antialiased edges that need to be preserved in the final output, and can't just be postprocessed out with ImageMagick. A sample of my PNG output is at the same location as the pdf above, with .png at the end (stackoverflow won't let me include more than one url in my post). Interestingly, I see no effects from using the -dBackgroundColor flag, even if I set it to something non-white like -dBackgroundColor=16#ff0000. Perhaps my understanding of the syntax of this flag is wrong. Also interestingly, I see no effects from using the -dTextAlphaBits=4 -dGraphicsAlphaBits=4 flags to try to enable subpixel antialiasing. I would also appreciate any advice on how to enable subpixel antialiasing, especially on text. Finally, I'm using GPL Ghostscript 8.64 on Mac OS 10.5.7, and the rendering workflow I'm trying to get set up is to generate transparent PNG images from PDFs output by PrinceXML. I'm calling Ghostscript directly for the rasterization instead of using ImageMagick because ImageMagick delegates to Ghostscript for PDF rasterization and I should be able to control the rasterization better by calling GS directly. Thanks for your help. -Jon Wolfe

    Read the article

  • A simple Python deployment problem - a whole world of pain

    - by Evgeny
    We have several Python 2.6 applications running on Linux. Some of them are Pylons web applications, others are simply long-running processes that we run from the command line using nohup. We're also using virtualenv, both in development and in production. What is the best way to deploy these applications to a production server? In development we simply get the source tree into any directory, set up a virtualenv and run - easy enough. We could do the same in production and perhaps that really is the most practical solution, but it just feels a bit wrong to run svn update in production. We've also tried fab, but it just never works first time. For every application something else goes wrong. It strikes me that the whole process is just too hard, given that what we're trying to achieve is fundamentally very simple. Here's what we want from a deployment process. We should be able to run one simple command to deploy an updated version of an application. (If the initial deployment involves a bit of extra complexity that's fine.) When we run this command it should copy certain files, either out of a Subversion repository or out of a local working copy, to a specified "environment" on the server, which probably means a different virtualenv. We have both staging and production version of the applications on the same server, so they need to somehow be kept separate. If it installs into site-packages, that's fine too, as long as it works. We have some configuration files on the server that should be preserved (ie. not overwritten or deleted by the deployment process). Some of these applications import modules from other applications, so they need to be able to reference each other as packages somehow. This is the part we've had the most trouble with! I don't care whether it works via relative imports, site-packages or whatever, as long as it works reliably in both development and production. Ideally the deployment process should automatically install external packages that our applications depend on (eg. psycopg2). That's really it! How hard can it be?

    Read the article

  • string categorization strategies

    - by Andrew Heath
    I'm the one-man dev team on a fledgling military history website. One aspect of the site is a catalog of ~1,200 individual battles, including the nations & formations (regiments, divisions, etc) which took part. The formation information (as well as the other battle info) was manually imported from a series of books by a 10-man volunteer team. The formations were listed in groups with varying formatting and abbreviation patterns. At the time I set up the data collection forms I couldn't think of a good way to process that data... and elected to store it all as strings in the MySQL database and sort it out later. Well, "later" - as it tends to happen - has arrived. :-) Each battle has 2+ records in the database - one for each nation that participated. Each record has a formations text string listing the formations present as the volunteer chose to add them. Some real examples: 39th Grenadier Rgmt, 26th Volksgrenadier Division 2nd Luftwaffe Field Division, 246th Infantry Division 247th Rifle Division, 255th Tank Brigade 2nd Luftwaffe Field Division, SS Cavalry Division 28th Tank Brigade, 158th Rifle Division, 135th Rifle Division, 81st Tank Brigade, 242nd Tank Brigade 78th Infantry Division 3rd Kure Special Naval Landing Force, Tulagi Seaplane Base personnel 1st Battalion 505th Infantry Regiment The ultimate goal is for each individual force to have an ID, so that its participation can be traced throughout the battle database. Formation hierarchy, such as the final item above 1st Battalion (of the) 505th Infantry Regiment also needs to be preserved. In that case, 1st Battalion and 505th Infantry Regiment would be split, but 1st Battalion would be flagged as belonging to the 505th. In database terms, I think I want to pull the formation field out of the current battle info table and create three new tables: FORMATION [id] [name] FORMATION_HIERARCHY [id] [parent] [child] FORMATION_BATTLE [f_id] [battle_id] It's simple to explain, but complicated to enact. What I'm looking for from the SO community is just some tips on how best to tackle this problem. Ideally there's some sort of method to solving this that I'm not aware of. However, as a last resort, I could always code a classification framework and call my volunteers back to sort through 2,500+ records...

    Read the article
