Search Results

Search found 32617 results on 1305 pages for 'jon list'.


  • A list of the most important areas to examine when moving a project from x86 to x64?

    - by AbrahamVanHelpsing
    I know to check for/use asserts and to carefully examine any assembly components, but I didn't know if anyone out there has a fairly comprehensive or industry-standard checklist of specific things to look at. I am looking mostly at C and C++. Note: there are some really helpful answers; I'm just leaving the question open for a couple of days in case some folks only check questions that don't have accepted answers.

    Read the article

  • Is there a comprehensive list of the Ubuntu services for developers?

    - by tgm4883
    It seems every couple of weeks I'm learning about a new service that can be used by developers to either A) get people involved, or B) make life easier for developers. Is there a list of all these services somewhere? Here are a few of them that I have recently found out about: https://errors.ubuntu.com/ http://harvest.ubuntu.com/ http://status.ubuntu.com/ Edit: Found another. Although not strictly developer related, it's noteworthy: https://apps.ubuntu.com

    Read the article

  • nginx rewrite regex for API versioning

    - by MSpreij
    What I want is for the first URL in each pair to be rewritten to the second:
    /widget => /widget/index.php
    /widget/ => /widget/index.php
    /widget?act=list => /widget/index.php?act=list
    /widget/?act=list => /widget/index.php?act=list
    /widget/list => /widget/index.php?act=list
    /widget/v2?act=list => /widget/v2.php?act=list
    /widget/v2/?act=list => /widget/v2.php?act=list
    /widget/v2/list => /widget/v2.php?act=list
    v2 could also be v45, basically "v\d+". act ("list" in this case) can have many values, and more will be added. Any additional query parameters would just be passed on with $args, I guess. Basically, URLs not specifying the version will go to index.php, which can then decide which specific version file to include. What I am afraid of is loops; this should sit in location /widget { ... }, right? (As for putting the version of the API in the URL: I'm not trying to be RESTful, and the target audience is small.) Resources on how to do this entirely in index.php using "routers" are also welcome :-/

    Read the article

  • Issue installing FLEXnet on ubuntu for program: Geneious

    - by jon_shep
    Afternoon. I can successfully run my Geneious Pro software, but I am required to have FLEXnet installed for the licensing process. The prompt I am given is:
    To install FLEXnet on Linux, run the following command from your shell as root: /home/shep/Geneious/licensing_service/install_fnp.sh "/home/shep/Geneious/licensing_service/linux64/FNPLicensingService" When you have done this, you can activate your license in Geneious.
    As root:
    root@Jon:/home/shep/Geneious/licensing_service# sh install_fnp.sh
    Unable to locate anchor service to install, please specify correctly on command line
    Also:
    root@Jon:/home/shep/Geneious/licensing_service/linux64# sh FNPLicensingService
    FNPLicensingService: 2: FNPLicensingService: Syntax error: Unterminated quoted string
    Anyone have further ideas? I tried to find the software online directly, but that was no good either. ~Jon

    Read the article

  • Implementing a "state-machine" logic for methods required by an object in C++

    - by user827992
    What I have: 1 hypothetical object/class, plus other classes and related methods that give me functionality. What I want: to link this object to 0 to N methods in real time, on request, when an event is triggered. Each event is related to a single method or a class, so a single event does not necessarily mean "connect this 1 method only" but can also mean "connect all the methods from that class, or a group of methods". I want to avoid linked lists because I would have to browse the entire list to know which methods are linked, because a list does not ensure that the linked methods are kept in a particular order (say, alphabetical order by their names or classes), and also because it involves heavy pointer usage. Example: I have an object Employee Jon. Jon acquires knowledge and forgets things pretty easily, so his skills may vary over a period of time. I'm responsible for what Jon can add to or remove from his CV. How can I implement this logic?

    Read the article

  • Dynamic Auto updating (to UI, Grid) binding list in C# Winform?

    - by Dhana
    I'm not even sure if I'm doing this correctly, but basically I have a list of objects that are built out of a class/interface. From there, I am binding the list to a DataGridView that is on a Windows Form (C#). The list is a sync list which auto-updates the UI, in this case the DataGridView. Everything works fine now, but I would like the list to hold a dynamic object: the object will have two static properties by default (ID, Name), and at run time the user will select the remaining properties. These should be bound to the data grid, and any update to the list should be auto-reflected in the grid. I am aware that we can use dynamic objects, but I would like to know how to approach the solution.
    datagridview.DataSource = myData; // myData is AutoUpdateList<IPersonInfo>
    IPersonInfo is the type of object; I need to add dynamic properties to this type at runtime.
    public class AutoUpdateList<T> : System.ComponentModel.BindingList<T>
    {
        private System.ComponentModel.ISynchronizeInvoke _SyncObject;
        private System.Action<System.ComponentModel.ListChangedEventArgs> _FireEventAction;

        public AutoUpdateList() : this(null) { }

        public AutoUpdateList(System.ComponentModel.ISynchronizeInvoke syncObject)
        {
            _SyncObject = syncObject;
            _FireEventAction = FireEvent;
        }

        protected override void OnListChanged(System.ComponentModel.ListChangedEventArgs args)
        {
            try
            {
                if (_SyncObject == null)
                {
                    FireEvent(args);
                }
                else
                {
                    _SyncObject.Invoke(_FireEventAction, new object[] { args });
                }
            }
            catch (Exception)
            {
                // TODO: Log Here
            }
        }

        private void FireEvent(System.ComponentModel.ListChangedEventArgs args)
        {
            base.OnListChanged(args);
        }
    }
    Could you help out on this?

    Read the article

  • Use LINQ to Sort and Filter items in a List<ReturnItem> collection, based on the values within a List<object> property

    - by Daniel McPherson
    This is tricky to explain. We have a DataTable that contains a user-configurable selection of columns, which are not known at compile time. Every column in the DataTable is of type String. We need to convert this DataTable into a strongly typed collection of "ReturnItem" objects so that we can then sort and filter using LINQ for use in our application. We have made some progress as follows: We started with the basic DataTable. We then process the DataTable, creating a new "ReturnItem" object for each row. This "ReturnItem" object has just two properties: ID (string) and Columns (List<object>). The Columns collection contains one entry for each column, representing a single DataRow. Each value is made strongly typed (int, string, DateTime, etc.). For example, it would add a new DateTime object to the "ReturnItem" Columns list containing the value of the "Created" DataTable column. The result is a List<ReturnItem> that we would then like to be able to sort and filter using LINQ based on the value in one of the properties, for example, sort on "Created" date. We have been using the LINQ Dynamic Query Library, which gets us so far, but it doesn't look like the way forward because we are using it over a List collection of objects. Basically, my question boils down to: how can I use LINQ to sort and filter items in a List<ReturnItem> collection, based on the values within a List<object> property which is part of the ReturnItem class?
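    A minimal sketch of one possible approach, assuming a hypothetical ReturnItem shape with an ID and a Columns list as described above, and that the caller knows the index of the column to sort or filter on (all names here are illustrative, not part of the original question):
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical ReturnItem shape, inferred from the question.
    public class ReturnItem
    {
        public string ID { get; set; }
        public List<object> Columns { get; set; }
    }

    public static class ReturnItemQueries
    {
        // Sort by the column at the given index. The stored values are assumed to
        // already be strongly typed (DateTime, int, string, ...), so casting to
        // IComparable lets OrderBy compare them without knowing the exact type here.
        public static List<ReturnItem> SortByColumn(IEnumerable<ReturnItem> items, int columnIndex, bool descending = false)
        {
            Func<ReturnItem, IComparable> key = r => (IComparable)r.Columns[columnIndex];
            var ordered = descending ? items.OrderByDescending(key) : items.OrderBy(key);
            return ordered.ToList();
        }

        // Filter with an arbitrary predicate on the typed column value,
        // e.g. a DateTime range check on the "Created" column.
        public static List<ReturnItem> FilterByColumn<T>(IEnumerable<ReturnItem> items, int columnIndex, Func<T, bool> predicate)
        {
            return items.Where(r => r.Columns[columnIndex] is T value && predicate(value)).ToList();
        }
    }
    Usage would then look something like: var byCreated = ReturnItemQueries.SortByColumn(returnItems, createdColumnIndex);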

    Read the article

  • How to efficiently get highest & lowest values from a List<double?>, and then modify them?

    - by DaveDev
    I have to get the sum of a list of doubles. If the sum is greater than 100, I have to decrement from the highest number until the sum is 100. If the sum is less than 100, I have to increment the lowest number until the sum is 100. I can do this by looping through the list, assigning the values to placeholder variables and testing which is higher or lower, but I'm wondering if any gurus out there could suggest a super cool and efficient way to do this? The code below basically outlines what I'm trying to achieve:
    var splitValues = new List<double?>();
    splitValues.Add(Math.Round(assetSplit.EquityTypeSplit() ?? 0));
    splitValues.Add(Math.Round(assetSplit.PropertyTypeSplit() ?? 0));
    splitValues.Add(Math.Round(assetSplit.FixedInterestTypeSplit() ?? 0));
    splitValues.Add(Math.Round(assetSplit.CashTypeSplit() ?? 0));

    var listSum = splitValues.Sum(split => split.Value);
    if (listSum != 100)
    {
        if (listSum > 100)
        {
            // how to get highest value and decrement by 1 until listSum == 100
            // then reassign back into the splitValues list?
            var highest = // ??
        }
        else
        {
            // how to get lowest where value is > 0, and increment by 1 until listSum == 100
            // then reassign back into the splitValues list?
            var lowest = // ??
        }
    }
    Update: the list has to remain in the same order as the items are added.
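    A small sketch of one way to do the adjustment in place while keeping the list order, reusing the variable names from the snippet above; everything else (including the assumption that at least one value is greater than zero when incrementing) is illustrative:
    using System;
    using System.Collections.Generic;
    using System.Linq;

    static void NormaliseTo100(List<double?> splitValues)
    {
        var listSum = splitValues.Sum(split => split.Value);

        while (listSum > 100)
        {
            // decrement the current highest value by 1
            int hi = splitValues.IndexOf(splitValues.Max());
            splitValues[hi] = splitValues[hi].Value - 1;
            listSum--;
        }

        while (listSum < 100)
        {
            // increment the lowest value that is still greater than zero by 1
            var lowest = splitValues.Where(v => v.Value > 0).Min();
            int lo = splitValues.IndexOf(lowest);
            splitValues[lo] = splitValues[lo].Value + 1;
            listSum++;
        }
    }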

    Read the article

  • How to duplicate an object in a list and update a property of the duplicated objects?

    - by user359706
    Hello. What would be the best way to duplicate an object placed in a list of items and change a property of the duplicated objects? I thought to proceed in the following manner:
    - get the object in the list by "ref" + "article"
    - clone the found object as many times as desired (n times)
    - remove the found object
    - add the clones to the list
    What do you think? A concrete example:
    private List<Product> listProducts;
    listProducts = new List<Product>();

    Product objProduct_1 = new Product();
    objProduct_1.ref = "001";
    objProduct_1.article = "G900";
    objProduct_1.quantity = 30;
    listProducts.Add(objProduct_1);

    Product objProduct_2 = new Product();
    objProduct_2.ref = "002";
    objProduct_2.article = "G900";
    objProduct_2.quantity = 35;
    listProducts.Add(objProduct_2);
    Desired method:
    public void updateProductsList(List<Product> paramListProducts, Product objProductToUpdate, int nbrDuplication, int newQuantity)
    {
        ...
    }
    Example call:
    updateProductsList(listProducts, objProduct_1, 2, 15);
    Expected result: replace the following object:
    ref = "001"; article = "G900"; quantity = 30;
    with:
    ref = "001"; article = "G900"; quantity = 15;
    ref = "001"; article = "G900"; quantity = 15;
    Is the algorithm correct? Would you have an idea of how to implement the "updateProductsList" method? Thank you in advance for your help.
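    A rough sketch of how the updateProductsList method could be implemented; the Product class below is assumed from the example (with the property renamed to Ref, because ref is a reserved word in C#), and the clones are inserted where the original sat so the list order is preserved:
    using System.Collections.Generic;
    using System.Linq;

    // Assumed shape of the Product class from the question ("ref" renamed to Ref).
    public class Product
    {
        public string Ref { get; set; }
        public string Article { get; set; }
        public int Quantity { get; set; }

        public Product Clone()
        {
            return new Product { Ref = Ref, Article = Article, Quantity = Quantity };
        }
    }

    public static class ProductListUpdater
    {
        // Replaces the product matching Ref + Article with nbrDuplication clones
        // carrying the new quantity.
        public static void UpdateProductsList(List<Product> products, Product productToUpdate,
                                              int nbrDuplication, int newQuantity)
        {
            int index = products.FindIndex(p => p.Ref == productToUpdate.Ref &&
                                                p.Article == productToUpdate.Article);
            if (index < 0) return; // nothing to replace

            products.RemoveAt(index);

            var clones = Enumerable.Range(0, nbrDuplication)
                                   .Select(_ => { var c = productToUpdate.Clone(); c.Quantity = newQuantity; return c; });

            products.InsertRange(index, clones);
        }
    }
    Called as ProductListUpdater.UpdateProductsList(listProducts, objProduct_1, 2, 15); this produces the two quantity-15 copies described above.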

    Read the article

  • I can't ping my DMZ zone from the local inside PC

    - by Big Denzel
    Hi everybody. Can anyone please help me with the following issue? I have a Cisco ASA 5520 configured on my network. I can't ping my DMZ interface from a local inside-network PC, so the only way to ping the DMZ is directly from the Cisco ASA firewall; from there I can ping all three interfaces (Inside, Outside and DMZ). But no PC from the inside network can access the DMZ. Can anyone please help? I thank you all in advance. Below is my Cisco ASA 5520 firewall "show run":
    ASA-FW# sh run
    : Saved
    :
    ASA Version 7.0(8)
    !
    hostname ASA-FW
    enable password encrypted
    passwd encrypted
    names
    dns-guard
    !
    interface GigabitEthernet0/0
     description "Link-To-GW-Router"
     nameif outside
     security-level 0
     ip address 41.223.156.109 255.255.255.248
    !
    interface GigabitEthernet0/1
     description "Link-To-Local-LAN"
     nameif inside
     security-level 100
     ip address 10.1.4.1 255.255.252.0
    !
    interface GigabitEthernet0/2
     description "Link-To-DMZ"
     nameif dmz
     security-level 50
     ip address 172.16.16.1 255.255.255.0
    !
    interface GigabitEthernet0/3
     shutdown
     no nameif
     no security-level
     no ip address
    !
    interface Management0/0
     description "Local-Management-Interface"
     no nameif
     no security-level
     ip address 192.168.192.1 255.255.255.0
    !
    ftp mode passive
    access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.107 eq smtp
    access-list OUT-TO-DMZ extended permit tcp any host 41.223.156.106 eq www
    access-list OUT-TO-DMZ extended permit icmp any any log
    access-list OUT-TO-DMZ extended deny ip any any
    access-list inside extended permit tcp any any eq pop3
    access-list inside extended permit tcp any any eq smtp
    access-list inside extended permit tcp any any eq ssh
    access-list inside extended permit tcp any any eq telnet
    access-list inside extended permit tcp any any eq https
    access-list inside extended permit udp any any eq domain
    access-list inside extended permit tcp any any eq domain
    access-list inside extended permit tcp any any eq www
    access-list inside extended permit ip any any
    access-list inside extended permit icmp any any
    access-list dmz extended permit ip any any
    access-list dmz extended permit icmp any any
    access-list cap extended permit ip 10.1.4.0 255.255.252.0 172.16.16.0 255.255.255.0
    access-list cap extended permit ip 172.16.16.0 255.255.255.0 10.1.4.0 255.255.252.0
    no pager
    logging enable
    logging buffer-size 5000
    logging monitor warnings
    logging trap warnings
    mtu outside 1500
    mtu inside 1500
    mtu dmz 1500
    no failover
    asdm image disk0:/asdm-508.bin
    no asdm history enable
    arp timeout 14400
    global (outside) 1 interface
    nat (inside) 1 0.0.0.0 0.0.0.0
    static (dmz,outside) tcp 41.223.156.106 www 172.16.16.80 www netmask 255.255.255.255
    static (dmz,outside) tcp 41.223.156.107 smtp 172.16.16.25 smtp netmask 255.255.255.255
    static (inside,dmz) 10.1.0.0 10.1.16.0 netmask 255.255.252.0
    access-group OUT-TO-DMZ in interface outside
    access-group inside in interface inside
    access-group dmz in interface dmz
    route outside 0.0.0.0 0.0.0.0 41.223.156.108 1
    timeout xlate 3:00:00
    timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02
    timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00
    timeout mgcp-pat 0:05:00 sip 0:30:00 sip_media 0:02:00
    timeout uauth 0:05:00 absolute
    http server enable
    http 10.1.4.0 255.255.252.0 inside
    no snmp-server location
    no snmp-server contact
    snmp-server enable traps snmp authentication linkup linkdown coldstart
    crypto ipsec security-association lifetime seconds 28800
    crypto ipsec security-association lifetime kilobytes 4608000
    telnet timeout 5
    ssh timeout 5
    console timeout 0
    management-access inside
    !
    !
     match default-inspection-traffic
    !
    !
    policy-map global_policy
     class inspection_default
      inspect dns maximum-length 512
      inspect ftp
      inspect h323 h225
      inspect h323 ras
      inspect netbios
      inspect rsh
      inspect rtsp
      inspect skinny
      inspect esmtp
      inspect sqlnet
      inspect sunrpc
      inspect tftp
      inspect sip
      inspect xdmcp
    !
    service-policy global_policy global
    Cryptochecksum:
    : end
    ASA-FW#
    Please help. Big Denzel

    Read the article

  • Cisco Pix how to add an additional block of static ip addresses for nat?

    - by Scott Szretter
    I have a pix 501 with 5 static ip addresses. My isp just gave me 5 more. I am trying to figure out how to add the new block and then how to nat/open at least one of them to an inside machine. So far, I named a new interface "intf2", ip range is 71.11.11.58 - 62 (gateway should 71.11.11.57) imgsvr is the machine I want to nat to one of the (71.11.11.59) new ip addresses. mail (.123) is an example of a machine that is mapped to the current existing 5 ip block (96.11.11.121 gate / 96.11.11.122-127) and working fine. Building configuration... : Saved : PIX Version 6.3(4) interface ethernet0 auto interface ethernet0 vlan1 logical interface ethernet1 auto nameif ethernet0 outside security0 nameif ethernet1 inside security100 nameif vlan1 intf2 security1 enable password xxxxxxxxx encrypted passwd xxxxxxxxx encrypted hostname xxxxxxxPIX domain-name xxxxxxxxxxx no fixup protocol dns fixup protocol ftp 21 fixup protocol h323 h225 1720 fixup protocol h323 ras 1718-1719 fixup protocol http 80 fixup protocol rsh 514 fixup protocol rtsp 554 fixup protocol sip 5060 fixup protocol sip udp 5060 fixup protocol skinny 2000 no fixup protocol smtp 25 fixup protocol sqlnet 1521 fixup protocol tftp 69 names ...snip... name 192.168.10.13 mail name 192.168.10.29 imgsvr object-group network vpn1 network-object mail 255.255.255.255 access-list outside_access_in permit tcp any host 96.11.11.124 eq www access-list outside_access_in permit tcp any host 96.11.11.124 eq https access-list outside_access_in permit tcp any host 96.11.11.124 eq 3389 access-list outside_access_in permit tcp any host 96.11.11.123 eq https access-list outside_access_in permit tcp any host 96.11.11.123 eq www access-list outside_access_in permit tcp any host 96.11.11.125 eq smtp access-list outside_access_in permit tcp any host 96.11.11.125 eq https access-list outside_access_in permit tcp any host 96.11.11.125 eq 10443 access-list outside_access_in permit tcp any host 96.11.11.126 eq smtp access-list outside_access_in permit tcp any host 96.11.11.126 eq https access-list outside_access_in permit tcp any host 96.11.11.126 eq 10443 access-list outside_access_in deny ip any any access-list inside_nat0_outbound permit ip 192.168.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.17.0.0 255.255.0.0 IPPool2 255.255.255.0 access-list inside_nat0_outbound permit ip 172.16.0.0 255.255.0.0 IPPool2 255.255.255.0 ...snip... 
access-list inside_access_in deny tcp any any eq smtp access-list inside_access_in permit ip any any pager lines 24 logging on logging buffered notifications mtu outside 1500 mtu inside 1500 ip address outside 96.11.11.122 255.255.255.248 ip address inside 192.168.10.15 255.255.255.0 ip address intf2 71.11.11.58 255.255.255.248 ip audit info action alarm ip audit attack action alarm pdm location exchange 255.255.255.255 inside pdm location mail 255.255.255.255 inside pdm location IPPool2 255.255.255.0 outside pdm location 96.11.11.122 255.255.255.255 inside pdm location 192.168.10.1 255.255.255.255 inside pdm location 192.168.10.6 255.255.255.255 inside pdm location mail-gate1 255.255.255.255 inside pdm location mail-gate2 255.255.255.255 inside pdm location imgsvr 255.255.255.255 inside pdm location 71.11.11.59 255.255.255.255 intf2 pdm logging informational 100 pdm history enable arp timeout 14400 global (outside) 1 interface global (outside) 2 96.11.11.123 global (intf2) 3 interface global (intf2) 4 71.11.11.59 nat (inside) 0 access-list inside_nat0_outbound nat (inside) 2 mail 255.255.255.255 0 0 nat (inside) 1 0.0.0.0 0.0.0.0 0 0 static (inside,outside) tcp 96.11.11.123 smtp mail smtp netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 https mail https netmask 255.255.255.255 0 0 static (inside,outside) tcp 96.11.11.123 www mail www netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.124 ts netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.126 mail-gate2 netmask 255.255.255.255 0 0 static (inside,outside) 96.11.11.125 mail-gate1 netmask 255.255.255.255 0 0 access-group outside_access_in in interface outside access-group inside_access_in in interface inside route outside 0.0.0.0 0.0.0.0 96.11.11.121 1 route intf2 0.0.0.0 0.0.0.0 71.11.11.57 2 timeout xlate 0:05:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 rpc 0:10:00 h225 1:00:00 timeout h323 0:05:00 mgcp 0:05:00 sip 0:30:00 sip_media 0:02:00 timeout uauth 0:05:00 absolute floodguard enable ...snip... : end [OK] Thanks!

    Read the article

  • How can we make a class with linked-list recursion?

    - by epsilon_G
    Hi, I'm a newbie in the C++ world and OOP, and I'd like to do some things with data structures. I'd like to merge two linked lists. I made a class "List" earlier which contains all I need to program with a list: Add, Display, and so on. The problem is that I programmed the function "Merge2lists", which gives us the third list. How can I display the third list in the main program after using "Merge2lists"? My problem is with the class and the OOP syntax; please show me the implementation in the main program. Otherwise, how can I call a function that is declared in the class and takes pointer arguments from the main program? Thanks.
    class Liste
    {
    private:
        struct node
        {
            int elem;
            node *next;
        } *p;
    public:
        LLC();
        void Merge2lists(node* a, node* b, node *&result);
        ~LLC();
    };

    void List::Merge2lists(node* a, node* b, node *&result)
    {
        result = NULL;
        if (a == NULL) { result = b; return; }
        else if (b == NULL) { result = a; return; }
        if (a->elem <= b->elem)
        {
            result = a;
            Merge2lists(a->next, b, result->next);
        }
        else
        {
            result = b;
            Merge2lists(a, b->next, result->next);
        }
        return;
    }

    int main()
    {
        liste a;
        a.Aff_Val(46);
        a.Aff_Val(54);
        a.add_as_first(2);
        a.add_as_first(1);
        a.Display(); /* This displays the elements ... don't worry about it, it's easy to do */
        list liste2;
        b.Add(2);
        b.Add(14);
        b.Add(16);
        list result;
        Merge2lists(a, b, result); /* The problem is here: how can I use this in my program? */

    Read the article

  • Optimized OCR black/white pixel algorithm

    - by eagle
    I am writing a simple OCR solution for a finite set of characters. That is, I know the exact way all 26 letters in the alphabet will look like. I am using C# and am able to easily determine if a given pixel should be treated as black or white. I am generating a matrix of black/white pixels for every single character. So for example, the letter I (capital i), might look like the following: 01110 00100 00100 00100 01110 Note: all points, which I use later in this post, assume that the top left pixel is (0, 0), bottom right pixel is (4, 4). 1's represent black pixels, and 0's represent white pixels. I would create a corresponding matrix in C# like this: CreateLetter("I", new List<List<bool>>() { new List<bool>() { false, true, true, true, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, false, true, false, false }, new List<bool>() { false, true, true, true, false } }); I know I could probably optimize this part by using a multi-dimensional array instead, but let's ignore that for now, this is for illustrative purposes. Every letter is exactly the same dimensions, 10px by 11px (10px by 11px is the actual dimensions of a character in my real program. I simplified this to 5px by 5px in this posting since it is much easier to "draw" the letters using 0's and 1's on a smaller image). Now when I give it a 10px by 11px part of an image to analyze with OCR, it would need to run on every single letter (26) on every single pixel (10 * 11 = 110) which would mean 2,860 (26 * 110) iterations (in the worst case) for every single character. I was thinking this could be optimized by defining the unique characteristics of every character. So, for example, let's assume that the set of characters only consists of 5 distinct letters: I, A, O, B, and L. These might look like the following: 01110 00100 00100 01100 01000 00100 01010 01010 01010 01000 00100 01110 01010 01100 01000 00100 01010 01010 01010 01000 01110 01010 00100 01100 01110 After analyzing the unique characteristics of every character, I can significantly reduce the number of tests that need to be performed to test for a character. For example, for the "I" character, I could define it's unique characteristics as having a black pixel in the coordinate (3, 0) since no other characters have that pixel as black. So instead of testing 110 pixels for a match on the "I" character, I reduced it to a 1 pixel test. This is what it might look like for all these characters: var LetterI = new OcrLetter() { Name = "I", BlackPixels = new List<Point>() { new Point (3, 0) } } var LetterA = new OcrLetter() { Name = "A", WhitePixels = new List<Point>() { new Point(2, 4) } } var LetterO = new OcrLetter() { Name = "O", BlackPixels = new List<Point>() { new Point(3, 2) }, WhitePixels = new List<Point>() { new Point(2, 2) } } var LetterB = new OcrLetter() { Name = "B", BlackPixels = new List<Point>() { new Point(3, 1) }, WhitePixels = new List<Point>() { new Point(3, 2) } } var LetterL = new OcrLetter() { Name = "L", BlackPixels = new List<Point>() { new Point(1, 1), new Point(3, 4) }, WhitePixels = new List<Point>() { new Point(2, 2) } } This is challenging to do manually for 5 characters and gets much harder the greater the amount of letters that are added. You also want to guarantee that you have the minimum set of unique characteristics of a letter since you want it to be optimized as much as possible. 
I want to create an algorithm that will identify the unique characteristics of all the letters and would generate similar code to that above. I would then use this optimized black/white matrix to identify characters. How do I take the 26 letters that have all their black/white pixels filled in (e.g. the CreateLetter code block) and convert them to an optimized set of unique characteristics that define a letter (e.g. the new OcrLetter() code block)? And how would I guarantee that it is the most efficient definition set of unique characteristics (e.g. instead of defining 6 points as the unique characteristics, there might be a way to do it with 1 or 2 points, as the letter "I" in my example was able to). An alternative solution I've come up with is using a hash table, which will reduce it from 2,860 iterations to 110 iterations, a 26 time reduction. This is how it might work: I would populate it with data similar to the following: Letters["01110 00100 00100 00100 01110"] = "I"; Letters["00100 01010 01110 01010 01010"] = "A"; Letters["00100 01010 01010 01010 00100"] = "O"; Letters["01100 01010 01100 01010 01100"] = "B"; Now when I reach a location in the image to process, I convert it to a string such as: "01110 00100 00100 00100 01110" and simply find it in the hash table. This solution seems very simple, however, this still requires 110 iterations to generate this string for each letter. In big O notation, the algorithm is the same since O(110N) = O(2860N) = O(N) for N letters to process on the page. However, it is still improved by a constant factor of 26, a significant improvement (e.g. instead of it taking 26 minutes, it would take 1 minute). Update: Most of the solutions provided so far have not addressed the issue of identifying the unique characteristics of a character and rather provide alternative solutions. I am still looking for this solution which, as far as I can tell, is the only way to achieve the fastest OCR processing. I just came up with a partial solution: For each pixel, in the grid, store the letters that have it as a black pixel. Using these letters: I A O B L 01110 00100 00100 01100 01000 00100 01010 01010 01010 01000 00100 01110 01010 01100 01000 00100 01010 01010 01010 01000 01110 01010 00100 01100 01110 You would have something like this: CreatePixel(new Point(0, 0), new List<Char>() { }); CreatePixel(new Point(1, 0), new List<Char>() { 'I', 'B', 'L' }); CreatePixel(new Point(2, 0), new List<Char>() { 'I', 'A', 'O', 'B' }); CreatePixel(new Point(3, 0), new List<Char>() { 'I' }); CreatePixel(new Point(4, 0), new List<Char>() { }); CreatePixel(new Point(0, 1), new List<Char>() { }); CreatePixel(new Point(1, 1), new List<Char>() { 'A', 'B', 'L' }); CreatePixel(new Point(2, 1), new List<Char>() { 'I' }); CreatePixel(new Point(3, 1), new List<Char>() { 'A', 'O', 'B' }); // ... CreatePixel(new Point(2, 2), new List<Char>() { 'I', 'A', 'B' }); CreatePixel(new Point(3, 2), new List<Char>() { 'A', 'O' }); // ... CreatePixel(new Point(2, 4), new List<Char>() { 'I', 'O', 'B', 'L' }); CreatePixel(new Point(3, 4), new List<Char>() { 'I', 'A', 'L' }); CreatePixel(new Point(4, 4), new List<Char>() { }); Now for every letter, in order to find the unique characteristics, you need to look at which buckets it belongs to, as well as the amount of other characters in the bucket. So let's take the example of "I". We go to all the buckets it belongs to (1,0; 2,0; 3,0; ...; 3,4) and see that the one with the least amount of other characters is (3,0). 
In fact, it only has 1 character, meaning it must be an "I" in this case, and we found our unique characteristic. You can also do the same for pixels that would be white. Notice that bucket (2,0) contains all the letters except for "L", this means that it could be used as a white pixel test. Similarly, (2,4) doesn't contain an 'A'. Buckets that either contain all the letters or none of the letters can be discarded immediately, since these pixels can't help define a unique characteristic (e.g. 1,1; 4,0; 0,1; 4,4). It gets trickier when you don't have a 1 pixel test for a letter, for example in the case of 'O' and 'B'. Let's walk through the test for 'O'... It's contained in the following buckets: // Bucket Count Letters // 2,0 4 I, A, O, B // 3,1 3 A, O, B // 3,2 2 A, O // 2,4 4 I, O, B, L Additionally, we also have a few white pixel tests that can help: (I only listed those that are missing at most 2). The Missing Count was calculated as (5 - Bucket.Count). // Bucket Missing Count Missing Letters // 1,0 2 A, O // 1,1 2 I, O // 2,2 2 O, L // 3,4 2 O, B So now we can take the shortest black pixel bucket (3,2) and see that when we test for (3,2) we know it is either an 'A' or an 'O'. So we need an easy way to tell the difference between an 'A' and an 'O'. We could either look for a black pixel bucket that contains 'O' but not 'A' (e.g. 2,4) or a white pixel bucket that contains an 'O' but not an 'A' (e.g. 1,1). Either of these could be used in combination with the (3,2) pixel to uniquely identify the letter 'O' with only 2 tests. This seems like a simple algorithm when there are 5 characters, but how would I do this when there are 26 letters and a lot more pixels overlapping? For example, let's say that after the (3,2) pixel test, it found 10 different characters that contain the pixel (and this was the least from all the buckets). Now I need to find differences from 9 other characters instead of only 1 other character. How would I achieve my goal of getting the least amount of checks as possible, and ensure that I am not running extraneous tests?
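    One way to attack the "find a small set of distinguishing tests" part is a greedy elimination: for the target letter, repeatedly pick the pixel test (black or white, matching the target) that rules out the most remaining candidate letters, until only the target is left. This is a hedged sketch rather than a guaranteed-minimal answer (true minimality is a set-cover problem), and the bool[,] representation and all names are assumptions:
    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A test means "the glyph has pixel (X, Y) equal to Expected".
    public sealed class PixelTest
    {
        public int X;
        public int Y;
        public bool Expected;
    }

    public static class DistinguishingTests
    {
        // Greedily picks tests that the target letter satisfies and that eliminate
        // as many other letters as possible per step.
        public static List<PixelTest> For(string target, IDictionary<string, bool[,]> letters)
        {
            var grid = letters[target];
            int w = grid.GetLength(0), h = grid.GetLength(1);
            var candidates = new HashSet<string>(letters.Keys.Where(k => k != target));
            var tests = new List<PixelTest>();

            while (candidates.Count > 0)
            {
                PixelTest best = null;
                int bestEliminated = -1;

                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h; y++)
                    {
                        bool expected = grid[x, y];
                        int eliminated = candidates.Count(c => letters[c][x, y] != expected);
                        if (eliminated > bestEliminated)
                        {
                            bestEliminated = eliminated;
                            best = new PixelTest { X = x, Y = y, Expected = expected };
                        }
                    }

                if (bestEliminated == 0)
                    throw new InvalidOperationException(target + " is identical to another letter.");

                tests.Add(best);
                candidates.RemoveWhere(c => letters[c][best.X, best.Y] != best.Expected);
            }

            return tests;
        }
    }
    On the five-letter example above a single test suffices for "I" and two tests are needed for "O", in line with the hand-derived analysis.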

    Read the article

  • OData – The easiest service I can create: now with updates

    - by Jon Dalberg
    The other day I created a simple NastyWord service exposed via OData. It was read-only and used an in-memory backing store for the words. Today I’ll modify it to use a file instead of a list and I’ll accept new nasty words by implementing IUpdatable directly. The first thing to do is enable the service to accept new entries. This is done at configuration time by adding the “WriteAppend” access rule: 1: public class NastyWords : DataService<NastyWordsDataSource> 2: { 3: // This method is called only once to initialize service-wide policies. 4: public static void InitializeService(DataServiceConfiguration config) 5: { 6: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 7: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 8: } 9: }   Next I placed a file, NastyWords.txt, in the “App_Data” folder and added a few *choice* words to start. This required one simple change to our NastyWordDataSource.cs file: 1: public NastyWordsDataSource() 2: { 3: UpdateFromSource(); 4: } 5:   6: private void UpdateFromSource() 7: { 8: var words = File.ReadAllLines(pathToFile); 9: NastyWords = (from w in words 10: select new NastyWord { Word = w }).AsQueryable(); 11: }   Nothing too shocking here, just reading each line from the NastyWords.txt file and exposing them. Next, I implemented IUpdatable which comes with a boat-load of methods. We don’t need all of them for now since we are only concerned with allowing new values. Here are the methods we must implement, all the others throw a NotImplementedException: 1: public object CreateResource(string containerName, string fullTypeName) 2: { 3: var nastyWord = new NastyWord(); 4: pendingUpdates.Add(nastyWord); 5: return nastyWord; 6: } 7:   8: public object ResolveResource(object resource) 9: { 10: return resource; 11: } 12:   13: public void SaveChanges() 14: { 15: var intersect = (from w in pendingUpdates 16: select w.Word).Intersect(from n in NastyWords 17: select n.Word); 18:   19: if (intersect.Count() > 0) 20: throw new DataServiceException(500, "duplicate entry"); 21:   22: var lines = from w in pendingUpdates 23: select w.Word; 24:   25: File.AppendAllLines(pathToFile, 26: lines, 27: Encoding.UTF8); 28:   29: pendingUpdates.Clear(); 30:   31: UpdateFromSource(); 32: } 33:   34: public void SetValue(object targetResource, string propertyName, object propertyValue) 35: { 36: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 37: }   I use a simple list to contain the pending updates and only commit them when the “SaveChanges” method is called. Here’s the order these methods are called in our service during an insert: CreateResource – here we just instantiate a new NastyWord and stick a reference to it in our pending updates list. SetValue – this is where the “Word” property of the NastyWord instance is set. SaveChanges – get the list of pending updates, barfing on duplicates, write them to the file and clear our pending list. ResolveResource – the newly created resource will be returned directly here since we aren’t dealing with “handles” to objects but the actual objects themselves. Not too bad, eh? I didn’t find this documented anywhere but a little bit of digging in the OData spec and use of Fiddler made it pretty easy to figure out. 
Here is some client code which would add a new nasty word: 1: static void Main(string[] args) 2: { 3: var svc = new ServiceReference1.NastyWordsDataSource(new Uri("http://localhost.:60921/NastyWords.svc")); 4: svc.AddToNastyWords(new ServiceReference1.NastyWord() { Word = "shat" }); 5:   6: svc.SaveChanges(); 7: }   Here’s all of the code so far for to implement the service: 1: using System; 2: using System.Collections.Generic; 3: using System.Data.Services; 4: using System.Data.Services.Common; 5: using System.Linq; 6: using System.ServiceModel.Web; 7: using System.Web; 8: using System.IO; 9: using System.Text; 10:   11: namespace ONasty 12: { 13: [DataServiceKey("Word")] 14: public class NastyWord 15: { 16: public string Word { get; set; } 17: } 18:   19: public class NastyWordsDataSource : IUpdatable 20: { 21: private List<NastyWord> pendingUpdates = new List<NastyWord>(); 22: private string pathToFile = @"path to your\App_Data\NastyWords.txt"; 23:   24: public NastyWordsDataSource() 25: { 26: UpdateFromSource(); 27: } 28:   29: private void UpdateFromSource() 30: { 31: var words = File.ReadAllLines(pathToFile); 32: NastyWords = (from w in words 33: select new NastyWord { Word = w }).AsQueryable(); 34: } 35:   36: public IQueryable<NastyWord> NastyWords { get; private set; } 37:   38: public void AddReferenceToCollection(object targetResource, string propertyName, object resourceToBeAdded) 39: { 40: throw new NotImplementedException(); 41: } 42:   43: public void ClearChanges() 44: { 45: pendingUpdates.Clear(); 46: } 47:   48: public object CreateResource(string containerName, string fullTypeName) 49: { 50: var nastyWord = new NastyWord(); 51: pendingUpdates.Add(nastyWord); 52: return nastyWord; 53: } 54:   55: public void DeleteResource(object targetResource) 56: { 57: throw new NotImplementedException(); 58: } 59:   60: public object GetResource(IQueryable query, string fullTypeName) 61: { 62: throw new NotImplementedException(); 63: } 64:   65: public object GetValue(object targetResource, string propertyName) 66: { 67: throw new NotImplementedException(); 68: } 69:   70: public void RemoveReferenceFromCollection(object targetResource, string propertyName, object resourceToBeRemoved) 71: { 72: throw new NotImplementedException(); 73: } 74:   75: public object ResetResource(object resource) 76: { 77: throw new NotImplementedException(); 78: } 79:   80: public object ResolveResource(object resource) 81: { 82: return resource; 83: } 84:   85: public void SaveChanges() 86: { 87: var intersect = (from w in pendingUpdates 88: select w.Word).Intersect(from n in NastyWords 89: select n.Word); 90:   91: if (intersect.Count() > 0) 92: throw new DataServiceException(500, "duplicate entry"); 93:   94: var lines = from w in pendingUpdates 95: select w.Word; 96:   97: File.AppendAllLines(pathToFile, 98: lines, 99: Encoding.UTF8); 100:   101: pendingUpdates.Clear(); 102:   103: UpdateFromSource(); 104: } 105:   106: public void SetReference(object targetResource, string propertyName, object propertyValue) 107: { 108: throw new NotImplementedException(); 109: } 110:   111: public void SetValue(object targetResource, string propertyName, object propertyValue) 112: { 113: targetResource.GetType().GetProperty(propertyName).SetValue(targetResource, propertyValue, null); 114: } 115: } 116:   117: public class NastyWords : DataService<NastyWordsDataSource> 118: { 119: // This method is called only once to initialize service-wide policies. 
120: public static void InitializeService(DataServiceConfiguration config) 121: { 122: config.SetEntitySetAccessRule("*", EntitySetRights.AllRead | EntitySetRights.WriteAppend); 123: config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; 124: } 125: } 126: } Next time we’ll allow removing nasty words. Enjoy!

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with arbitrary number of arguments, but I'm failing miserably. The first thing I tried is to define an interface for a function which gets memoized automatically upon execution: class EMAFunction:IFunction { Dictionary<List<object>, List<object>> map; class EMAComparer : IEqualityComparer<List<object>> { private int _multiplier = 97; public bool Equals(List<object> a, List<object> b) { List<object> aVals = (List<object>)a[0]; int aPeriod = (int)a[1]; List<object> bVals = (List<object>)b[0]; int bPeriod = (int)b[1]; return (aVals.Count == bVals.Count) && (aPeriod == bPeriod); } public int GetHashCode(List<object> obj) { // Don't compute hash code on null object. if (obj == null) { return 0; } // Get length. int length = obj.Count; List<object> vals = (List<object>) obj[0]; int period = (int) obj[1]; return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;; } } public EMAFunction() { NumParams = 2; Name = "EMA"; map = new Dictionary<List<object>, List<object>>(new EMAComparer()); } #region IFunction Members public int NumParams { get; set; } public string Name { get; set; } public object Execute(List<object> parameters) { if (parameters.Count != NumParams) throw new ArgumentException("The num params doesn't match!"); if (!map.ContainsKey(parameters)) { //map.Add(parameters, List<double> values = new List<double>(); List<object> asObj = (List<object>)parameters[0]; foreach (object val in asObj) { values.Add((double)val); } int period = (int)parameters[1]; asObj.Clear(); List<double> ema = TechFunctions.ExponentialMovingAverage(values, period); foreach (double val in ema) { asObj.Add(val); } map.Add(parameters, asObj); } return map[parameters]; } public void ClearMap() { map.Clear(); } #endregion } Here are my tests of the function: private void MemoizeTest() { DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024); List<String> labels = dataSet.DataLabels; Stopwatch sw = new Stopwatch(); IFunction emaFunc = new EMAFunction(); List<object> parameters = new List<object>(); int numRuns = 1000; long sumTicks = 0; parameters.Add(dataSet.GetValues("open")); parameters.Add(12); // First call for(int i = 0; i < numRuns; ++i) { emaFunc.ClearMap();// remove any memoization mappings sw.Start(); emaFunc.Execute(parameters); sw.Stop(); sumTicks += sw.ElapsedTicks; } Console.WriteLine("Average ticks not-memoized " + (sumTicks/numRuns)); sumTicks = 0; // Repeat call for (int i = 0; i < numRuns; ++i) { sw.Start(); emaFunc.Execute(parameters); sw.Stop(); sumTicks += sw.ElapsedTicks; } Console.WriteLine("Average ticks memoized " + (sumTicks/numRuns)); } The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way: Average ticks not-memoized 106,182 Average ticks memoized 198,854 I tried doubling the data instances to 2048, but the results were about the same: Average ticks not-memoized 232,579 Average ticks memoized 446,280 I did notice that it was correctly finding the parameters in the map and it going directly to the map, but the performance was still slow... I'm either open for troubleshooting help with this example, or if you have a better solution to the problem then please let me know what it is.
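    As a point of comparison, a much simpler shape to experiment with is a generic two-argument memoizer keyed on something with cheap equality; this is only a sketch, not a drop-in fix for the code above. Two things in the posted benchmark are also worth noting: the Stopwatch is never Reset between iterations, so ElapsedTicks accumulates across runs, and Execute clears and rewrites the list held in parameters[0], so the cached "input" is mutated after the first call.
    using System;
    using System.Collections.Generic;

    public static class Memoizer
    {
        // Wraps a two-argument function with a dictionary cache. Tuple keys use
        // structural equality, so for reference types (like List<double>) this
        // caches per instance, which is cheap but assumes the list is not mutated.
        public static Func<T1, T2, TResult> Memoize<T1, T2, TResult>(Func<T1, T2, TResult> f)
        {
            var cache = new Dictionary<Tuple<T1, T2>, TResult>();
            return (a, b) =>
            {
                var key = Tuple.Create(a, b);
                TResult result;
                if (!cache.TryGetValue(key, out result))
                {
                    result = f(a, b);
                    cache[key] = result;
                }
                return result;
            };
        }
    }
    For example: var ema = Memoizer.Memoize<List<double>, int, List<double>>(TechFunctions.ExponentialMovingAverage); repeated calls with the same list instance and period then skip the recomputation.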

    Read the article

  • Pinning Projects and Solutions with Visual Studio 2010

    - by ScottGu
    This is the twenty-fourth in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a very small, but still useful, feature of VS 2010 – the ability to “pin” projects and solutions to both the Windows 7 taskbar as well VS 2010 Start Page.  This makes it easier to quickly find and open projects in the IDE. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] VS 2010 Jump List on Windows 7 Taskbar Windows 7 added support for customizing the taskbar at the bottom of your screen.  You can “pin” and re-arrange your application icons on it however you want. Most developers using Visual Studio 2010 on Windows 7 probably already know that they can “pin” the Visual Studio icon to the Windows 7 taskbar – making it always present.  What you might not yet have discovered, though, is that Visual Studio 2010 also exposes a Taskbar “jump list” that you can use to quickly find and load your most recently used projects as well. To activate this, simply right-click on the VS 2010 icon in the task bar and you’ll see a list of your most recent projects.  Clicking one will load it within Visual Studio 2010: Pinning Projects on the VS 2010 Jump List with Windows 7 One nice feature also supported by VS 2010 is the ability to optionally “pin” projects to the jump-list as well – which makes them always listed at the top.  To enable this, simply hover over the project you want to pin and then click the “pin” icon that appears on the right of it: When you click the pin the project will be added to a new “Pinned” list at the top of the jumplist: This enables you to always display your own list of projects at the top of the list.  You can optionally click and drag them to display in any order you want. VS 2010 Start Page and Project Pinning VS 2010 has a new “start page” that displays by default each time you launch a new instance of Visual Studio.  In addition to displaying learning and help resources, it also includes a “Recent Projects” section that you can use to quickly load previous projects that you have recently worked on: The “Recent Projects” section of the start page also supports the concept of “pinning” a link to projects you want to always keep in the list – regardless of how recently they’ve been accessed. To “pin” a project to the list you simply select the “pin” icon that appears when you hover over an item within the list: Once you’ve pinned a project to the start page list it will always show up in it (at least until you “unpin” it). Summary This project pinning support is a small but nice usability improvement with VS 2010 and can make it easier to quickly find and load projects/solutions.  If you work with a lot of projects at the same time it offers a nice shortcut to load them. Hope this helps, Scott

    Read the article

  • Logical Domain Modeling Made Simple

    - by Knut Vatsendvik
    How can logical domain modeling be made simple and collaborative? Many non-technical end-users, managers and business domain experts find it difficult to understand the visual models offered by many UML tools. This creates trouble in capturing and verifying the information that goes into a logical domain model. The tools are also too advanced and complex for a non-technical user to learn and use. We have therefore, in our current project, ended up with using Confluence as tool for designing the logical domain model with the help of a few very useful plugins. Big thanks to Ole Nymoen and Per Spilling for their expertise in this field that made this posting possible. Confluence Plugins Here is a list of Confluence plugins used in this solution. Install these before trying out the macros used below. Plugin Description Copy Space Allows a space administrator to copy a space, including the pages within the space Metadata Supports adding metadata to Wiki pages Label Manages labeling of pages Linking Contains macros for linking to templates, the dashboard and other Table Enhances the table capability in Confluence Creating a Confluence Space First we need to create a new confluence space for the domain model. Click the link Create a Space located below the list of spaces on the Dashboard. Please contact your Confluence administrator is you do not have permissions to do this.   For illustrative purpose all attributes and entities in this posting are based on my imaginary project manager domain model. When a logical domain model is good enough for being implemented, do a copy of the Confluence Space (see Copy Space plugin). In this way you create a stable version of the logical domain model while further design can continue with the new copied space. Typical will the implementation phase result in a database design and/or a XSD schema design. Add Space Templates Go to the Home page of your Confluence Space. Navigate to the Browse drop-down menu and click on Advanced. Then click the Templates option in the left navigation panel. Click Add New Space Template to add the following three templates. Name: attribute {metadata-list} || Name | | || Type | | || Format | | || Description | | {metadata-list} {add-label:attribute} Name: primary-type {metadata-list} || Name | || || Type | || || Format | || || Description | || {metadata-list} {add-label:primary-type} Name: complex-type {metadata-list} || Name | || || Description |  || {metadata-list} h3. Attributes || Name || Type || Format || Description || | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} | {add-label:complex-type,entity} The metadata-list macro (see Metadata plugin) will save a list of metadata values to the page. The add-label macro (see Label plugin) will automatically label the page. Primary Types Page Our first page to add will act as container for our primary types. Switch to Wiki markup when adding the following content to the page. | (+) {add-page:template=primary-type|parent=@self}Add new primary type{add-page} | {metadata-report:Name,Type,Format,Description|sort=Name|root=@self|pages=@descendents} Once the page is created, click the Add new primary type (create-page macro) to start creating a new pages. Here is an example of input to the LocalDate page. Embrace the LocalDate with square brackets [] to make the page linkable. Again switch to Wiki markup before editing. 
{metadata-list} || Name | [LocalDate] || || Type | Date || || Format | YYYY-MM-DD || || Description | Date in local time zone. YYYY = year, MM = month and DD = day || {metadata-list} {add-label:primary-type} The metadata-report macro will show a tabular report of all child pages.   Attributes Page The next page will act as container for all of our attributes. | (+) {add-page:template=attribute|parent=@self|title=attribute}Add new attribute{add-page} | {metadata-report:Name,Type,Format,Description|sort=Name|pages=@descendants} Here is an example of input to the startDate page. {metadata-list} || Name | [startDate] || || Type | [LocalDate] || || Format | {metadata-from:LocalDate|Format} || || Description | The projects start date || {metadata-list} {add-label:attribute} Using the metadata-from macro we fetch the text from the previously created LocalDate page. Complex Types Page The last page in this example shows how attributes can be combined together to form more complex types.   h3. Intro Overview of complex types in the domain model. | (+) {add-page:template=complex-type|parent=@self}Add a new complex type{add-page}\\ | {metadata-report:Name,Description|sort=Name|root=@self|pages=@descendents} Here is an example of input to the ProjectType page. {metadata-list} || Name | [ProjectType] || || Description | Represents a project || {metadata-list} h3. Attributes || Name || Type || Format || Description || | [projectId] | {metadata-from:projectId|Type} | {metadata-from:projectId|Format} | {metadata-from:projectId|Description} | | [name] | {metadata-from:name|Type} | {metadata-from:name|Format} | {metadata-from:name|Description} | | [description] | {metadata-from:description|Type} | {metadata-from:description|Format} | {metadata-from:description|Description} | | [startDate] | {metadata-from:startDate|Type} | {metadata-from:startDate|Format} | {metadata-from:startDate|Description} | {add-label:complex-type,entity} Gives us this Conclusion Using a web-based corporate Wiki like Confluence to create a logical domain model increases the collaboration between people with different roles in the enterprise. It’s my believe that this helps the domain model to be more accurate, and better documented. In our real project we have more pages than illustrated here to complete the documentation. We do also still use UML tools to create different types of diagrams that Confluence do not support. As a last tip, an ImageMap plugin can make those diagrams clickable when used in pages. Enjoy!

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories:
    Culture-sensitive comparison using a specific culture, defaulted to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
    Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
    Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)
    There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic these two MSDN articles are worth a read: Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say: “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.” I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is, “yes”, but in order to understand why we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.
    Example: Prototype Names
    Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries.
Each year the company would work on 15-25 new toy prototypes each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype will get a code name that begins with a letter following the previously created name using the alphabetical order of the Latin/Roman alphabet. Each new year prototype names would start back at “A”. The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese). To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server the list would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the the same list sorted in the same order and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this: Antílope (Spanish) Babouin (French) Cahoun (Czech) Diamond (English) Flosse (German) If you were to sort this list using ordinal rules you’d end up with: Antílope Babouin Diamond Flosse Cahoun This sort is no good because the entry for “C” appears the bottom of the list after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison. 
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from the “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
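    The prototype-name example translates almost directly into code. A small illustrative sketch (the names come from the post, with the Czech entry's accented Č restored from the description above; the rest is an assumption) showing how the three comparison families order the same list:
    using System;
    using System.Collections.Generic;

    class PrototypeNameSort
    {
        static void Main()
        {
            var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

            // Byte-by-byte: the accented Č pushes "Čahoun" after "Flosse".
            var ordinal = new List<string>(names);
            ordinal.Sort(StringComparer.Ordinal);

            // Linguistic rules, identical on every machine regardless of the user's culture.
            var invariant = new List<string>(names);
            invariant.Sort(StringComparer.InvariantCulture);

            // Linguistic rules of whatever culture the current thread happens to be using.
            var current = new List<string>(names);
            current.Sort(StringComparer.CurrentCulture);

            Console.WriteLine(string.Join(", ", ordinal));    // Antílope, Babouin, Diamond, Flosse, Čahoun
            Console.WriteLine(string.Join(", ", invariant));  // Antílope, Babouin, Čahoun, Diamond, Flosse
            Console.WriteLine(string.Join(", ", current));    // depends on the machine's culture
        }
    }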

    Read the article

  • Samba smb.conf read only and read/write accounts

    - by Pieter
    Below you can see my smb.conf. pieter is my admin user; read/write on the shares works fine with that account. Then I have a leecher account that has been added to the smb users with smbpasswd -a leecher; it is set up so this user only has read access to the shares. This works on MegaSam and on Thumbnails but not on my other drives; leecher does not get any access on the other shares.
    [global]
    security = user

    [MegaSam]
    comment = MegaSam
    path = /media/MegaSam
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    [SilentBob]
    comment = SilentBob
    path = /media/SilentBob
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    [Thumbnails]
    comment = Thumbnails
    path = /media/Thumbnails
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    [Downloads]
    comment = Downloads
    path = /media/Downloads
    browsable = yes
    guest ok = no
    read list = leecher
    write list = pieter
    create mask = 0755

    Read the article

  • How do I list installed software with the installed size?

    - by Lewis Goddard
    I would like to have a list of the installed software on my machine, with the disk space consumed by each package alongside. I would prefer to be able to order by largest/smallest, but that is not a necessity. I am the sort of person who will install software to try it, and never clean up after myself. As a result, my 7GB Ubuntu 11.04 partition (Windows and my data are on separate partitions, as well as a swap area) is suffering, and has started regularly showing warning messages. I have cleaned my browser cache, as well as everything under Package Cleaner in Ubuntu Tweak, and am left with 149.81 MB of free space.

    Read the article

  • How to convert aspell dictionary to simple list of words?

    - by rafalmag
    I want to get a list of all words from an aspell dictionary. I downloaded aspell and the aspell Polish dictionary, then unpacked it using: preunzip pl.cwl
    I got pl.wl:
    ... hippie hippies hippiesowski/bXxYc hippika/MNn hippis/NOqsT hippisiara/MnN hippiska/mMN hippisowski/bXxYc ...
    but the words appear with suffixes like /bXxYc or /MNn. These suffixes are defined in pl_affix.dat, which looks like:
    ...
    SFX n Y 5
    SFX n a 0 [^ij]a
    SFX n ja yj [^aeijoóuy]ja
    SFX n a 0 [aeijoóuy]ja
    SFX n ia ij [^drt]ia
    SFX n ia yj [drt]ia
    ...
    This is connected to declension and conjugation. How can I expand the first list to all forms (with all corresponding suffixes as defined in the .dat file)? BTW: I need this list for the spell-checker Jazzy.

    Read the article

  • Where can I find an exhaustive list of meta tags and what they do?

    - by leeand00
    It seems to me that there are a ton of <meta> tags for all sorts of different purposes out there... Though they all follow a similar format of <meta name="" content="" />, they serve a vast variety of purposes, from controlling the crawling of search-engine bots, to providing search-engine bots with descriptions of pages, to making sure a page displays correctly on a mobile device. These tags fall into so many different categories that I was wondering if anyone has a wiki or master list of possible meta tags and their content.

    Read the article

  • Algorithm to reduce a bitmap mask to a list of rectangles?

    - by mos
    Before I go spend an afternoon writing this myself, I thought I'd ask if there is an implementation already available, even just as a reference. The first image is an example of a bitmap mask that I would like to turn into a list of rectangles. A bad algorithm would return every set pixel as a 1x1 rectangle. A good algorithm would look like the second image, where it returns the coordinates of the orange and red rectangles. The fact that the rectangles overlap doesn't matter, just that there are only two returned. To summarize, the ideal result would be these two rectangles (x, y, w, h): [ { 3, 1, 2, 6 }, { 1, 3, 6, 2 } ]
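    A rough sketch of the simple strip-merging approach: scan each row for horizontal runs of set pixels, then merge vertically adjacent runs that share the same x-extent. It covers the mask exactly but is not the minimal/overlapping decomposition shown in the second image (for the plus-shaped example it returns three rectangles, not two); the bool[,] mask representation and all names are assumptions:
    using System.Collections.Generic;

    public struct Rect { public int X, Y, W, H; }

    public static class MaskToRects
    {
        // Decomposes a bitmap mask into axis-aligned rectangles by merging
        // identical horizontal runs on consecutive rows.
        public static List<Rect> Decompose(bool[,] mask)
        {
            int width = mask.GetLength(0), height = mask.GetLength(1);
            var result = new List<Rect>();
            var open = new Dictionary<(int x, int w), Rect>();   // rectangles still growing downwards

            for (int y = 0; y < height; y++)
            {
                var stillOpen = new Dictionary<(int x, int w), Rect>();
                int x = 0;
                while (x < width)
                {
                    if (!mask[x, y]) { x++; continue; }
                    int start = x;
                    while (x < width && mask[x, y]) x++;          // end of this horizontal run
                    var key = (start, x - start);

                    Rect r;
                    if (open.TryGetValue(key, out r))
                        r.H += 1;                                 // same run as the row above: grow it
                    else
                        r = new Rect { X = start, Y = y, W = x - start, H = 1 };

                    stillOpen[key] = r;
                }

                // runs that did not continue onto this row are finished
                foreach (var kv in open)
                    if (!stillOpen.ContainsKey(kv.Key))
                        result.Add(kv.Value);

                open = stillOpen;
            }

            result.AddRange(open.Values);
            return result;
        }
    }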

    Read the article
