Search Results

Search found 16987 results on 680 pages for 'second'.

  • Awk filtering values between two files when regions intersect (any solutions welcome)

    - by user964689
    This builds on an earlier question, "Awk conditional filter one file based on another (or other solutions)". I have an awk program that outputs a column from rows in a text file, refGene.txt, if values in that row match 2 out of 3 values in another text file. I need to include an additional criterion for finding a match between the two files: a row should also count as a match if the range of the 2 numerical values specified in each row in file 1 overlaps with the range of the two values in a row in refGene.txt.

    An example of a line in file 1:

        chr1 10 20
        chr2 10 20

    and an example line in file 2 (refGene.txt) of the matching columns ($3, $5, $6):

        chr1 5 30

    Currently the awk program does not treat this as a match: although the first column matches, neither the 2nd nor the 3rd column does. But I would like to treat it as a match, because the region 10-20 in file 1 is WITHIN the range 5-30 in refGene.txt. However, the second line in file 1 should NOT match, because the first column does not match, which is necessary. If there is a way to include cases where the range in file 1 overlaps at all with the range in refGene.txt, that would be really helpful. It should also replace the conditional statements below, as it would find all the cases they currently cover. Please let me know if my question is unclear. Any help is really appreciated, thanks in advance! (Solutions do not have to be in awk.) Rubal

        FILES=/files/*txt
        for f in $FILES ; do
          awk '
            BEGIN { FS = "\t"; }
            FILENAME == ARGV[1] { pair[ $1, $2, $3 ] = 1; next; }
            { if ( pair[ $3, $5, $6 ] == 1 ) { print $13; } }
          ' $(basename $f) /files/refGene.txt > /files/results/$(basename $f)
        done
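
    One possible approach, sketched here rather than taken from the thread: index file 1's ranges by chromosome, then treat two ranges [a,b] and [c,d] as overlapping whenever a <= d and c <= b. Column positions follow the question ($3 = chromosome, $5/$6 = range in refGene.txt); the file names are placeholders.

        awk '
          BEGIN { FS = "\t" }
          # Pass 1 (file 1): remember every range, keyed by chromosome.
          FILENAME == ARGV[1] { n[$1]++; lo[$1, n[$1]] = $2; hi[$1, n[$1]] = $3; next }
          # Pass 2 (refGene.txt): print $13 if any stored range overlaps.
          {
            for (i = 1; i <= n[$3]; i++)
              if (lo[$3, i] <= $6 && $5 <= hi[$3, i]) { print $13; break }
          }' file1.txt refGene.txt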

  • JS sort works in Firefox but not IE - can't work out why

    - by Max Williams
    I have a line in a JavaScript function that sorts an array of objects based on the order of another array of strings. This works in Firefox but not in IE, and I don't know why. Here's what my data looks like going into the sort call (I'm using an array of three items just to illustrate the point here).

        // cherry first, then the rest in alphabetical order
        originalData = ['cherry','apple','banana','clementine','nectarine','plum'];

        // data before sorting - note how clementine is the second item;
        // we want it to be after apple and banana
        csub = [
          {"value":"cherry","data":["cherry"],"result":"cherry"},
          {"value":"clementine","data":["clementine"],"result":"clementine"},
          {"value":"apple","data":["apple"],"result":"apple"},
          {"value":"banana","data":["banana"],"result":"banana"},
          {"value":"nectarine","data":["nectarine"],"result":"nectarine"},
          {"value":"plum","data":["plum"],"result":"plum"}
        ];

        // after sorting, csub has been rearranged but still isn't right:
        // clementine is before banana
        csubSorted = [
          {"value":"cherry","data":["cherry"],"result":"cherry"},
          {"value":"apple","data":["apple"],"result":"apple"},
          {"value":"clementine","data":["clementine"],"result":"clementine"},
          {"value":"banana","data":["banana"],"result":"banana"},
          {"value":"nectarine","data":["nectarine"],"result":"nectarine"},
          {"value":"plum","data":["plum"],"result":"plum"}
        ];

    Here's the actual sort code:

        csubSorted = csub.sort(function(a,b){
          return (originalData.indexOf(a.value) > originalData.indexOf(b.value));
        });

    Can anyone see why this wouldn't work? Is the basic JavaScript sort function not cross-browser compatible? Can I do this a different way (e.g. with jQuery) that would be cross-browser? Grateful for any advice - max
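
    Two things stand out, offered as a hedged diagnosis rather than a confirmed answer. First, a sort comparator must return a negative, zero, or positive number; the comparison above returns a boolean, which coerces only to 0 or 1 and never signals "a before b", so the result is implementation-defined and browsers legitimately disagree. Second, Array.prototype.indexOf does not exist in IE 8 and earlier. A sketch of a numeric comparator with a plain-loop lookup:

        // Position lookup that avoids Array.prototype.indexOf (missing in old IE).
        function positionOf(list, item) {
          for (var i = 0; i < list.length; i++) {
            if (list[i] === item) { return i; }
          }
          return -1;
        }

        csubSorted = csub.sort(function (a, b) {
          // Negative / zero / positive, as the comparator contract requires.
          return positionOf(originalData, a.value) - positionOf(originalData, b.value);
        });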

  • Single Responsibility Principle usage: how can I call a sub method correctly?

    - by Phsika
    I am trying to learn the SOLID principles, and I have written the same code in two styles:

    1) Single_Responsibility_Principle_2.cs: if you look at the main program, all instances are created through interfaces.
    2) Single_Responsibility_Principle_3.cs: if you look at the main program, all instances are created from concrete classes.

    My question: which usage is correct? Which one should I prefer?

        namespace Single_Responsibility_Principle_2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    IReportManager raporcu = new ReportManager();
                    IReport wordraporu = new WordRaporu();
                    raporcu.RaporHazirla(wordraporu, "data");
                    Console.ReadKey();
                }
            }

            interface IReportManager
            {
                void RaporHazirla(IReport rapor, string bilgi);
            }

            class ReportManager : IReportManager
            {
                public void RaporHazirla(IReport rapor, string bilgi)
                {
                    rapor.RaporYarat(bilgi);
                }
            }

            interface IReport
            {
                void RaporYarat(string bilgi);
            }

            class WordRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("Word Raporu yaratildi:{0}", bilgi);
                }
            }

            class ExcellRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("Excell raporu yaratildi:{0}", bilgi);
                }
            }

            class PdfRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("pdf raporu yaratildi:{0}", bilgi);
                }
            }
        }

    In the second one, all instances are created from concrete classes:

        namespace Single_Responsibility_Principle_3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    WordRaporu word = new WordRaporu();
                    ReportManager manager = new ReportManager();
                    manager.RaporHazirla(word, "test");
                }
            }

            interface IReportManager
            {
                void RaporHazirla(IReport rapor, string bilgi);
            }

            class ReportManager : IReportManager
            {
                public void RaporHazirla(IReport rapor, string bilgi)
                {
                    rapor.RaporYarat(bilgi);
                }
            }

            interface IReport
            {
                void RaporYarat(string bilgi);
            }

            class WordRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("Word Raporu yaratildi:{0}", bilgi);
                }
            }

            class ExcellRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("Excell raporu yaratildi:{0}", bilgi);
                }
            }

            class PdfRaporu : IReport
            {
                public void RaporYarat(string bilgi)
                {
                    Console.WriteLine("pdf raporu yaratildi:{0}", bilgi);
                }
            }
        }

  • Appropriate wx.Sizer(s) for the job?

    - by MetaHyperBolic
    I have a space in which I would like certain elements (represented here by A, B, D, and G) to each be in its own "corner" of the design. The corners ought to line up as if each of the four elements was repelling the others; a rectangle. This is to be contained within an unresizable panel. I will have several similar panels and want to keep the location of the elements as identical as possible. (I needed something a little more complex than a wx.Wizard, but with the same general idea.)

        AAAAAAAAAA              BB
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCC
        CCCCCCCCCCCCCCCCCC
        DDD EEE FFF     GGG

    A represents a text in a large font. B represents a numeric progress meter (e.g. "1 of 7") in a small font. C represents a large block of text. D, E, F, and G are buttons. The G button is separated from the others for functionality.

    I have attempted nested wx.BoxSizers (horizontal boxes inside of one vertical box) without luck. My first problem with wx.BoxSizer is that the .SetMinSize on my last row has not been honored. The second problem is that I have no idea how to make the G button "take up space" without growing comically large, or how I can jam it up against the right edge and bottom edge. I have tried to use a wx.GridBagSizer, but ran into entirely different issues. After plowing through the various online tutorials and wxPython in Action, I'm a little frustrated. The relevant forums appear to see activity once every two weeks. "Playing around with it" has gotten me nowhere; I feel as if I am trying to smooth out a hump in ill-laid carpet.
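
    A sketch of one way to get this layout with nested wx.BoxSizers (not from the original post; the widget labels are placeholders). The trick is AddStretchSpacer(), which soaks up the free space and pushes B and G against the right edge without making them grow:

        import wx

        class CornerPanel(wx.Panel):
            def __init__(self, parent):
                wx.Panel.__init__(self, parent)
                outer = wx.BoxSizer(wx.VERTICAL)

                top = wx.BoxSizer(wx.HORIZONTAL)
                top.Add(wx.StaticText(self, label="A: big title"), 0, wx.ALL, 5)
                top.AddStretchSpacer(1)                     # pushes B to the right edge
                top.Add(wx.StaticText(self, label="B: 1 of 7"), 0, wx.ALL, 5)
                outer.Add(top, 0, wx.EXPAND)

                # C: the large text block soaks up the vertical space (proportion 1)
                outer.Add(wx.StaticText(self, label="C: body text..."), 1, wx.EXPAND | wx.ALL, 5)

                bottom = wx.BoxSizer(wx.HORIZONTAL)
                for label in ("D", "E", "F"):
                    bottom.Add(wx.Button(self, label=label), 0, wx.ALL, 5)
                bottom.AddStretchSpacer(1)                  # pushes G to the right edge
                bottom.Add(wx.Button(self, label="G"), 0, wx.ALL, 5)
                outer.Add(bottom, 0, wx.EXPAND)

                self.SetSizer(outer)

        if __name__ == "__main__":
            app = wx.App(False)
            frame = wx.Frame(None, title="Corner layout sketch", size=(420, 300))
            CornerPanel(frame)
            frame.Show()
            app.MainLoop()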

  • How To Generate Parameter Set for the Diffie-Hellman Key Agreement Algorithm in Android

    - by sebby_zml
    Hello everyone, I am working on a mobile/server security-related project. I am now stuck on the Diffie-Hellman key agreement part. It works fine in the server-side program, but it is not working on the mobile side, so I assume it is not compatible with Android. I used the following method to get the parameters. It returns a comma-separated string of 3 values: the first number is the prime modulus P, the second number is the base generator G, and the third number is the bit size of the random exponent L. My question is: is there anything wrong with the code, or is it simply not compatible with Android? What kind of changes should I make? Your suggestions and guidance would be very helpful. Thanks a lot in advance.

        public static String genDhParams() {
            try {
                // Create the parameter generator for a 1024-bit DH key pair
                AlgorithmParameterGenerator paramGen = AlgorithmParameterGenerator.getInstance("DH");
                paramGen.init(1024);

                // Generate the parameters
                AlgorithmParameters params = paramGen.generateParameters();
                DHParameterSpec dhSpec = (DHParameterSpec)params.getParameterSpec(DHParameterSpec.class);

                // Return the three values in a string
                return ""+dhSpec.getP()+","+dhSpec.getG()+","+dhSpec.getL();
            } catch (NoSuchAlgorithmException e) {
            } catch (InvalidParameterSpecException e) {
            }
            return null;
        }

    Regards, Sebby
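
    Two hedged observations, not confirmed fixes: the empty catch blocks swallow whatever actually failed on the device, and generating fresh 1024-bit DH parameters is CPU-intensive enough to take minutes (or appear to hang) on period Android hardware, so reusing precomputed parameters is a common workaround. A sketch that at least surfaces the error, using android.util.Log:

        public static String genDhParams() {
            try {
                AlgorithmParameterGenerator paramGen =
                        AlgorithmParameterGenerator.getInstance("DH");
                paramGen.init(1024);  // fresh generation is very slow on a phone;
                                      // consider precomputed parameters instead
                AlgorithmParameters params = paramGen.generateParameters();
                DHParameterSpec dhSpec =
                        params.getParameterSpec(DHParameterSpec.class);
                return dhSpec.getP() + "," + dhSpec.getG() + "," + dhSpec.getL();
            } catch (GeneralSecurityException e) {
                // Don't swallow the exception: log it so the real failure shows up.
                Log.e("DHParams", "DH parameter generation failed", e);
                return null;
            }
        }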

  • How do I find, count, and display unique elements of an array using Perl?

    - by Luke
    I am a novice Perl programmer and would like some help. I have an array whose elements I am trying to split on the pipe character into two scalar values. From there I would like to pick out only the lines whose first element reads 'PJ RER Apts to Share'. Then I want to print each second element only once, along with a count of how many times it appears. I wrote the piece of code below but can't figure out where I am going wrong. It might be something small that I am just overlooking. Any help would be greatly appreciated.

    ## CODE ##

        my @data = ('PJ RER Apts to Share|PROVIDENCE',
                    'PJ RER Apts to Share|JOHNSTON',
                    'PJ RER Apts to Share|JOHNSTON',
                    'PJ RER Apts to Share|JOHNSTON',
                    'PJ RER Condo|WEST WARWICK',
                    'PJ RER Condo|WARWICK');

        foreach my $line (@data) {
            $count = @data;
            chomp($line);
            @fields = split(/\|/,$line);
            if (($fields[0] =~ /PJ RER Apts to Share/g)) {
                @array2 = $fields[1];
                my %seen;
                my @uniq = grep { ! $seen{$_}++ } @array2;
                my $count2 = scalar(@uniq);
                print "$array2[0] ($count2)","\n"
            }
        }
        print "$count","\n";

    ## OUTPUT ##

        PROVIDENCE (1)
        JOHNSTON (1)
        JOHNSTON (1)
        JOHNSTON (1)
        6
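
    A hedged reading of the code: %seen and @array2 are recreated on every loop iteration, so each city is "unique" within its own one-element array. Keeping one hash outside the loop lets the counts accumulate; a minimal sketch:

        use strict;
        use warnings;

        my @data = (
            'PJ RER Apts to Share|PROVIDENCE',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Apts to Share|JOHNSTON',
            'PJ RER Condo|WEST WARWICK',
            'PJ RER Condo|WARWICK',
        );

        my %count;   # declared once, outside the loop, so tallies accumulate
        for my $line (@data) {
            my ($category, $city) = split /\|/, $line;
            $count{$city}++ if $category eq 'PJ RER Apts to Share';
        }

        for my $city (sort keys %count) {
            print "$city ($count{$city})\n";   # JOHNSTON (3), PROVIDENCE (1)
        }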

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, this is a biggie. I have a well-structured yet monolithic code base with a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I deploy on application servers that may have different, conflicting versions of my libraries. I'm dependent on around 30 jars right now and am midway through bnd-ing them up. Some of my modules are easy to declare versioned dependencies for, such as my networking components: they statically reference classes within the JRE and other bnd-ed libraries. But my JDBC-related components instantiate drivers via Class.forName(...) and can use any one of a number of drivers. I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database-access components (via JDBC), and so on. I wish for my code to still be usable without OSGi, via a single jar file with all my dependencies (via JARJAR), and also to be modular via the OSGi metadata and granular bundles with dependency information.

    How do I configure my bundle and my code so that it can dynamically utilize any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a run-time method to detect whether I am running in an OSGi container that is compatible across containers (Felix/Equinox/etc.)? Do I need to use a different class-loading mechanism if I am in an OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module? I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
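
    For the container-detection sub-question, one commonly used check, sketched here and only valid if the OSGi core API is importable (it ships with OSGi 4.2+), is to ask the framework which bundle loaded a given class; outside OSGi the answer is null:

        import org.osgi.framework.FrameworkUtil;

        public final class OsgiDetect {
            /** True when `probe` was loaded by an OSGi framework (Felix, Equinox, ...). */
            public static boolean insideOsgi(Class<?> probe) {
                try {
                    return FrameworkUtil.getBundle(probe) != null;
                } catch (NoClassDefFoundError e) {
                    return false; // OSGi API absent entirely: plain classpath run
                }
            }
        }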

  • FLV performance and garbage collection

    - by justinbach
    I'm building a large Flash site (AS3) that uses huge FLVs as transition videos from section to section. The FLVs are 1280x800 and are being scaled to 1680x1050 (much of which is not displayed to users with smaller screens), and are around 5-8 seconds apiece. I'm encoding the videos using On2's hi-def codec, VP6-S, and playback is pretty good with native FLV players, Perian-equipped QuickTime, and simple proof-of-concept FLV playback apps built in AS3.

    The problem I'm having is that in the context of the actual site, playback isn't as smooth: the framerate isn't quite as good as it should be, and more problematically, there's occasional jerkiness and dropped frames (sometimes pausing the video for as long as a quarter of a second or so). My guess is that this is being caused by garbage collection in the Flash player, which happens nondeterministically and is therefore hard to test and control for. I'm using a single instance of FLVPlayback to play the videos; I originally used NetStream objects and so forth directly, but switched to FLVPlayback for this reason.

    Has anyone experienced this sort of jerkiness with FLVPlayback (or more generally, with hi-def Flash video)? Am I right that GC is the culprit here, and if so, is there any way to prevent it during playback of these system-intensive transitions?

  • C++: Is there any good way to read/write without specifically stating character type in function names?

    - by Mark L.
    I'm having a problem getting a program to read from a file based on a template. For example:

        bool parse(basic_ifstream<T> &file)
        {
            T ch;
            locale loc = file.getloc();
            basic_string<T> buf;
            file.unsetf(ios_base::skipws);
            if (file.is_open())
            {
                while (file >> ch)
                {
                    if (isalnum(ch, loc))
                    {
                        buf += ch;
                    }
                    else if (!buf.empty())
                    {
                        addWord(buf);
                        buf.clear();
                    }
                }
                if (!buf.empty())
                {
                    addWord(buf);
                }
                return true;
            }
            return false;
        }

    This works when I instantiate the class with <char>, but has problems when I use <wchar_t> (clearly). Outside of the class, I'm using:

        for (iter = mp.begin(); iter != mp.end(); ++iter)
        {
            cout << iter->first << setw(textwidth - iter->first.length() + 1);
            cout << " " << iter->second << endl;
        }

    to write all of the information from this data structure (it's a map<basic_string<T>, int>), and as predicted, cout explodes if iter->first isn't a char array. I've looked online and the consensus is to use wcout, but unfortunately, since this program requires that the template parameter can be changed at compile time (<char> - <wchar_t>), I'm not sure how I could get away with simply choosing cout or wcout. That is, unless there were a way to read/write wide characters without changing lots of code. If this explanation sounds awkwardly complicated, let me know and I'll address it as best I can.
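
    One way to keep the printing loop generic, sketched under the assumption that cout/wcout selection is the only obstacle: pick the stream at compile time via a small traits template specialized per character type.

        #include <iostream>
        #include <map>
        #include <string>

        // Compile-time stream selection: each character type maps to its
        // matching standard output stream.
        template <typename T> struct StdOut;
        template <> struct StdOut<char> {
            static std::ostream&  get() { return std::cout;  }
        };
        template <> struct StdOut<wchar_t> {
            static std::wostream& get() { return std::wcout; }
        };

        template <typename T>
        void dump(const std::map<std::basic_string<T>, int>& mp)
        {
            typename std::map<std::basic_string<T>, int>::const_iterator iter;
            for (iter = mp.begin(); iter != mp.end(); ++iter)
                StdOut<T>::get() << iter->first << " " << iter->second << std::endl;
        }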

  • Tableview not updating correctly after adding person

    - by tazboy
    I have to be missing something simple here, but it escapes me. After the user adds a new person to a mutable array, I want to update the table. The mutable array is the data source. I believe my issue lies within cellForRowAtIndexPath.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            TextFieldCell *customCell = (TextFieldCell *)[tableView dequeueReusableCellWithIdentifier:@"TextCellID"];
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"cell"];

            if (indexPath.row == 0) {
                if (customCell == nil) {
                    NSArray *nibObjects = [[NSBundle mainBundle] loadNibNamed:@"TextFieldCell" owner:nil options:nil];
                    for (id currentObject in nibObjects) {
                        if ([currentObject isKindOfClass:[TextFieldCell class]])
                            customCell = (TextFieldCell *)currentObject;
                    }
                }
                customCell.nameTextField.delegate = self;
                cell = customCell;
            } else {
                if (cell == nil) {
                    cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"cell"];
                    cell.textLabel.text = [[self.peopleArray objectAtIndex:indexPath.row-1] name];
                    NSLog(@"PERSON AT ROW %d = %@", indexPath.row-1, [[self.peopleArray objectAtIndex:indexPath.row-1] name]);
                    NSLog(@"peopleArray's Size = %d", [self.peopleArray count]);
                }
            }
            return cell;
        }

    When I first load the view everything is great. This is what prints:

        PERSON AT ROW 0 = Melissa
        peopleArray's Size = 2
        PERSON AT ROW 1 = Dave
        peopleArray's Size = 2

    After I add someone to that array I get this:

        PERSON AT ROW 1 = Dave
        peopleArray's Size = 3
        PERSON AT ROW 2 = Tom
        peopleArray's Size = 3

    When I add a second person I get:

        PERSON AT ROW 2 = Tom
        peopleArray's Size = 4
        PERSON AT ROW 3 = Ralph
        peopleArray's Size = 4

    Why is it not printing everyone in the array? This pattern continues: it only ever prints two people, and it's always the last two people. What the heck am I missing?
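
    A hedged reading of the symptom: the label is only set inside the `cell == nil` branch, so recycled cells keep their old text and only freshly allocated cells (the newest rows) log anything. Configuring the cell after the nil check is the usual pattern; a sketch of just the else branch:

        } else {
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                              reuseIdentifier:@"cell"];
            }
            // Runs for new *and* reused cells, so every visible row is refreshed.
            cell.textLabel.text = [[self.peopleArray objectAtIndex:indexPath.row - 1] name];
        }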

  • Django aggregate query generating SQL error

    - by meepmeep
    I'm using Django 1.1.1 on a SQL Server 2005 db using the latest sqlserver_ado library. models.py includes:

        class Project(models.Model):
            name = models.CharField(max_length=50)

        class Thing(models.Model):
            project = models.ForeignKey(Project)
            reference = models.CharField(max_length=50)

        class ThingMonth(models.Model):
            thing = models.ForeignKey(Thing)
            timestamp = models.DateTimeField()
            ThingMonthValue = models.FloatField()

            class Meta:
                db_table = u'ThingMonthSummary'

    In a view, I have retrieved a queryset called 'things' which contains 25 Things:

        things = Thing.objects.select_related().filter(project=1).order_by('reference')

    I then want to do an aggregate query to get the average ThingMonthValue for the first 20 of those Things for a certain period, and the same value for the last 5. For the first 20 I do:

        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=things[:20],
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00"),
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']

    This works fine and returns the desired value. For the last 5 I do:

        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=things[20:],
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00"),
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']

    But for this I get an SQL error: 'Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.' The SQL query being used by Django reads:

        SELECT AVG([ThingMonthSummary].[ThingMonthValue]) AS [ThingMonthValue__avg]
        FROM [ThingMonthSummary]
        WHERE ([ThingMonthSummary].[thing_id] IN (
            SELECT _row_num, [id] FROM (
                SELECT ROW_NUMBER() OVER (ORDER BY [AAAA].[id] ASC) as _row_num, [AAAA].[id]
                FROM (
                    SELECT U0.[id] FROM [Thing] U0 WHERE U0.[project_id] = 1
                ) AS [AAAA]) as QQQ where 20 < _row_num)
        AND [ThingMonthSummary].[timestamp] BETWEEN '01/01/09 00:00:00' and '03/01/10 00:00:00')

    Any idea why it works for one slice of the Things and not the second? I've checked, and the two slices do contain the desired Things correctly.
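
    A hedged workaround, not from the original thread: the failing SQL comes from inlining the sliced queryset as a subquery, where the ROW_NUMBER() pagination leaves two columns in the select list. Materializing the ids first hands __in a plain value list and sidesteps the subquery entirely:

        # Evaluate the slice in Python; __in then receives literal ids.
        last_five_ids = list(things.values_list('id', flat=True)[20:])

        averageThingMonthValue = ThingMonth.objects.filter(
            thing__in=last_five_ids,
            timestamp__range=("2009-01-01 00:00", "2010-03-01 00:00"),
        ).aggregate(Avg('ThingMonthValue'))['ThingMonthValue__avg']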

  • Is there only one distribution certificate per team leader or one per application?

    - by dbonneville
    I'm confused by the development and distribution certificates. I got one app in the store, but I named my certificates after my first app. Was this a mistake? I'm ready to go on my second app, but Xcode is selecting the distribution certificate with the old app name on it. It built without error. Even though I named it wrong, will it still work? Xcode is automatically picking this certificate for me; is this right?

    You need a new App ID for each app so you can 1) put it in the Info.plist, and 2) put it in the code-signing section on the Build tab of the Target info. But you don't need a new development and distribution certificate for each new app. Therefore, is this right too? For each app you develop, you only need your original development and distribution certificates, but a new App ID for each app. This is so obscure! I wish Apple had done a better job!

  • WPF: How to connect to a SQL CE 3.5 database with ADO.NET?

    - by EV
    Hi, I'm trying to write an application that has two DataGrids: the first one displays customers, and the second one displays the selected customer's records. I have generated the ADO.NET Entity Model, and the connection properties are set in the App.config file. Now I want to populate the DataGrids with the data from a SQL CE 3.5 database, which does not support LINQ to SQL. I used this code for the first DataGrid in the code-behind:

        CompanyDBEntities db;

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            db = new CompanyDBEntities();
            ObjectQuery<Employees> departmentQuery = db.Employees;
            try
            {
                // lpgrid is the name of the first datagrid
                this.lpgrid.ItemsSource = departmentQuery;
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

    However, the DataGrid is empty although there are records in the database. It would also be better to use this in the MVVM pattern, but I'm not sure how ADO.NET and MVVM work together, or whether it is even possible to use them like that. Could anyone help? Thanks in advance.
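
    One thing worth trying, offered only as a sketch: execute the query and bind the materialized results, so the grid enumerates an in-memory list rather than the live ObjectQuery. Putting that list behind a property is also the first step toward an MVVM view model (the class and property names here are invented for illustration):

        using System.Collections.Generic;
        using System.Linq;

        public class EmployeesViewModel
        {
            private readonly CompanyDBEntities db = new CompanyDBEntities();

            // The view binds ItemsSource to this property.
            public List<Employees> Employees
            {
                get { return db.Employees.ToList(); }  // executes the query now
            }
        }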

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules:

    - a password, "foobar12" (we are not discussing the strength of the password)
    - a language, Java 1.6 for this discussion
    - a database: PostgreSQL, MySQL, SQL Server, or Oracle

    Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column.

    Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide methods for MD5 encryption functions on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, and the cost of updating the table and the code could be too high.

    But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ("foobar12"), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping. :) Am I close?
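
    Not quite: nothing is ever recovered from the hash. The usual single-column trick is to store the salt in the clear, packed into the same column as the hash (e.g. "salt$hash"); at login you split the column, re-hash the offered password with the stored salt, and compare. A minimal Java 1.6 sketch, assuming SHA-256, hex encoding, and a "$" separator (all of which are illustrative choices, not from the post):

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public final class Passwords {
            /** Produce the single-column value: hex(salt) + "$" + hex(hash). */
            public static String store(String password) throws Exception {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return hex(salt) + "$" + hex(digest(salt, password));
            }

            /** Re-hash the candidate with the *stored* salt and compare. */
            public static boolean check(String password, String stored) throws Exception {
                String[] parts = stored.split("\\$");   // parts[0]=salt, parts[1]=hash
                byte[] salt = unhex(parts[0]);
                return parts[1].equals(hex(digest(salt, password)));
            }

            private static byte[] digest(byte[] salt, String password) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);                          // salt goes into the digest...
                return md.digest(password.getBytes("UTF-8"));
            }

            private static String hex(byte[] b) {
                StringBuilder sb = new StringBuilder();
                for (byte x : b) sb.append(String.format("%02x", x));
                return sb.toString();
            }

            private static byte[] unhex(String s) {
                byte[] out = new byte[s.length() / 2];
                for (int i = 0; i < out.length; i++)
                    out[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
                return out;
            }
        }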

  • Idiomatic default sort using WCF RIA, Entity Framework 4, Silverlight 4?

    - by Duncan Bayne
    I've got two Silverlight 4.0 ComboBoxes; the second displays the children of the entity selected in the first:

        <ComboBox Name="cmbThings"
                  ItemsSource="{Binding Path=Things,Mode=TwoWay}"
                  DisplayMemberPath="Name"
                  SelectionChanged="CmbThingsSelectionChanged" />
        <ComboBox Name="cmbChildThings"
                  ItemsSource="{Binding Path=SelectedThing.ChildThings,Mode=TwoWay}"
                  DisplayMemberPath="Name" />

    The code behind the view provides a (simple, hacky) way to databind those ComboBoxes, by loading Entity Framework 4.0 entities through a WCF RIA service:

        public EntitySet<Thing> Things { get; private set; }
        public Thing SelectedThing { get; private set; }

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            var context = new SortingDomainContext();
            context.Load(context.GetThingsQuery());
            context.Load(context.GetChildThingsQuery());
            Things = context.Things;
            DataContext = this;
        }

        private void CmbThingsSelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            SelectedThing = (Thing) cmbThings.SelectedItem;
            if (PropertyChanged != null)
            {
                PropertyChanged.Invoke(this, new PropertyChangedEventArgs("SelectedThing"));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

    What I'd like to do is have both combo boxes sort their contents alphabetically, and I'd like to specify that behaviour in the XAML if at all possible. Could someone please tell me what is the idiomatic way of doing this with the SL4 / EF4 / WCF RIA technology stack?
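
    One XAML-only candidate, sketched from memory and worth verifying against SL4: wrap the collection in a CollectionViewSource and declare SortDescriptions there (in Silverlight, SortDescription lives in System.ComponentModel in the System.Windows assembly).

        <!-- xmlns:scm="clr-namespace:System.ComponentModel;assembly=System.Windows" -->
        <UserControl.Resources>
            <CollectionViewSource x:Key="ThingsView" Source="{Binding Things}">
                <CollectionViewSource.SortDescriptions>
                    <scm:SortDescription PropertyName="Name" Direction="Ascending" />
                </CollectionViewSource.SortDescriptions>
            </CollectionViewSource>
        </UserControl.Resources>

        <ComboBox Name="cmbThings"
                  ItemsSource="{Binding Source={StaticResource ThingsView}}"
                  DisplayMemberPath="Name"
                  SelectionChanged="CmbThingsSelectionChanged" />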

  • Access a view inside a tab navigator when a tab is clicked

    - by magnus.lassi
    Hi, I have a view in Flex 3 where I use a TabNavigator with a number of views inside it. I need to know which view was clicked, because if it's one specific view then I need to take action, i.e. if the view with id "secondTab" is clicked, then do something. I have set it up to be notified; my problem is that I need to be able to know which view it is. Calling tab.getChildByName or a similar method seems to only get me back a TabSkin object.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml" width="100%" height="100%"
                 xmlns:local="*" creationComplete="onCreationComplete(event)">
            <mx:Script>
                <![CDATA[
                    import mx.events.FlexEvent;
                    import mx.controls.Button;

                    protected function onCreationComplete(event:Event):void
                    {
                        for (var i:int = 0; i < myTN.getChildren().length; i++)
                        {
                            var tab:Button = myTN.getTabAt(i);
                            tab.addEventListener(FlexEvent.BUTTON_DOWN, tabClickHandler);
                        }
                    }

                    private function tabClickHandler(event:FlexEvent):void
                    {
                        var tab:Button;
                        if (event.currentTarget is Button)
                        {
                            tab = event.currentTarget as Button;
                            // how do I access the actual view hosted in the tab that was clicked?
                        }
                    }
                ]]>
            </mx:Script>
            <mx:TabNavigator id="myTN">
                <local:ProductListView id="firstTab" label="First Tab" width="100%" height="100%" />
                <local:ProductListView id="secondTab" label="Second Tab" width="100%" height="100%" />
            </mx:TabNavigator>
        </mx:VBox>
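
    A hedged alternative that avoids the tab buttons entirely: ViewStack-based navigators in Flex 3 dispatch an IndexChangedEvent when the selection changes, and newIndex maps straight back to the child view. A sketch:

        import mx.events.IndexChangedEvent;

        // In onCreationComplete, instead of wiring each tab button:
        myTN.addEventListener(IndexChangedEvent.CHANGE, tabChangeHandler);

        private function tabChangeHandler(event:IndexChangedEvent):void
        {
            var view:Object = myTN.getChildAt(int(event.newIndex));
            if (view == secondTab)
            {
                // the "secondTab" ProductListView is now selected
            }
        }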

  • Dynamically populated NSPopUpButtonCell menu in an NSOutlineView

    - by Mo
    I’m working with an NSOutlineView which has two columns. My dataSource supplies the outline view with a tree of items of a custom class which represents file types (that is, you initialise it with a UTI). The first column is the display name of the file type (e.g., “Source code”, “Interface Builder NIB document”, etc.). The second column is an NSPopUpButtonCell which is supposed to allow the user to pick a handler for the given document type (think of Xcode’s “File Types” preference pane, and you’re pretty much there).

    I can generate an NSMenu for a given item in the tree, populated with options based upon the Launch Services database entries for the UTI, complete with the relevant application icon and so on. In fact, the menu itself works wonderfully, populated by way of NSPopUpButtonCellWillPopUpNotification.

    The problem is, try as I might, the cell when the menu isn’t popped up always contains precisely one of two things: either an empty string, or the default text for the cell; the former if the result of -handlerName on the item (the attribute assigned to the column) is non-nil, the latter otherwise. Moreover, I’m manually calling -selectItem: on the NSPopUpButtonCell instance, which just seems Wrong.

    In contrast, in the left-hand column, which is just an NSTextFieldCell, everything just works (although granted, all it’s got to do is read the value from the item and present it).

    (Disclaimer: I’m fairly new at Cocoa UI stuff; I know Objective-C and lots of other programming languages, but I’ve not a huge amount of experience building Mac OS X UIs, so be gentle.)

  • How do I combine two arrays in PHP based on a common key?

    - by Eoghan O'Brien
    Hi, I'm trying to join two associative arrays together based on an entry_id key. Both arrays come from individual database resources; the first stores entry titles, the second stores entry authors. The key=value pairs are as follows:

        array (
            'entry_id' => 1,
            'title' => 'Test Entry'
        )

        array (
            'entry_id' => 1,
            'author_id' => 2
        )

    I'm trying to achieve an array structure like:

        array (
            'entry_id' => 1,
            'author_id' => 2,
            'title' => 'Test Entry'
        )

    Currently I've solved the problem by looping through each array and formatting the array the way I want, but I think this is a bit of a memory hog:

        $entriesArray = array();
        foreach ($entryNames as $names) {
            foreach ($entryAuthors as $authors) {
                if ($names['entry_id'] === $authors['entry_id']) {
                    $entriesArray[] = array(
                        'id' => $names['entry_id'],
                        'title' => $names['title'],
                        'author_id' => $authors['author_id']
                    );
                }
            }
        }

    I'd like to know: is there an easier, less memory-intensive method of doing this?
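
    A sketch of one common restructuring (variable names follow the question): build an entry_id-indexed lookup for the authors first, which turns the nested O(n x m) scan into two linear passes.

        // Index authors by entry_id once, then merge in a single pass.
        $authorsById = array();
        foreach ($entryAuthors as $author) {
            $authorsById[$author['entry_id']] = $author;
        }

        $entriesArray = array();
        foreach ($entryNames as $name) {
            $id = $name['entry_id'];
            if (isset($authorsById[$id])) {
                $entriesArray[] = array(
                    'id'        => $id,
                    'title'     => $name['title'],
                    'author_id' => $authorsById[$id]['author_id'],
                );
            }
        }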

  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a variable-length array that I am trying to pass into a function. The function will shift the first value off and return it, move the remaining values over to fill in the missing spot, and put, let's say, a -1 in the newly opened spot. I have no problem passing an array declared like so:

        int framelist[128];
        shift(framelist);

    However, I would like to be able to use a dynamically allocated array declared in this manner:

        int *framelist;
        framelist = malloc(size * sizeof(int));
        shift(framelist);

    I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, but the one in the second case immediately gives a segmentation fault. Here is the code for the shift function, which doesn't do anything yet except try to grab the value from the first element of the array:

        int shift(int array[])
        {
            int value = array[0];
            return value;
        }

    Any ideas why it won't accept the dynamically allocated array? I'm still new to C, so if I am doing something fundamentally wrong, let me know.
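
    For what it's worth, `int array[]` in a parameter list is just `int *`, so the snippet shown can only fault if the pointer itself is bad, e.g. malloc failed or ran before size was set. A complete sketch of the described shift semantics, with the size passed explicitly and the allocation checked:

        #include <stdio.h>
        #include <stdlib.h>

        /* Remove and return the first element; slide the rest left and
           put -1 in the freed slot. The size must travel with the pointer. */
        int shift(int *array, size_t size)
        {
            int value = array[0];
            for (size_t i = 1; i < size; i++)
                array[i - 1] = array[i];
            array[size - 1] = -1;
            return value;
        }

        int main(void)
        {
            size_t size = 4;
            int *framelist = malloc(size * sizeof *framelist);
            if (framelist == NULL)  /* an unchecked malloc is one classic
                                       source of the segfault described */
                return 1;
            for (size_t i = 0; i < size; i++)
                framelist[i] = (int)(i + 10);

            printf("shifted out: %d\n", shift(framelist, size));   /* 10 */
            printf("last slot:   %d\n", framelist[size - 1]);      /* -1 */
            free(framelist);
            return 0;
        }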

  • iPhone: Animating a view when another view appears/disappears

    - by MacTouch
    I have the following view hierarchy: UITabBarController - UINavigationController - UITableViewController. When the table view appears (animated), I create a toolbar, add it as a subview of the tab bar at the bottom of the page, and let it animate in with the table view. Same procedure in the other direction when the table view disappears. It does not work as expected:

    1. The animation duration is OK, but somehow not exactly the same as the animation of the table view when it becomes visible.
    2. When I display the table view for the second time, the toolbar does not disappear at all and remains at the bottom of the parent view.

    What's wrong with it?

        - (void)animationDone:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context {
            UIView *toolBar = [[[self tabBarController] view] viewWithTag:1000];
            [toolBar removeFromSuperview];
        }

        - (void)viewWillAppear:(BOOL)animated {
            UIEdgeInsets insets = UIEdgeInsetsMake(0, 0, 44, 0);
            [[self tableView] setContentInset:insets];
            [[self tableView] setScrollIndicatorInsets:insets];

            // Toolbar initially placed outside of the visible frame (x=320)
            UIView *toolBar = [[UIToolbar alloc] initWithFrame:CGRectMake(320, 480-44, 320, 44)];
            [toolBar setTag:1000];
            [[[self tabBarController] view] addSubview:toolBar];

            [UIView beginAnimations:nil context:nil];
            [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
            [UIView setAnimationDuration:0.35];
            [toolBar setFrame:CGRectMake(0, 480-44, 320, 44)];
            [UIView commitAnimations];

            [toolBar release];
            [super viewWillAppear:animated];
        }

        - (void)viewWillDisappear:(BOOL)animated {
            UIView *toolBar = [[[self tabBarController] view] viewWithTag:1000];

            [UIView beginAnimations:nil context:nil];
            [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
            [UIView setAnimationDuration:0.35];
            [UIView setAnimationDidStopSelector:@selector(animationDone:finished:context:)];
            [toolBar setFrame:CGRectMake(320, 480-44, 320, 44)];
            [UIView commitAnimations];

            [super viewWillDisappear:animated];
        }
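
    One hedged observation: +setAnimationDidStopSelector: only fires if an animation delegate has been set, and viewWillDisappear: never calls +setAnimationDelegate:. If animationDone:finished:context: never runs, the old toolbar is never removed, and the next viewWillAppear: adds a second toolbar with the same tag, which would explain symptom 2. A sketch of the disappearance block with the missing line:

        [UIView beginAnimations:nil context:nil];
        [UIView setAnimationDelegate:self];   // without this, the did-stop selector is ignored
        [UIView setAnimationDidStopSelector:@selector(animationDone:finished:context:)];
        [UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];
        [UIView setAnimationDuration:0.35];
        [toolBar setFrame:CGRectMake(320, 480-44, 320, 44)];
        [UIView commitAnimations];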

  • URL "fragment identifier" semantics for HTML documents

    - by Pointy
    I've been working with a new installation of the "MoinMoin" wiki software. As I was playing with it, typing in mostly random test pages, I created a link with a fragment:

        blah blah see also [[SomeStuff#whatever|some other stuff about whatever]]

    Then I needed to figure out how to create the anchor for that "whatever" fragment identifier. I don't recall having to do that with MediaWiki, so I had to dig around, but finally I found that MoinMoin has an "Anchor" macro:

        == Whatever ==
        <<Anchor(whatever)>>

    Looking at the generated HTML, I was surprised to see an empty <span> tag with an "id" value of "whatever". I expected that it'd be an <a> tag with a "name" attribute of "whatever". I dug around and found the source, and there's a comment that says they changed it from an <a> tag in order to avoid some IE problem with <pre> sections.

    This confused me, not because of the IE thing, but because it looked to me as if their "fix" had left the whole anchor mechanism completely broken. Much to my surprise, however, further testing indicated that it worked fine. I wrote a test page with 300 <span> tags, all with "id" values, and I was further shocked when Firefox behaved exactly as I would have expected it to had I used <a> tags. It also worked when I changed all the <span> tags to <em>.

    So by this time, you're either as surprised as I was, or else you're thinking "how can somebody that dumb have so many reputation points?" If you're in the second category: is it really the case that I've been typing in HTML for about 15 years now, a lot of HTML, and it's somehow escaped my notice that browsers use the HTML fragment to find any sort of element with a matching "id"?

    mind status: blown
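
    For reference, the observed behavior is specified rather than accidental: HTML 4 defines fragment targets as either old-style <a name="..."> anchors or any element carrying a matching id attribute. A minimal illustration:

        <!-- Both of these are valid targets for http://example.com/page#...  -->
        <a name="old-style">reachable via #old-style</a>
        <span id="new-style">reachable via #new-style, on any element type</span>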

  • How to resize table via javascript in IE?

    - by MartyIX
    I've got this table:

        <table id="correctness" style="overflow: hidden;">
            <tr>
                <td style="overflow: hidden;">
                    <div id="correctness-message"></div>
                    <span class="hide">
                        <button type="button" class="hide" onclick="new DisplayEffect('correctness').Hide(500);">Hide</button>
                    </span>
                </td>
            </tr>
        </table>

    and a function for resizing the table:

        function resize(element, size) {
            element.style.height = size + "px";
        }

    which is called repeatedly for a certain amount of time (e.g. 1 second) with the ID of the table (i.e. "correctness") in order to animate the table from zero height to its full height. This code works in Firefox and Chrome, but it doesn't work in IE8, which displays the whole table right away even though the height set in "resize" is much lower. It seems that the cell sets the height of the parent table, and not the other way around. Is it possible to change this behaviour? I prefer changing the height of the table because I can then set the visibility of the table easily. Thanks for any help!

  • Setting MinimumSize Attribute for a Control in a TFS WorkItem

    - by Jay Yother
    Tools used: Visual Studio 2008 SP1, Team Explorer, Team Foundation Server Power Tools October 2008 Release.

    Using the Process Editor in Visual Studio, I am attempting to set the MinimumSize attribute for a control in a WorkItem template, to make the default size of the input area larger. I am setting the attribute according to this website: http://msmvps.com/blogs/vstsblog/archive/2007/07/07/undocumented-attributes-for-controlling-the-work-item-form-layout.aspx

    No matter what I set this attribute to, it has no effect. I have tried setting the attribute with and without surrounding parentheses, and I've tried different capitalizations of the attribute, but no luck. I have verified that the MinimumSize attribute is being correctly set in the associated XML file.

    The control (HtmlFieldControl) is currently set up as the second child on a tab page. (The first control is also an HtmlFieldControl.) I've tried adding a group to the tab page such that the hierarchy is TabPage-Group-Column-Control, with no success. I've also tried setting the attribute for the first control, with no luck.

    Any idea what I am doing wrong?

  • Translating Where() to sql

    - by MBoros
    Hi. I saw DamienG's article (http://damieng.com/blog/2009/06/24/client-side-properties-and-any-remote-linq-provider) on how to map client-side properties to SQL. I ran through the article and saw great potential in it: mapping client properties to SQL is an awesome idea. But I wanted to use this for something a bit more complicated than just concatenating strings.

    At the moment we are trying to introduce multilinguality to our business objects, and I hoped we could leave all the existing LINQ to SQL queries intact and just change the code of the multilingual properties so they would return the given property in the CurrentUICulture.

    The first idea was to change these fields to XML and then try Object.Property.Elements().Where(...), but it got stuck on the Elements(), as it couldn't be translated to SQL. I read somewhere that XML fields are actually regarded as strings and only become XElements on the app server, so the filtering would happen on the app server, not in the DB. Fair point, it won't work like this. Let's try something else...

    So the second idea was to create a PolyGlots table (name taken from http://weblogic.sys-con.com/node/102698?page=0,1), a PolyGlotTranslations table, and a Culture table, where the PolyGlots would be referenced from each internationalized property. This way I wanted to say, for example:

        private static readonly CompiledExpression<Announcement, string> nameExpression =
            DefaultTranslationOf<Announcement>
                .Property(e => e.Name)
                .Is(e => e.NamePolyGlot.PolyGlotTranslations
                          .Where(t => t.Culture.Code == Thread.CurrentThread.CurrentUICulture.Name)
                          .Single().Value
                );

    Now, unfortunately, here I get an error that the Where() function cannot be translated to SQL, which is a bit disappointing, as I was sure it would go through. I guess it is failing because the IEntitySet is basically an IEnumerable, not an IQueryable; am I right? Is there another way to use the CompiledExpression class to achieve this goal? Any help appreciated.

  • Xcode: iPhone swipe gesture crash

    - by David DelMonte
    I have an app in which I'd like a swipe gesture to flip to a second view. The app is all set up with buttons that work. The swipe gesture, though, causes a crash (EXC_BAD_ACCESS). The gesture code is:

        - (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer {
            NSLog(@"%s", __FUNCTION__);
            switch (recognizer.direction) {
                case (UISwipeGestureRecognizerDirectionRight):
                    [self performSelector:@selector(flipper:)];
                    break;
                case (UISwipeGestureRecognizerDirectionLeft):
                    [self performSelector:@selector(flipper:)];
                    break;
                default:
                    break;
            }
        }

    and "flipper" looks like this:

        - (IBAction)flipper:(id)sender {
            FlashCardsAppDelegate *mainDelegate = (FlashCardsAppDelegate *)[[UIApplication sharedApplication] delegate];
            [mainDelegate flipToFront];
        }

    flipToBack (and flipToFront) look like this:

        - (void)flipToBack {
            NSLog(@"%s", __FUNCTION__);
            BackViewController *theBackView = [[BackViewController alloc] initWithNibName:@"BackView" bundle:nil];
            [self setBackViewController:theBackView];

            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationDuration:1.0];
            [UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft forView:window cache:YES];
            [frontViewController.view removeFromSuperview];
            [self.window addSubview:[backViewController view]];
            [UIView commitAnimations];

            [frontViewController release];
            frontViewController = nil;
            [theBackView release];
            // NSLog (@" FINISHED ");
        }

    Maybe I'm going about this the wrong way... All ideas are welcome...
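
    One hedged hygiene note rather than a confirmed diagnosis: -performSelector: sends flipper: with no argument, so whatever lands in sender is undefined; calling the action directly (or passing an explicit object) removes that variable from the crash hunt.

        case (UISwipeGestureRecognizerDirectionRight):
            [self flipper:nil];   // or: [self performSelector:@selector(flipper:) withObject:nil];
            break;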
