Search Results

Search found 8367 results on 335 pages for 'temporal difference'.


  • Why is this statement treated as a string instead of its result?

    - by reve_etrange
    I am trying to perform some composition-based filtering on a large collection of strings (protein sequences). I wrote a group of three subroutines in order to take care of it, but I'm running into trouble in two ways - one minor, one major. The minor trouble is that when I use List::MoreUtils 'pairwise' I get warnings about using $a and $b only once and them being uninitialized. But I believe I'm calling this method properly (based on CPAN's entry for it and some examples from the web). The major trouble is an error "Can't use string ("17/32") as HASH ref while "strict refs" in use..." It seems like this can only happen if the foreach loop in &comp is giving the hash values as a string instead of evaluating the division operation. I'm sure I've made a rookie mistake, but can't find the answer on the web. The first time I even looked at perl code was last Wednesday... use List::Util; use List::MoreUtils; my @alphabet = ( 'A', 'R', 'N', 'D', 'C', 'Q', 'E', 'G', 'H', 'I', 'L', 'K', 'M', 'F', 'P', 'S', 'T', 'W', 'Y', 'V' ); my $gapchr = '-'; # Takes a sequence and returns letter = occurrence count pairs as hash. sub getcounts { my %counts = (); foreach my $chr (@alphabet) { $counts{$chr} = ( $_[0] =~ tr/$chr/$chr/ ); } $counts{'gap'} = ( $_[0] =~ tr/$gapchr/$gapchr/ ); return %counts; } # Takes a sequence and returns letter = fractional composition pairs as a hash. sub comp { my %comp = getcounts( $_[0] ); foreach my $chr (@alphabet) { $comp{$chr} = $comp{$chr} / ( length( $_[0] ) - $comp{'gap'} ); } return %comp; } # Takes two sequences and returns a measure of the composition difference between them, as a scalar. # Originally all on one line but it was unreadable. sub dcomp { my @dcomp = pairwise { $a - $b } @{ values( %{ comp( $_[0] ) } ) }, @{ values( %{ comp( $_[1] ) } ) }; @dcomp = apply { $_ ** 2 } @dcomp; my $dcomp = sqrt( sum( 0, @dcomp ) ) / 20; return $dcomp; } Much appreciation for any answers or advice!
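
    Two details in the posted code are worth calling out: tr/// does not interpolate $chr (so each count ends up counting literal '$', 'c', 'h', 'r' characters), and pairing up two values() lists assumes both hashes enumerate their keys in the same order. A minimal sketch of the same composition-distance idea that counts with a regex, returns hash references, and differences by key - not the poster's code, just one way to restructure it:

      use strict;
      use warnings;
      use List::Util qw(sum);

      my @alphabet = qw(A R N D C Q E G H I L K M F P S T W Y V);

      # Fractional composition of one sequence, returned as a hash reference.
      sub comp {
          my ($seq) = @_;
          my $len = () = $seq =~ /[^-]/g;              # non-gap length
          my %comp;
          for my $chr (@alphabet) {
              my $count = () = $seq =~ /\Q$chr\E/g;    # tr/// would not interpolate $chr
              $comp{$chr} = $len ? $count / $len : 0;
          }
          return \%comp;
      }

      # Composition distance between two sequences, as in the post's dcomp.
      sub dcomp {
          my ($c1, $c2) = (comp($_[0]), comp($_[1]));
          my $ss = sum map { ($c1->{$_} - $c2->{$_}) ** 2 } @alphabet;
          return sqrt($ss) / 20;
      }

      print dcomp('ARNDARND--', 'AARRNNDD'), "\n";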

    Read the article

  • Minutia on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private Category wasn't generating a setter. It was because my Category was named: // .m @interface MyClass (private) @property (readwrite, copy) NSArray* myProperty; @end Changing it to: // .m @interface MyClass () @property (readwrite, copy) NSArray* myProperty; @end and my setter is synthesized. I now know that Class Extension is not just another name for an anonymous Category. Leaving a Category unnamed causes it to morph into a different beast: one that now gives compile-time method implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: Categories are generally used to add methods to any class at runtime, and Class Extensions are generally used to enforce private API implementation and add ivars. I accept this. But there are trifles that confuse me. First, at a hight level: Why differentiate like this? These concepts seem like similar ideas that can't decide if they are the same, or different concepts. If they are the same, I would expect the exact same things to be possible using a Category with no name as is with a named Category (which they are not). If they are different, (which they are) I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a Class Extension, just write a Category, but leave out the name. It magically changes." Second, on the topic of compile time enforcement: If you can't add properties in a named Category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file: // .h @interface MyClass : NSObject @property (readonly, copy) NSString* myString; @end Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly: // .m @interface MyClass () @property (readonly, copy) NSString* myString; @end I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between Category and Class Extension and I try: // .m @interface MyClass (private) @property (readonly, copy) NSString* myString; @end The compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would had I not declared the readwrite property in the Category. I just get the "Does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) Categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?

    Read the article

  • Sync services not actually syncing

    - by Paul Mrozowski
    I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008 and picked the tables I wanted to sync. I selected SQL Server Tracking. It modified the database for change tracking and created a local copy (SDF) of the data. I need two way syncing so I created a partial class for the sync agent and added code into the OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've walked through with the debugger and this code runs. Then I created another partial class for cache server sync provider and added an event handler into the OnInitialized() to hook into the ApplyChangeFailed event. This code also works OK - my code runs when there is a conflict. Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync: var agent = new FSEMobileCacheSyncAgent(); var syncStats = agent.Synchronize(); syncStats seems to show the count of the # of changes I made on the server and shows that they were applied. However, when I open the local SDF file none of the changes are there. I basically followed the instructions I found here: http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx and here: http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something but I'm just not seeing it right now. I thought this might be because I appeared to be using version 1 of Sync Services so I removed the references to Microsoft.Synchronization.* assemblies, installed the Sync framework 2.0 and added the new version of the assemblies to the project. That hasn't made any difference. Ideas? Edit: I wanted to enable tracing to see if I could track this down but the only way to do that is through a WinForms app since it requires entries in the app.config file (my original project was a class library). I created a WinForms project and recreated everything and suddenly everything is working. So apparently this requires a WinForm project for some reason? This isn't really how I planned on using this - I had hoped to kick off syncing through another non-.NET application and provide the UI there so the experience was a bit more seemless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream read/writes behave in terms of disk access and memory caching, i.e. data size/performance balance. I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
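
    One way to frame the threshold idea from the question as code - a rough sketch, with invented type and member names, that buffers in a MemoryStream until a size limit is reached and then spills everything into a temporary FileStream (needs .NET 4 for Stream.CopyTo). Whether FileStream writes reach the physical disk immediately is still up to the OS write cache, but small transfers never leave managed memory:

      using System;
      using System.IO;

      class SpillableBuffer : IDisposable
      {
          private readonly long _threshold;
          private Stream _store = new MemoryStream();

          public SpillableBuffer(long thresholdBytes)   // e.g. 500L * 1024 * 1024
          {
              _threshold = thresholdBytes;
          }

          public void Write(byte[] buffer, int offset, int count)
          {
              if (_store is MemoryStream && _store.Length + count > _threshold)
              {
                  // Spill: move what has accumulated so far into a temp file.
                  var file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                            FileAccess.ReadWrite, FileShare.None,
                                            1 << 16, FileOptions.DeleteOnClose);
                  _store.Position = 0;
                  _store.CopyTo(file);
                  _store.Dispose();
                  _store = file;
              }
              _store.Write(buffer, offset, count);
          }

          public Stream OpenForReading()
          {
              _store.Position = 0;      // rewind before handing off for processing
              return _store;
          }

          public void Dispose()
          {
              _store.Dispose();
          }
      }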

    Read the article

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20462 files. Out of curiosity I wrote a tiny recursive function to count the items which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system. Here is the meat of the script; I don't think including DBI stuff would be relevant since I reran the script with nothing but a counter in process_img() which returned the same number. find(\&process_img, $path_from); sub process_img { eval { return if ($_ eq "." or $_ eq ".."); ## Omitted querying and composing new paths for brevity. make_path("$path_to\\img\\$dir_area\\$dir_address\\$type"); copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name"); }; if ($@) { print STDERR "eval barks: $@\n"; return } } And here is another method I used to count files: count_images($path_from); sub count_images { my $path = shift; opendir my $images, $path or die "died opening $path"; while (my $item = readdir $images) { next if $item eq '.' or $item eq '..'; $img_counter++ && next if -f "$path/$item"; count_images("$path/$item") if -d "$path/$item"; } closedir $images or die "died closing $path"; } print $img_counter;

    Read the article

  • Java: Making concurrent MySQL queries from multiple clients synchronised

    - by Misha Gale
    I work at a gaming cybercafe, and we've got a system here (smartlaunch) which keeps track of game licenses. I've written a program which interfaces with this system (actually, with it's backend MySQL database). The program is meant to be run on a client PC and (1) query the database to select an unused license from the pool available, then (2) mark this license as in use by the client PC. The problem is, I've got a concurrency bug. The program is meant to be launched simultaneously on multiple machines, and when this happens, some machines often try and acquire the same license. I think that this is because steps (1) and (2) are not synchronised, i.e. one program determines that license #5 is available and selects it, but before it can mark #5 as in use another copy of the program on another PC tries to grab that same license. I've tried to solve this problem by using transactions and table locking, but it doesn't seem to make any difference - Am I doing this right? Here follows the code in question: public LicenseKey Acquire() throws SmartLaunchException, SQLException { Connection conn = SmartLaunchDB.getConnection(); int PCID = SmartLaunchDB.getCurrentPCID(); conn.createStatement().execute("LOCK TABLE `licensekeys` WRITE"); String sql = "SELECT * FROM `licensekeys` WHERE `InUseByPC` = 0 AND LicenseSetupID = ? ORDER BY `ID` DESC LIMIT 1"; PreparedStatement statement = conn.prepareStatement(sql); statement.setInt(1, this.id); ResultSet results = statement.executeQuery(); if (results.next()) { int licenseID = results.getInt("ID"); sql = "UPDATE `licensekeys` SET `InUseByPC` = ? WHERE `ID` = ?"; statement = conn.prepareStatement(sql); statement.setInt(1, PCID); statement.setInt(2, licenseID); statement.executeUpdate(); statement.close(); conn.commit(); conn.createStatement().execute("UNLOCK TABLES"); return new LicenseKey(results.getInt("ID"), this, results.getString("LicenseKey"), results.getInt("LicenseKeyType")); } else { throw new SmartLaunchException("All licenses of type " + this.name + "are in use"); } }
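
    One gap worth ruling out: the connection is never taken out of auto-commit mode, so the SELECT and UPDATE do not actually form a single transaction, and the explicit LOCK TABLE gives a false sense of safety. A sketch of the same method reworked around one transaction and SELECT ... FOR UPDATE (class, table and column names are the question's own; this assumes the table is InnoDB, since MyISAM ignores row locks):

      public LicenseKey Acquire() throws SmartLaunchException, SQLException {
          Connection conn = SmartLaunchDB.getConnection();
          conn.setAutoCommit(false);   // without this, every statement commits on its own
          try {
              PreparedStatement select = conn.prepareStatement(
                  "SELECT ID, LicenseKey, LicenseKeyType FROM licensekeys " +
                  "WHERE InUseByPC = 0 AND LicenseSetupID = ? " +
                  "ORDER BY ID DESC LIMIT 1 FOR UPDATE");   // row stays locked until commit
              select.setInt(1, this.id);
              ResultSet results = select.executeQuery();
              if (!results.next()) {
                  conn.rollback();
                  throw new SmartLaunchException("All licenses of type " + this.name + " are in use");
              }
              int licenseID = results.getInt("ID");
              LicenseKey key = new LicenseKey(licenseID, this,
                      results.getString("LicenseKey"), results.getInt("LicenseKeyType"));

              PreparedStatement update = conn.prepareStatement(
                  "UPDATE licensekeys SET InUseByPC = ? WHERE ID = ?");
              update.setInt(1, SmartLaunchDB.getCurrentPCID());
              update.setInt(2, licenseID);
              update.executeUpdate();

              conn.commit();           // the claim becomes visible to other PCs here
              return key;
          } catch (SQLException e) {
              conn.rollback();
              throw e;
          }
      }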

    Read the article

  • WCF App using Peer Chat app as example does not work.

    - by splate
    I converted a VB .Net 3.5 app to use peer to peer WCF using the available Microsoft example of the Chat app. I made sure that I copied the app.config file for the sample(modified the names for my app), added the appropriate references. I followed all the tutorials and added the appropriate tags and structure in my app code. Everything runs without any errors, but the clients only get messages from themselves and not from the other clients. The sample chat application does run just fine with multiple clients. The only difference I could find is that the server on the sample is targeting the framework 2.0, but I assume that is wrong and it is building it in at least 3.0 or the System.ServiceModel reference would break. Is there something that has to be registered that the sample is doing behind the scenes or is the sample a special project type? I am confused. My next step is to copy all my classes and logic from my app to the sample app, but that is likely a lot of work. Here is my Client App.config: <client><endpoint name="thldmEndPoint" address="net.p2p://thldmMesh/thldmServer" binding="netPeerTcpBinding" bindingConfiguration="PeerTcpConfig" contract="THLDM_Client.IGameService"></endpoint></client> <bindings><netPeerTcpBinding> <binding name="PeerTcpConfig" port="0"> <security mode="None"></security> <resolver mode="Custom"> <custom address="net.tcp://localhost/thldmServer" binding="netTcpBinding" bindingConfiguration="TcpConfig"></custom> </resolver> </binding></netPeerTcpBinding> <netTcpBinding> <binding name="TcpConfig"> <security mode="None"></security> </binding> </netTcpBinding> </bindings> Here is my server App.config: <services> <service name="System.ServiceModel.PeerResolvers.CustomPeerResolverService"> <host> <baseAddresses> <add baseAddress="net.tcp://localhost/thldmServer"/> </baseAddresses> </host> <endpoint address="net.tcp://localhost/thldmServer" binding="netTcpBinding" bindingConfiguration="TcpConfig" contract="System.ServiceModel.PeerResolvers.IPeerResolverContract"> </endpoint> </service> </services> <bindings> <netTcpBinding> <binding name="TcpConfig"> <security mode="None"></security> </binding> </netTcpBinding> </bindings> Thanks ahead of time.

    Read the article

  • What's special about mouse capture and middle mouse button in WPF?

    - by Scott Bilas
    When I call CaptureMouse() in response to a MouseDown from the middle mouse button, it will capture and then release the mouse. Huh? I've tried using Preview events, setting Handled=true, doesn't make a difference. Am I not understanding mouse capture in WPF? Here's some minimal sample code that reproduces the problem. // TestListBox.cs using System.Diagnostics; using System.Windows.Controls; namespace Local { public class TestListBox : ListBox { public TestListBox() { MouseDown += (_, e) => { Debug.WriteLine("+MouseDown"); Debug.WriteLine(" Capture: " + CaptureMouse()); Debug.WriteLine("-MouseDown"); }; MouseUp += (_, e) => { Debug.WriteLine("+MouseUp"); ReleaseMouseCapture(); Debug.WriteLine("-MouseUp"); }; GotMouseCapture += (_, e) => Debug.WriteLine("GotMouseCapture"); LostMouseCapture += (_, e) => Debug.WriteLine("LostMouseCapture"); } } } Generating a default WPF app that has this for its main window will use the test class: <Window x:Class="Local.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:Local" Title="MainWindow" Height="350" Width="525"> <local:TestListBox> <ListBoxItem>1</ListBoxItem> <ListBoxItem>2</ListBoxItem> <ListBoxItem>3</ListBoxItem> <ListBoxItem>4</ListBoxItem> </local:TestListBox> </Window> Upon clicking the middle button down, I get this output: +MouseDown GotMouseCapture LostMouseCapture Capture: True -MouseDown So I'm calling CaptureMouse, which in turn grabs and then releases capture, yet returns true that capture was successfully acquired. What's going on here? Is it possible that this is something with my Logitech mouse driver doing something goofy, trying to initiate 'ultrascroll' or something?

    Read the article

  • EXC_BAD_ACCESS when simply casting a pointer in Obj-C

    - by AlexChilcott
    Hi all, Frequent visitor but first post here on StackOverflow, I'm hoping that you guys might be able to help me out with this. I'm fairly new to Obj-C and XCode, and I'm faced with this really... weird... problem. Googling hasn't turned up anything whatsoever. Basically, I get an EXC_BAD_ACCESS signal on a line that doesn't do any dereferencing or anything like that that I can see. Wondering if you guys have any idea where to look for this. I've found a work around, but no idea why this works... The line the broken version barfs out on is the line: LevelEntity *le = entity; where I get my bad access signal. Here goes: THIS VERSION WORKS NSArray *contacts = [self.body getContacts]; for (PhysicsContact *contact in contacts) { PhysicsBody *otherBody; if (contact.bodyA == self.body) { otherBody = contact.bodyB; } if (contact.bodyB == self.body) { otherBody = contact.bodyA; } id entity = [otherBody userData]; if (entity != nil) { LevelEntity *le = entity; CGPoint point = [contact contactPointOnBody:otherBody]; } } THIS VERSION DOESNT WORK NSArray *contacts = [self.body getContacts]; for (NSUInteger i = 0; i < [contacts count]; i++) { PhysicsContact *contact = [contacts objectAtIndex:i]; PhysicsBody *otherBody; if (contact.bodyA == self.body) { otherBody = contact.bodyB; } if (contact.bodyB == self.body) { otherBody = contact.bodyA; } id entity = [otherBody userData]; if (entity != nil) { LevelEntity *le = entity; CGPoint point = [contact contactPointOnBody:otherBody]; } } Here, the only difference between the two examples is the way I enumerate through my array. In the first version (which works) I use for (... in ...), where as in the second I use for (...; ...; ...). As far as I can see, these should be the same. This is seriously weirding me out. Anyone have any similar experience or idea whats going on here? Would be really great :) Cheers, Alex
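
    One difference between the two loops may be purely accidental: otherBody is declared but never initialised, so if a contact matches neither bodyA nor bodyB, [otherBody userData] reads whatever garbage happens to be on the stack - which can be harmless in one loop shape and an EXC_BAD_ACCESS in the other. This is only a guess at the cause, but the change is cheap to try:

      NSArray *contacts = [self.body getContacts];
      for (PhysicsContact *contact in contacts) {
          PhysicsBody *otherBody = nil;          // initialise: messaging nil is safe
          if (contact.bodyA == self.body) {
              otherBody = contact.bodyB;
          } else if (contact.bodyB == self.body) {
              otherBody = contact.bodyA;
          }

          id entity = [otherBody userData];      // a nil receiver just returns nil
          if (entity != nil) {
              LevelEntity *le = (LevelEntity *)entity;
              CGPoint point = [contact contactPointOnBody:otherBody];
              // ... use le and point ...
          }
      }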

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications I am often asked to store user's passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.) When I can I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password. When I can’t fight it (or can’t win) then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked that it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics. Additional Information for Bounty I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind. Thanks to Everyone This has been a fun questions with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery. As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!
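
    For contrast with what the bounty literally asks for, the pattern most answers steer back toward is: store only a salted hash, and give support staff a short-lived one-time login link or code they can email or read out, so assisted recovery never needs the original password. A rough PHP 7+ sketch (table and column names are invented):

      <?php
      function store_user(PDO $pdo, $email, $password) {
          $hash = password_hash($password, PASSWORD_DEFAULT);   // salted bcrypt
          $stmt = $pdo->prepare('INSERT INTO users (email, pass_hash) VALUES (?, ?)');
          $stmt->execute([$email, $hash]);
      }

      function check_login(PDO $pdo, $email, $password) {
          $stmt = $pdo->prepare('SELECT pass_hash FROM users WHERE email = ?');
          $stmt->execute([$email]);
          $row = $stmt->fetch(PDO::FETCH_ASSOC);
          return $row && password_verify($password, $row['pass_hash']);
      }

      // Assisted recovery: a single-use code a service tech can read over the phone.
      function start_assisted_login(PDO $pdo, $email) {
          $token = bin2hex(random_bytes(16));
          $stmt = $pdo->prepare('UPDATE users
                                    SET login_token = ?, token_expires = DATE_ADD(NOW(), INTERVAL 1 HOUR)
                                  WHERE email = ?');
          $stmt->execute([hash('sha256', $token), $email]);
          return $token;   // store only the hash; hand the raw code to the user
      }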

    Read the article

  • Rendering problem with UITableview

    - by Spider-Paddy
    I have a very strange problem with a UITableview within a navigation controller on the iPhone simulator. Of the cells displayed, only some are correctly rendered. They are all supposed to look the same but the majority are missing the accessory I've set, scrolling the view changes which cell has the accessory so I suspect it's some sort of cell caching happening, although the contents are correct for each cell. I also set an image as the background and that was also only displaying sporadically but I fixed that by changing cell.backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; (which also only rendered a random cell with the background) to cell.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; I now need to fix the problem with the accessory only showing on a random cell. I tried moving the code from cellForRowAtIndex to willDisplayCell but it made no difference. I put in a log command to confirm that it is running through each frame. Basically it's a table view (UITableViewCellStyleSubtitle) that gets its info from a server & is then updated by a delegate method calling reload. Code is: -(void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath { NSLog(@"%@", [NSString stringWithFormat:@"Setting colours for cell %i", indexPath.row]); // Set cell background // cell.backgroundView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; cell.backgroundColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:@"yellow-bar_short.png"]]; cell.textLabel.backgroundColor = [UIColor clearColor]; cell.detailTextLabel.backgroundColor = [UIColor clearColor]; // detailTableViewAccessory is a view containing an imageview in this view's nib // i.e. nib <- view <- imageview <- image cell.accessoryView = detailTableViewAccessory; } // Called by data fetching object when done -(void)listDataTransferComplete:(ArticleListParser *)articleListParserObject { NSLog(@"Data parsed, reloading detail table"); self.currentTotalResultPages = (((articleListParserObject.currentArticleCount - 1) / 10) + 1); self.detailTableDataSource = [articleListParserObject.returnedArray copy]; // make a local copy of the returned array // Render table again with returned array data (neither of these 2 fixed it) [self.detailTableView performSelectorOnMainThread:@selector(reloadData) withObject:nil waitUntilDone:NO]; // [self.detailTableView reloadData]; // Re-enable necessary buttons (including table cells) letUserSelectRow = TRUE; [btnByName setEnabled:TRUE]; [btnByPrice setEnabled:TRUE]; // Remove please wait message NSLog(@"Removing please wait view"); [pleaseWaitViewControllerObject.view removeFromSuperview]; } I only included code that I thought was relevant, can supply more if needed. I can't test it on an iPhone yet so I don't know if it's maybe just a simulator anomaly or a bug in my code. I've always gotten good feedback from questions, any ideas?
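
    The accessory symptom fits one detail of the code rather than the simulator: detailTableViewAccessory is a single view loaded from the nib, and a UIView can only have one superview, so assigning it as the accessoryView of every cell leaves it attached to whichever cell got it last - scrolling then moves it around. A sketch that gives each cell its own image view (the accessory image name here is a stand-in):

      - (void)tableView:(UITableView *)tableView
        willDisplayCell:(UITableViewCell *)cell
      forRowAtIndexPath:(NSIndexPath *)indexPath
      {
          cell.backgroundColor = [[UIColor alloc] initWithPatternImage:
                                     [UIImage imageNamed:@"yellow-bar_short.png"]];
          cell.textLabel.backgroundColor = [UIColor clearColor];
          cell.detailTextLabel.backgroundColor = [UIColor clearColor];

          // One accessory view per cell, instead of sharing the nib's single instance.
          UIImageView *accessory = [[UIImageView alloc] initWithImage:
                                       [UIImage imageNamed:@"detail-accessory.png"]];
          cell.accessoryView = accessory;
          [accessory release];   // omit this line under ARC
      }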

    Read the article

  • Simplifying const Overloading?

    - by templatetypedef
    Hello all- I've been teaching a C++ programming class for many years now and one of the trickiest things to explain to students is const overloading. I commonly use the example of a vector-like class and its operator[] function: template <typename T> class Vector { public: T& operator[] (size_t index); const T& operator[] (size_t index) const; }; I have little to no trouble explaining why it is that two versions of the operator[] function are needed, but in trying to explain how to unify the two implementations together I often find myself wasting a lot of time with language arcana. The problem is that the only good, reliable way that I know how to implement one of these functions in terms of the other is with the const_cast/static_cast trick: template <typename T> const T& Vector<T>::operator[] (size_t index) const { /* ... your implementation here ... */ } template <typename T> T& Vector<T>::operator[] (size_t index) { return const_cast<T&>(static_cast<const Vector&>(*this)[index]); } The problem with this setup is that it's extremely tricky to explain and not at all intuitively obvious. When you explain it as "cast to const, then call the const version, then strip off constness" it's a little easier to understand, but the actual syntax is frightening,. Explaining what const_cast is, why it's appropriate here, and why it's almost universally inappropriate elsewhere usually takes me five to ten minutes of lecture time, and making sense of this whole expression often requires more effort than the difference between const T* and T* const. I feel that students need to know about const-overloading and how to do it without needlessly duplicating the code in the two functions, but this trick seems a bit excessive in an introductory C++ programming course. My question is this - is there a simpler way to implement const-overloaded functions in terms of one another? Or is there a simpler way of explaining this existing trick to students? Thanks so much!
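
    One alternative that avoids const_cast entirely is to push the shared logic into a private helper template whose parameter deduces the constness of *this, so the two public overloads become one-liners. A sketch (needs C++14 for the deduced return type, and uses a std::vector member as stand-in storage so that constness propagates):

      #include <cstddef>
      #include <vector>

      template <typename T>
      class Vector {
      public:
          explicit Vector(std::size_t n) : data_(n) {}

          T&       operator[](std::size_t index)       { return element(*this, index); }
          const T& operator[](std::size_t index) const { return element(*this, index); }

      private:
          // Self deduces to Vector or const Vector, and the return type follows suit.
          template <typename Self>
          static auto& element(Self& self, std::size_t index) {
              // bounds checks and any other shared logic live here, exactly once
              return self.data_[index];
          }

          std::vector<T> data_;
      };

    The same shape carries over to other const/non-const pairs, and C++23's "deducing this" folds the helper back into a single operator[].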

    Read the article

  • A few problems with Delphi involving Mail Merge, SQL + Databases.

    - by Daniel
    My first problem is with mail merge. I have created a a Data File and a table, yet I am not able to fill my table with information from my Data File. The << just seems to be inserted after wherever the cursor is on the page, which is not where the table is. All that is entered into the actual table is a '59'. Therefore I think I either need to to change the code or be able to move the cursor. Here is the code I am currently using: wrdDoc.Tables.Add(wrdSelection.Range, ADOTable1.FieldCount, 3); wrdDoc.Tables.Item(1).Columns.Item(1).SetWidth(51,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(2).SetWidth(20,wdAdjustNone); wrdDoc.Tables.Item(1).Columns.Item(3).SetWidth(100,wdAdjustNone); // Set the shading on the first row to light gray wrdDoc.Tables.Item(1).Rows.Item(1).Cells .Shading.BackgroundPatternColorIndex := wdGray25; // BOLD the first row wrdDoc.Tables.Item(1).Rows.Item(1).Range.Bold := True; // Center the text in Cell (1,1) wrdDoc.Tables.Item(1).Cell(1,1).Range.Paragraphs.Alignment := wdAlignParagraphCenter; // Fill each row of the table with data wrdDoc.Tables.Item(1).Cell(1, 1).Range.InsertAfter('Time'); wrdDoc.Tables.Item(1).Cell(1, 2).Range.InsertAfter(''); wrdDoc.Tables.Item(1).Cell(1, 3).Range.InsertAfter('Teacher'); For Count := 1 to (ADOTable1.FieldCount - 1) do begin wrdDoc.Tables.Item(1).Cell((Count + 1), 1).Range.InsertAfter(wrdSelection.Range,'Time' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 2).Range.InsertAfter(wrdSelection.Range,'THonorific' + IntToStr(Count)); wrdDoc.Tables.Item(1).Cell((Count + 1), 3).Range.InsertAfter(wrdSelection.Range,'TSurname' + IntToStr(Count)); end; My second problem is that I do not know what the correct SQL syntax is for editing the name of a column in the database (I am using Delphi 7 and Microsoft Jet Engine if that makes a difference). The third problem is that when I add a new column to my database manually (which I need to do) I get a 'violation' error in one of my units when I activate an ADOTable. This only happens on one unit and it happens when I add a column with any name anywhere in the table. I know that is vague but I can't seem to narrow down the problem any further than that. If you could help with me with any of those it would be great. Thanks.

    Read the article

  • Is there some way to make variables like $a and $b in regard to strict?

    - by Axeman
    In light of Michael Carman's comment, I have decided to rewrite the question. Note that 11 comments appear before this edit, and give credence to Michael's observation that I did not write the question in a way that made it clear what I was asking. Question: What is the standard--or cleanest way--to fake the special status that $a and $b have in regard to strict by simply importing a module? First of all some setup. The following works: #!/bin/perl use strict; print "\$a=$a\n"; print "\$b=$b\n"; If I add one more line: print "\$c=$c\n"; I get an error at compile time, which means that none of my dazzling print code gets to run. If I comment out use strict; it runs fine. Outside of strictures, $a and $b are mainly special in that sort passes the two values to be compared with those names. my @reverse_order = sort { $b <=> $a } @unsorted; Thus the main functional difference about $a and $b--even though Perl "knows their names"--is that you'd better know this when you sort, or use some of the functions in List::Util. It's only when you use strict, that $a and $b become special variables in a whole new way. They are the only variables that strict will pass over without complaining that they are not declared. Now, I like strict, but it strikes me that if TIMTOWTDI (There is more than one way to do it) is Rule #1 in Perl, this is not very TIMTOWDI. It says that $a and $b are special and that's it. If you want to use variables you don't have to declare $a and $b are your guys. If you want to have three variables by adding $c, suddenly there's a whole other way to do it. Nevermind that in manipulating hashes $k and $v might make more sense: my %starts_upper_1_to_25 = skim { $k =~ m/^\p{IsUpper}/ && ( 1 <= $v && $v <= 25 ) } %my_hash ; Now, I use and I like strict. But I just want $k and $v to be visible to skim for the most compact syntax. And I'd like it to be visible simply by use Hash::Helper qw<skim>; I'm not asking this question to know how to black-magic it. My "answer" below, should let you know that I know enough Perl to be dangerous. I'm asking if there is a way to make strict accept other variables, or what is the cleanest solution. The answer could well be no. If that's the case, it simply does not seem very TIMTOWTDI.
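
    Strictly speaking only $a and $b get the free pass, but strict 'vars' also accepts any variable that was imported, so a module can get most of the way to "visible just by importing" by exporting $k and $v alongside skim. A sketch (Hash::Helper and skim are the question's own hypothetical names); note it assigns to the shared scalars rather than local-ising them, because local would replace the very scalars the import aliased into the caller:

      package Hash::Helper;
      use strict;
      use warnings;
      use Exporter 'import';
      our @EXPORT_OK = qw(skim $k $v);
      our ($k, $v);

      sub skim (&%) {
          my ($block, %hash) = @_;
          my %kept;
          while (my ($key, $val) = each %hash) {
              ($k, $v) = ($key, $val);   # plain assignment keeps the callers' aliases in sync
              $kept{$key} = $val if $block->();
          }
          return %kept;
      }

      1;

      # In the calling code:
      #   use strict;
      #   use Hash::Helper qw(skim $k $v);   # importing $k and $v is what satisfies strict
      #   my %starts_upper_1_to_25 =
      #       skim { $k =~ m/^\p{IsUpper}/ && 1 <= $v && $v <= 25 } %my_hash;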

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, Background: I have a set of java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04LTS server (a media temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server? (edited) Some info: (server) root@devel:/app/axir/target# uname -a Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux (local) wiktor@beastie:~$ uname -a Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux (edited) Comparing PS output: (ps -eo "ppid,pid,cmd,rss,sz,vsz") PPID PID CMD RSS SZ VSZ (local) 1588 1615 java -cp axir-distribution. 25484 234382 937528 1615 1631 java -cp /home/wiktor/Code/ 83472 163059 652236 1615 1657 java -cp /home/wiktor/Code/ 70624 89135 356540 1615 1658 java -cp /home/wiktor/Code/ 37652 77625 310500 1615 1669 java -cp /home/wiktor/Code/ 38096 77733 310932 1615 1675 java -cp /home/wiktor/Code/ 37420 61395 245580 1615 1684 java -cp /home/wiktor/Code/ 38000 77736 310944 1615 1703 java -cp /home/wiktor/Code/ 39180 78060 312240 1615 1712 java -cp /home/wiktor/Code/ 38488 93882 375528 1615 1719 java -cp /home/wiktor/Code/ 38312 77874 311496 1615 1726 java -cp /home/wiktor/Code/ 38656 77958 311832 1615 1727 java -cp /home/wiktor/Code/ 78016 89429 357716 (server) 22522 23560 java -cp axir-distribution. 24860 285196 1140784 23560 23585 java -cp /app/axir/target/a 100764 161629 646516 23560 23667 java -cp /app/axir/target/a 72408 92682 370728 23560 23670 java -cp /app/axir/target/a 39948 97671 390684 23560 23674 java -cp /app/axir/target/a 40140 81586 326344 23560 23739 java -cp /app/axir/target/a 39688 81542 326168 They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB) does it more closely reflect 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively. (if that's even possible) Thank you for your help, Wiktor

    Read the article

  • Calculating the Angle Between Two vectors Using Dot Product

    - by P. Avery
    I'm trying to calculate the angle between two vectors so that I can rotate a character in the direction of an object in 3D space. I have two vectors( character & object), loc_look, and modelPos respectively. For simplicity's sake I am only trying to rotate along the up axis...yaw. loc_look = D3DXVECTOR3 (0, 0, 1), modelPos = D3DXVECTOR3 (0, 0, 15); I have written this code which seems to be the correct calculations. My problem arises, seemingly, because the rotation I apply to the character's look vector(loc_look) exceeds the value of the object's position (modelPos). Here is my code: BOOL CEntity::TARGET() { if(graphics.m_model->m_enemy) { D3DXVECTOR3 modelPos = graphics.m_model->position; D3DXVec3Normalize(&modelPos, &modelPos); //D3DXVec3Normalize(&loc_look, &loc_look); float dot = D3DXVec3Dot(&loc_look, &modelPos); float yaw = acos(dot); BOOL neg = (loc_look.x > modelPos.x) ? true : false; switch ( neg ) { case false: Yaw(yaw); return true; case true: Yaw(-yaw); return true; } } else return false; } I rotate the character's orientation matrix with the following code: void CEntity::CalculateOrientationMatrix(D3DXMATRIX *orientationMatrix) { D3DXMatrixRotationAxis(&rotY, &loc_up, loc_yaw); D3DXVec3TransformCoord(&loc_look, &loc_look, &rotY); D3DXVec3TransformCoord(&loc_right, &loc_right, &rotY); D3DXMatrixRotationAxis(&rotX, &loc_right, loc_pitch); D3DXVec3TransformCoord(&loc_look, &loc_look, &rotX); D3DXVec3TransformCoord(&loc_up, &loc_up, &rotX); D3DXMatrixRotationAxis(&rotZ, &loc_look, loc_roll); D3DXVec3TransformCoord(&loc_up, &loc_up, &rotZ); D3DXVec3TransformCoord(&loc_right, &loc_right, &rotZ); *orientationMatrix *= rotX * rotY * rotZ; orientationMatrix->_41 = loc_position.x; orientationMatrix->_42 = loc_position.y; orientationMatrix->_43 = loc_position.z; //D3DXVec3Normalize(&loc_look, &loc_look); SetYawPitchRoll(0,0,0); // Reset Yaw, Pitch, & Roll Amounts } Also to note, the modelPos.x increases by 0.1 each iteration so the character will face the object as it moves along the x-axis... Now, when I run program, in the first iteration everything is fine(I haven't rotated the character yet). On the second iteration, the loc_look.x value is greater than the modelPos.x value(I rotated the character too much using the angle specified with the dot product calculations in the TARGET function). Therefore on the second iteration my code will rotate the character left to adjust for the difference in the vectors' x values... How can I tighten up the measurements so that I do not rotate my character's look vector by too great a value?
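
    Two things tend to bite in this setup: the angle should be measured toward the target direction (modelPos minus the character's position) rather than toward a normalised position vector, and acos needs its input clamped or floating-point error will occasionally hand it a value just past 1. Computing a signed yaw from the cross product also removes the separate left/right test. A sketch restricted to the XZ plane:

      #include <d3dx9math.h>
      #include <cmath>

      // Signed yaw (radians) that rotates 'look' onto the direction 'toTarget',
      // both treated as directions in the XZ plane.
      float SignedYawTo(const D3DXVECTOR3& look, const D3DXVECTOR3& toTarget)
      {
          D3DXVECTOR3 a(look.x, 0.0f, look.z);
          D3DXVECTOR3 b(toTarget.x, 0.0f, toTarget.z);   // e.g. modelPos - characterPos
          D3DXVec3Normalize(&a, &a);
          D3DXVec3Normalize(&b, &b);

          float dot = D3DXVec3Dot(&a, &b);
          if (dot >  1.0f) dot =  1.0f;                  // clamp against rounding error
          if (dot < -1.0f) dot = -1.0f;
          float angle = std::acos(dot);

          D3DXVECTOR3 cross;
          D3DXVec3Cross(&cross, &a, &b);
          return (cross.y >= 0.0f) ? angle : -angle;     // sign from the up component
      }

    Which sign corresponds to Yaw() versus Yaw(-...) depends on the handedness convention in use, so the final sign may need flipping.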

    Read the article

  • If exist in array (php)

    - by Glister
    There are two arrays, that are given for $link from foreach. One time $link must be a first arrow, and a third - the second. So: 1 array: Array ( [width] => 800 [height] => 1142 [hwstring_small] => height='96' width='67' [file] => 2010/04/white-1051279.jpg [sizes] => Array ( [thumbnail] => Array ( [file] => white-1051279-100x150.jpg [width] => 100 [height] => 150 ) [medium] => Array ( [file] => white-1051279-200x285.jpg [width] => 200 [height] => 285 ) ) [image_meta] => Array ( [aperture] => 0 [credit] => [camera] => [caption] => [created_timestamp] => 0 [copyright] => [focal_length] => 0 [iso] => 0 [shutter_speed] => 0 [title] => ) ) 2 array: Array ( [width] => 50 [height] => 50 [hwstring_small] => height='50' width='50' [file] => 2010/04/images1.jpeg [image_meta] => Array ( [aperture] => 0 [credit] => [camera] => [caption] => [created_timestamp] => 0 [copyright] => [focal_length] => 0 [iso] => 0 [shutter_speed] => 0 [title] => ) ) The difference - first one has [sizes]. Searching for a way to detect, is there [sizes] in given array. Tryed if (in_array("[sizes]", $link)) { } else { }, but it doesnt work. Thanks.
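
    in_array() searches the values of the array, and [sizes] is a key, so that test can never match. Checking for the key directly does what is wanted - a small sketch, keeping the question's $link variable:

      <?php
      if (array_key_exists('sizes', $link)) {      // or: isset($link['sizes'])
          // full image metadata: thumbnail/medium entries live in $link['sizes']
      } else {
          // no [sizes] entry, so treat it as the plain attachment case
      }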

    Read the article

  • MS Access 07 - Q re lookup column vs many-to-many; Q re checkboxes in many-to-many forms

    - by TBinLondon
    Hello, I'm creating a database with Access. This is just a test database, similar to my requirements, so I can get my skills up before creating one for work. I've created a database for a fictional school as this is a good playground and rich data (many students have many subjects have many teachers, etc). Question 1 What is the difference, if any, between using a Lookup column and a many-to-many associate table? Example: I have Tables 'Teacher' and 'Subject'. Many teachers have many subjects. I can, and have, created a table 'Teacher_Subject' and run queries with this. I have then created a lookup column in teachers table with data from subjects. The lookup column seems to take the place of the teacher_subject table. (though the data on relationships is obviously duplicated between lookup table and teacher_subject and may vary). Which one is the 'better' option? Is there a snag with using lookup tables? (I realize that this is a very 'general' question. Links to other resources and answers saying 'that depends...' are appreciated) Question 2 What attracts me to lookup tables is the following: When creating a form for entering subjects for teachers, with lookup I can simply create checkboxes and click a subject for a teacher 'on' or 'off'. Each click on/off creates/removes a record in the lookup column (which replaces teacher_subject). If I use a form from a query from teacher subject with teacher as main form and subject as subform I run into this problem: In the subform I can either select each subject that teacher has in a bombo box, i.e. click, scroll down, select, go to next row, click, scroll down, etc. (takes too long) OR I can create a list box listing all available subjects in each row but allowing me to select only one. (takes up too much space). Is it possible to have a click on/off list box for teacher_subject, creating/removing a record there with each click? Note - I know zero SQL or VB. If the correct answer is "you need to know SQL for this" then that's cool. I just need to know. Thanks!

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have got here two programs with me, both are doing exactly the same task. They are just setting an boolean array / vector to the value true. The program using vector takes 27 seconds to run whereas the program involving array with 5 times greater size takes less than 1 s. I would like to know the exact reason as to why there is such a major difference ? Are vectors really that inefficient ? Program using vectors #include <iostream> #include <vector> #include <ctime> using namespace std; int main(){ const int size = 2000; time_t start, end; time(&start); vector<bool> v(size); for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - 27 seconds Program using Array #include <iostream> #include <ctime> using namespace std; int main(){ const int size = 10000; // 5 times more size time_t start, end; time(&start); bool v[size]; for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - < 1 seconds Platform - Visual Studio 2008 OS - Windows Vista 32 bit SP 1 Processor Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz Memory (RAM) 1.00 GB Thanks Amare
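
    Part of the gap is that vector<bool> is the packed specialisation with proxy references (and a debug or checked-iterator build makes its operator[] far more expensive), while the raw-array loop can be optimised down to almost nothing; time() also only resolves whole seconds. A sketch of a fairer comparison that times vector<bool> against vector<char> with a sub-second clock (this needs a C++11 compiler, unlike the VS2008 original):

      #include <chrono>
      #include <iostream>
      #include <vector>

      template <typename Container>
      double fill_seconds(Container& v, int size)
      {
          auto start = std::chrono::steady_clock::now();
          for (int i = 0; i < size; i++)
              for (int j = 0; j < size; j++)
                  v[i] = true;                       // same work as the original loops
          std::chrono::duration<double> d = std::chrono::steady_clock::now() - start;
          return d.count();
      }

      int main()
      {
          const int size = 2000;
          std::vector<bool> packed(size);            // bit-packed, proxy references
          std::vector<char> plain(size);             // one byte per element
          std::cout << "vector<bool>: " << fill_seconds(packed, size) << " s\n";
          std::cout << "vector<char>: " << fill_seconds(plain, size) << " s\n";
      }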

    Read the article

  • Unable to get set intersection to work

    - by chavanak
    Sorry for the double post, I will update this question if I can't get things to work :) I am trying to compare two files. I will list the two file content: File 1 File 2 "d.complex.1" "d.complex.1" 1 4 5 5 48 47 65 21 d.complex.10 d.complex.10 46 6 21 46 109 121 192 192 I am trying to compare the contents of the two file but not in a trivial way. I will explain what I want with an example. If you observe the file content I have typed above, the d.complex.1 of file_1 has "5" similar to d.complex.1 in file_2; the same d.complex.1 in file_1 has nothing similar to d.complex.10 in file_2. What I am trying to do is just to print out those d.complex. which has nothing in similar with the other d.complex. Consider the d.complex. as a heading if you want. But all I am trying is compare the numbers below each d.complex. and if nothing matches, I want that particular d.complex. from both files to be printed. If even one number is present in both d.complex. of both files, I want it to be rejected. My Code: The method I chose to achieve this was to use sets and then do a difference. Code I wrote was: first_complex=open( "file1.txt", "r" ) first_complex_lines=first_complex.readlines() first_complex_lines=map( string.strip, first_complex_lines ) first_complex.close() second_complex=open( "file2.txt", "r" ) second_complex_lines=second_complex.readlines() second_complex_lines=map( string.strip, second_complex_lines ) second_complex.close() list_1=[] list_2=[] res_1=[] for line in first_complex_lines: if line.startswith( "d.complex" ): res_1.append( [] ) res_1[-1].append( line ) res_2=[] for line in second_complex_lines: if line.startswith( "d.complex" ): res_2.append( [] ) res_2[-1].append( line ) h=len( res_1 ) k=len( res_2 ) for i in res_1: for j in res_2: print i[0] print j[0] target_set=set ( i ) target_set_1=set( j ) for s in target_set: if s not in target_set_1: if s[0] != "d": print s The above code is giving an output like this (just an example): d.complex.1.dssp d.complex.1.dssp 1 48 65 d.complex.1.dssp d.complex.10.dssp 46 21 109 What I would like to have is: d.complex.1 d.complex.1 (name from file2) d.complex.1 d.complex.10 (name from file2) I am sorry for confusing you guys, but this is all that is required. I am so new to python so my concept above might be flawed. Also I have never used sets before :(. Can someone give me a hand here?
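
    A sketch of the set-based version, assuming each d.complex heading in both files is followed only by its numbers and the goal is to print every pair of headings whose number sets share nothing:

      def read_complexes(path):
          groups, current = {}, None
          with open(path) as handle:
              for token in handle.read().split():
                  token = token.strip('"')
                  if token.startswith("d.complex."):
                      current = token
                      groups[current] = set()
                  elif current is not None:
                      groups[current].add(int(token))
          return groups

      first_complexes = read_complexes("file1.txt")
      second_complexes = read_complexes("file2.txt")

      for name_1, numbers_1 in first_complexes.items():
          for name_2, numbers_2 in second_complexes.items():
              if not (numbers_1 & numbers_2):      # empty intersection: nothing in common
                  print("%s %s" % (name_1, name_2))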

    Read the article

  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about 1 minute to run, and has the following IO statistics: SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN TASK_REQUESTS AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL' GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms. If I instead introduce a temporary table then the query returns quickly and performs less logical reads: CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD])); INSERT INTO #MyTable(RGN, CD) SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL'; SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits FROM dbo.TRANS AS T JOIN dbo.TRANS AS T2 ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT JOIN #MyTable AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT (4447 row(s) affected) Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms. The interesting thing for me is that the TASK_REQUEST table is a small table (3 rows at present) and statistics are up to date on the table. Any idea why such different execution plans and execution times would be occuring? And ideally how to change things so that I don't need to use the temp table to get decent performance? The only real difference in the execution plans is that the temp table version introduces an index spool (eager spool) operation.
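
    Before committing to the temp table, two low-effort options sometimes push the optimizer into the plan it already picks for the rewritten query: asking for a recompile so it estimates with the actual TASK filter in view, and letting the tiny TASK_REQUESTS table drive the join order. This is only a sketch - query hints are a blunt instrument and worth checking against the actual plans:

      SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
      FROM TASK_REQUESTS AS T3
      JOIN dbo.TRANS AS T
          ON T.CD = T3.CD AND T.RGN = T3.RGN
      JOIN dbo.TRANS AS T2
          ON T2.RGN = T.RGN AND T2.CD = T.CD
         AND T2.FUND_CD = T.FUND_CD AND T2.TRDT <= T.TRDT
      WHERE T3.TASK = 'UPDATE_MEM_BAL'
      GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT
      OPTION (RECOMPILE, FORCE ORDER);   -- FORCE ORDER keeps the written join order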

    Read the article

  • Tokenizing a string with variable whitespace

    - by Ron Holcomb
    I've read through a few threads detailing how to tokenize strings, but I'm apparently too thick to adapt their suggestions and solutions into my program. What I'm attempting to do is tokenize each line from a large (5k+) line file into two strings. Here's a sample of the lines: 0 -0.11639404 9.0702948e-05 0.00012207031 0.0001814059 0.051849365 0.00027210884 0.062103271 0.00036281179 0.034423828 0.00045351474 0.035125732 The difference I'm finding between my lines and the other sample input from other threads is that I have a variable amount of whitespace between the parts that I want to tokenize. Anyways, here's my attempt at tokenizing: #include <iostream> #include <iomanip> #include <fstream> #include <string> using namespace std; int main(int argc, char *argv[]) { ifstream input; ofstream output; string temp2; string temp3; input.open(argv[1]); output.open(argv[2]); if (input.is_open()) { while (!input.eof()) { getline(input, temp2, ' '); while (!isspace(temp2[0])) getline(input, temp2, ' '); getline (input, temp3, '\n'); } input.close(); cout << temp2 << endl; cout << temp3 << endl; return 0; } I've clipped it some, since the troublesome bits are here. The issue that I'm having is that temp2 never seems to catch a value. Ideally, it should get populated with the first column of numbers, but it doesn't. Instead, it is blank, and temp3 is populated with the entire line. Unfortunately, in my course we haven't learned about vectors, so I'm not quite sure how to implement them in the other solutions for this I've seen, and I'd like to not just copy-paste code for assignments to get things work without actually understanding it. So, what's the extremely obvious/already been answered/simple solution I'm missing? I'd like to stick to standard libraries that g++ uses if at all possible.
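
    getline with a ' ' delimiter returns one token per single space, so a run of spaces comes back as a string of empty tokens - one likely reason temp2 keeps ending up blank while temp3 swallows the rest of the line. Reading a whole line first and letting operator>> split it copes with any amount of whitespace. A sketch using only the standard headers already in the program plus <sstream>:

      #include <fstream>
      #include <iostream>
      #include <sstream>
      #include <string>

      int main(int argc, char* argv[])
      {
          if (argc < 3) {
              std::cerr << "usage: " << argv[0] << " <infile> <outfile>\n";
              return 1;
          }
          std::ifstream input(argv[1]);
          std::ofstream output(argv[2]);

          std::string line;
          while (std::getline(input, line)) {      // one record per line
              std::istringstream fields(line);
              std::string first, second;
              if (fields >> first >> second)       // operator>> skips runs of whitespace
                  output << first << ' ' << second << '\n';
          }
          return 0;
      }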

    Read the article

  • Why this search can not generate correct result?

    - by user482742
    Hi, All: Below is to find same customer and if he is in list, the number add one. If he is not in the list, just add him in the list. I use Search function to do this, but failed and generated incorrect records. It can not find the customer or the right number of customers. But if I use For..loop to iterate the list, it does well and can find the customer and add new customer in that for..loop search procedure. (I did not paste for ..loop search procedrue here). Another problem is that there is no difference between setting list.sorted true and false. It seems Search function is not correct. This search function is from an example of delphi textbook. The below is with Delphi 7. Thank you. Procedure Form1.create; begin list:=Tstringlist.create; list.sorted:=true; // Search function will generate exactly Same and Incorrect //records no matter list.sorted is set true or false. list.duplicates:=dupignore; .. end; Procedure addcustomer; var .. begin while p1.MatchAgain do begin //p1 is regular expression customer:=p1.MatchedExpression; if (search(customer)=false) then begin list.Add(customer+'=1'); end; allcustomer:=allcustomer+1; .. end; Function Tform1.search(customer: string): boolean; var fre:string; num:integer; L:integer; R:integer; M: Integer; CompareResult: Integer; found: boolean; begin result:=false; found:=false; L := 0; R := List.Count - 1; while (L <= R) and ( not found ) do begin M := (L + R) div 2; CompareResult := Comparetext(list.Names[m]), customer); if (compareresult=0) then begin fre:=list.ValueFromIndex [m]; num:=strtoint(fre); num:=num+1; list.ValueFromIndex[m]:=inttostr(num); Found := True; Result := true; exit; end else if compareresult > 0 then r := m - 1 else l := m + 1; end; end;
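
    For what the routine is doing, TStringList already understands name=value pairs, so the hand-rolled binary search - and its sensitivity to exactly how Sorted orders the combined 'name=value' strings - can be dropped. A sketch of the lookup using IndexOfName instead:

      function TForm1.Search(const Customer: string): Boolean;
      var
        Idx, Num: Integer;
      begin
        Idx := List.IndexOfName(Customer);      // works whether or not Sorted is True
        Result := Idx >= 0;
        if Result then
        begin
          Num := StrToIntDef(List.ValueFromIndex[Idx], 0) + 1;
          List.ValueFromIndex[Idx] := IntToStr(Num);
        end;
      end;

      // In addcustomer, the counterpart stays as it is:
      //   if not Search(Customer) then
      //     List.Add(Customer + '=1');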

    Read the article

  • Visual Studio not recognizing "BuildStep"

    - by AmbiguousX
    I'm trying to add an automatic post-build trigger to run NDepend after an automated team build in TFS 2010. NDepend's website provided code for integrating this capability, and so I have pasted their code into my .csproj file where they said for it to go, but I receive errors on the build. The errors refer to two of the three "BuildStep" tags I have in the code snippet. The following two snippets are giving me errors: <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="Running NDepend analysis"> <Output TaskParameter="Id" PropertyName="StepId" /> </BuildStep> and <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(StepId)" Status="Failed" /> However, this code snippet is NOT throwing up any problems: <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(StepId)" Status="Succeeded" /> I just don't understand why one works fine and a nearly identically laid out BuildStep tag does not. Is there something simple that I'm just overlooking? EDIT: Here is how it looks all together, if this makes a difference: <Target Name="NDepend" > <PropertyGroup> <NDPath>c:\tools\NDepend\NDepend.console.exe</NDPath> <NDProject>$(SolutionDir)MyProject.ndproj</NDProject> <NDOut>$(TargetDir)NDepend</NDOut> <NDIn>$(TargetDir)</NDIn> </PropertyGroup> <Exec Command='"$(NDPath)" "$(NDProject)" /OutDir "$(NDOut)" /InDirs "$(NDIn)"'/> </Target> <Target Name="AfterBuild"> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Message="Running NDepend analysis"> <Output TaskParameter="Id" PropertyName="StepId" /> </BuildStep> <PropertyGroup> <NDPath>c:\tools\NDepend\NDepend.console.exe</NDPath> <NDProject>$(SolutionRoot)\Main\src\MyProject.ndproj</NDProject> <NDOut>$(BinariesRoot)\NDepend</NDOut> <NDIn>$(BinariesRoot)\Release</NDIn> </PropertyGroup> <Exec Command='$(NDPath) "$(NDProject)" /OutDir "$(NDOut)" /InDirs "$(NDIn)"'/> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(StepId)" Status="Succeeded" /> <OnError ExecuteTargets="MarkBuildStepAsFailed" /> </Target> <Target Name="MarkBuildStepAsFailed"> <BuildStep TeamFoundationServerUrl="$(TeamFoundationServerUrl)" BuildUri="$(BuildUri)" Id="$(StepId)" Status="Failed" /> </Target> EDIT: Added a bounty because I really need to get this going for my team. Thank you in advance!

    Read the article

  • sifr menu list problems, height of object calculated wrong

    - by armyofda12mnkeys
    I am using Drupal and have sifr set on list items, and also a, a:hover are set via Drupal so that the links hover. I looked at the sifr3-rules.js file drupal's render module creates and it looks good. and in fact my other sifr items look fine... But the list item ones are goofing for some reason... There is extra space below the list, so if i have a list item, and inside of that, I have a unordered list with more list items, the Flash Object made in the 1st li (which will cover the rest of the sublist items too), is too bid so you see space under the children until the next parent li comes up. (so looks like extra bottom padding on that portion of the list in IE8... in FF almost similar but each subitem has space at bottom... with javascript turned off, you see the list items look fine by themselves).... Also if the parent list item is shorter than the sub list items text, the width for the flash object is set only as long as the first list item, and therefore cuts off the rest the sublist item's text. Any idea how to resolve any of these? Only unusual thing i see i am doing is setting forceSingleLine and preventWrap (which dont make a difference if taken off). ****Edit, I may just try to figure out how to get Drupal's menu-block module to output a around my hyperlinks in the list items... then i can target the div's with my rule (and the a,a:hover rules will apply), so each menu item gets its own sifr object instead of sifr3 trying to figure out how to do the lists and sublists. Very useful to me is anyone knows a way to target a hyperlink (<a> tag) and also allow a:hover rules. I know how to do it with pretend another tag that includes hyperlinks, like if i had <h2><a>sometitle</a></h2>, i could have a rule for h2, but then use the a, a:hover rules in the sifr3-rules.js file to target that. so i would need a way to target the hyperlink in the list, but also apply a :hover to it (not sure if this can be done since its not underneith for example the h2 tag).

    Read the article
