Search Results

Search found 16134 results on 646 pages for 'reference guide'.


  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          RegEx: TRegEx;
          Match: TMatch;
        begin
          RegEx := TRegex.Create('\w+');
          Match := RegEx.Match('One two three four');
          while Match.Success do
          begin
            Memo1.Lines.Add(Match.Value);
            Match := Match.NextMatch;
          end;
        end;

    Or you could save yourself two lines of code by using the static TRegEx.Match call:

        procedure TForm1.Button2Click(Sender: TObject);
        var
          Match: TMatch;
        begin
          Match := TRegEx.Match('One two three four', '\w+');
          while Match.Success do
          begin
            Memo1.Lines.Add(Match.Value);
            Match := Match.NextMatch;
          end;
        end;

    Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn't work. Depending on your exact code, you may get fewer matches or blank matches, or your application may crash with an access violation.

    The RegularExpressions unit defines TRegEx and TMatch as records, so you don't have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you'll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management.

    The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the interface that is responsible for destroying TPerlRegEx. This is not a problem in our first code sample: TRegEx stays in scope until we're done with TMatch, and the interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns, so the reference count on the interface reaches zero and TPerlRegEx is destroyed when TRegEx.Match returns. When we then call NextMatch, the TMatch record tries to use a TPerlRegEx instance that has already been destroyed.

    To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit:

    • Declare FNotifier: IInterface; in the private section.
    • Add the parameter ANotifier: IInterface; to the Create constructor.
    • Assign FNotifier := ANotifier; in the constructor's implementation.

    You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier argument, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won't be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.
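    To visualize the fix, here is a minimal sketch (mine, not from the original post) of what the patched TMatch record looks like after those three steps; the record's real fields, methods, and remaining constructor parameters are elided:

        type
          TMatch = record
          private
            FNotifier: IInterface; // added: holding this keeps the TPerlRegEx owner alive
            // ... existing fields elided ...
          public
            constructor Create(ANotifier: IInterface); // existing parameters elided
          end;

        constructor TMatch.Create(ANotifier: IInterface);
        begin
          // The reference count on the notifier now covers this TMatch's lifetime,
          // so TPerlRegEx survives even after the static TRegEx.Match call returns.
          FNotifier := ANotifier;
          // ... existing initialization elided ...
        end;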

    Read the article

  • perl comparing 2 data file as array 2D for finding match one to one [migrated]

    - by roman serpa
    I'm writing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) to analyse a data table (j1j2_1.csv, 1000 rows x 19 columns), to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). When I run it, however, I get the following messages:

        Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34.
        Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56.
        Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56.
        Use of uninitialized value in subtraction (-) at rowInData.pl line 56.
        Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56.
        nothing

    This is my code:

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $line_match;
        my $countTrue;

        open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n";
        my @tableCombi;
        while(<FILE1>) {
            my @row = split(' ', $_);
            push(@tableCombi, \@row);
        }
        close FILE1 || die $!;

        open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n";
        my @tableData;
        while(<FILE2>) {
            my @row2 = split(/\s*,\s*/, $_);
            push(@tableData, \@row2);
        }
        close FILE2 || die $!;

        # Transforms a combiData.txt variable (a position) into the real value
        # that has to be found in the data table.
        sub trueVal($) {
            my ($val) = $_[0];
            if    ($val == 7)  { return ('nonsynonymous_SNV'); }
            elsif ($val == 14) { return '1'; }
            elsif ($val == 15) { return '1'; }
            elsif ($val == 16) { return '1'; }
            elsif ($val == 17) { return '1'; }
            elsif ($val == 18) { return '1'; }
            elsif ($val == 19) { return '1'; }
            else { print 'nothing'; }
        }

        # IntToStr (I'm not sure if it is necessary) turns values into strings,
        # so <eq> can be used in the third loop when the array of combinations
        # is compared with the data array.
        sub IntToStr {
            return "$_[0]";
        }

        for my $combi (@tableCombi) {
            $line_match = 0;
            for my $sheetData (@tableData) {
                $countTrue = 0;
                for my $cell (@$combi) {
                    #my $temp = \$tableCombi[$combi][$cell];
                    #if ( trueVal($tableCombi[$combi][$cell]) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ) {
                    #if ( IntToStr(trueVal($$temp)) eq IntToStr( $tableData[$sheetData][$$temp-1] ) ) {
                    if ( IntToStr(trueVal($tableCombi[$combi][$cell])) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ]) ) {
                        $countTrue++;
                    }
                    if ($countTrue == @$combi) {
                        $line_match++;
                        #if ($line_match < 50) {
                        print $tableData[$sheetData][4]." ";
                        #}
                    }
                }
            }
            print $line_match." \n";
        }
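    A likely cause of those warnings (a hedged reading, not from the post): in `for my $combi (@tableCombi)`, $combi is already an array reference, not an index, so $tableCombi[$combi][$cell] uses a reference as an array index - exactly what the messages say. A sketch of the loops with the references dereferenced directly, and the all-cells-matched test moved outside the innermost loop (which appears to be the intent):

        for my $combi (@tableCombi) {
            $line_match = 0;
            for my $sheetData (@tableData) {
                $countTrue = 0;
                for my $cell (@$combi) {
                    # $cell is the position value itself; look it up directly
                    if ( IntToStr(trueVal($cell)) eq IntToStr($sheetData->[$cell - 1]) ) {
                        $countTrue++;
                    }
                }
                # compare once per data row, after all cells were checked
                if ($countTrue == @$combi) {
                    $line_match++;
                    print $sheetData->[4] . " ";
                }
            }
            print $line_match . " \n";
        }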

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is the appropriate place to ask a design question. The site gives me a hint: "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway...

    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are:

    • After a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (which are identified by IDs). I call them "target IDs".
    • The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey.
    • Some surveys may have thousands of target/reference answerers.

    So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would produce a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they were answering evenly over 24 hours, that gives us ~1040 writes/second. Obviously it hits the concurrent-write limit of the Datastore.

    I decided I'll collect data for one hour and then save it - that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map with the shard for the current hour and the previous hour (which doesn't stay here for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps Map<Integer, HitsStatsDO> in memcached, where the map key is the target ID. Also, in my backend's shutdown-hook method I dump the stats for the unfinished hour to the Datastore. There is only one major issue here - I have only ONE backend instance :)

    This raises the following questions, on which I'd like to hear your opinion:

    • Can I do this without using a backend instance?
    • What if one instance is not enough? How can I split data between multiple dynamic backend instances? It's hard because I don't know how many I have, since Google creates new ones as load increases.
    • I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive.
    • What do I do with data from clients while a backend instance is dead/restarting?

    Thank you very much in advance for your thoughts.
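    A common App Engine answer to this kind of write-rate problem - offered here as a hedged sketch, not something from the post - is sharding: split each hourly counter across N entities so concurrent updates land on different entity groups, then sum the shards on read. Below, an in-memory map stands in for the Datastore and all names are invented:

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ThreadLocalRandom;

        public class ShardedHourlyStats {
            private static final int NUM_SHARDS = 20; // tune to the expected write rate

            // Stand-in for the Datastore: key -> [sumPercent, hitCount]
            private final Map<String, double[]> store = new ConcurrentHashMap<>();

            // Each write picks a random shard, so contention per key drops ~NUM_SHARDS-fold
            public void recordHit(long hourBucket, int targetId, double percent) {
                int shard = ThreadLocalRandom.current().nextInt(NUM_SHARDS);
                String key = hourBucket + "/" + targetId + "/" + shard;
                store.merge(key, new double[]{percent, 1},
                        (a, b) -> new double[]{a[0] + b[0], a[1] + b[1]});
            }

            // Reads sum all shards for the hour/target pair
            public double avgPercent(long hourBucket, int targetId) {
                double sum = 0, count = 0;
                for (int shard = 0; shard < NUM_SHARDS; shard++) {
                    double[] v = store.get(hourBucket + "/" + targetId + "/" + shard);
                    if (v != null) { sum += v[0]; count += v[1]; }
                }
                return count == 0 ? 0 : sum / count;
            }
        }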

    Read the article

  • Controlling server configurations with IPS

    - by barts
    I recently received a customer question regarding how they could best control which packages and which versions were used on their production Solaris 11 servers. They had considered pointing each server at its own software repository - a common initial approach. A simpler method leverages one of the dependency mechanisms we introduced with Solaris 11, but it is not immediately obvious to most people.

    Typically, most internal IT departments qualify particular versions for production use. What this customer wanted to do was ensure that their operations staff only installed internally qualified versions of Solaris on their servers. The easiest way of doing this is to leverage the 'incorporate' type of dependency in a small package defined for each server type. From the reference "Packaging and Delivering Software With the Image Packaging System in Oracle® Solaris 11.1":

        The incorporate dependency specifies that if the given package is installed, it must be at the
        given version, to the given version accuracy. For example, if the dependent FMRI has a version
        of 1.4.3, then no version less than 1.4.3 or greater than or equal to 1.4.4 satisfies the
        dependency. Version 1.4.3.7 does satisfy this example dependency.

        The common way to use incorporate dependencies is to put many of them in the same package to
        define a surface in the package version space that is compatible. Packages that contain such
        sets of incorporate dependencies are often called incorporations. Incorporations are typically
        used to define sets of software packages that are built together and are not separately
        versioned. The incorporate dependency is heavily used in Oracle Solaris to ensure that
        compatible versions of software are installed together. An example incorporate dependency is:

        depend type=incorporate fmri=pkg:/driver/network/ethernet/[email protected],5.11-0.175.0.0.0.2.1

    So, to make sure only qualified versions are installed on a server, create a package that will be installed on the machines to be controlled. This package will contain an incorporate dependency on the "entire" package, which controls the various components used to build Solaris. Every time a new version of Solaris has been qualified for production use, create a new version of this package specifying the new version of "entire" that was qualified. Once this new control package is available in the repositories configured on the production server, the pkg update command will update that system to the specified version. Unless a new version of the control package is made available, pkg update will report that no updates are available, since no version of the control package can be installed that satisfies the incorporate constraint.

    Note that if desired, the same package can be used to specify which packages must be present on the system by adding either "require" or "group" dependencies; the latter permits removal of some of the packages, the former does not. More details on this can be found in either the section 5 pkg man page or the previously mentioned reference document.

    This technique of using package dependencies to constrain system configuration leverages the SAT solver which is at the heart of IPS, and is basic to how we package Solaris itself.
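    To make that concrete, here is a hedged sketch of what such a control-package manifest could look like. The publisher, package name, and version strings are invented for illustration; the qualified version of "entire" would be whatever your site signed off on:

        set name=pkg.fmri value=pkg://mysite/site/[email protected],5.11-1.0
        set name=pkg.summary value="Pins web servers to the internally qualified Solaris build"
        # Pin the consolidation: pkg update can only move the system to a version
        # of 'entire' that this line allows.
        depend type=incorporate fmri=pkg:/[email protected],5.11-0.175.0.0.0.2.0
        # Optionally require packages that must always be present on this server type.
        depend type=require fmri=pkg:/web/server/apache-22

    Publishing a new version of site/control-webserver with a newer 'entire' FMRI is then the only way a production box can move forward.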

    Read the article

  • Ti Launchpad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference for anyone that is interested... it is now Feb 2011 and I have not been posting here enough to remember this blog.

    Back in Nov 2010 I ordered the TI Launchpad MSP430. It is a little target-board kit replete with a mini USB cable, two very inexpensive programmable MCUs, a couple of pin headers with a couple of LEDs on board, an SPI connector, some on-board jumpers and two programmable micro switches... all for less than $5.00... INCLUDING SHIPPING!! Not bad when the Arduinos are running around $20.00 for the target board, ATmega328 and cable off of eBay... I won't even mention the Microchip PIC right now. Naw, for $5.00 the TI Launchpad kit is about the cheapest fun around... if-uns you're a geek, that is...

    Well, the Launchpad was backordered for almost two months; it came like Xmas eve, in fact... I had almost forgotten it!! And really, it was way late and not my idea of a Xmas present for myself. That would have been the Web Expressions 4 I bought a few weeks back. With all the holidays, I did not even look at it till last week; in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege... some ohs and ahhs but mostly duhs... I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the CD. No CD. I had already downloaded IAR, another programming IDE for these little micro bugs.

    In my spare time I toyed with IAR and the Launchpad board, but after about two days of playing delete-the-driver with Windows I decided to just download CCS 4, the code-limited version, and give that a shot... CCS 4 is a good rewrite from the earlier versions; it is based on Eclipse as an IDE and includes the drivers for the MSP430 target board I received in the kit. Once installed, I quickly configured the debugger for the target chip, which was already plugged into the DIP socket at the factory - msp430G2131, from the drop-down list - and clicked OK... I was in!!

    CCS 4 is full of bells and whistles compared to the IAR, which I would have preferred for its simplicity. But Code Composer Studio really does have it all!! The code-limited version is free and, of all things, gives you a JavaScript editor box. The whole layout in debugger mode reminds me of any modern programmer IDE... I mean, sure, give me TeX anytime, but you simply must admire all the boxes and options included in the GUI. It was a simple matter to check the assembly code in the flash and RAM memory that came preloaded on the Launchpad kit.

    Assembly. I am right now looking for my old assembly textbooks... sure, I remember how to use mov and add etc., but a couple of the commands are a little more than vague anymore. Still, these little MCUs are about 50 cents each and might just work in a couple of projects I have lined up for the near future. I may document the code here. Luckily, I plan to write the code in C++ for the main project, but if it has to be assembly, no prob. For reference, the program that came already on the 2131 in the kit was a temperature indicator that alternately flashed red and green LEDs and changed the intensity of either depending on whether the temp was rising or falling... neat. Neat enough that it might be worthwhile banging out a little GUI in Windows 7 to test the new user device system calls, maybe put a temp-gauge widget up on the desktop... just to keep from getting bored.

    If you see some assembly code on this blog, you know I was doing something with one of the many MCUs out there... that's all for now, more to follow... a bit later, of course.

    Read the article

  • Is there an easy way to type in common math symbols?

    - by srcspider
    Disclaimer: I'm sure someone is going to moan about ease of use; for the purpose of this question, consider readability to be the only factor that matters.

    So I found this site that converts easting/northing grid references; it's not really important what that even means, but here's how the piece of JavaScript looks:

        /**
         * Convert Ordnance Survey grid reference easting/northing coordinate to (OSGB36) latitude/longitude
         *
         * @param {OsGridRef} gridref - easting/northing to be converted to latitude/longitude
         * @returns {LatLonE} latitude/longitude (in OSGB36) of supplied grid reference
         */
        OsGridRef.osGridToLatLong = function(gridref) {
            var E = gridref.easting;
            var N = gridref.northing;

            var a = 6377563.396, b = 6356256.909;          // Airy 1830 major & minor semi-axes
            var F0 = 0.9996012717;                         // NatGrid scale factor on central meridian
            var φ0 = 49*Math.PI/180, λ0 = -2*Math.PI/180;  // NatGrid true origin
            var N0 = -100000, E0 = 400000;                 // northing & easting of true origin, metres
            var e2 = 1 - (b*b)/(a*a);                      // eccentricity squared
            var n = (a-b)/(a+b), n2 = n*n, n3 = n*n*n;     // n, n², n³

            var φ=φ0, M=0;
            do {
                φ = (N-N0-M)/(a*F0) + φ;
                var Ma = (1 + n + (5/4)*n2 + (5/4)*n3) * (φ-φ0);
                var Mb = (3*n + 3*n*n + (21/8)*n3) * Math.sin(φ-φ0) * Math.cos(φ+φ0);
                var Mc = ((15/8)*n2 + (15/8)*n3) * Math.sin(2*(φ-φ0)) * Math.cos(2*(φ+φ0));
                var Md = (35/24)*n3 * Math.sin(3*(φ-φ0)) * Math.cos(3*(φ+φ0));
                M = b * F0 * (Ma - Mb + Mc - Md); // meridional arc
            } while (N-N0-M >= 0.00001); // ie until < 0.01mm

            var cosφ = Math.cos(φ), sinφ = Math.sin(φ);
            var ν = a*F0/Math.sqrt(1-e2*sinφ*sinφ);            // nu = transverse radius of curvature
            var ρ = a*F0*(1-e2)/Math.pow(1-e2*sinφ*sinφ, 1.5); // rho = meridional radius of curvature
            var η2 = ν/ρ-1;                                    // eta squared

            var tanφ = Math.tan(φ);
            var tan2φ = tanφ*tanφ, tan4φ = tan2φ*tan2φ, tan6φ = tan4φ*tan2φ;
            var secφ = 1/cosφ;
            var ν3 = ν*ν*ν, ν5 = ν3*ν*ν, ν7 = ν5*ν*ν;
            var VII = tanφ/(2*ρ*ν);
            var VIII = tanφ/(24*ρ*ν3)*(5+3*tan2φ+η2-9*tan2φ*η2);
            var IX = tanφ/(720*ρ*ν5)*(61+90*tan2φ+45*tan4φ);
            var X = secφ/ν;
            var XI = secφ/(6*ν3)*(ν/ρ+2*tan2φ);
            var XII = secφ/(120*ν5)*(5+28*tan2φ+24*tan4φ);
            var XIIA = secφ/(5040*ν7)*(61+662*tan2φ+1320*tan4φ+720*tan6φ);

            var dE = (E-E0), dE2 = dE*dE, dE3 = dE2*dE, dE4 = dE2*dE2, dE5 = dE3*dE2, dE6 = dE4*dE2, dE7 = dE5*dE2;
            φ = φ - VII*dE2 + VIII*dE4 - IX*dE6;
            var λ = λ0 + X*dE - XI*dE3 + XII*dE5 - XIIA*dE7;

            return new LatLonE(φ.toDegrees(), λ.toDegrees(), GeoParams.datum.OSGB36);
        }

    I found that to be a really nice way of writing an algorithm, at least as far as readability is concerned. Is there any way to easily write the special symbols? And by easily write I mean NOT copy/pasting them.

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications

    In EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and Java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. Importantly, it doesn't matter whether you place your code at the same or at another place - it is a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: in case the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard - this causes a high risk!

    Here are some guidelines for reducing the risk:

    • Invest a bit longer when searching for objects to select data from. Choose a view rather than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice.
    • Prefer public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    • Use personalization and substitution rather than modification. Spend more time checking whether the requirement can be covered with such techniques.
    • Build a project code library, so that colleagues avoid creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    • Use the technique of "flagged files". Flagged files are a way to mark a standard deployment file. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    • Implement a code review process. This can be done by utilizing team-internal or external persons. If you implement such a team-internal process, your team members will come up with suggestions for improving the code quality by themselves.
    • Review heavy customizations regularly to identify options for reducing complexity; say, every six months. You need not spend days on such a review, but a high-level cross check of whether the customization can be reduced is suggested.
    • De-install customizations which are no longer required. Define a process for this. Add a section to the technical documentation describing how to uninstall and what the possible implications are.
    • Maintain a cross reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list!

    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team.

    Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advice for the testing activities.

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I've been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, check out my "Busy Developer's Guide to the Kinect SDK Beta". Let's get started:

    KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates:

    • KinectDepth
    • KinectSkeleton
    • KinectVideo

    Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for MainWindow.xaml in the "Video" template:

        <Window x:Class="KinectVideoApplication1.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="480" Width="640">
            <Grid>
                <Image Name="videoImage"/>
            </Grid>
        </Window>

    and MainWindow.xaml.cs:

        using System;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;
        using Microsoft.Research.Kinect.Nui;

        namespace KinectVideoApplication1
        {
            public partial class MainWindow : Window
            {
                //Instantiate the Kinect runtime. Required to initialize the device.
                //IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected.
                Runtime runtime = new Runtime();

                public MainWindow()
                {
                    InitializeComponent();
                    //Runtime initialization is handled when the window is opened. When the window
                    //is closed, the runtime MUST be uninitialized.
                    this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
                    this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
                    //Handle the content obtained from the video camera, once received.
                    runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
                }

                void MainWindow_Unloaded(object sender, RoutedEventArgs e)
                {
                    runtime.Uninitialize();
                }

                void MainWindow_Loaded(object sender, RoutedEventArgs e)
                {
                    //Since only a color video stream is needed, RuntimeOptions.UseColor is used.
                    runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);
                    //You can adjust the resolution here.
                    runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
                }

                void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
                {
                    PlanarImage image = e.ImageFrame.Image;
                    BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
                                                              PixelFormats.Bgr32, null, image.Bits,
                                                              image.Width * image.BytesPerPixel);
                    videoImage.Source = source;
                }
            }
        }

    You will find this template pack very handy, especially if you are new to Kinect development.

    Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several methods that can help you, for example, save an image (a sketch follows below). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

    Kinductor - This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
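    As a rough illustration of those Coding4Fun helpers (written from memory; treat the method names ToBitmapSource and Save as assumptions rather than a verified API listing):

        void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
        {
            // Both calls are Coding4Fun extension methods (assumed names):
            // ToBitmapSource() converts the Kinect frame to a WPF BitmapSource,
            // Save() writes it out to disk in the given format.
            e.ImageFrame.ToBitmapSource().Save("snapshot.jpg", ImageFormat.Jpeg);
        }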
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed

    Read the article

  • Why Are We Here?

    - by Jonathan Mills
    Back in the early 2000s, Toyota had a vision of building the number one best-selling minivan in North America. Their current minivan, the Sienna, was small, underpowered, and badly needed help. Yuji Yokoya was given the job of re-engineering the Sienna. There was just one problem: Yuji lived in Japan. He did not know the people or places that he would be engineering for. Believe it or not, Japan is nothing like North America. So, what does a chief engineer do in a situation like that? He packed up his team and flew halfway around the world. He made a commitment to drive through every state in the US, every province in Canada, and Mexico. He met the people and drove the roads that the Sienna would be driving. And guess what - what he learned on that trip revolutionized the Sienna. The innovations he made sent the Sienna to number one. Why? Because he knew who he was building his product for. He knew why he was there.

    Let me ask you this: do you know why you are building what you are building? As a member of a product team, can you tell me how your product will be used in the real world? As you are writing code, building test plans, writing stories, or doing any of the other project tasks, can you picture the face of a person who will be using what you are building? All too often, the answer to those questions is no. Why is it important? Because, every day, project team members make assumptions. Over a given project, it is safe to say project team members will make thousands of assumptions about what they are doing. And all too often, those assumptions are not quite right. It's not that they are not good at their jobs; it's just that they don't really know why they are there.

    So, what to do? First and foremost, stop doing what you are doing. Yes, really. Schedule some time to go visit the people who will be using your product. Don't invite them to you; go to them. Watch them work. Interact with them. Ask them questions. Maybe even try it out yourself. This serves two purposes. One, it shows them that you care about them. They will be far more engaged in your project if they feel like you care. And nothing says you care more than spending some time. Second, it gives you the proper frame of reference for your work. It gives you something tangible to go back to as you are building your product. As you make the thousands of assumptions that you will make over the life of your project, it gives you something to see in your mind that makes it real to you.

    Ultimately, setting a proper frame of reference is critical to the overall success of a project. The funny thing is, it really does not even take that long. In most cases, a 2-3 hour session will give you most of what you need to get the right insight. For the project, it will be the best 2-3 hours you could spend.

    Read the article

  • Executing Stored Procedures in Visual Studio LightSwitch.

    - by dataintegration
    A LightSwitch project is a very easy way to visualize and manipulate information directly from one of our ADO.NET providers. But when it comes to executing stored procedures, it can be a bit more complicated. In this article, we will demonstrate how to execute a stored procedure in LightSwitch. For the purposes of this article, we will be using the RSSBus Email Data Provider, but the same process will work with any of our ADO.NET providers.

    Creating the RIA Service

    Step 1: Open Visual Studio and create a new WCF RIA Service Class Project.

    Step 2: Add the reference to the RSSBus Email Data Provider dll in the (ProjectName).Web project.

    Step 3: Add a new Domain Service Class to the (ProjectName).Web project.

    Step 4: In the new Domain Service Class, create a new class with the attributes needed for the stored procedure's parameters. In this demo, the stored procedure we are executing is called SendMessage. The parameters we will need are as follows:

        public class NewMessage {
            [Key]
            public int ID { get; set; }
            public string FromEmail { get; set; }
            public string ToEmail { get; set; }
            public string Subject { get; set; }
            public string Text { get; set; }
        }

    Note: the created class must have an ID, which will serve as the key value.

    Step 5: Create a new method that will be executed when the insert event fires. Inside this method you can use standard ADO.NET code to execute the stored procedure:

        [Insert]
        public void SendMessage(NewMessage newMessage) {
            try {
                EmailConnection conn = new EmailConnection(connectionString);
                EmailCommand comm = new EmailCommand("SendMessage", conn);
                comm.CommandType = System.Data.CommandType.StoredProcedure;
                if (!newMessage.FromEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@From", newMessage.FromEmail));
                if (!newMessage.ToEmail.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@To", newMessage.ToEmail));
                if (!newMessage.Subject.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Subject", newMessage.Subject));
                if (!newMessage.Text.Equals(""))
                    comm.Parameters.Add(new EmailParameter("@Text", newMessage.Text));
                comm.ExecuteNonQuery();
            } catch (Exception exc) {
                Console.WriteLine(exc.Message);
            }
        }

    Step 6: Create a query method. We are not going to be using getNewMessages(), so it does not matter what it returns for the purpose of our example, but you will need to create a method for the query event as well:

        [Query(IsDefault=true)]
        public IEnumerable<NewMessage> getNewMessages() {
            return null;
        }

    Step 7: Rebuild the whole solution.

    Creating the LightSwitch Project

    Step 8: Open Visual Studio and create a new LightSwitch Application Project.

    Step 9: Under Data Sources, add a new data source. Choose a WCF RIA Service.

    Step 10: Choose to add a new reference and select the (ProjectName).Web.dll generated from the RIA Service.

    Step 11: Select the entities you would like to import. In this case, we are using the recently created NewMessage entity.

    Step 13: In the Screens section, create a new screen and select the NewMessage entity as the Screen Data.

    Step 14: After you run the project, you will be able to add a new record and save it. This will execute the stored procedure and send the new message. If you create a screen to check the sent messages, you can refresh this screen to see the mail you sent.

    Sample Project: To help you get started using stored procedures in LightSwitch, download the fully functional sample project. You will also need the RSSBus Email Data Provider to make the connection. You can download a free trial here.

    Read the article

  • 11gR2 RAC ASM????

    - by Liu Maclean(???)
    In 11gR2 RAC, the OCR and voting disks are normally stored in ASM - unlike the typical 10g two-node RAC layout - and the 11gR2 ASM spfile itself can be stored in an ASM diskgroup. At first glance this is circular: ASM must mount a diskgroup before it can read files inside it, yet it seems to need the spfile before it can start and mount anything. A reader on T.askmaclean.com asked about exactly this:

        hello maclean, ASMCMD> spget shows my spfile as +CRSDG/rac/asmparameterfile/registry.253.787925627.
        The ASM instance is an ORACLE instance like any other; before any diskgroup is mounted, how can it
        read a parameter file that is stored inside a diskgroup? thanks!

    Here is the explanation. In 11.2, Oracle Clusterware handles voting disk files differently from 11.1 and 10.2: the voting files are kept alongside the OCR, and both can be placed in ASM. The CSS voting file location is recorded in the GPnP profile as a discovery string; when the voting files are in ASM, that string resolves through the ASM discovery string.

    On this cluster the ASM LUNs are managed by udev and named /dev/rasm-disk*. Dump the GPnP profile with gpnptool get:

        [grid@maclean1 trace]$ gpnptool get
        Warning: some command line parameters were defaulted. Resulting command line:
                 /g01/grid/app/11.2.0/grid/bin/gpnptool.bin get -o-

        <?xml version="1.0" encoding="UTF-8"?>
        <gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile"
          xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile"
          xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd"
          ProfileSequence="9" ClusterUId="452185be9cd14ff4ffdc7688ec5439bf"
          ClusterName="maclean-cluster" PALocation="">
          <gpnp:Network-Profile>
            <gpnp:HostNetwork id="gen" HostName="*">
              <gpnp:Network id="net1" IP="192.168.1.0" Adapter="eth0" Use="public"/>
              <gpnp:Network id="net2" IP="172.168.1.0" Adapter="eth1" Use="cluster_interconnect"/>
            </gpnp:HostNetwork>
          </gpnp:Network-Profile>
          <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
          <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*"
            SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>
          <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo>
            <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
            <ds:Reference URI=""><ds:Transforms>
              <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
              <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
                <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/>
              </ds:Transform>
            </ds:Transforms>
            <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <ds:DigestValue>L1SLg10AqGEauCQ4ne9quucITZA=</ds:DigestValue>
            </ds:Reference></ds:SignedInfo>
            <ds:SignatureValue>rTyZm9vfcQCMuian6isnAThUmsV4xPoK2fteMc1l0GIvRvHncMwLQzPM/QrXCGGTCEvgvXzUPEKzmdX2oy5vLcztN60UHr6AJtA2JYYodmrsFwEyVBQ1D6wH+HQiOe2SG9UzdQnNtWSbjD4jfZkeQWyMPfWdKm071Ek0Rfb4nxE=</ds:SignatureValue>
          </ds:Signature>
        </gpnp:GPnP-Profile>
        Success.

    Two entries in the profile matter here:

        <orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/>
            ==> the CSS voting files are discovered via +ASM

        <orcl:ASM-Profile id="asm" DiscoveryString="/dev/rasm*" SPFile="+SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933"/>
            ==> the ASM discovery string is /dev/rasm*, and the spfile is recorded as an ASM alias

    Note that the GPnP profile records only the alias of the ASM parameter file. So how does ASM read the spfile before any diskgroup can be mounted? The answer is in the ASM disk headers. First, find the physical extents of +SYSTEMDG/maclean-cluster/asmparameterfile/registry.253.788682933:

        [grid@maclean1 wallets]$ sqlplus / as sysasm

        SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 17 05:45:35 2012
        Copyright (c) 1982, 2011, Oracle.  All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        With the Real Application Clusters and Automatic Storage Management options

        SQL> set linesize 140 pagesize 1400
        SQL> col "FILE NAME" format a40
        SQL> select NAME "FILE NAME", AU_KFFXP "AU NUMBER", NUMBER_KFFXP "FILE NUMBER", DISK_KFFXP "DISK NUMBER"
          2  from x$kffxp, v$asm_alias
          3  where GROUP_KFFXP = GROUP_NUMBER and NUMBER_KFFXP = FILE_NUMBER
          4  and name in ('REGISTRY.253.788682933')
          5  order by DISK_KFFXP, AU_KFFXP;

        FILE NAME                                 AU NUMBER FILE NUMBER DISK NUMBER
        ---------------------------------------- ---------- ----------- -----------
        REGISTRY.253.788682933                           39         253           1
        REGISTRY.253.788682933                           35         253           3
        REGISTRY.253.788682933                           35         253           4

        SQL> col path for a50
        SQL> select disk_number, path from v$asm_disk where disk_number in (1,3,4) and GROUP_NUMBER=3;

        DISK_NUMBER PATH
        ----------- --------------------------------------------------
                  3 /dev/rasm-diske
                  4 /dev/rasm-diskf
                  1 /dev/rasm-diskc

    Because the spfile is stored with high redundancy, there are three copies: AU 39 on disk 1 (/dev/rasm-diskc), AU 35 on disk 3 (/dev/rasm-diske), and AU 35 on disk 4 (/dev/rasm-diskf). Now read the headers of those disks with kfed:

        [grid@maclean1 wallets]$ kfed read /dev/rasm-diske | grep spfile
        kfdhdb.spfile:                       35 ; 0x0f4: 0x00000023
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskc | grep spfile
        kfdhdb.spfile:                       39 ; 0x0f4: 0x00000027
        [grid@maclean1 wallets]$ kfed read /dev/rasm-diskf | grep spfile
        kfdhdb.spfile:                       35 ; 0x0f4: 0x00000023

    The ASM disk header field kfdhdb.spfile records the allocation unit that holds that disk's copy of the ASM spfile, and the values match the AU numbers found above. So at startup the ASM instance never needs a mounted diskgroup to find its parameter file: it scans the devices matching the DiscoveryString in the GPnP profile, reads kfdhdb.spfile from each candidate disk header, and reads the parameter file directly from those allocation units. That is how the chicken-and-egg problem is solved.

    Read the article

  • Using ReportViewer 9 control in VS 2010

    - by Fermin
    Hi, I am writing an ASP.NET app that uses SQL Server 2005 with an SSRS setup. I want to use the ReportViewer control, but I get an error when using ReportViewer 10 because it needs SSRS 2008. How can I use ReportViewer 9 within my application? I've added a reference to the Microsoft.ReportViewer.WebForms.dll version 9 and removed the reference to version 10. My markup is as follows:

        <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
            Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>

        <!-- standard markup -->

        <rsweb:ReportViewer ID="ReportViewer1" runat="server"></rsweb:ReportViewer>

    but when I try to run this I get the following error:

        CS0433: The type 'Microsoft.Reporting.WebForms.ReportViewer' exists in both
        'c:\WINDOWS\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\10.0.0.0__b03f5f7f11d50a3a\Microsoft.ReportViewer.WebForms.dll' and
        'c:\WINDOWS\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\9.0.0.0__b03f5f7f11d50a3a\Microsoft.ReportViewer.WebForms.dll'

    What have I missed!?

    Update: When trying to use ReportViewer 10 I get the following error: "Remote report processing requires Microsoft SQL Server 2008 Reporting Services or later."
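    A hedged guess at the usual cause (not stated in the question): the site's web.config still registers version 10 of the assembly under <compilation><assemblies>, so the page compiler loads both versions. Something along these lines, treated as a sketch, would pin the compilation to version 9:

        <compilation debug="true">
          <assemblies>
            <!-- remove any 10.0.0.0 entries and register version 9 instead -->
            <add assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            <add assembly="Microsoft.ReportViewer.Common, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </assemblies>
        </compilation>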

    Read the article

  • logparser not matching on a LIKE pattern

    - by user79339
    Hi, I seem to have the strangest problem. I am using LogParser to search an event log for some text that I know is there (I copied and pasted the string from the event into the SQL search string). But the SQL LIKE statement is returning an empty result set, while other LIKE statements seem to be working fine. I have even tried using two '%' symbols in case the shell was trying to replace the search pattern with an environment variable ('%%NavigationOccuredEventHandler%%'), and escaping the % with a \ and with a ', but all of these just give me a "No valid LIKE mask" error.

    My LogParser command:

        "C:\Program Files\Log Parser 2.2\LogParser.exe" "select * from D:\Temp\07i132ppa1_app.evt where Message like '%NavigationOccuredEventHandler%'" -i:EVT -o:Datagrid

    The entry in the event log (found using "Select * from D:\Temp\07i132ppa1_app.evt" and copying and pasting the relevant row):

        D:\Temp\07i132ppa1_app.evt 5976788 2010-03-09 11:53:23 2010-03-09 11:53:23 2 1 Error event 0 None
        ICP Timestamp: 9/03/2010 1:53:23 AM
        Message: Error # 068464030040-07I132PPA1
        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown.
        ---> System.NullReferenceException: Object reference not set to an instance of an object.
        at ClientRegistration.Controller.ContactDetailsController.NavigationOccuredEventHandler(Object sender, NavigateEventArgs e)
        at Microsoft.ApplicationBlocks.UIProcess.UIPManager.NavigateEventHandler.Invoke(Object sender, NavigateEventArgs e)
        at Microsoft.ApplicationBlocks.UIProcess.UIPManager.InvokeEventHandlers(State state)
        in . . . Truncated for brevity

    Output statistics:

        Elements processed: 240993
        Elements output:    0
        Execution time:     59.47 seconds

    But if I search for the pattern '%object reference not set%' it works fine and returns results. I copied and pasted the string into a dummy SQL table and ran the SQL query there, and it works fine. It just doesn't seem to work in LogParser. Very baffling. Any help would be much appreciated.

    Read the article

  • Ninject.Web, OnePerRequestModule, and IIS7 Integrated Pipeline

    - by Ted
    Using Ninject.Web with an ASP.NET WebForms project. It works without issues using the classic pipeline, but under the integrated pipeline a null reference exception occurs on every request (which I've narrowed down to the use of the OnePerRequestModule):

        [NullReferenceException: Object reference not set to an instance of an object.]
        System.Web.PipelineStepManager.ResumeSteps(Exception error) +1216
        System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) +113
        System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +616

    The above always occurs unless I remove the OnePerRequestModule initialization. It occurs consistently on a very basic test app I put together. On a larger app where I actually want to implement it, I can solve the issue by initializing the OnePerRequestModule like so:

        protected override IKernel CreateKernel()
        {
            // This will always blow up.
            //var module = new OnePerRequestModule();
            //module.Init(this);

            IKernel kernel = new StandardKernel(new MyModule());

            // This works on the larger app, but on the basic app it makes no difference
            // under the integrated pipeline, as the above exception is always thrown.
            var module = new OnePerRequestModule();
            module.Init(this);

            return kernel;
        }

    Before I start spelunking further: is anybody out there using the Ninject.Web extension successfully under the integrated pipeline in IIS7 AND using the OnePerRequestModule? There are certain restrictions for modules under the integrated pipeline that weren't there in previous IIS versions/the classic pipeline.

    Quickly thrown together sample project at http://www.filedropper.com/test_59

    And in case it's not obvious with Ninject.Web: it's an ASP.NET WebForms project.

    Read the article

  • Webservices error for dev.virtualearth.net/webservices/geocode

    - by Xaisoft
    I am getting the following stack trace and have no idea what I am looking at or how to debug and fix it. Here is the error:

        Description: An error occurred during the parsing of a resource required to service this request.
        Please review the following specific parse error details and modify your source file appropriately.

        Parser Error Message: Reference.svcmap: Failed to generate code for the service reference 'GeocodeService'.

        Cannot import wsdl:portType
        Detail: An exception was thrown while running a WSDL import extension:
        System.ServiceModel.Description.DataContractSerializerMessageContractImporter
        Error: Could not load file or assembly 'System.Xml, Version=2.0.5.0, Culture=neutral,
        PublicKeyToken=7cec85d7bea7798e' or one of its dependencies. The system cannot find the file specified.
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']

        Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='BasicHttpBinding_IGeocodeService']

        Cannot import wsdl:port
        Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
        XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='BasicHttpBinding_IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:service[@name='GeocodeService']/wsdl:port[@name='BasicHttpBinding_IGeocodeService']

        Cannot import wsdl:binding
        Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on.
        XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode/contracts']/wsdl:portType[@name='IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='CustomBinding_IGeocodeService']

        Cannot import wsdl:port
        Detail: There was an error importing a wsdl:binding that the wsdl:port is dependent on.
        XPath to wsdl:binding: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:binding[@name='CustomBinding_IGeocodeService']
        XPath to Error Source: //wsdl:definitions[@targetNamespace='http://dev.virtualearth.net/webservices/v1/geocode']/wsdl:service[@name='GeocodeService']/wsdl:port[@name='CustomBinding_IGeocodeService']

    Read the article

  • Namespace not found in MVC 3 Razor view

    - by PlTaylor
    I am adding a PagedList to my view, loosely following this tutorial. I have installed the PagedList reference using NuGet and set up my controller as follows:

        public ViewResult Index(int page = 1)
        {
            List<Part> model = this.db.Parts.ToList();
            const int pageSize = 20;
            return View(model.ToPagedList(page, pageSize));
        }

    and written my view with the following model declaration at the top:

        @model PagedList.IPagedList<RIS.Models.Part>

    When I run the page I get the following error:

        Compiler Error Message: CS0246: The type or namespace name 'PagedList' could not be found
        (are you missing a using directive or an assembly reference?)

        Source Error:
        Line 27:
        Line 28:
        Line 29: public class _Page_Areas_Parts_Views_Part_Index_cshtml : System.Web.Mvc.WebViewPage<PagedList.IPagedList<RIS.Models.Part>> {

    The PagedList dll is being properly loaded in my controller, because when I take it out of my view everything works as expected. The CopyLocal property is set to 'True' and I have tried including the namespace in the Views\Web.config in my specific Area. What else can I do to make the view see the namespace?
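    One thing worth checking, offered as a hedged guess rather than a confirmed fix: Razor views compile in a separate step, so the assembly has to be visible to the page compiler, not just the project. A sketch of registering it in the root web.config - the Version and PublicKeyToken below are placeholders; copy the real values from the dll:

        <system.web>
          <compilation debug="true" targetFramework="4.0">
            <assemblies>
              <!-- placeholder version/token: take the actual values from the PagedList dll -->
              <add assembly="PagedList, Version=1.0.0.0, Culture=neutral, PublicKeyToken=abbb2e25ca24c65e" />
            </assemblies>
          </compilation>
        </system.web>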

    Read the article

  • How do I pass a custom field to a hook (Invision Power Board [ipb] / PHP)

    - by Julian Young
    A long shot, but here's hoping someone has some experience coding PHP hooks for Invision's Power Board forum. I'm attempting to code a status addition, and the PHP works fine on its own; it's the passing of IPB's reference to my hook that is the issue. For example: you set up a custom field in your forum for MSN Username, then from within a skin/template hook you pass the custom field to the hook, and your PHP code checks the status.

    Here is the IPB skin code I am hooking into on Global-userInfoPane:

        <if test="authorcfields:|:$author['custom_fields'] != """>
            <foreach loop="customFieldsOuter:$author['custom_fields'] as $group => $data">
                <foreach loop="customFields:$author['custom_fields'][ $group ] as $field">
                    <if test="$field != ''">
                        <li>
                            {$field}
                        </li>
                    </if>
                </foreach>
            </foreach>
        </if>

    Although I could easily add my own skin hook here, i.e.:

        <if test="myHookHere:|:1===1"></if>

    Literally all I need is a single custom field entry from here passed to my hook. If I query every member when the hook is run, that will result in many extra SQL queries per page view. All I want to do is pass that specific custom field to the hook, i.e.:

        myHookHere( $customfield['msn_username'] )

    Is this possible? How do you reference the custom field? Can I execute pure PHP from here? I appreciate anyone that can help! I tried the official Invision forums but haven't had much luck.

    Read the article

  • Spring security custom ldap authentication provider

    - by wuntee
    I currently have my LDAP authentication context set up like this:

        <ldap-server url="ldap://host/dn" manager-dn="cn=someuser" manager-password="somepass" />

        <authentication-manager>
            <ldap-authentication-provider user-search-filter="(samaccountname={0})"/>
        </authentication-manager>

    Now I need to set up a custom authorities mapper (it uses a different LDAP server), so I am assuming I need to configure it similarly to this (http://static.springsource.org/spring-security/site/docs/2.0.x/reference/ldap.html):

        <bean id="ldapAuthProvider" class="org.springframework.security.providers.ldap.LdapAuthenticationProvider">
            <constructor-arg>
                <bean class="org.springframework.security.providers.ldap.authenticator.BindAuthenticator">
                    <constructor-arg ref="contextSource"/>
                    <property name="userDnPatterns">
                        <list><value>uid={0},ou=people</value></list>
                    </property>
                </bean>
            </constructor-arg>
            <constructor-arg>
                <bean class="org.springframework.security.ldap.populator.DefaultLdapAuthoritiesPopulator">
                    <constructor-arg ref="contextSource"/>
                    <constructor-arg value="ou=groups"/>
                    <property name="groupRoleAttribute" value="ou"/>
                </bean>
            </constructor-arg>
        </bean>

    But how do I hook that 'ldapAuthProvider' up to the ldap-server in the security context? I am also using Spring Security 3, so '' does not exist...
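    A hedged sketch of one way to wire this up (not from the original post): in Spring Security 3 the namespace still lets you point the authentication manager at a custom provider bean by reference, so a bean like ldapAuthProvider can be plugged in directly:

        <authentication-manager xmlns="http://www.springframework.org/schema/security">
            <authentication-provider ref="ldapAuthProvider"/>
        </authentication-manager>

    Note also that the bean classes quoted above are the 2.0.x package names; under Spring Security 3 they moved (for example, org.springframework.security.ldap.authentication.LdapAuthenticationProvider), so the class attributes would need updating as well.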

    Read the article

  • Update XML element with LINQ to XML in VB.NET

    - by Bayonian
    Hi, I'm trying to update an element in the XML document below. Here's the code:

        Dim xmldoc As XDocument = XDocument.Load(theXMLSource1)
        Dim ql As XElement = (From ls In xmldoc.Elements("LabService") _
                              Where CType(ls.Element("ServiceType"), String).Equals("Scan") _
                              Select ls.Element("Price")).FirstOrDefault
        ql.SetValue("23")
        xmldoc.Save(theXMLSource1)

    Here's the XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <!--Test XML with LINQ to XML-->
        <LabSerivceInfo>
          <LabService>
            <ServiceType>Copy</ServiceType>
            <Price>1</Price>
          </LabService>
          <LabService>
            <ServiceType>PrintBlackAndWhite</ServiceType>
            <Price>2</Price>
          </LabService>
        </LabSerivceInfo>

    But I got this error message:

        Object reference not set to an instance of an object.
        Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.
        Error line: ql.SetValue("23")

    Can you show me what the problem is? Thank you.
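    A likely culprit (my reading, not from the post): xmldoc.Elements("LabService") looks for LabService at the top level of the document, but the top-level element is LabSerivceInfo, so the query matches nothing and FirstOrDefault returns Nothing - hence the NullReferenceException on ql.SetValue. A hedged sketch querying through the root instead, with a Nothing guard:

        Dim xmldoc As XDocument = XDocument.Load(theXMLSource1)
        Dim ql As XElement = (From ls In xmldoc.Root.Elements("LabService") _
                              Where CType(ls.Element("ServiceType"), String) = "Scan" _
                              Select ls.Element("Price")).FirstOrDefault()
        If ql IsNot Nothing Then
            ql.SetValue("23")
            xmldoc.Save(theXMLSource1)
        End If

    Note that the sample XML has no ServiceType of "Scan" at all (only Copy and PrintBlackAndWhite), so the guard is needed either way.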

    Read the article

  • How to use Android's CacheManager?

    - by punnie
    I'm currently developing an Android application that fetches images using HTTP requests. It would be quite swell if I could cache those images in order to improve performance and bandwidth use. I came across the CacheManager class in the Android reference, but I don't really know how to use it, or what it really does. I already looked through this example, but I need some help understanding it:

        /core/java/android/webkit/gears/ApacheHttpRequestAndroid.java

    Also, the reference states: "Network requests are provided to this component and if they can not be resolved by the cache, the HTTP headers are attached, as appropriate, to the request for revalidation of content." I'm not sure what this means or how it would work for me, since CacheManager's getCacheFile accepts only a String URL and a Map containing the headers. I'm not sure what the attachment mentioned means. An explanation or a simple code example would really make my day. Thanks!

    Update: Here's what I have right now. I am clearly doing it wrong, I just don't know where:

        public static Bitmap getRemoteImage(String imageUrl) {
            URL aURL = null;
            URLConnection conn = null;
            Bitmap bmp = null;
            CacheResult cache_result = CacheManager.getCacheFile(imageUrl, new HashMap());
            if (cache_result == null) {
                try {
                    aURL = new URL(imageUrl);
                    conn = aURL.openConnection();
                    conn.connect();
                    InputStream is = conn.getInputStream();
                    cache_result = new CacheManager.CacheResult();
                    copyStream(is, cache_result.getOutputStream());
                    CacheManager.saveCacheFile(imageUrl, cache_result);
                } catch (Exception e) {
                    return null;
                }
            }
            bmp = BitmapFactory.decodeStream(cache_result.getInputStream());
            return bmp;
        }
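    Worth noting (an editor's hedged aside, not part of the question): CacheManager lives in android.webkit and is really the WebView's internal HTTP cache, which is why it is awkward to drive by hand. For caching fetched images, a plain file cache is usually simpler - a minimal sketch, assuming a cacheDir obtained from Context.getCacheDir():

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.URL;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        public static Bitmap getRemoteImage(File cacheDir, String imageUrl) {
            // Derive a filesystem-safe name from the URL
            File cached = new File(cacheDir, String.valueOf(imageUrl.hashCode()));
            if (cached.exists()) {
                return BitmapFactory.decodeFile(cached.getAbsolutePath());
            }
            try {
                InputStream is = new URL(imageUrl).openConnection().getInputStream();
                OutputStream os = new FileOutputStream(cached);
                byte[] buf = new byte[8192];
                int n;
                while ((n = is.read(buf)) != -1) os.write(buf, 0, n);
                os.close();
                is.close();
                return BitmapFactory.decodeFile(cached.getAbsolutePath());
            } catch (Exception e) {
                return null; // on failure, fall back to no image
            }
        }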

    Read the article

  • Unknown Build Error using WPF Toolkit

    - by Tom Allen
    I installed the Feb 2010 WPF Toolkit as I'm interested in evaluating the AutoCompleteBox control, and I'm having extremely limited success. I can get the control to work, but as soon as I try to set any of its properties in XAML, I get the following:

        Unknown build error, 'Cannot resolve dependency to assembly 'WPFToolkit, Version=3.5.40128.1,
        Culture=neutral, PublicKeyToken=31bf3856ad364e35' because it has not been preloaded. When using
        the ReflectionOnly APIs, dependent assemblies must be pre-loaded or loaded on demand through
        the ReflectionOnlyAssemblyResolve event.

    I've been testing this on a blank WPF window in a new solution. I'm guessing I'm just missing a reference or something... Here's the XAML (I've added nothing to the .xaml.cs):

        <Window x:Class="WpfToolkitApplication.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:toolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <toolkit:AutoCompleteBox Height="25"/>
            </Grid>
        </Window>

    The only reference I've added is System.Windows.Controls.Input.Toolkit. Any ideas?

    Read the article

  • Get data from selected row in Gridview in C#, WPF

    - by Will
    Hi, I am trying to retrieve data from a GridView that I have created in XAML:

        <ListView Name="chartListView" SelectionChanged="chartListView_SelectionChanged">
            <ListView.View>
                <GridView>
                    <GridViewColumn Header="Name" DisplayMemberBinding="{Binding Name}" Width="250"/>
                    <GridViewColumn Header="Type" DisplayMemberBinding="{Binding Type}" Width="60"/>
                    <GridViewColumn Header="ID" DisplayMemberBinding="{Binding ID}" Width="100"/>
                </GridView>
            </ListView.View>
        </ListView>

    I have seen some code like this:

        GridViewRow row = GridView1.SelectedRow;
        TextBox2.Text = row.Cells[2].Text;

    However, my problem is that my GridView is created in XAML and is not named, i.e. I cannot (or do not know how to) create a reference to 'GridView1', and therefore cannot access objects within it. Can I name or create a reference to my GridView either from C# or XAML so I can use the above code? Secondly, can I then access the array elements by name instead of index, something like:

        TextBox2.Text = row.Cells["ID"].Text

    Thanks for any help.
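    For context (a hedged aside, not from the question): GridViewRow and SelectedRow belong to the ASP.NET GridView, not WPF. In WPF, the ListView's SelectedItem is the bound data object itself, so no cell indexing is needed. A sketch, assuming the rows are bound to a class with Name/Type/ID properties (ChartRow here is a made-up name):

        private void chartListView_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            // SelectedItem is whatever object this row was bound to
            var row = chartListView.SelectedItem as ChartRow;
            if (row != null)
            {
                TextBox2.Text = row.ID; // access by property name, not by cell index
            }
        }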

    Read the article

  • Suppress Null Value Types from Being Emitted by XmlSerializer

    - by Ben Griswold
    Please consider the following Amount value-type property, which is marked as a nullable XmlElement:

        [XmlElement(IsNullable=true)]
        public double? Amount { get; set; }

    When a nullable value type is set to null, the C# XmlSerializer result looks like the following:

        <amount xsi:nil="true" />

    Rather than emitting this element, I would like the XmlSerializer to suppress the element completely. Why? We're using Authorize.NET for online payments, and Authorize.NET rejects the request if this null element exists. The current solution/workaround is to not serialize the Amount value-type property at all. Instead we have created a complementary property, SerializableAmount, which is based on Amount and is serialized instead. Since SerializableAmount is of type String, which, like reference types, is suppressed by the XmlSerializer when null by default, everything works great.

        /// <summary>
        /// Gets or sets the amount.
        /// </summary>
        [XmlIgnore]
        public double? Amount { get; set; }

        /// <summary>
        /// Gets or sets the amount for serialization purposes only.
        /// This had to be done because setting value types to null
        /// does not prevent them from being included when a class
        /// is being serialized. When a nullable value type is set
        /// to null, such as with the Amount property, the result
        /// looks like: <amount xsi:nil="true" /> which will
        /// cause Authorize.NET to reject the request. Strings
        /// set to null will be removed, as they are a
        /// reference type.
        /// </summary>
        [XmlElement("amount", IsNullable = false)]
        public string SerializableAmount {
            get { return this.Amount == null ? null : this.Amount.ToString(); }
            set { this.Amount = Convert.ToDouble(value); }
        }

    Of course, this is just a workaround. Is there a cleaner way to suppress null value-type elements from being emitted?
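    A hedged pointer (not from the post itself): XmlSerializer has two built-in conventions that avoid the string-shadow property - a bool {PropertyName}Specified member, or a ShouldSerialize{PropertyName}() method; when either returns false, the element is omitted entirely. A sketch of the Specified pattern:

        [XmlElement("amount")]
        public double Amount { get; set; }

        // XmlSerializer finds this by naming convention; when false, <amount> is not emitted.
        // The setter lets deserialization record whether the element was present.
        [XmlIgnore]
        public bool AmountSpecified { get; set; }

    Setting AmountSpecified = true only when an amount should go on the wire keeps the element out of the request without any string conversion.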

    Read the article

  • LINQ - IEnumerable.Join on Anonymous Result Set in VB.NET

    - by user337501
    I've long since built a way around this, but it still keeps bugging me... It doesn't help that my grasp of dynamic LINQ queries is still shaky. For the example:

        Parent has fields (ParentKey, ParentField)
        Child has fields (ChildKey, ParentKey, ChildField)
        Pet has fields (PetKey, ChildKey, PetField)

    Child has a foreign key reference to Parent on Child.ParentKey = Parent.ParentKey. Pet has a foreign key reference to Child on Pet.ChildKey = Child.ChildKey. Simple enough, eh? Let's say I have LINQ like this:

        Dim Q = From p In DataContext.Parent _
                Join c In DataContext.Child On c.ParentKey Equals p.ParentKey

    Consider this a "base query" on which I will perform other filtering actions. Now I want to join the Pet table like this:

        Q = Q.Join(DataContext.Pet, _
                   Function(a) a.c.ChildKey, _
                   Function(p As Pet) p.ChildKey, _
                   Function(a, p As Pet) p.ChildKey = a.c.ChildKey)

    The above Join call doesn't work. I sort of understand why it doesn't work, but hopefully it'll show you how I tried to accomplish this task. After all this was done, I would have appended a Select to finish the job. Any ideas on a better way to do this? I tried it with the PredicateBuilder with little success. I might not know how to use it right, but it felt like it wasn't going to handle the joining.
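    For what it's worth (a hedged sketch, not from the post): the fourth argument to Join is a result selector, not a join predicate, which is one reason the call above won't compile - and since joining changes the element type, the result couldn't be assigned back to Q anyway. Performing the whole join in query syntax lets the compiler infer a single anonymous type, and filters can still be appended afterwards because Where preserves the type:

        Dim q2 = From p In DataContext.Parent _
                 Join c In DataContext.Child On c.ParentKey Equals p.ParentKey _
                 Join pet In DataContext.Pet On pet.ChildKey Equals c.ChildKey _
                 Select p, c, pet

        ' Filtering keeps the anonymous type, so the "base query" idea still works:
        q2 = q2.Where(Function(a) a.pet.PetField IsNot Nothing)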

    Read the article
