Search Results

Search found 11993 results on 480 pages for 'define syntax'.


  • controlling which project header file Xcode will include

    - by jdmuys
    My Xcode project builds to variations of the same product using two targets. The difference between the two is only on which version of an included library is used. For the .c source files it's easy to assign the correct version to the correct target using the target check box. However, including the header file always includes the same one. This is correct for one target, but wrong for the other one. Is there a way to control which header file is included by each target? Here is my project file hierarchy (which is replicated in Xcode): MyProject TheirOldLib theirLib.h theirLib.cpp TheirNewLib theirLib.h theirLib.cpp myCode.cpp and myCode.cpp does thing such as: #include "theirLib.h" … somecode() { #if OLDVERSION theirOldLibCall(…); #else theirNewLibCall(…); #endif } And of course, I define OLDVERSION for one target and not for the other. So is there a way to tell Xcode which theirLib.h to include per target? Constraints: - the two header files have the same name. As a last resort, I could rename one of them, but I'd rather avoid that as this will lead to major hair pulling on the other platforms. - I'm free to tweak my project as I otherwise see fit Thanks for any help.
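
    A hedged workaround (a sketch only, not necessarily the canonical Xcode answer): keep both vendor headers where they are and include a small wrapper header from myCode.cpp; the OLDVERSION macro you already set per target then selects the right file. The wrapper name theirLib_select.h is invented for illustration.

        /* theirLib_select.h -- hypothetical wrapper; #include this instead of "theirLib.h" */
        #ifndef THEIRLIB_SELECT_H
        #define THEIRLIB_SELECT_H
        #if defined(OLDVERSION)
        #include "TheirOldLib/theirLib.h"
        #else
        #include "TheirNewLib/theirLib.h"
        #endif
        #endif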

    Read the article

  • SQL Server Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that has (currently) 35 million rows. I need to know what is the best method for quickly searching these names. The current system (not designed by me), simply has the first and last name columns indexed and uses "LIKE" queries with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, and so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions: Does full text index work well for proper names? If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc) Is there some other system (like Lucene.net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work will with that will be preferred. I'm using SQL Server 2008 currently. EDIT I want to add that I'm very interested in solutions that will deal with things like commonly misspelled names, eg 'smythe', 'smith', as well as first names, eg 'tomas', 'thomas'. Query Plan |--Parallelism(Gather Streams) |--Nested Loops(Inner Join, OUTER REFERENCES:([testdb].[dbo].[Test].[Id], [Expr1004]) OPTIMIZED WITH UNORDERED PREFETCH) |--Hash Match(Inner Join, HASH:([testdb].[dbo].[Test].[Id])=([testdb].[dbo].[Test].[Id])) | |--Bitmap(HASH:([testdb].[dbo].[Test].[Id]), DEFINE:([Bitmap1003])) | | |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id])) | | |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_LastName]), SEEK:([testdb].[dbo].[Test].[LastName] >= 'WHITDþ' AND [testdb].[dbo].[Test].[LastName] < 'WHITF'), WHERE:([testdb].[dbo].[Test].[LastName] like 'WHITE%') ORDERED FORWARD) | |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id])) | |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_FirstName]), SEEK:([testdb].[dbo].[Test].[FirstName] >= 'THOMARþ' AND [testdb].[dbo].[Test].[FirstName] < 'THOMAT'), WHERE:([testdb].[dbo].[Test].[FirstName] like 'THOMAS%' AND PROBE([Bitmap1003],[testdb].[dbo].[Test].[Id],N'[IN ROW]')) ORDERED FORWARD) |--Clustered Index Seek(OBJECT:([testdb].[dbo].[Test].[PK__TEST__3214EC073B95D2F1]), SEEK:([testdb].[dbo].[Test].[Id]=[testdb].[dbo].[Test].[Id]) LOOKUP ORDERED FORWARD) SQL for above: SELECT * FROM testdb.dbo.Test WHERE LastName LIKE 'WHITE%' AND FirstName LIKE 'THOMAS%' Based on advice from Mitch, I created an index like this: CREATE INDEX IX_Test_Name_DOB ON Test (LastName ASC, FirstName ASC, BirthDate ASC) INCLUDE (and here I list the other columns) My searches are now incredibly fast for my typical search (last, first, and birth date).
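
    For reference, a minimal sketch of what a full-text setup and a CONTAINS prefix query could look like on this table (the catalog name is invented; the key index name is taken from the query plan above; whether full text actually beats the composite index for proper-name prefix searches is exactly the open question):

        CREATE FULLTEXT CATALOG NameCatalog;
        CREATE FULLTEXT INDEX ON dbo.Test (LastName, FirstName)
            KEY INDEX PK__TEST__3214EC073B95D2F1 ON NameCatalog;

        SELECT *
        FROM dbo.Test
        WHERE CONTAINS(LastName, '"WHITE*"')
          AND CONTAINS(FirstName, '"THOMAS*"');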

    Read the article

  • Why does VBA Find loop fail when called from Evaluate?

    - by Abiel
    I am having some problems running a find loop inside of a subroutine when the routine is called using the Application.Evaluate or ActiveSheet.Evaluate method. For example, in the code below, I define a subroutine FindSub() which searches the sheet for a string "xxx". The routine CallSub() calls the FindSub() routine using both a standard Call statement and Evaluate. When I run Call FindSub, everything will work as expected: each matching address gets printed out to the immediate window and we get a final message "Finished up" when the code is done. However, when I do Application.Evaluate "FindSub()", only the address of the first match gets printed out, and we never reach the "Finished up" message. In other words, an error is encountered after the Cells.FindNext line as the loop tries to evaluate whether it should continue, and program execution stops without any runtime error being printed. I would expect both Call FindSub and Application.Evaluate "FindSub()" to yield the same results in this case. Can someone explain why they do not, and if possible, a way to fix this? Thanks. Note: In this example I obviously do not need to use Evaluate. This version is simplified to just focus on the particular problem I am having in a more complex situation. Sub CallSub() Call FindSub Application.Evaluate "FindSub()" End Sub Sub FindSub() Dim rngFoundCell As Range Dim rngFirstCell As Range Set rngFoundCell = Cells.Find(What:="xxx", after:=ActiveCell, LookIn:=xlValues, _ LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext, _ MatchCase:=False, SearchFormat:=False) If Not rngFoundCell Is Nothing Then Set rngFirstCell = rngFoundCell Do Debug.Print rngFoundCell.Address Set rngFoundCell = Cells.FindNext(after:=rngFoundCell) Loop Until (rngFoundCell Is Nothing) Or (rngFoundCell.Address = rngFirstCell.Address) End If Debug.Print "Finished up" End Sub
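
    One workaround sometimes suggested (a sketch, assuming the real goal is only to invoke the routine by name rather than to rely on Evaluate's expression semantics) is Application.Run, which calls the sub in a normal execution context:

        Sub CallSub()
            Call FindSub                 ' works
            Application.Run "FindSub"    ' also runs FindSub by name, outside Evaluate's restrictions
        End Sub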

    Read the article

  • General N-Tier Architecture Question

    - by whatispunk
    In an N-Tier app you're supposed to have a business logic layer and a data access layer. Is it bad to simply have two assemblies: BusinessLogicLayer.dll and DataAccessLayer.dll to handle all this logic? How do you actually represent these layers. It seems silly, the way I've seen it, to have a BusinessLogic class library containing classes like: CustomerBusinessLogic.cs, OrderBusinessLogic.cs, etc. each calling their appropriately named cousin in the DataAccessLayer class library, i.e. CustomerDataAccess.cs, OrderDataAccess.cs. I want to create a web app using MVP and it doesn't seem so cut and dry as this. There are lots of opinions about where the business logic is supposed to be put in MVP and I'm not sure I've found a really great answer yet. I want this project to be easily testable, and I am trying to adhere to TDD methodologies as best I can. I intend to use MSTest and Rhino Mocks for testing. I was thinking of something like the following for my architecture: I'd use LINQ-To-SQL to talk to the database. WCF services to define data contract interfaces for the business logic layer. Then use MVP with ASP.NET Forms for the UI/BLL. Now, this isn't the start of this project, most of the LINQ stuff is already done, so its stuck. The WCF service would replace the existing DataAccessLayer assembly and the UI/BLL would replace the BusinessLogicLayer assembly etc. This sort of makes sense in my head, but its getting really late. Anyone that's traveled down this path have any guidance? Good links? Warnings? Thanks!
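
    One way to keep the layers mockable regardless of how many assemblies they end up in (a sketch under the assumption that you want interface seams for Rhino Mocks; all names are invented) is to hide data access behind interfaces and let the business/presenter code depend only on those:

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Save(Customer customer);
        }

        public class CustomerService            // business logic, testable with a mocked repository
        {
            private readonly ICustomerRepository _customers;

            public CustomerService(ICustomerRepository customers) { _customers = customers; }

            public void Rename(int id, string newName)
            {
                var customer = _customers.GetById(id);
                customer.Name = newName;
                _customers.Save(customer);
            }
        }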

    Read the article

  • Cucumber could not find table, but it's there. What is going on?

    - by JZ
    I'm working with cucumber and I'm running into difficulties. When I run "cucumber features", I am met with errors, cucumber is unable to find my requests table. What obvious mistake am I making? Thank you in advance! Bash: justin-zollarss-mac-pro:conversion justinz$ cucumber features Using the default profile... /Users/justinz/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement F-- (::) failed steps (::) Could not find table 'requests' (ActiveRecord::StatementInvalid) ./features/article_steps.rb:3 ./features/article_steps.rb:2:in `each' ./features/article_steps.rb:2:in `/^I have requests named (.+)$/' features/manage_articles.feature:7:in `Given I have requests named Foo, Bar' Failing Scenarios: cucumber features/manage_articles.feature:6 # Scenario: Conversion 1 scenario (1 failed) 3 steps (1 failed, 2 skipped) 0m0.154s justin-zollarss-mac-pro:conversion justinz$ Manage_articles.feature: Feature: Manage Articles In order to make sales As a customer I want to make conversions Scenario: Conversion Given I have requests named Foo, Bar When I go to the list of customers Then I should see a new "customer" Article_steps.rb: Given /^I have requests named (.+)$/ do |firsts| firsts.split(', ').each do |first| Request.create!(:first => first) pending # express the regexp above with the code you wish you had end end Then /^I should see a new "([^"]*)"$/ do |arg1| pending # express the regexp above with the code you wish you had end DB schema: ActiveRecord::Schema.define(:version => 20100528011731) do create_table "requests", :force => true do |t| t.string "institution" t.string "website" t.string "type" t.string "users" t.string "first" t.string "last" t.string "jobtitle" t.string "phone" t.string "email" t.datetime "created_at" t.datetime "updated_at" end end
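
    A common first thing to check (only a guess from the error message, not a certain diagnosis) is that the test database simply has no schema yet; with Rails 2.3 that is usually fixed from the shell before rerunning cucumber:

        rake db:migrate              # bring the development schema up to date
        rake db:test:prepare         # clone it into the test database cucumber uses
        cucumber features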

    Read the article

  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little datamining project where a perl script grabs info from a SQL database and parses it. The data consists of several timestamps. I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first perl script, and the nature of perl when it comes to hashes and arrays is confusing me quite a bit. Code segment: my %values=();#A hash of the total values of each type of data of each day. #The key is the day, and each key stores an array of each of the values I need. my @proposal; #[drafted timestamp(0), submitted timestamp(1), attny approved timestamp(2),Organiziation approved timestamp(3), Other approval timestamp(4), Approved Timestamp(5)] while(@proposal=$sqlresults->fetchrow_array()){ #TODO: check to make sure proposal is valid #Increment the number of timestamps of each type on each particular date my $i; for($i=0;$i<=5;$i++) $values{$proposal[$i]}[$i]++; #Update rolling average of daily #TODO: To check total load, increment total load on all dates between attourney approve date and accepted date for($i=$proposal[1];$i<=$proposal[2];$i++) $values{$i}[6]++; } I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of bounds errors everywhere? Thanks for any help, Zach
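
    The syntax errors are most likely just missing braces: unlike C, Perl's for statement requires a block around its body. A sketch of the same loops with braces added (and, to the second question: Perl autovivifies the nested array elements when you increment them, so numeric indexes won't produce out-of-bounds errors under strict):

        for ( my $i = 0; $i <= 5; $i++ ) {
            $values{ $proposal[$i] }[$i]++;      # autovivifies the array behind the hash key
        }

        for ( my $i = $proposal[1]; $i <= $proposal[2]; $i++ ) {
            $values{$i}[6]++;                    # rolling total load per day
        }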

    Read the article

  • Emulating a web browser

    - by Sean
    Hello, we are tasked with basically emulating a browser to fetch webpages, looking to automate tests on different web pages. This will be used for (ideally) console-ish applications that run in the background and generate reports. We tried going with .NET and the WatiN library, but it was built on a Marshalled IE, and so it lacked many features that we hacked in with calls to unmanaged native code, but at the end of the day IE is not thread safe nor process safe, and many of the needed features could only be implemented by changing registry values and it was just terribly unflexible. Proxy support JavaScript support- we have to be able to parse the actual DOM after any javascript has executed (and hopefully an event is raised to handle any ajax calls) Ability to save entire contents of page including images FROM THE loaded page's CACHE to a separate location ability to clear cookies/cache, get the cookies/cache, etc. Ability to set headers and alter post data for any browser call And for the love of drogs, an API that isn't completely cryptic Languages acceptable C++, C#, Python, anything that can be a simple little console application that doesn't have a retarded syntax like Ruby. From my own research, and believe me I am terrible at google searches, I have heard good things about WebKit... would the Qt module QtWebKit handle all these features?
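
    As a rough sketch of what QtWebKit usage looks like (Qt 4.x API; whether it covers every item on the list, especially per-request header rewriting and cache extraction, would need to be checked against the QNetworkAccessManager documentation):

        #include <QApplication>
        #include <QUrl>
        #include <QWebView>
        #include <QWebFrame>

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);

            QWebView view;                                   // full WebKit engine with JavaScript
            QObject::connect(&view, SIGNAL(loadFinished(bool)), &app, SLOT(quit()));
            view.load(QUrl("http://example.com/"));
            app.exec();                                      // wait until the page and its scripts have run

            QString renderedHtml = view.page()->mainFrame()->toHtml();   // DOM after JS execution
            return 0;
        }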

    Read the article

  • How can I control the action of onbeforeunload in IE?

    - by SpawnCxy
    Hi all, I've recently run into a problem with onbeforeunload: I need to pop up a voting page if the user tries to close their IE browser. I did it by using <body onbeforeunload="makevote()"> and the main structure of makevote() in JavaScript is as follows: function makevote() { comet.distruct(); if(csid != null && isvote == null) { window.event.returnValue = false window.event.returnValue='press “cancel” to vote please!' showComDiv(popvote,"popiframe",400,120,'your vote here','dovote()'); } } For the last three months this voting function has performed so poorly that I got fewer than 8,000 votes from more than 450,000 visitors. I think the problem is that when users try to close their browsers, onbeforeunload pops up a confirm box that covers my voting box, and most users click the OK button out of habit, which confirms the close. So my question is: how can I control the confirm box produced by onbeforeunload myself? So far I can only define the message it shows. For example, I'd like a click on "OK" to take the user to the voting box instead of closing IE. Or is there a better way to do this altogether? Help would be greatly appreciated! Regards
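
    For what it's worth, the confirm dialog itself is browser chrome and, as far as I know, cannot be restyled, suppressed, or inspected from script; the page only controls the message text. A hedged sketch of the standard pattern, reusing the csid/isvote flags from the code above:

        window.onbeforeunload = function (e) {
            if (csid != null && isvote == null) {
                var msg = 'Press "Cancel" to vote, please!';
                e = e || window.event;
                e.returnValue = msg;        // IE reads this property
                return msg;                 // other browsers use the return value
            }
            // returning nothing shows no dialog at all
        };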

    Read the article

  • How to build a GridView like DataBound Templated Custom ASP.NET Server Control

    - by Deyan
    I am trying to develop a very simple templated custom server control that resembles GridView. Basically, I want the control to be added in the .aspx page like this: <cc:SimpleGrid ID="SimpleGrid1" runat="server"> <TemplatedColumn>ID: <%# Eval("ID") %></ TemplatedColumn> <TemplatedColumn>Name: <%# Eval("Name") %></ TemplatedColumn> <TemplatedColumn>Age: <%# Eval("Age") %></ TemplatedColumn> </cc:SimpleGrid> and when providing the following datasource: DataTable table = new DataTable(); table.Columns.Add("ID", typeof(int)); table.Columns.Add("Name", typeof(string)); table.Columns.Add("Age", typeof(int)); table.Rows.Add(1, "Bob", 35); table.Rows.Add(2, "Sam", 44); table.Rows.Add(3, "Ann", 26); SimpleGrid1.DataSource = table; SimpleGrid1.DataBind(); the result should be a HTML table like this one. ------------------------------- | ID: 1 | Name: Bob | Age: 35 | ------------------------------- | ID: 2 | Name: Sam | Age: 44 | ------------------------------- | ID: 3 | Name: Ann | Age: 26 | ------------------------------- The main problem is that I cannot define the TemplatedColumn. When I tried to do it like this ... private ITemplate _TemplatedColumn; [PersistenceMode(PersistenceMode.InnerProperty)] [TemplateContainer(typeof(TemplatedColumnItem))] [TemplateInstance(TemplateInstance.Multiple)] public ITemplate TemplatedColumn { get { return _TemplatedColumn; } set { _TemplatedColumn = value; } } .. and subsequently instantiate the template in the CreateChildControls I got the following result which is not what I want: ----------- | Age: 35 | ----------- | Age: 44 | ----------- | Age: 26 | ----------- I know that what I want to achieve is pointless and that I can use DataGrid to achieve it, but I am giving this very simple example because if I know how to do this I would be able to develop the control that I need. Thank you.
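
    Not a complete answer, but for reference, the usual shape of the per-row container such a control needs (a sketch only; the point is that each row gets its own naming/data-item container so Eval("Name") can resolve against the current DataRow during data binding):

        public class TemplatedColumnItem : Control, INamingContainer, IDataItemContainer
        {
            public TemplatedColumnItem(object dataItem, int index)
            {
                DataItem = dataItem;          // what Eval("Name") binds against
                DataItemIndex = index;
                DisplayIndex = index;
            }

            public object DataItem { get; private set; }
            public int DataItemIndex { get; private set; }
            public int DisplayIndex { get; private set; }
        }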

    Read the article

  • Ruby array index method not working, returning nil value

    - by Rails beginner
    Here is the error: => ["Mænd med navnet Kim", "30.094", "29.946", "-148", "Kvinder med navnet Kim", "341", "345", "4", "Mænd med navnet Kim Hansen", "1.586", "1.573", "-13", "Kvin der med navnet Kim Hansen", "5", "5", "0", "Mænd og kvinder med efternavnet Hans en", "226.040", "223.478", "-2.562"] irb(main):094:0> irb(main):095:0* @tester.index("Mænd med navnet Kim") => nil irb(main):096:0> @tester.index("Kvinder med navnet Kim") => 4 irb(main):097:0> @tester.index("Mænd med navnet Kim Hansen") => nil irb(main):098:0> @tester.index("Kvinder med navnet Kim Hansen") => 12 irb(main):099:0> @tester.index("Mænd og kvinder med efternavnet Hansen") => nil irb(main):100:0> Example tried Gsub method: <ap(&:text).map{|d| d.delete "'"}.map{|d| d.gsub("æ", "#844"} irb(main):113:1> ) SyntaxError: (irb):112: syntax error, unexpected '}', expecting ')' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/comman ds/console.rb:44:in `start' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/comman ds/console.rb:8:in `start' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/comman ds.rb:23:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' <ap(&:text).map{|d| d.delete "'"}.map{|d| d.gsub("æ", "#844")} Encoding::CompatibilityError: incompatible encoding regexp match (CP850 regexp w ith UTF-8 string) from (irb):114:in `gsub' from (irb):114:in `block in irb_binding' from (irb):114:in `map' from (irb):114 from C:/Ruby193/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/comman ds/console.rb:44:in `start' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/comman ds/console.rb:8:in `start' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/railties-3.0.9/lib/rails/comman ds.rb:23:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>'

    Read the article

  • Dynamic SQL Server stored procedure

    - by Pinu
    ALTER PROCEDURE [dbo].[GetDocumentsAdvancedSearch] @SDI CHAR(10) = NULL ,@Client CHAR(4) = NULL ,@AccountNumber VARCHAR(20) = NULL ,@Address VARCHAR(300) = NULL ,@StartDate DATETIME = NULL ,@EndDate DATETIME = NULL ,@ReferenceID CHAR(14) = NULL AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- DECLARE DECLARE @Sql NVARCHAR(4000) DECLARE @ParamList NVARCHAR(4000) SELECT @Sql = 'SELECT DISTINCT ISNULL(Documents.DocumentID, '') ,Person.Name1 ,Person.Name2 ,Person.Street1 ,Person.Street2 ,Person.CityStateZip ,ISNULL(Person.ReferenceID,'') ,ISNULL(Person.AccountNumber,'') ,ISNULL(Person.HasSetPreferences,0) ,Documents.Job ,Documents.SDI ,Documents.Invoice ,ISNULL(Documents.ShippedDate,'') ,ISNULL(Documents.DocumentPages,'') ,Documents.DocumentType ,Documents.Description FROM Person LEFT OUTER JOIN Documents ON Person.PersonID = Documents.PersonID LEFT OUTER JOIN DocumentType ON Documents.DocumentType = DocumentType.DocumentType LEFT OUTER JOIN Addressess ON Person.PersonID = Addressess.PersonID' SELECT @Sql = @Sql + ' WHERE Documents.SDI IN ( '+ QUOTENAME(@sdi) + ') OR (Person.AssociationID = ' + ''' 000000 + ''' + 'AND Person.Client = ' + QUOTENAME(@Client) IF NOT (@AccountNumber IS NULL) SELECT @Sql = @Sql + 'AND Person.AccountNumber LIKE' + QUOTENAME(@AccountNumber) IF NOT (@Address IS NULL) SELECT @Sql = @Sql + 'AND Person.Name1 LIKE' +QUOTENAME(@Address)+ 'AND Person.Name2 LIKE' +QUOTENAME(@Address)+ 'AND Person.Street1 LIKE' +QUOTENAME(@Address)+ 'AND Person.Street2 LIKE' +QUOTENAME(@Address)+ 'AND Person.CityStateZip LIKE' +QUOTENAME(@Address) IF NOT (@StartDate IS NULL) SELECT @Sql = @Sql + 'AND Documents.ShippedDate >=' +@StartDate IF NOT (@EndDate IS NULL) SELECT @Sql = @Sql + 'AND Documents.ShippedDate <=' +@EndDate IF NOT (@ReferenceID IS NULL) SELECT @Sql = @Sql + 'AND Documents.ReferenceID =' +QUOTENAME(@ReferenceID) -- Insert statements for procedure here -- PRINT @Sql SELECT @ParamList = '@Psdi CHAR(10),@PClient CHAR(4),@PAccountNumber VARCHAR(20),@PAddress VARCHAR(300),@PStartDate DATETIME ,@PEndDate DATETIME,@PReferenceID CHAR(14)' EXEC SP_EXECUTESQL @Sql,@ParamList,@Sdi,@Client,@AccountNumber,@Address,@StartDate,@EndDate,@ReferenceID --PRINT @Sql END ERROR Msg 102, Level 15, State 1, Line 23 Incorrect syntax near '000000'. Msg 105, Level 15, State 1, Line 23 Unclosed quotation mark after the character string 'AND Person.Client = [1 ]AND Person.AccountNumber LIKE[1]'.
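
    Since the procedure already calls sp_executesql with a parameter list, one way to sidestep the quoting errors entirely (a sketch of the general technique, not a drop-in rewrite) is to embed the parameter names in the dynamic string instead of concatenating quoted values:

        SELECT @Sql = @Sql + ' AND Person.Client = @PClient '

        IF @AccountNumber IS NOT NULL
            SELECT @Sql = @Sql + ' AND Person.AccountNumber LIKE @PAccountNumber '

        IF @StartDate IS NOT NULL
            SELECT @Sql = @Sql + ' AND Documents.ShippedDate >= @PStartDate '

        EXEC sp_executesql @Sql, @ParamList,
             @Psdi = @SDI, @PClient = @Client, @PAccountNumber = @AccountNumber,
             @PAddress = @Address, @PStartDate = @StartDate, @PEndDate = @EndDate,
             @PReferenceID = @ReferenceID;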

    Read the article

  • PHP Markdown Extra and Definition Lists

    - by Kirk Bentley
    I'm currently generating a definition list with PHP Markdown Extra with the following syntax: Term : Description : Description Two My Other Term : Description which generates the following HTML: <dl> <dt>Term</dt> <dd>Description</dd> <dd>Description Two</dd> <dt>My Other Term</dt> <dd>Description</dd> </dl> Does anyone know how I can get Markdown to create separate definition lists for each definition term and descriptions to create markup like this? <dl> <dt>Term</dt> <dd>Description</dd> <dd>Description Two</dd> </dl> <dl <dt>My Other Term</dt> <dd>Description</dd> </dl>
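
    One trick that sometimes works for keeping consecutive lists apart in Markdown dialects (I have not verified it against PHP Markdown Extra's definition-list parser specifically) is to put an empty HTML comment between the terms so the parser sees two separate blocks:

        Term
        :   Description
        :   Description Two

        <!-- -->

        My Other Term
        :   Description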

    Read the article

  • Mocking with Boost::Test

    - by Billy ONeal
    Hello everyone :) I'm using the Boost::Test library for unit testing, and I've in general been hacking up my own mocking solutions that look something like this: //In header for clients struct RealFindFirstFile { static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) { return FindFirstFile(lpFileName, lpFindFileData); }; }; template <typename FirstFile_T = RealFindFirstFile> class DirectoryIterator { //.. Implementation } //In unit tests (cpp) #define THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING 42 struct FakeFindFirstFile { static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) { return THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING; }; }; BOOST_AUTO_TEST_CASE( MyTest ) { DirectoryIterator<FakeFindFirstFile> LookMaImMocked; //Test } I've grown frustrated with this because it requires that I implement almost everything as a template, and it is a lot of boilerplate code to achieve what I'm looking for. Is there a good method of mocking up code using Boost::Test over my Ad-hoc method? I've seen several people recommend Google Mock, but it requires a lot of ugly hacks if your functions are not virtual, which I would like to avoid. Oh: One last thing. I don't need assertions that a particular piece of code was called. I simply need to be able to inject data that would normally be returned by Windows API functions.
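
    To shave some of the per-test boilerplate off the existing approach (a sketch in the same ad-hoc style, not a Boost-provided facility), the fake can expose a static slot that each test fills in, so one fake type serves many cases:

        struct StubFindFirstFile {
            static HANDLE resultToReturn;                     // injected by each test
            static HANDLE FindFirst(LPCWSTR, LPWIN32_FIND_DATAW) { return resultToReturn; }
        };
        HANDLE StubFindFirstFile::resultToReturn = INVALID_HANDLE_VALUE;

        BOOST_AUTO_TEST_CASE(IteratorSeesInjectedHandle)
        {
            StubFindFirstFile::resultToReturn = reinterpret_cast<HANDLE>(42);
            DirectoryIterator<StubFindFirstFile> it;
            // ... assertions on it ...
        }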

    Read the article

  • Just introducing myself to TMPing, and came across a quirk

    - by Justen
    I was just trying to learn the syntax of the beginner things, and how it worked when I was making this short bit of code. The code below works in adding numbers 1 to 499, but if I add 1 to 500, the compiler bugs out giving me: fatal error C1001: An internal error has occurred in the compiler. And I was just wondering why that is. Is there some limit to how much code the compiler can generate or something and it just happened to be a nice round number of 500 for me? #include <iostream> using namespace std; template < int b > struct loop { enum { sum = loop< b - 1 >::sum + b }; }; template <> struct loop< 0 > { enum { sum = 0 }; }; int main() { cout << "Adding the numbers from 1 to 499 = " << loop< 499 >::sum << endl; return 0; }
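
    The error looks like the compiler giving up somewhere around 500 nested template instantiations. One standard way around that (a sketch) is to split the range in half at each step, so the nesting depth grows logarithmically rather than linearly:

        template <int Lo, int Hi>
        struct range_sum {                                   // depth ~ log2(Hi - Lo) instead of Hi - Lo
            static const int mid = (Lo + Hi) / 2;
            static const int value = range_sum<Lo, mid>::value + range_sum<mid + 1, Hi>::value;
        };

        template <int N>
        struct range_sum<N, N> { static const int value = N; };

        // range_sum<1, 500>::value == 125250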

    Read the article

  • 1D Function into 2D Function Interpolation

    - by Drazick
    Hello. I have a 1D function which I want to interpolate into 2D function. I know the function should have "Polar Symmetry". Hence I use the following code (Matlab Syntax): Assuming the 1D function is LSF of the length 15. [x, y] = meshgrid([-7:7]); r = sqrt(x.^2 + y.^2); PSF = interp1([-7:7], LSF, r(:)); % Sometimes using 'spline' option, same results. PSF = reshape(PSF, [7, 7]); I have few problems: 1. Got some overshoot at the edges (As there some Extrapolation). 2. Can't enforce some assumptions (Monotonic, Non Negative). Is there a better Interpolation method for those circumstances? I couldn't find "Lanczos" based interpolation I can use the same way as interp1 (For a certain vector of points, in "imresize" you can only set the length). Is there such function anywhere? Has anyone encountered a function which allows enforcing some assumptions (Monotonically Decreasing, Non Negative, etc..). Thanks.
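
    A sketch of one way to address both points within interp1 itself (assuming the goal is simply to avoid extrapolation and keep the surface non-negative; 'pchip' is shape-preserving, which tends to remove the overshoot that 'spline' produces):

        [x, y] = meshgrid(-7:7);
        r = sqrt(x.^2 + y.^2);
        r = min(r, 7);                               % clamp the radius: no extrapolation past the data
        PSF = interp1(-7:7, LSF, r(:), 'pchip');     % monotone, overshoot-free interpolant
        PSF = max(reshape(PSF, size(x)), 0);         % enforce non-negativity; note size(x) is 15x15, not 7x7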

    Read the article

  • d:DesignData issue: Visual Studio 2010 can't build after adding sample design data with Expression Blend 4

    - by Valko
    Hi, my VS 2010 solution and Silverlight project build fine. Then: I open the MyView.xaml view in Expression Blend 4 and add sample data from a class (a class defined in the same project). After I add the new sample design data with Expression Blend 4, everything looks fine: I can see the added sample data in EB 4 and also in the VS 2010 designer. I close EB 4, and the next VS 2010 build gives me these errors: Error 7 XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved. C:\Code\source\...myview.xaml and: Error 12 Object reference not set to an instance of an object. ... TestSampleData.xaml When I open TestSampleData.xaml I see that the namespace for the class used to define the sample data is not recognized, even though this namespace and the class itself exist in the same project! If I remove the design data from MyView.xaml: d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}" it builds fine and the namespace in TestSampleData.xaml is recognized this time?? Then if I add d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}" back, I again see the sample data in the VS 2010 designer, but the next build fails and Visual Studio again can't find the namespace in the TestSampleData.xaml containing the sample data. That cycle is driving me crazy. Am I missing something here? Is it not possible to have the class defining your sample design data in the same project as the MyView.xaml view?? Cheers, Valko
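
    One thing worth double-checking (a guess based on the "namespace ... is not resolved" error, not a confirmed fix): the view needs the designer namespaces declared and marked ignorable, roughly like this (class and paths are placeholders):

        <UserControl x:Class="MyProject.MyView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d"
            d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}">
        </UserControl>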

    Read the article

  • Hibernate: Opinions on Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where: more than one combination of business-value columns can form a unique PK; the composite PK values get duplicated across the detail tables; and the business values inside the composite PK cannot be changed. I know Hibernate supports both types of PK, but I'm left wondering because of previous chats with experienced colleagues, who said that a composite PK is easier to deal with when writing complex SQL queries and stored procedures. They went on to say that surrogate keys complicate joins, and that there are several situations where some things become impossible with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll add more details next time. I'm currently starting a project and want to try out surrogate keys, since they don't get duplicated across tables and we can change the business-column values. And when some combination of business values needs to be unique, I can use something like: @Table(name="MY_TABLE", uniqueConstraints={ @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique But I'm still in doubt because of that earlier discussion about composite keys. Could you share your experiences in this matter? Thank you!
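
    For illustration, a minimal sketch of an entity combining a surrogate @Id with a business-level unique constraint (class and column names are invented, imports omitted):

        @Entity
        @Table(name = "MY_TABLE",
               uniqueConstraints = @UniqueConstraint(columnNames = { "FIRST_NAME", "LAST_NAME" }))
        public class Person {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private Long id;                          // surrogate key, never exposed as business data

            @Column(name = "FIRST_NAME", nullable = false)
            private String firstName;

            @Column(name = "LAST_NAME", nullable = false)
            private String lastName;

            // getters and setters omitted
        }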

    Read the article

  • Inserting instructions into a method

    - by Alix
    Hi, (First of all, this is a very lengthy post, but don't worry: I've already implemented all of it, I'm just asking your opinion.) I'm having trouble implementing the following; I'd appreciate some help: I get a Type as parameter. I define a subclass using reflection. Notice that I don't intend to modify the original type, but create a new one. I create a property per field of the original class, like so: [- ignore this text here; I had to add something or the formatting wouldn't work <-] public class OriginalClass { private int x; } public class Subclass : OriginalClass { private int x; public int X { get { return x; } set { x = value; } } } [This is number 4! Numbered lists don't work if you add code in between; sorry] For every method of the superclass, I create an analogous method in the subclass. The method's body must be the same except that I replace the instructions ldfld x with callvirt this.get_X, that is, instead of reading from the field directly I call the get accessor. I'm having trouble with step 4. I know you're not supposed to manipulate code like this, but I really need to. Here's what I've tried: Attempt #1: Use Mono.Cecil. This would allow me to parse the body of the method into human-readable Instructions, and easily replace instructions. However, the original type isn't in a .dll file, so I can't find a way to load it with Mono.Cecil. Writing the type to a .dll, then load it, then modify it and write the new type to disk (which I think is the way you create a type with Mono.Cecil), and then load it seems like a huge overhead. Attempt #2: Use Mono.Reflection. This would also allow me to parse the body into Instructions, but then I have no support for replacing instructions. I've implemented a very ugly and inefficient solution using Mono.Reflection, but it doesn't yet support methods that contain try-catch statements (although I guess I can implement this) and I'm concerned that there may be other scenarios in which it won't work, since I'm using the ILGenerator in a somewhat unusual way. Also, it's very ugly ;). Here's what I've done: private void TransformMethod(MethodInfo methodInfo) { // Create a method with the same signature. ParameterInfo[] paramList = methodInfo.GetParameters(); Type[] args = new Type[paramList.Length]; for (int i = 0; i < args.Length; i++) { args[i] = paramList[i].ParameterType; } MethodBuilder methodBuilder = typeBuilder.DefineMethod( methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args); ILGenerator ilGen = methodBuilder.GetILGenerator(); // Declare the same local variables as in the original method. IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables; foreach (LocalVariableInfo local in locals) { ilGen.DeclareLocal(local.LocalType); } // Get readable instructions. IList<Instruction> instructions = methodInfo.GetInstructions(); // I first need to define labels for every instruction in case I // later find a jump to that instruction. Once the instruction has // been emitted I cannot label it, so I'll need to do it in advance. // Since I'm doing a first pass on the method's body anyway, I could // instead just create labels where they are truly needed, but for // now I'm using this quick fix. Dictionary<int, Label> labels = new Dictionary<int, Label>(); foreach (Instruction instr in instructions) { labels[instr.Offset] = ilGen.DefineLabel(); } foreach (Instruction instr in instructions) { // Mark this instruction with a label, in case there's a branch // instruction that jumps here. 
ilGen.MarkLabel(labels[instr.Offset]); // If this is the instruction that I want to replace (ldfld x)... if (instr.OpCode == OpCodes.Ldfld) { // ...get the get accessor for the accessed field (get_X()) // (I have the accessors in a dictionary; this isn't relevant), MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0]; // ...instead of emitting the original instruction (ldfld x), // emit a call to the get accessor, ilGen.Emit(OpCodes.Callvirt, safeReadAccessor); // Else (it's any other instruction), reemit the instruction, unaltered. } else { Reemit(instr, ilGen, labels); } } } And here comes the horrible, horrible Reemit method: private void Reemit(Instruction instr, ILGenerator ilGen, Dictionary<int, Label> labels) { // If the instruction doesn't have an operand, emit the opcode and return. if (instr.Operand == null) { ilGen.Emit(instr.OpCode); return; } // Else (it has an operand)... // If it's a branch instruction, retrieve the corresponding label (to // which we want to jump), emit the instruction and return. if (instr.OpCode.FlowControl == FlowControl.Branch) { ilGen.Emit(instr.OpCode, labels[Int32.Parse(instr.Operand.ToString())]); return; } // Otherwise, simply emit the instruction. I need to use the right // Emit call, so I need to cast the operand to its type. Type operandType = instr.Operand.GetType(); if (typeof(byte).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (byte) instr.Operand); else if (typeof(double).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (double) instr.Operand); else if (typeof(float).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (float) instr.Operand); else if (typeof(int).IsAssignableFrom(operandType)) ilGen.Emit(instr.OpCode, (int) instr.Operand); ... // you get the idea. This is a pretty long method, all like this. } Branch instructions are a special case because instr.Operand is SByte, but Emit expects an operand of type Label. Hence the need for the Dictionary labels. As you can see, this is pretty horrible. What's more, it doesn't work in all cases, for instance with methods that contain try-catch statements, since I haven't emitted them using methods BeginExceptionBlock, BeginCatchBlock, etc, of ILGenerator. This is getting complicated. I guess I can do it: MethodBody has a list of ExceptionHandlingClause that should contain the necessary information to do this. But I don't like this solution anyway, so I'll save this as a last-resort solution. Attempt #3: Go bare-back and just copy the byte array returned by MethodBody.GetILAsByteArray(), since I only want to replace a single instruction for another single instruction of the same size that produces the exact same result: it loads the same type of object on the stack, etc. So there won't be any labels shifting and everything should work exactly the same. I've done this, replacing specific bytes of the array and then calling MethodBuilder.CreateMethodBody(byte[], int), but I still get the same error with exceptions, and I still need to declare the local variables or I'll get an error... even when I simply copy the method's body and don't change anything. So this is more efficient but I still have to take care of the exceptions, etc. Sigh. 
Here's the implementation of attempt #3, in case anyone is interested: private void TransformMethod(MethodInfo methodInfo, Dictionary<string, MethodInfo[]> dataMembersSafeAccessors, ModuleBuilder moduleBuilder) { ParameterInfo[] paramList = methodInfo.GetParameters(); Type[] args = new Type[paramList.Length]; for (int i = 0; i < args.Length; i++) { args[i] = paramList[i].ParameterType; } MethodBuilder methodBuilder = typeBuilder.DefineMethod( methodInfo.Name, methodInfo.Attributes, methodInfo.ReturnType, args); ILGenerator ilGen = methodBuilder.GetILGenerator(); IList<LocalVariableInfo> locals = methodInfo.GetMethodBody().LocalVariables; foreach (LocalVariableInfo local in locals) { ilGen.DeclareLocal(local.LocalType); } byte[] rawInstructions = methodInfo.GetMethodBody().GetILAsByteArray(); IList<Instruction> instructions = methodInfo.GetInstructions(); int k = 0; foreach (Instruction instr in instructions) { if (instr.OpCode == OpCodes.Ldfld) { MethodInfo safeReadAccessor = dataMembersSafeAccessors[((FieldInfo) instr.Operand).Name][0]; byte[] bytes = toByteArray(OpCodes.Callvirt.Value); for (int m = 0; m < OpCodes.Callvirt.Size; m++) { rawInstructions[k++] = bytes[put.Length - 1 - m]; } bytes = toByteArray(moduleBuilder.GetMethodToken(safeReadAccessor).Token); for (int m = instr.Size - OpCodes.Ldfld.Size - 1; m >= 0; m--) { rawInstructions[k++] = bytes[m]; } } else { k += instr.Size; } } methodBuilder.CreateMethodBody(rawInstructions, rawInstructions.Length); } private static byte[] toByteArray(int intValue) { byte[] intBytes = BitConverter.GetBytes(intValue); if (BitConverter.IsLittleEndian) Array.Reverse(intBytes); return intBytes; } private static byte[] toByteArray(short shortValue) { byte[] intBytes = BitConverter.GetBytes(shortValue); if (BitConverter.IsLittleEndian) Array.Reverse(intBytes); return intBytes; } (I know it isn't pretty. Sorry. I put it quickly together to see if it would work.) I don't have much hope, but can anyone suggest anything better than this? Sorry about the extremely lengthy post, and thanks.
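
    For completeness, a rough sketch of what the attempt-#1 rewrite looks like with a recent Mono.Cecil (0.9-era API; older releases used AssemblyFactory/CilWorker instead, and this still requires the type to be written to disk first, which is exactly the overhead noted above; getAccessors is a hypothetical dictionary mapping field names to get-accessor MethodInfos):

        var module = ModuleDefinition.ReadModule("Original.dll");
        var type = module.GetType("OriginalClass");

        foreach (var method in type.Methods)
        {
            var il = method.Body.GetILProcessor();
            foreach (var instr in method.Body.Instructions.ToList())   // copy: we mutate the collection
            {
                if (instr.OpCode == OpCodes.Ldfld)
                {
                    var field = (FieldReference)instr.Operand;
                    var getter = module.Import(getAccessors[field.Name]);   // MethodInfo -> MethodReference
                    il.Replace(instr, il.Create(OpCodes.Callvirt, getter));
                }
            }
        }

        module.Write("Patched.dll");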

    Read the article

  • How to insert multiple rows using CakePHP

    - by Paul
    In the cakePHP project I'm building, I want to insert a defined number of identical records. These will serve as placeholders records that will have additional data added later. Each record will insert the IDs taken from two belongs_to relationships as well as two other string values. What I want to do is be able to enter a value for the number of records I want created, which would equate to how many times the data is looped during save. What I don't know is: 1) how to setup a loop to handle a set number of inserts 2) how to define a form field in cakePHP that only sets the number of records to create. What I've tried is the following: function massAdd() { $inserts_required = 1; while ($inserts_required <= 10) { $this->Match->create(); $this->Match->save($this->data); echo $inserts_required++; } $brackets = $this->Match->Bracket->find('list'); $this->set(compact('brackets')); } What happens is: 1) at the top of the screen, above the doc type, the string 12345678910 is displayed, this is displayed on screen 2) a total of 11 records are created, and only the last record has the values passed in the form. I don't know why 11 records as opposed to 10 are created, and why only the last records has the entered form data? As always, your help and direction is appreciated. -Paul
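
    As a sketch of the general idea (CakePHP 1.x style; "quantity" is an invented form field name that should not correspond to a database column, so it is never saved with the record):

        function massAdd() {
            if (!empty($this->data)) {
                $count = (int) $this->data['Match']['quantity'];   // how many placeholders to create
                for ($i = 0; $i < $count; $i++) {
                    $this->Match->create();                        // reset the model between saves
                    $this->Match->save($this->data['Match']);
                }
            }
            $brackets = $this->Match->Bracket->find('list');
            $this->set(compact('brackets'));
        }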

    Read the article

  • Rails: unable to set any attribute of child model

    - by Bryan Roth
    I'm having a problem instantiating a ListItem object with specified attributes. For some reason all attributes are set to nil even if I specify values. However, if I specify attributes for a List, they retain their values. Attributes for a List retain their values: >> list = List.new(:id => 20, :name => "Test List") => #<List id: 20, name: "Test List"> Attributes for a ListItem don't retain their values: >> list_item = ListItem.new(:id => 17, :list_id => 20, :name => "Test Item") => #<ListItem id: nil, list_id: nil, name: nil> UPDATE #1: I thought the id was the only attribute not retaining its value but realized that setting any attribute for a ListItem gets set to nil. list.rb: class List < ActiveRecord::Base has_many :list_items, :dependent => :destroy accepts_nested_attributes_for :list_items, :reject_if => lambda { |a| a[:name].blank? }, :allow_destroy => true end list_item.rb: class ListItem < ActiveRecord::Base belongs_to :list validates_presence_of :name end schema.rb ActiveRecord::Schema.define(:version => 20100506144717) do create_table "list_items", :force => true do |t| t.integer "list_id" t.string "name" t.datetime "created_at" t.datetime "updated_at" end create_table "lists", :force => true do |t| t.string "name" t.datetime "created_at" t.datetime "updated_at" end end
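
    A quick check that sometimes narrows this down (just a diagnostic sketch, not a fix): assign the attributes one at a time in the console; if that works while the hash form doesn't, something like attr_accessible/attr_protected, or an attribute-name clash, is filtering ListItem.new's mass assignment:

        item = ListItem.new
        item.list_id = 20
        item.name    = "Test Item"
        item                      # => do the values stick when set individually?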

    Read the article

  • Open Source CMS (.Net vs Java)

    - by CmsAndDotNetKindaGuy
    I must say up-front that this is not a strictly programming-related question and a very opinionated one. I am the lead developer of the dominant .Net CMS in my country and I don't like my own product :). Managerial decisions over what is important or not and large chunks of legacy code done before I joined gives me headache every day I go for work. Anyway, having a vast amount of experience in web industry and a very good grasp of C# and programming practices I'm designing my own CMS the past few months. Now the problem is that I'm an open source kinda guy so I have a dilemma. Should I use C# and .Net which has crippled multi-platform support or should I drop .Net entirely and start learning Java where I can create a truly open-source and cross-platform CMS? Please don't answer that I should contribute to an existing open source CMS. I ruled that out after spending a great deal of time searching for something similar in structure to what I have in mind. I am not a native speaker, so feel free to correct my syntax or rephrase my question.

    Read the article

  • What are the specific names for these OLE controls?

    - by Kris
    I have been working on making a block of code that would enable me to input values into a worksheet, as well as an MSgraph object. I have succeeded in this, but this has just presented me with a new set of problems: What are the control names for changing the visible size as well as the focus of a worksheet? What are the control names for changing/making background colours and borders? How do I create and define new worksheets and MSGraph objects inside the document? My example code so far: Option Explicit Dim objWord 'Word application object Dim objIShape 'Inline shapes object Dim objOLE 'OLE object Set objWord=CreateObject("Word.Application") objWord.Application.Documents.Open("C:\birdy.doc") objWord.Visible=True Set objIShape = objWord.ActiveDocument.InlineShapes Function count_filled_spaces(intOLENo, strRange) 'Activates the the inline shape by number(intOLENo) and defines it as the OLE object objIShape(intOLENo).OLEFormat.Activate Set objOLE = objIShape(intOLENo).OLEFormat.Object 'Detects the ClassType of the inline shape and uses a class specific counter to count which datafields have data Dim strClass, i, p, intSheetno intSheetno = 1 strClass = objIShape(intOLENo).OLEFormat.ClassType i = 0 If Left(strClass, 8) = "MSGraph." then For Each p In objOLE.Application.DataSheet.Range(strRange) If p <> "" Then i = i+1 End If Next ElseIf Left(strClass, 6) = "Excel." then For Each p In objOLE.Worksheets(intSheetno).Range(strRange) If p <> "" Then i = i+1 End If Next objOLE.Worksheets(intSheetno).Range("B" & i+1) = objOLE.Worksheets(intSheetno).Range("B" & i) End if count_filled_spaces = i End Function Dim strRange strRange = InputBox("Lol", "do eeet", "B1:B10") wscript.echo count_filled_spaces(2, strRange) 'objWord.Application.Documents.Save 'objWord.Application.Documents.Close 'objWord.Application.Quit WScript.Quit(0)
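
    A partial sketch for the second and third questions, using the ordinary Excel object model on the embedded workbook (constants are written as literals because VBScript has no Excel type library loaded; members such as Interior, Borders and Worksheets.Add are standard, but verify against the Office versions in use):

        Dim objSheet
        Set objSheet = objOLE.Worksheets(1)

        objSheet.Range("B1:B10").Interior.Color = RGB(220, 230, 241)  ' background fill
        objSheet.Range("B1:B10").Borders.LineStyle = 1                ' 1 = xlContinuous
        objSheet.Range("B1:B10").Borders.Weight = 2                   ' 2 = xlThin

        objOLE.Worksheets.Add                                         ' add a new sheet to the embedded workbook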

    Read the article

  • Using jQuery and Prototype in the same page

    - by Don
    Hi, several of my pages use both jQuery and Prototype. Since I upgraded to version 1.3 of jQuery this appears to be causing problems, because both libraries define a function named '$'. jQuery provides a function noConflict() which relinquishes control of $ to other libraries that may be using it. So it seems I need to go through all my pages that look like this: <head> <script type="text/javascript" src="/obp/js/prototype.js"></script> <script type="text/javascript" src="/obp/js/jquery.js"></script> </head> and change them to look like this: <head> <script type="text/javascript" src="/obp/js/prototype.js"></script> <script type="text/javascript" src="/obp/js/jquery.js"></script> <script type="text/javascript"> jQuery.noConflict(); var $j = jQuery; </script> </head> I should then be able to use '$' for Prototype and '$j' (or 'jQuery') for jQuery. I'm not entirely happy about duplicating these 2 lines of code in every relevant page, and I expect that at some point somebody will forget to add them to a new page. I'd prefer to do the following: create a file jquery-noconflict.js which "includes" jquery.js plus the 2 lines of code shown above, and then import jquery-noconflict.js (instead of jquery.js) in all my JSP/HTML pages. However, I'm not sure whether it's possible to include one JS file in another in the manner I've described. Of course an alternative is simply to add the 2 lines of code to jquery.js directly, but if I do that I'll need to remember to do it every time I upgrade jQuery. Thanks in advance, Don
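
    If those two extra lines really must live in a separate file, one approach from that era (a sketch that relies on document.write'd scripts executing in order while the page is being parsed; it breaks if the loader itself is added dynamically) is:

        // jquery-noconflict.js -- include this one <script> instead of jquery.js
        document.write('<script type="text/javascript" src="/obp/js/jquery.js"><\/script>');
        document.write('<script type="text/javascript">jQuery.noConflict(); var $j = jQuery;<\/script>');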

    Read the article

  • Why is Microsoft under-supporting or under-developing VB.NET?

    - by Will Marcouiller
    I ran into a situation where the lack of some features has become somewhat frustrating while developing in VB.NET 2.0. Since my first day of programming I've been a C programmer, and still am, so naturally I chose C# as my favorite .NET language. Recently, a customer of mine required that all of his development projects outside of SharePoint be written in VB.NET 2.0, in order to avoid conflicts between systems. That is a legitimate choice on his part, and one I somewhat approve of, since he's running some old central systems and slowly migrating toward newer technologies. As for me, I would have preferred to go with C#, but never having done much VB in my life, I see it as an opportunity to learn something new and find out how to handle this and that in VB.NET. Except that the syntax is really too verbose for me, which is a pain! Still, I got used to it and that is fine. However, I recently wanted to use the InternalsVisibleToAttribute, which I discovered lately here on SO. But in addition to not being able to write a lambda expression that returns no value, which I discovered months ago, today I learned that I can't use that attribute in VB.NET! Here is what I read in an article: [...] Sorry VB.Net developers, Microsoft is again shunning you guys and this attribute is NOT available to you.... :( And here is the link: InternalsVisibleTo: Testing internal methods in .Net 2.0 I heard from Anders Hejlsberg's own mouth, while watching a webcast of his presentation of the .NET 4.0 Framework, that the VB.NET team was working, or has worked, in collaboration with the C# team (Eric Lippert and others) to bring VB.NET to feature parity with C#. But then I say to myself that the VB.NET team has a huge step forward to make, if some of the most important features were already missing in .NET 2.0! So my question is this: why is Microsoft under-supporting or under-developing VB.NET? Will VB.NET always lack the C# features?

    Read the article

  • "Undefined symbols" linker error with simple template class

    - by intregus
    Been away from C++ for a few years and am getting a linker error from the following code: Gene.h #ifndef GENE_H_INCLUDED #define GENE_H_INCLUDED template <typename T> class Gene { public: T getValue(); void setValue(T value); void setRange(T min, T max); private: T value; T minValue; T maxValue; }; #endif // GENE_H_INCLUDED Gene.cpp #include "Gene.h" template <typename T> T Gene<T>::getValue() { return this->value; } template <typename T> void Gene<T>::setValue(T value) { if(value >= this->minValue && value <= this->minValue) { this->value = value; } } template <typename T> void Gene<T>::setRange(T min, T max) { this->minValue = min; this->maxValue = max; } Using Code::Blocks and GCC if it matters to anyone. Also, clearly porting some GA stuff to C++ for fun and practice.
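
    The usual cause of this particular linker error is that the member definitions live in Gene.cpp, where the compiler never sees which instantiations other translation units need. A sketch of the two standard fixes (move the definitions into the header, or explicitly instantiate the types you actually use):

        // Option 1: put the definitions in Gene.h, below the class, e.g.:
        template <typename T>
        void Gene<T>::setValue(T value)
        {
            if (value >= this->minValue && value <= this->maxValue)   // note: the original compares minValue twice
                this->value = value;
        }

        // Option 2: keep Gene.cpp, but add explicit instantiations at the end of it
        // for every T the rest of the program uses:
        template class Gene<int>;
        template class Gene<double>;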

    Read the article
