Search Results

Search found 6346 results on 254 pages for 'turn a'.

  • ASP.NET MVC 2: Linq to SQL entity w/ ForeignKey relationship and Default ModelBinder strangeness

    - by Simon
    Once again I'm having trouble with Linq to Sql and the MVC Model Binder. I have Linq to Sql generated classes; to illustrate, they look similar to this: public class Client { public int ClientID { get; set; } public string Name { get; set; } } public class Site { public int SiteID { get; set; } public string Name { get; set; } } public class User { public int UserID { get; set; } public string Name { get; set; } public int? ClientID { get; set; } public EntityRef<Client> Client { get; set; } public int? SiteID { get; set; } public EntityRef<Site> Site { get; set; } } The 'User' has a relationship with the 'Client' and the 'Site'. The User class has nullable ClientIDs and SiteIDs because admin users are not bound to a Client or Site. Now I have a view where a user can edit a 'User' object; the view has fields for all the 'User' properties. When the form is submitted, the appropriate 'Save' action is called in my UserController: public ActionResult Save(User user, FormCollection form) { //form['SiteID'] == 1 //user.SiteID == 1 //form['ClientID'] == 1 //user.ClientID == null } The problem here is that the ClientID is never set; it is always null, even though the value is in the FormCollection. To figure out what's going wrong I set breakpoints on the ClientID and SiteID getters and setters in the Linq to Sql designer generated classes. I noticed the following: SiteID is being set, then ClientID is being set, but then the Client EntityRef property is being set with a null value, which in turn sets the ClientID back to null too! I don't know why or what is trying to set the Client property, because the Site property setter is never being called; only the Client setter is being called. Manually setting the ClientID from the FormCollection like this: user.ClientID = int.Parse(form["ClientID"].ToString()); throws a 'ForeignKeyReferenceAlreadyHasValueException', because it was already set to null before. The only workaround I have found is to extend the generated partial User class with a custom method that resets the reference: Client = default(EntityRef<Client>), but this is not a satisfying solution. I don't think it should work like this. Please enlighten me, someone. So far Linq to Sql is driving me crazy! Best regards
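
    A low-effort mitigation worth sketching (my assumption, not something from the post): MVC 2's [Bind] attribute can exclude the association properties from binding entirely, so their setters never fire with null and the posted ClientID/SiteID values survive. The [HttpPost] marker and the redirect target below are illustrative only.

        using System.Web.Mvc;

        public class UserController : Controller
        {
            // Keep the DefaultModelBinder away from the EntityRef-backed
            // association properties so they cannot be assigned null and
            // reset the foreign-key columns.
            [HttpPost]
            public ActionResult Save([Bind(Exclude = "Client,Site")] User user, FormCollection form)
            {
                // user.ClientID and user.SiteID now come straight from the form
                return RedirectToAction("Index"); // placeholder
            }
        }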

  • IEnumerable<T> ToArray usage, is it a copy or a pointer?

    - by Daniel
    I am parsing an arbitrary length byte array that is going to be passed around to a few different layers of parsing. Each parser creates a Header and a Packet payload, just like any ordinary encapsulation. And my problem lies in how the encapsulation holds its packet byte array payload. Say I have a 100 byte array, and it has 3 levels of encapsulation. 3 packet objects will be created and I want to set the payload of these packets to the corresponding position in the byte array of the packet. For example, let's say the payload size is 20 for all levels, then imagine it has a public byte[] Payload on each object. However the problem is that this byte[] Payload is a copy of the original 100 bytes. So I'm going to end up with 160 bytes in memory instead of 100. If it were in C++ I could easily just use a pointer; however, I'm writing this in C#. So I created the following class: public class PayloadSegment<T> : IEnumerable<T> { public readonly T[] Array; public readonly int Offset; public readonly int Count; public PayloadSegment(T[] array, int offset, int count) { this.Array = array; this.Offset = offset; this.Count = count; } public T this[int index] { get { if (index < 0 || index >= this.Count) throw new IndexOutOfRangeException(); else return Array[Offset + index]; } set { if (index < 0 || index >= this.Count) throw new IndexOutOfRangeException(); else Array[Offset + index] = value; } } public IEnumerator<T> GetEnumerator() { for (int i = Offset; i < Offset + Count; i++) yield return Array[i]; } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { IEnumerator<T> enumerator = this.GetEnumerator(); while (enumerator.MoveNext()) { yield return enumerator.Current; } } } This way I can simply reference a position inside the original byte array but use positional indexing. However, if I do something like: PayloadSegment<byte> something = new PayloadSegment<byte>(someArray, 5, 10); byte[] somethingArray = something.ToArray(); will the somethingArray be a copy of the bytes, or a reference to the original PayloadSegment, which in turn is a reference to the original byte array? Sorry, it was hard to word this lol _<
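
    As background for the final question: Enumerable.ToArray always allocates a fresh array and copies the enumerated elements into it, so the result is a snapshot, not a view; writes to it never reach the underlying buffer. A minimal sketch demonstrating this with plain LINQ (not the PayloadSegment class above):

        using System;
        using System.Linq;

        class ToArrayDemo
        {
            static void Main()
            {
                byte[] original = { 1, 2, 3, 4, 5 };

                // ToArray materializes the enumeration into a brand-new array.
                byte[] snapshot = original.Skip(1).Take(3).ToArray();
                snapshot[0] = 99;

                Console.WriteLine(original[1]); // prints 2: the source is untouched
            }
        }

    If aliasing rather than copying is the goal, the segment object itself has to be passed around; the BCL's ArraySegment<T> models the same array/offset/count triple.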

  • polymorphism, inheritance in c# - base class calling overridden method?

    - by Andrew Johns
    This code doesn't work, but hopefully you'll get what I'm trying to achieve here. I've got a Money class, which I've taken from http://www.noticeablydifferent.com/CodeSamples/Money.aspx, and extended a little to include currency conversion. The implementation of the actual conversion rate could be different in each project, so I decided to move the actual method for retrieving a conversion rate (GetCurrencyConversionRate) into a derived class, but the ConvertTo method contains code that would work for any implementation (assuming the derived class has overridden GetCurrencyConversionRate), so it made sense to me to keep it in the parent class. So what I'm trying to do is get an instance of SubMoney and be able to call the .ConvertTo() method, which would in turn use the overridden GetCurrencyConversionRate and return a new instance of SubMoney. The problem is, I'm not really understanding some concepts of polymorphism and inheritance yet, so I'm not quite sure whether what I'm trying to do is even possible in the way I think it is; what is currently happening is that I end up with an Exception because it has used the base GetCurrencyConversionRate method instead of the derived one. Something tells me I need to move the ConvertTo method down to the derived class, but then it seems like I'll be duplicating code in multiple implementations, so surely there's a better way? public class Money { public decimal CurrencyConversionRate { get { return GetCurrencyConversionRate(_regionInfo.ISOCurrencySymbol); } } public static decimal GetCurrencyConversionRate(string isoCurrencySymbol) { throw new Exception("Must override this method if you wish to use it."); } public Money ConvertTo(string cultureName) { // convert to base USD first by dividing current amount by its exchange rate. Money someMoney = this; decimal conversionRate = this.CurrencyConversionRate; decimal convertedUSDAmount = Money.Divide(someMoney, conversionRate).Amount; // now convert to new currency CultureInfo cultureInfo = new CultureInfo(cultureName); RegionInfo regionInfo = new RegionInfo(cultureInfo.LCID); conversionRate = GetCurrencyConversionRate(regionInfo.ISOCurrencySymbol); decimal convertedAmount = convertedUSDAmount * conversionRate; Money convertedMoney = new Money(convertedAmount, cultureName); return convertedMoney; } } public class SubMoney : Money { public SubMoney(decimal amount, string cultureName) : base(amount, cultureName) {} public static new decimal GetCurrencyConversionRate(string isoCurrencySymbol) { // This would get the conversion rate from some web or database source decimal result = new Decimal(2); return result; } }
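
    The root cause: static members are bound at compile time, and static new only hides the base member rather than overriding it. A hedged sketch of the usual shape of the fix, keeping ConvertTo in the base class but making the rate lookup a virtual instance method (constructors and the currency plumbing are omitted, so this is an outline rather than the poster's full class):

        using System;

        public class Money
        {
            // virtual: dispatched at runtime against the actual object type, so
            // ConvertTo in the base class picks up the derived implementation.
            public virtual decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                throw new NotSupportedException(
                    "Override GetCurrencyConversionRate in a derived class.");
            }
        }

        public class SubMoney : Money
        {
            public override decimal GetCurrencyConversionRate(string isoCurrencySymbol)
            {
                return 2m; // stand-in for the real web/database lookup
            }
        }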

  • Objective-C++ Memory Problem

    - by Stephen Furlani
    Hello, I'm having memory woes. I've got a C++ library (Equalizer from Eyescale) and they use the Traversal Visitor Pattern to allow you to add new functionality to their classes. I've finally figured out how it works, and I've got a Visitor that just returns the properties from one of the objects (since I don't know how they're allocated). So, my little code does this: VisitorResult AGLContextVisitor::visit( Channel* channel ) { // Search through Nodes, Pipes until we get to the right window. // Add some code to make sure we find the right one? // Not executing the following code as C++ in gdb? eq::Window* w = channel->getWindow(); OSWindow* osw = w->getOSWindow(); AGLWindow* aw = (AGLWindow *)osw; AGLContext agl_ctx = aw->getAGLContext(); this->setContext(agl_ctx); return TRAVERSE_PRUNE; } So here's the problem. eq::Window* w = channel->getWindow(); (gdb) print w 0x0 BUT if I do this: (gdb) set objc-non-blocking-mode off (gdb) print w=channel->getWindow() 0x300effb9 // an honest memory location, and it sets w, as verified in the Debugger window of Xcode. It does the same thing for osw. I don't get it. Why would something work in (gdb) but not in the code? The file is completely a .cpp file, but it seems to be running as Objective-C++, since I need to turn blocking off. Help!? I feel like I'm missing something memory-management basic here, either with C++ or Obj-C. [edit] channel->getWindow() is supposed to do this: /** @return the parent window. @version 1.0 */ Window* getWindow() { return _window; } The code also executes fine if I run it from a C++-only application. [edit] No... I tried creating a simple stand-alone program since I was tired of running it as a plugin. Messy to debug. And no, it doesn't run in the C++ program either. So I'm really at a loss as to what I'm doing wrong. Thanks, -- Stephen Furlani

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

  • Perl - Calling subclass constructor from superclass (OO)

    - by Emmel
    This may turn out to be an embarrassingly stupid question, but better that than potentially creating embarrassingly stupid code. :-) This is an OO design question, really. Let's say I have an object class 'Foos' that represents a set of dynamic configuration elements, which are obtained by querying a command on disk, 'mycrazyfoos -getconfig'. Let's say that there are two categories of behavior that I want 'Foos' objects to have: Existing ones: one is, query ones that exist in the command output I just mentioned (/usr/bin/mycrazyfoos -getconfig). Make modifications to existing ones via shelling out commands. Create new ones that don't exist; new 'crazyfoos', using a complex set of /usr/bin/mycrazyfoos commands and parameters. Here I'm not really just querying, but actually running a bunch of system() commands. Affecting changes. Here's my class structure: Foos.pm package Foos, which has a new({ name => 'myfooname' }) constructor that takes a 'crazyfoo NAME' and then queries the existence of that NAME to see if it already exists (by shelling out and running the mycrazyfoos command above). If that crazyfoo already exists, return a Foos::Existing object. Any changes to this object require shelling out, running commands and getting confirmation that everything ran okay. If this is the way to go, then the new() constructor needs a test to see which subclass constructor to use (if that even makes sense in this context). Here are the subclasses: Foos/Existing.pm As mentioned above, this is for when a Foos object already exists. Foos/Pending.pm This is an object that will be created if, in the above, the 'crazyfoo NAME' doesn't actually exist. In this case, the new() constructor above will be checked for additional parameters, and it will go ahead and, when called using ->create(), shell out using system() and create a new object... possibly returning an 'Existing' one... OR As I type this out, I am realizing it is perhaps better to have a single: (an alternative arrangement) Foos class, that has a ->new() that takes just a name, a ->create() that takes additional creation parameters, and ->delete(), ->change() and other methods that affect ones that exist; those will have to just be checked dynamically. So here we are, two main directions to go with this. I'm curious which would be the more intelligent way to go.

  • multithreading with database

    - by Darsin
    I am looking for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading, so I will outline my scenario first. This synchronous operation right now is done for one set of data (portfolio) based on the parameters provided. The (pseudo-code) implementation is given below: public DataSet DoTests(int fundId, DateTime portfolioDate) { // Get test results for the portfolio // Call the database adapter method, which in turn is a stored procedure, // which in turn runs a series of "rule" stored procs and fills a local temp table and returns it back. DataSet resultsDataSet = GetTestResults(fundId, portfolioDate); try { // Do some local processing on the results DoSomeProcessing(resultsDataSet); // Save the results in Test, TestResults and TestAllocations tables in a transaction. // Sets a global transaction which is provided to all the adapter methods called below // It is defined in the Base class StartTransaction("TestTransaction"); // Save Test and get a testId int testId = UpdateTest(resultsDataSet); // Adapter method, uses the same transaction // Update testId in the other tables in the dataset UpdateTestId(resultsDataSet, testId); // Update TestResults UpdateTestResults(resultsDataSet); // Adapter method, uses the same transaction // Update TestAllocations UpdateTestAllocations(resultsDataSet); // Adapter method, uses the same transaction // It is defined in the base class CommitTransaction("TestTransaction"); } catch { RollbackTransaction("TestTransaction"); } return resultsDataSet; } Now the requirement is to do it for multiple sets of data. One way would be to call the above DoTests() method in a loop and get the data. I would prefer doing it in parallel. But there are certain catches: StartTransaction() method creates a connection (and transaction) every time it is called. All the underlying database tables and procedures are the same for each call of DoTests(). (obviously). Thus my questions are: Will using multithreading actually improve performance? What are the chances of deadlock, especially when new TestIds are being created and the Tests, TestResults and TestAllocations are being saved? How can these deadlocks be handled? Is there any other, more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?
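
    A sketch of one way the parallel version could look, under the assumption that each worker gets its own connection and transaction instead of sharing the class-level StartTransaction state; TestRunner/RunAll are illustrative names and the adapter calls are left as comments. With isolated connections, the remaining deadlock risk comes down to which rows and key ranges the stored procedures touch.

        using System;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        public class TestRunner
        {
            private readonly string _connectionString;

            public TestRunner(string connectionString)
            {
                _connectionString = connectionString;
            }

            // Each iteration gets a private connection + transaction, so no
            // state is shared between the parallel DoTests-style calls.
            public void RunAll(int[] fundIds, DateTime portfolioDate)
            {
                Parallel.ForEach(fundIds, fundId =>
                {
                    using (var connection = new SqlConnection(_connectionString))
                    {
                        connection.Open();
                        using (var transaction = connection.BeginTransaction())
                        {
                            try
                            {
                                // ... GetTestResults / UpdateTest / UpdateTestResults /
                                // UpdateTestAllocations, all passed this transaction ...
                                transaction.Commit();
                            }
                            catch
                            {
                                transaction.Rollback();
                                throw;
                            }
                        }
                    }
                });
            }
        }

    Parallel.ForEach is .NET 4; on 2.0/3.5 the same shape works with asynchronous delegates or ThreadPool.QueueUserWorkItem.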

  • GridView's ItemContainerStyle and selection states

    - by Roberto Casadei
    I'm trying to change the appearance of gridview items when they are selected. (Before, I used a trick with an IsSelected property in the ViewModel object bound to the containing grid and a bool-to-color converter, but I recognize that it is bad.) To do so, I do: <GridView ItemContainerStyle="{StaticResource GridViewItemContainerStyle}" ...> ... and <Style x:Key="GridViewItemContainerStyle" TargetType="GridViewItem"> <Setter Property="Background" Value="Red" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="GridViewItem"> <Grid> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(Grid.Background)" Storyboard.TargetName="itemGrid"> <DiscreteObjectKeyFrame KeyTime="0" Value="Black"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> </VisualStateGroup> <VisualStateGroup x:Name="SelectionStates"> <VisualState x:Name="UnselectedSwiping"/> <VisualState x:Name="UnselectedPointerOver"/> <VisualState x:Name="Selecting"/> <VisualState x:Name="Selected"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(Grid.Background)" Storyboard.TargetName="itemGrid"> <DiscreteObjectKeyFrame KeyTime="0" Value="White"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="SelectedSwiping"/> <VisualState x:Name="Unselecting"/> <VisualState x:Name="Unselected"/> <VisualState x:Name="SelectedUnfocused"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups> <Grid ... x:Name="itemGrid"> <!-- HERE MY DATA TEMPLATE --> </Grid> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> When I run the app, the items are Black (as in the "Normal" state), but selecting them does not turn them White. Where am I going wrong? Moreover, is there a way to set "ItemContainerStyle" without having it "overwrite" the "ItemTemplate"?

  • Learning... anything really

    - by WebDevHobo
    I'm particularly interested in Windows PowerShell, but here's a somewhat more general complaint: when asking for help on learning something new, be it a small subject in PHP or understanding a class in Java, what usually happens is that people direct me towards the documentation pages. What I'm looking for is somewhat of a course. A deep explanation of why something works the way it does. I know my basic programming, like Java and C#. I've never seen C or C++, though I have seen a bit of assembler. I know what the Stack and Heap are, how boxing and unboxing works, why you have to deep-copy an array instead of copying the pointer, and some other things. Windows PowerShell, on the other hand, I know nothing about. And I notice that when reading a small document or some code, I usually forget what it does or why it works. What I am looking for, preferably, is a nice tutorial that explains the beginnings and the concepts, and moves on to more difficult things at a steady pace. The only thing documentation can do is explain what a function does. That's no good to me, since I don't know what I want to do yet. I could read about a thousand functions and forget about most of them, because I don't need to implement them right afterwards. Randomly wandering through the documentation doesn't do me any good. To conclude: what is a good tutorial on Windows PowerShell? One which explains in clear language what is happening, one which builds on previous things learned. I don't think googling this is a good idea. Doing a Google search on this would turn up numerous tutorials, and experience tells me that you have to look long and hard to find the gem you're looking for. That's why I'm asking here. Because this is the place where you can find more experienced people. Many of the PowerShell guys among you will know the good ones already, and by asking you, I avoid wasting time that could be spent learning. So to summarize: I will not google this!

  • C#/.NET Project - Am I setting things up correctly?

    - by JustLooking
    1st solution located: \Common\Controls\Controls.sln and its project: \Common\Controls\Common.Controls\Common.Controls.csproj Description: This is a library that contains this class: public abstract class OurUserControl : UserControl { // Variables and other getters/setters common to our UserControls } 2nd solution located: \AControl\AControl.sln and its project: \AControl\AControl\AControl.csproj Description: Of the many forms/classes, it will contain this class: using Common.Controls; namespace AControl { public partial class AControl : OurUserControl { // The implementation } } A note about adding references (not sure if this is relevant): When I add references (for projects I create), using the names above: 1. I add Common.Controls.csproj to AControl.sln 2. In AControl.sln I turn off the build of Common.Controls.csproj 3. I add the reference to Common.Controls (by project) to AControl.csproj. This is the (easiest) way I know how to get Debug versions to match Debug References, and Release versions to match Release References. Now, here is where the issue lies (the 3rd solution/project that actually utilizes the UserControl): 3rd solution located: \MainProj\MainProj.sln and its project: \MainProj\MainProj\MainProj.csproj Description: Here's a sample function in one of the classes: private void TestMethod<T>() where T : Common.Controls.OurUserControl, new() { T TheObject = new T(); TheObject.OneOfTheSetters = something; TheObject.AnotherOfTheSetters = something_else; // Do stuff with the object } We might call this function like so: private void AnotherMethod() { TestMethod<AControl.AControl>(); } This builds, runs, and works. No problem. The odd thing is after I close the project/solution and re-open it, I have red squigglies everywhere. I bring up my error list and I see tons of errors (anything that deals with AControl will be noted as an error). I'll see errors such as: The type 'AControl.AControl' cannot be used as type parameter 'T' in the generic type or method 'MainProj.MainClass.TestMethod()'. There is no implicit reference conversion from 'AControl.AControl' to 'Common.Controls.OurUserControl'. or inside the actual method (the properties located in the abstract class): 'AControl.AControl' does not contain a definition for 'OneOfTheSetters' and no extension method 'OneOfTheSetters' accepting a first argument of type 'AControl.AControl' could be found (are you missing a using directive or an assembly reference?) Meanwhile, I can still build and run the project (then the red squigglies go away until I re-open the project, or close/re-open the file). It seems to me that I might be setting up the projects incorrectly. Thoughts?

  • C++ - Breaking code implementation into different parts

    - by Kotti
    Hi! The question plot (a bit abstract, but answering this question will help me in my real app): So, I have some abstract superclass for objects that can be rendered on the screen. Let's call it IRenderable. struct IRenderable { // (...) virtual void Render(RenderingInterface& ri) = 0; virtual ~IRenderable() { } }; And suppose I also have some other objects that derive from IRenderable, e.g. Cat and Dog. So far so good. I add some Cat- and Dog-specific methods, like SeekForWhiskas(...) and Bark(...). After that I add a specific Render(...) method for each, so my code looks this way: class Cat : public IRenderable { public: void SeekForWhiskas(...) { // Implementation could be here or moved // to a source file (depends on me wanting // to inline it or not) } virtual void Render(...) { // Here comes the rendering routine, that // is specific for cats SomehowDrawAppropriateCat(...); } }; class Dog : public IRenderable { public: void Bark(...) { // Same as for 'SeekForWhiskas(...)' } virtual void Render(...) { // Here comes the rendering routine, that // is specific for dogs DrawMadDog(...); } }; And then somewhere else I can do the drawing so that the appropriate rendering routine is called: IRenderable* dog = new Dog(); dog->Render(...); My question is about the logical grouping of such code. I want to break apart the code that corresponds to rendering of the current object from its own methods (Render and Bark in this example), so that my class implementation doesn't turn into a mess (imagine that I have 10 methods like Bark; of course my Render method doesn't fit in their company and would be hard to find). Two ways of doing what I want (as far as I know) are: Making appropriate routines that look like RenderCat(Cat& cat, RenderInterface* ri), joining them into a render namespace, so that the functions inside a class would look like virtual void Render(...) { RenderCat(*this, ...); }, but this is plain stupid, because I'll lose access to Cat's private members, and friending these functions looks like a total design disaster. Using the visitor pattern, but this would also mean I have to rebuild my app's design, and it looks like an inadequate way that complicates my code from the very beginning. Any brilliant ideas? :)

  • Thinking about introducing PHP/MySQL into a .NET/SQL Server environment. Thoughts?

    - by abszero
    I posted this over at reddit but it didn't gain any momentum. So here is what is going on: our company was recently purchased by another web shop and I was promoted to head of development here in our office. Our office is completely .NET/SQL Server and the company that purchased us is a *nix/PHP/MySQL shop. Now several of our large clients who are on the .NET platform are up for complete rewrites (the sites are from '04 and are running on the 1.x framework). While reviewing the proposal for one client with my superior, I came across a pretty extensive module which would require several hundred man-hours to complete and voiced some concern about it in relation to the quote. One of the guys from the PHP group happened to hear this and told me of a module that they (the PHP group) use in Drupal that does exactly what the proposal in front of me was describing, and it only took, at most, 8 hours to completely set up and configure. My superior suggested that I take a look at Drupal and the module in question over the weekend, but stressed that we should only go that route if it really made sense. So this weekend I spun up a CentOS instance in VirtualBox and started playing around with Drupal. I am still fleshing it out, so I don't have a solid opinion on it just yet. Anyway, I have some questions / fears that I was hoping progit could help me out with! Has anyone had experience doing this and, if so, how did it turn out? I am completely ignorant of what IDEs (if any) are available for PHP. The last time I worked with PHP it was in Notepad and that was less than intuitive. So is there a more intuitive IDE out there for PHP dev? I don't want to scare my .NET guys. Since the merger, all of our new business clients that have had relatively small websites have gone on Drupal, with the larger sites going on .NET. My concern is that if they see a large site go onto Drupal they might start getting anxious and start handing out their resumes. For the foreseeable future there are no plans to liquidate the .NET platform, and really we can't, just from a support standpoint. What would be the best way to approach this? Any other helpful info? Thanks!

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an MSSQL database table, "Orders", that will contain a varchar(50) field, "Value", containing a string that represents a slightly complex data type, "OrderValue". I am using a LINQ to SQL DataContext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the LINQ to SQL classes like this: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // use implicit conversion to turn the string representation of the order // value into the complex data type. OrderValue value = order.Value; // adjust one of the fields in the complex data type value.Shipping += 10; // use implicit conversion to store the string representation of the complex // data type back in the linqtosql order object order.Value = value; // save changes db.SubmitChanges(); However, I would really like to be able to tell the LINQ to SQL class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // The Value field is already typed as the "OrderValue" type rather than as string. // When a string value was read from the database table, it was implicitly converted // to "OrderValue" type. order.Value.Shipping += 10; // save changes db.SubmitChanges(); In order to achieve this desired goal, I looked at the DataContext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL". The project built without errors. However, when reading from the database table, I was presented with the following error message: Could not convert from type 'System.String' to type 'MyApplication.OrderValue'. at System.Data.Linq.DBConvert.ChangeType(Object value, Type type) at Read_Order(ObjectMaterializer`1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext() at System.Linq.Buffer`1..ctor(IEnumerable`1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Example.OrdersProvider.GetOrders() at ... etc. From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben
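
    For illustration, the conversion-operator pair on OrderValue presumably looks something like the sketch below; the pipe-delimited format and the Shipping/Subtotal fields are assumptions, not the poster's actual layout. One relevant detail: implicit operators are a compile-time construct, so runtime conversion code that works reflectively (as DBConvert.ChangeType appears to) will not discover them on its own.

        using System;

        public class OrderValue
        {
            public decimal Shipping { get; set; }
            public decimal Subtotal { get; set; }

            // string -> OrderValue: parse the varchar(50) representation.
            public static implicit operator OrderValue(string text)
            {
                string[] parts = text.Split('|');
                return new OrderValue
                {
                    Shipping = decimal.Parse(parts[0]),
                    Subtotal = decimal.Parse(parts[1])
                };
            }

            // OrderValue -> string: the representation stored in the table.
            public static implicit operator string(OrderValue value)
            {
                return value.Shipping + "|" + value.Subtotal;
            }
        }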

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ASCII-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8: import cStringIO example_file = cStringIO.StringIO("""\ header CAGTcag TFgcACF """) for read in parse(example_file): ... print read ... CAGTCAGTF AGTCAGTFG GTCAGTFGC TCAGTFGCA CAGTFGCAC AGTFGCACF The function that I found had the absolute best performance of the methods I could think of is this: def parse(file): size = 8 # of course in my code this is a function argument file.readline() # skip past the header buffer = '' for line in file: buffer += line.rstrip().upper() while len(buffer) >= size: yield buffer[:size] buffer = buffer[1:] This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!

  • casting doubles to integers in order to gain speed

    - by antirez
    Hello all, in Redis (http://code.google.com/p/redis) there are scores associated with elements, in order to retrieve those elements sorted. These scores are doubles, even though many users actually sort by integers (for instance, Unix times). When the database is saved we need to write these doubles to disk. This is what is used currently: snprintf((char*)buf+1,sizeof(buf)-1,"%.17g",val); Additionally, infinity and not-a-number conditions are checked, in order to also represent these in the final database file. Unfortunately, converting a double into its string representation is pretty slow. We do have a function in Redis that converts an integer into its string representation in a much faster way, though. So my idea was to check whether a double could be cast to an integer without loss of data, and then use that function to turn the integer into a string if this is true. For this to provide a good speedup, of course, the test for integer "equivalence" must be fast. So I used a trick that is probably undefined behavior but that worked very well in practice. Something like this: double x = ... some value ... if (x == (double)((long long)x)) use_the_fast_integer_function((long long)x); else use_the_slow_snprintf(x); In my reasoning the cast above converts the double into a long long, and then back into a double. If the range fits, and there is no decimal part, the number will survive the conversion and will be exactly the same as the initial number. As I wanted to make sure this will not break things on some system, I joined #c on freenode and I got a lot of insults ;) So I'm now trying here. Is there a standard way to do what I'm trying to do without going outside ANSI C? Otherwise, is the above code supposed to work on all the POSIX systems that Redis currently targets? That is, archs where Linux / Mac OS X / *BSD / Solaris are running nowadays? What I can add in order to make the code saner is an explicit check for the range of the double before trying the cast at all. Thank you for any help.

  • How do I handle the Maybe result of at in Control.Lens.Indexed without a Monoid instance

    - by Matthias Hörmann
    I recently discovered the lens package on Hackage and have been trying to make use of it now in a small test project that might turn into a MUD/MUSH server one very distant day if I keep working on it. Here is a minimized version of my code illustrating the problem I am facing right now with the at lenses used to access Key/Value containers (Data.Map.Strict in my case): {-# LANGUAGE OverloadedStrings, GeneralizedNewtypeDeriving, TemplateHaskell #-} module World where import Control.Applicative ((<$>),(<*>), pure) import Control.Lens import Data.Map.Strict (Map) import qualified Data.Map.Strict as DM import Data.Maybe import Data.UUID import Data.Text (Text) import qualified Data.Text as T import System.Random (Random, randomIO) newtype RoomId = RoomId UUID deriving (Eq, Ord, Show, Read, Random) newtype PlayerId = PlayerId UUID deriving (Eq, Ord, Show, Read, Random) data Room = Room { _roomId :: RoomId , _roomName :: Text , _roomDescription :: Text , _roomPlayers :: [PlayerId] } deriving (Eq, Ord, Show, Read) makeLenses ''Room data Player = Player { _playerId :: PlayerId , _playerDisplayName :: Text , _playerLocation :: RoomId } deriving (Eq, Ord, Show, Read) makeLenses ''Player data World = World { _worldRooms :: Map RoomId Room , _worldPlayers :: Map PlayerId Player } deriving (Eq, Ord, Show, Read) makeLenses ''World mkWorld :: IO World mkWorld = do r1 <- Room <$> randomIO <*> (pure "The Singularity") <*> (pure "You are standing in the only place in the whole world") <*> (pure []) p1 <- Player <$> randomIO <*> (pure "testplayer1") <*> (pure $ r1^.roomId) let rooms = at (r1^.roomId) ?~ (set roomPlayers [p1^.playerId] r1) $ DM.empty players = at (p1^.playerId) ?~ p1 $ DM.empty in do return $ World rooms players viewPlayerLocation :: World -> PlayerId -> RoomId viewPlayerLocation world playerId = view (worldPlayers.at playerId.traverse.playerLocation) world Since rooms, players and similar objects are referenced all over the code, I store them in my World state type as maps of Ids (newtyped UUIDs) to their data objects. To retrieve those with lenses I need to handle the Maybe returned by the at lens (in case the key is not in the map this is Nothing) somehow. In my last line I tried to do this via traverse, which does typecheck as long as the final result is an instance of Monoid, but this is not generally the case. Right here it is not, because playerLocation returns a RoomId, which has no Monoid instance. No instance for (Data.Monoid.Monoid RoomId) arising from a use of `traverse' Possible fix: add an instance declaration for (Data.Monoid.Monoid RoomId) In the first argument of `(.)', namely `traverse' In the second argument of `(.)', namely `traverse . playerLocation' In the second argument of `(.)', namely `at playerId . traverse . playerLocation' Since the Monoid is required by traverse only because traverse generalizes to containers of sizes greater than one, I was now wondering if there is a better way to handle this that does not require semantically nonsensical Monoid instances on all types possibly contained in one of the objects I want to store in the map. Or maybe I misunderstood the issue here completely and I need to use a completely different bit of the rather large lens package?

  • Model Binding with Parent/Child Relationship

    - by user296297
    I'm sure this has been answered before, but I've spent the last three hours looking for an acceptable solution and have been unable to find anything, so I apologize for what I'm sure is a repeat. I have two domain objects, Player and Position. Players have a Position. My domain objects are POCOs tied to my database with NHibernate. I have an Add action that takes a Player, so I'm using the built-in model binding. On my view I have a drop-down list that lets a user select the Position for the Player. The value of the drop-down list is the Id of the Position. Everything gets populated correctly, except that my Position object fails validation (ModelState.IsValid) because at the point of model binding it only has an Id and none of its other required attributes. What is the preferred solution for solving this with ASP.NET MVC 2? Solutions I've tried... Fetch the Position from the database based on the Id before ModelState.IsValid is called in the Add action of my controller. I can't get the model to run the validation again, so ModelState.IsValid always returns false. Create a custom ModelBinder that inherits from the default binder and fetches the Position from the database after the base binder is called. The ModelBinder seems to be doing the validation, so if I use anything from the default binder I'm hosed. Which means I have to completely roll my own binder and grab every value from the form... this seems really wrong and inefficient for such a common use-case. Solutions I think might work, I just can't figure out how to do... Turn off the validation for the Position class when used in Player. Write a custom ModelBinder that leverages the default binder for most of the property binding, but lets me get the Position from the database BEFORE the default binder runs validation. So, how do the rest of you solve this? Thanks, Dan P.S. In my opinion, having a PositionId on Player just for this case is not a good solution. This has to be solvable in a more elegant fashion.
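
    A sketch of the second "might work" option, with a hypothetical IPositionRepository standing in for the real NHibernate access: let the DefaultModelBinder handle every property except Position, and bind that one by loading the full entity from the posted id, so validation runs against a complete object instead of an id-only stub.

        using System.ComponentModel;
        using System.Web.Mvc;

        public interface IPositionRepository // hypothetical data-access seam
        {
            Position GetById(int id);
        }

        public class PlayerModelBinder : DefaultModelBinder
        {
            private readonly IPositionRepository _positions;

            public PlayerModelBinder(IPositionRepository positions)
            {
                _positions = positions;
            }

            protected override void BindProperty(ControllerContext controllerContext,
                ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor)
            {
                if (propertyDescriptor.Name == "Position")
                {
                    ValueProviderResult posted =
                        bindingContext.ValueProvider.GetValue(propertyDescriptor.Name);
                    int positionId;
                    if (posted != null && int.TryParse(posted.AttemptedValue, out positionId))
                    {
                        // Swap the id-only stub for the fully populated entity.
                        propertyDescriptor.SetValue(bindingContext.Model,
                            _positions.GetById(positionId));
                        return;
                    }
                }
                base.BindProperty(controllerContext, bindingContext, propertyDescriptor);
            }
        }

    The binder would be registered once, e.g. ModelBinders.Binders[typeof(Player)] = new PlayerModelBinder(repository); whether it then plays nicely with the rest of the validation pipeline is exactly the open question in the post.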

  • User has many computers, computers have many attributes in different tables, best way to JOIN?

    - by krismeld
    I have a table for users: USERS: ID | NAME | ---------------- 1 | JOHN | 2 | STEVE | a table for computers: COMPUTERS: ID | USER_ID | ------------------ 13 | 1 | 14 | 1 | a table for processors: PROCESSORS: ID | NAME | --------------------------- 27 | PROCESSOR TYPE 1 | 28 | PROCESSOR TYPE 2 | and a table for harddrives: HARDDRIVES: ID | NAME | ---------------------------| 35 | HARDDRIVE TYPE 25 | 36 | HARDDRIVE TYPE 90 | Each computer can have many attributes from the different attribute tables (processors, harddrives etc), so I have intersection tables like this, to link the attributes to the computers: COMPUTER_PROCESSORS: C_ID | P_ID | --------------| 13 | 27 | 13 | 28 | 14 | 27 | COMPUTER_HARDDRIVES: C_ID | H_ID | --------------| 13 | 35 | So user JOHN, with id 1, owns computers 13 and 14. Computer 13 has processors 27 and 28, and computer 13 has harddrive 35. Computer 14 has processor 27 and no harddrive. Given a user's id, I would like to retrieve a list of that user's computers with each computer's attributes. I have figured out a query that gives me somewhat of a result: SELECT computers.id, processors.id AS p_id, processors.name AS p_name, harddrives.id AS h_id, harddrives.name AS h_name FROM computers JOIN computer_processors ON (computer_processors.c_id = computers.id) JOIN processors ON (processors.id = computer_processors.p_id) JOIN computer_harddrives ON (computer_harddrives.c_id = computers.id) JOIN harddrives ON (harddrives.id = computer_harddrives.h_id) WHERE computers.user_id = 1 Result: ID | P_ID | P_NAME | H_ID | H_NAME | ----------------------------------------------------------- 13 | 27 | PROCESSOR TYPE 1 | 35 | HARDDRIVE TYPE 25 | 13 | 28 | PROCESSOR TYPE 2 | 35 | HARDDRIVE TYPE 25 | But this has several problems... Computer 14 doesn't show up, because it has no harddrive. Can I somehow make an OUTER JOIN to make sure that all computers show up, even if there are some attributes they don't have? Computer 13 shows up twice, with the same harddrive listed for both rows. When more attributes are added to a computer (like 3 blocks of RAM), the number of rows returned for that computer gets pretty big, and it makes it hard to sort the result out in application code. Can I somehow make a query that groups the two returned rows together? Or a query that returns NULL in the h_name column in the second row, so that all values returned are unique? EDIT: What I would like to return is something like this: ID | P_ID | P_NAME | H_ID | H_NAME | ----------------------------------------------------------- 13 | 27 | PROCESSOR TYPE 1 | 35 | HARDDRIVE TYPE 25 | 13 | 28 | PROCESSOR TYPE 2 | 35 | NULL | 14 | 27 | PROCESSOR TYPE 1 | NULL | NULL | Or whatever result makes it easy to turn it into an array like this: [13] => [P_NAME] => [0] => PROCESSOR TYPE 1 [1] => PROCESSOR TYPE 2 [H_NAME] => [0] => HARDDRIVE TYPE 25 [14] => [P_NAME] => [0] => PROCESSOR TYPE 1

  • Paypal IPN: how to get the POSTs from this class?

    - by sineverba
    I'm using this class: <?php class paypalIPN { //sandbox: private $paypal_url = 'https://www.sandbox.paypal.com/cgi-bin/webscr'; //live site: //private $paypal_url = 'https://www.paypal.com/cgi-bin/webscr'; private $data = null; public function __construct() { $this->data = new stdClass; } public function isa_dispute() { //is it some sort of dispute. return $this->data->txn_type == "new_case"; } public function validate() { // parse the paypal URL $response = ""; $url_parsed = parse_url($this->paypal_url); // generate the post string from the _POST vars aswell as load the // _POST vars into an arry so we can play with them from the calling // script. $post_string = ''; foreach ($_POST as $field=>$value) { $this->data->$field = $value; $post_string .= $field.'='.urlencode(stripslashes($value)).'&'; } $post_string.="cmd=_notify-validate"; // append ipn command $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $this->paypal_url); //curl_setopt($ch, CURLOPT_VERBOSE, 1); //keep the peer and server verification on, recommended //(can switch off if getting errors, turn to false) curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE); curl_setopt($ch, CURLOPT_RETURNTRANSFER,1); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $post_string); $response = curl_exec($ch); if (curl_errno($ch)) { die("Curl Error: " . curl_errno($ch) . ": " . curl_error($ch)); } curl_close($ch); return $response; if (preg_match("/VERIFIED/", $response)) { // Valid IPN transaction. return $this->data; } else { return false; } } } and I call it in this way: public function get_ipn() { $ipn = new paypalIPN(); $result = $ipn->validate(); $logger = new Log('/error.log'); $logger->write(print_r($result)); } But I obtain only "VERIFIED" or "1" (with or without the print_r function). I also tried to return the raw curl response directly with return $response; or return $this->response; or return $this->parse_string; but every time I receive only "1" or "VERIFIED". Thank you very much

  • How to design service that can provide interface as JAX-WS web service, or via JMS, or as local meth

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or called directly from another service in the same container? Here's some more context. Currently I am responsible for a service (let's call it Service X) with the following properties: Interface definition is a human-readable document kept up to date manually. Accepts HTTP form-encoded requests to a single URL. Sends plain old XML responses (no schema). Uses Apache to accept requests + a proprietary application server (not servlet or EJB based) containing all logic, which runs in a separate tier. Makes heavy use of a relational database. Called both by internal applications written in a variety of languages and also by a small number of third parties. I want to (or at least, have been told to!): Switch to a well-known (pref. open source) JEE stack such as JBoss, Glassfish, etc. Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A. Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In the future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice-to-have. Additional considerations are: On a smaller deployment, Service A and Service B will likely be running on the same machine, and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other. Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services. I'm happy with either contract-first (WSDL I guess) or (annotated) code-first development. Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file, which I've deployed.

  • A two player game over the intranet..

    - by Santwana
    Hi everybody.. I am a 3rd-year engineering student and only a novice in my programming skills. I need some help with my project. I wish to develop a two-player game to be played over the network (intranet). I want to develop a simple website with a few HTML pages for this. My ideas for the project run as follows: 1. People can log in from different systems and check whoever is currently online on the network. The page also shows who is playing with whom. 2. If a person is interested in playing with a player who is currently online, he sends a request, of which the other player is somehow notified (using a message or an alert on his profile page). 3. If the player accepts the request, a game is started. This is exactly where I am clueless: how can I make them play the game? I need to develop a turn-based game with two players, e.g. a chessboard. How can I do this? The game has to be played live, and it is time-tracked. I need your help with coding the above. The other features I wish to include are: 4. The game cannot be abruptly terminated by any one of the users. The request to terminate the game should be sent to the other player first, and only if he accepts can the game be terminated. 5. Whoever wins the game would get a plus 10 on their credit, and if he terminated it he gets a minus 10. The credits remain constant even if he loses, but the success percentage is reduced. 6. The player with the highest winning percentage is projected as the player of the week on the home page and can post a challenge to all others. I only have intermediate knowledge of core Java and know the basics of Swing and AWT. I am not at all familiar with networking in Java right now. I have 5 to 6 weeks of time for developing the project, but I hope to learn the things I need before I start. I would prefer to use a LAN to illustrate the project, and I know only Java, JSP, Oracle, HTML and a bit of XML to develop it. Also, I wish to know if I can code this within 6 weeks: would it be too difficult or complicated? Please spare some time to tell me. Please.. please.. I need your suggestions and help.. thank you so much..

  • php connecting to mysql server (localhost) very slow

    - by Ahmad
    Actually it's a little complicated. Summary: the connection to the DB is very slow. The page rendering takes around 10 seconds, but the last statement on the page is an echo and I can see its output while the page is loading in Firefox (IE is the same). In Google Chrome the output becomes visible only when the loading finishes. Loading time is approximately the same across browsers. On debugging, I found out that it's the DB connectivity that is creating the problem. The DB was on another machine. To debug further, I deployed the DB on my local machine, so now the DB connection is to 127.0.0.1, but the connectivity still takes a long time. This means that the issue is with Apache/PHP and not with MySQL. But then I deployed my code on another machine which connects to the DB remotely, and everything seems fine. Basically the application uses a couple of mod_rewrite rules, but I removed all the .htaccess files and the slow connectivity issue remains. I installed another Apache on my machine and used default settings; the connection was still very slow. I added the following statements to measure the execution time: $stime = microtime(); $stime = explode(" ",$stime); $stime = $stime[1] + $stime[0]; // my code -- it involves connection to DB $mtime = microtime(); $mtime = explode(" ",$mtime); $mtime = $mtime[1] + $mtime[0]; $totaltime = ($mtime - $stime); echo $totaltime; The output is 0.0631899833679, but Firebug's Net panel shows a total loading time of 10-11 seconds. The same is the case with Google Chrome. I tried turning off the Windows firewall; connectivity is still slow and I just can't find the reason. I've tried multiple DB servers and multiple Apaches; nothing seems to be working. Any idea what might be the problem?

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory usage didn't seem to fit with how much memory we should need "after" we're done initializing) and end up with this report. Now what we can gather from this report is that: 1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialisation, but now that it's free, I'd like for it to return to the OS). 2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialisation process and this, in turn, creates large objects that may throw an OutOfMemoryException due to fragmentation). 3) I have no clue why the space is fragmented, because when I look at the large object heap, there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining are mine but are not the cause of this, as I get the same report if I comment them out in code. So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET, which I can't release. Also, please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to release memory that is already free in .NET back to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run. Thanks for your help! A few notes I forgot: 1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, idem if I convert it to .NET 4 and recompile. 2) These are results of a release build, so a debug build cannot be the issue. 3) And this is probably quite important: I do not get these issues at all in the WebDev server, only in IIS; in WebDev I get memory consumption rather close to my actual consumption (well, more, but not 5-10X more!).
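
    One concrete note on the "manually defragment the large heap" wish: on the 2.0/4.0 runtimes discussed here there is no supported way to compact the LOH, but for reference, .NET 4.5.1 and later expose an opt-in flag for exactly this. A sketch:

        using System;
        using System.Runtime;

        static class LohMaintenance
        {
            // Requires .NET 4.5.1+. The flag is one-shot: it applies to the
            // next blocking full collection and then resets itself.
            public static void CompactLargeObjectHeap()
            {
                GCSettings.LargeObjectHeapCompactionMode =
                    GCLargeObjectHeapCompactionMode.CompactOnce;
                GC.Collect();
            }
        }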

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross-platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the Windows side: (Platform specific entry point) - ANSI C main loop = ANSI C model code doing data processing / logic = (Platform specific helpers) So the core stuff I'm planning to write in regular ANSI C, because A) it should be platform independent, B) I'm extremely comfortable with C, C) it can do the job and do it well. (Platform specific entry point) can be written in whatever is necessary to get the job done; this is a small amount of code, it doesn't matter to me. (Platform specific helpers) is the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever. Things that aren't easy in C. Things that modern languages/frameworks will give for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C#. So on Windows my architecture (simplified) looks like: (C# or C?) - ANSI C - C# Is this possible? Some thoughts/suggestions so far.. 1) Compile my C core as a .dll -- this is fine, but it seems there's no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, but that seems unlikely. 2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this, but it obviously introduces a lot of complexity, so it doesn't seem ideal. 3) Instead of C#, use C++; it gets me some nice data management stuff and nice helper code. And I can mix it pretty easily. And the work I do could probably easily port to Linux. But I really don't like C++, and I don't want this to turn into a 3rd-party-library-fest. Not that it's a huge deal, but it's 2010.. anything for basic data management should be built in. And targeting Linux is really not a priority. Note that no "total" alternatives are OK, as suggested in other similar questions on SO I've seen; Java, RealBasic, Mono.. this is an extremely performance-intensive application doing soft realtime for game/simulation purposes; I need C & friends here to do it right (maybe you don't, but I do).
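
    On thought 1, the "seems unlikely" part is actually supported: the CLR marshals a delegate to a native function pointer across the P/Invoke boundary, so the ANSI C core can call back into C# helpers. A sketch under assumptions - core.dll and core_set_xml_parser are made-up names standing in for the real core:

        using System;
        using System.Runtime.InteropServices;

        class NativeBridge
        {
            // Matches a C typedef like: typedef int (*xml_parse_fn)(const char *path);
            [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
            delegate int XmlParseCallback(string path);

            // Hypothetical export from the ANSI C core.
            [DllImport("core.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern void core_set_xml_parser(XmlParseCallback callback);

            // Keep a reference so the GC cannot collect the marshaled delegate
            // while native code still holds the function pointer.
            static XmlParseCallback _keepAlive;

            static void Main()
            {
                _keepAlive = delegate(string path)
                {
                    Console.WriteLine("C# helper parsing " + path);
                    return 0;
                };
                core_set_xml_parser(_keepAlive);
            }
        }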

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks etc). Here are some of the performance requirements: Very fast turn-around time (by this I mean that any new data (such as a new message on a forum) should be available in the search results very soon (less than a minute)). I need to discard old documents on a fairly regular basis to ensure that the search results are not dated. Last but not least, the search application needs to be responsive (latency on the order of 100 milliseconds, and it should support at least 10 qps). All of the requirements I have currently can be met without using Lucene (and that would let me satisfy 1, 2 and 3), but I am anticipating other requirements in the future (like search relevance etc) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions: a. I read that the optimize() method in the IndexWriter class is expensive and should not be used by applications that do frequent updates; what are the alternatives? b. In order to do incremental updates, I need to keep committing new data, and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem? c. I know that Lucene provides a delete method, which lets you delete all documents that match a certain query. In my case, I need to delete all documents which are older than a certain age; one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field, since I think the one created by Lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings? I know these are very open questions, so I am not looking for a detailed answer; I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.
