Search Results

Search found 4126 results on 166 pages for 'bitwise operations'.


  • Sphinx - delimiters

    - by yoda
    Hi, I would like to know whether the Sphinx engine can work with delimiters (like the commas and periods MySQL uses). My question comes from the need not to avoid them entirely, but to escape them, or at least keep them from causing conflicts in MATCH operations with FULLTEXT searches, since I have problems dealing with them in MySQL by default and I would prefer not to be forced to replace those delimiters with other characters just to get a good result set. Sorry if I'm saying something stupid, but I don't have experience with Sphinx or other complementary (?) search engines. To give you an example: if I search for "Passat 2.0 TDI", MySQL by default treats the period as a delimiter, and since "2" and "0" are too short to be considered words by default, the results come out a bit messed up. Is this easy to handle with Sphinx (or another search engine)? I'm open to suggestions. This is for a large project, with probably more than 500,000 records (not trivial at all). Cheers!
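
    A hedged sketch of the index settings that usually cover this case - min_word_len and charset_table are real Sphinx directives, but the source/path names and the exact character list are assumptions to be tuned:

        # sphinx.conf (fragment, hypothetical index)
        index cars
        {
            source        = cars_src
            path          = /var/data/sphinx/cars

            # index single-character tokens, so "2" and "0" survive
            min_word_len  = 1

            # add '.' (U+2E) to the default word characters, so "2.0" stays one token
            charset_table = 0..9, A..Z->a..z, _, a..z, U+2E
        }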

    Read the article

  • ADO.NET Data Services Media type requires a ';' character before a parameter definition.

    - by idahosaedokpayi
    I am experimenting with ADO.NET Data Services and I am seeing this error on the second attempt to browse the service:

        <?xml version="1.0" encoding="utf-8" standalone="yes" ?>
        <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
          <code />
          <message xml:lang="en-US">Media type requires a ';' character before a parameter definition.</message>
        </error>

    The first attempt is normal. I am working with an exactly identical service on an internal development network and it is fine. I am including my connection string:

        <add name="J4Entities" connectionString="metadata=res://*;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=MNSTSQL01N;Initial Catalog=J4;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient"/>

    and my data service class:

        using System;
        using System.Data.Services;
        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel.Web;

        public class Data : DataService<J4Model.J4Entities>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(IDataServiceConfiguration config)
            {
                // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
                // Examples:
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
            }
        }

    Is there something obvious I am not doing?
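
    One low-risk diagnostic step, sketched below under the assumption that nothing else in the service changes: UseVerboseErrors is a standard IDataServiceConfiguration property, and it usually replaces the empty <code /> element with the underlying server-side exception (it will not, by itself, fix the media-type error):

        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.UseVerboseErrors = true; // return full error detail to the client
        }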

    Read the article

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets will (probably) not break anything: when there are no intervening changes on the affected files. (At least, it wouldn't break any worse than a series of cherry-picked merges, each checked in separately; and at least this way you would discover breakage before checking in.) For instance, say you have a Main and a Development branch. They start out identical (e.g. after a release) and contain two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. In theory we could now merge both #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we have to do it as two operations. In this simple, contrived example it's easy enough to merge the one file, but in the real world, where many files are involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.
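
    For what it's worth, the command-line client can at least script the two cherry-picks back to back so any breakage shows up in one workspace before anything is checked in; a sketch assuming the changeset numbers from the example (tf merge and its /version flag are real, the server paths are made up):

        rem cherry-pick #1001 and #1003 from Development to Main in one workspace
        tf merge /version:C1001~C1001 $/Project/Development $/Project/Main /recursive
        tf merge /version:C1003~C1003 $/Project/Development $/Project/Main /recursive
        rem resolve conflicts, build and test, then check in a single merge changeset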

    Read the article

  • Tree iterator, can you optimize this any further?

    - by Ron
    As a follow-up to my original question about a small piece of this code, I'd like to see if you can do better than what we came up with so far. The code below iterates over a binary tree (left/right = child/next). I believe there is room for one less conditional in here (the down boolean). The fastest answer wins! Constraints:

    - The cnt statement can be multiple statements, so make sure it appears only once.
    - The child() and next() member functions are about 30x as slow as the hasChild() and hasNext() operations.
    - Keep it iterative <-- dropped this requirement, as the recursive solution presented was faster.
    - This is C++ code.
    - The visit order of the nodes must stay as in the example below (hit parents first, then the children, then the 'next' nodes).
    - BaseNodePtr is a boost::shared_ptr, thus assignments are slow; avoid any temporary BaseNodePtr variables.

    Currently this code takes 5897 ms to visit 62,200,000 nodes in a test tree, calling this function 200,000 times.

        void processTree(BaseNodePtr current, unsigned int & cnt)
        {
            bool down = true;
            while (true)
            {
                if (down)
                {
                    while (true)
                    {
                        cnt++; // this can/will be multiple statements
                        if (!current->hasChild()) break;
                        current = current->child();
                    }
                }
                if (current->hasNext())
                {
                    down = true;
                    current = current->next();
                }
                else
                {
                    down = false;
                    current = current->parent();
                    if (!current) return; // done
                }
            }
        }
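
    For comparison, a sketch of the recursive form mentioned above. It keeps the same visit order (node, then children, then 'next' siblings) and takes the shared_ptr by const reference so no temporary BaseNodePtr is copied:

        void processTreeRec(const BaseNodePtr & current, unsigned int & cnt)
        {
            cnt++; // the per-node work, appearing only once
            if (current->hasChild())
                processTreeRec(current->child(), cnt); // descend first...
            if (current->hasNext())
                processTreeRec(current->next(), cnt);  // ...then walk the siblings
        }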

    Read the article

  • Is it legal to take sealed .NET framework class source and extend it?

    - by Giedrius
    To be short, I'm giving a very specific example, but I'm interested in the general situation. There is an FtpWebRequest class in the .NET framework and it is missing some of the newer FTP operations, like MFCT. That is OK in the sense that this operation is still in draft, but it is not OK in the sense that FtpWebRequest is sealed and there's no other way (at least I don't see one) to extend it with this new operation. The easiest way would be to take the FtpWebRequest class source from the .NET reference sources and extend it; that way all the consistency in naming, implementation, etc. would be kept. The question is how legal that is. I won't sell this class as a product, and I can publish my changes on the web - nothing to hide here. If it is not legal, can I take this class's source from Mono and include it in a native .NET project? Have you had a similar case, and how did you solve it?

    Update: since extension methods keep being offered, I'm pasting the source from the .NET framework, which should show that extension methods are not the solution. There is a property Method, through which you pass the FTP command:

        public override string Method
        {
            get
            {
                return m_MethodInfo.Method;
            }
            set
            {
                if (String.IsNullOrEmpty(value))
                {
                    throw new ArgumentException(SR.GetString(SR.net_ftp_invalid_method_name), "value");
                }
                if (InUse)
                {
                    throw new InvalidOperationException(SR.GetString(SR.net_reqsubmitted));
                }
                try
                {
                    m_MethodInfo = FtpMethodInfo.GetMethodInfo(value);
                }
                catch (ArgumentException)
                {
                    throw new ArgumentException(SR.GetString(SR.net_ftp_unsupported_method), "value");
                }
            }
        }

    As you can see, there is an FtpMethodInfo.GetMethodInfo(value) call in the setter, which basically validates the value against an internal static enum array.

    Update 2: I checked the Mono implementation; it is not an exact replica of the native code and it does not implement some of the things.
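
    To make the point about extension methods concrete: given the validation in that setter, simply assigning the new verb fails at runtime, so no amount of wrapping the sealed class helps (a sketch; the URL is made up and MFCT is the draft command from the question):

        FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://example.com/file.txt");
        request.Method = "MFCT"; // throws ArgumentException: FtpMethodInfo.GetMethodInfo
                                 // rejects any verb missing from the internal table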

    Read the article

  • Is Domain Anaemia appropriate in a Service Oriented Architecture?

    - by Stimul8d
    I want to be clear on this. When I say domain anaemia, I mean intentional domain anaemia, not accidental. In a world where most of our business logic is hidden away behind a bunch of services, is a full domain model really necessary? This is the question I've had to ask myself recently while working on a project where the "domain" model is in reality a persistence model; none of the domain objects contain any methods, and this is a very intentional decision. Initially I shuddered when I saw a library full of what are essentially type-safe data containers, but after some thought it struck me that this particular system does little beyond basic CRUD operations, so maybe in this case it is a good choice. My problem, I guess, is that my experience so far has been very much focused on rich domain models, so it threw me a little. The remainder of the domain logic is hidden away in a group of helpers, facades and factories which live in a separate assembly. I'm keen to hear people's thoughts on this. Obviously the considerations for reuse of these classes are much simpler, but is that really so great a benefit?

    Read the article

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be usable with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type with storage based on three doubles (basically RGB 0.0-1.0), which I've called TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String = {
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
          }

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x * v))
          }

          def div(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x / v))
          }

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) (this) else v
          }
        }

    Now I want to write code that uses this without having to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?
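
    For reference, the usual fix is to give idiv the same self-referential bound the trait uses, so the compiler knows that T has a div method; a sketch, assuming Image[T] exposes data as in the question:

        def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }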

    Read the article

  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LinqToSql to query a small SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Order.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete. I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and then join them manually with my own query, but this throws out a lot of the appeal of LinqToSql. So I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance? Note: my database in its initial state is about 250 KB and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.
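
    Before reaching for an in-memory copy, it may be worth ruling out per-row lazy loading: DataLoadOptions.LoadWith is the standard LINQ to SQL way to fetch an association eagerly in one query. A sketch (the context type and the Orders property name are assumptions based on the description above):

        using System.Data.Linq;

        var db = new MyDataContext(connectionString);  // hypothetical context type
        var options = new DataLoadOptions();
        options.LoadWith<Customer>(c => c.Orders);     // load orders with each customer
        db.LoadOptions = options;                      // must be set before any query runs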

    Read the article

  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an oData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser; however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions.

    Sample oData feed (the Atom markup was stripped by the paste; it lists a ProductSet feed at http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/ containing an entry at http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128') with the values "Product 0", "Type 1", "Active").

    Sample web.config entry:

    Sample service:

        [EnableClientAccess()]
        public class ProductService : DomainService
        {
            [Query(IsDefault = true)]
            public IQueryable<Product> GetProducts()
            {
                IList<Product> products = new List<Product>();
                for (int i = 0; i < 90; i++)
                {
                    Product product = new Product
                    {
                        Id = Guid.NewGuid(),
                        Name = "Product " + i.ToString(),
                        ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"),
                        Status = i % 2 == 0 ? "Active" : "NotActive"
                    };
                    products.Add(product);
                }
                return products.AsQueryable();
            }
        }

    If I point my web browser at "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" I receive the following message: "Requests that attempt to access a single element using key values from a result set are not supported." I'm new to RIA and oData. Could this be something as simple as my web browser not supporting this type of query on the result set, or is it something else? Thanks ahead! Corey

    Read the article

  • WCF consumed as WebService adds a boolean parameter?

    - by Martín Marconcini
    I've created the default WCF Service in VS2008. It's called "Service1":

        public class Service1 : IService1
        {
            public string GetData(int value)
            {
                return string.Format("You entered: {0}", value);
            }

            public CompositeType GetDataUsingDataContract(CompositeType composite)
            {
                if (composite.BoolValue)
                {
                    composite.StringValue += "Suffix";
                }
                return composite;
            }
        }

    It works fine. The interface is IService1:

        [ServiceContract]
        public interface IService1
        {
            [OperationContract]
            string GetData(int value);

            [OperationContract]
            CompositeType GetDataUsingDataContract(CompositeType composite);

            // TODO: Add your service operations here
        }

    This is all by default; Visual Studio 2008 created all of it. I then created a simple WinForms app to test it. I added the Service Reference to my above-mentioned service and it all works: I can instantiate and call myservice1.GetData(100); and I get the result. But I was told that this service will have to be consumed by a WinForms .NET 2.0 app via web services, so I proceeded to add the reference to a new WinForms .NET 2.0 application created from scratch (only one form, called form1). This time, when adding the "web reference", it added the typical "localhost" one belonging to web services; the wizard saw the WCF service (running in the background) and added it. When I tried to consume this, I found that the GetData(int) method was now GetData(int, bool). Here's the code:

        private void button1_Click(object sender, EventArgs e)
        {
            localhost.Service1 s1 = new WindowsFormsApplication2.localhost.Service1();
            Console.WriteLine(s1.GetData(100, false));
        }

    Notice the false in the GetData call? I don't know what that parameter is or where it came from; it is called "bool valueSpecified". Does anybody know where this is coming from? Anything else I should do to consume a WCF service as a web service from .NET 2.0 (WinForms)?
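
    The extra flag looks like the classic .NET 2.0 "Specified" pattern: the WCF contract exposes the int parameter as an optional element (minOccurs="0") in the generated schema, so the old-style proxy adds a bool saying whether the value element should be serialized at all. Under that assumption, the call only reaches the service with your value when the flag is true:

        localhost.Service1 s1 = new WindowsFormsApplication2.localhost.Service1();
        // passing false would mean "don't serialize the value", and the service
        // would see a missing/default parameter
        Console.WriteLine(s1.GetData(100, true));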

    Read the article

  • GUI for server-client program

    - by sksingh73
    I am making a server-client application in C++. In it I am also using shared memory and file read-write operations. My program is completely ready and I now want to make a GUI for it. Someone suggested I go for Qt4, but when I tried it, I found I would have to rewrite 80% of the code, because Qt has its own classes and variables. I don't want to do that, so I want suggestions from you in this regard. My requirements for the GUI are very simple: there will be a main form with two text boxes in which all messages being sent and received by client and server are shown, and another line-edit box through which I can send messages to the server at the other end. I don't know how to build this GUI. Someone suggested Tcl/Tk, others suggested PHP/SWIG; I am not sure how to go about it. My only requirement is to make this simple GUI with a minimum of changes to my code. Thanks.

    Read the article

  • Moving Javascript object with all bounded events to other variable

    - by Saif Bechan
    Let's say I have anchor tags with some events:

        <a id="clickme" href="/endpoint">clickme</a>
        <a id="clickme2" href="/endpoint2">clickme2</a>

    Let's use jQuery for simplicity:

        $('#clickme').on('click', function(){.....})

    I also have a variable:

        var myActiveVar = $('#clickme');

    When I want to remove the element and every trace of it, I can do this:

        myActiveVar.off().remove();

    Here comes the problem: I want to reuse the variable. Something like this:

        var oldActiveVar = myActiveVar;
        myActiveVar = $('#clickme2');

        // Now I want to do some operations.
        // Both of the elements are still on
        // the page; when I'm done:

        oldActiveVar.off().remove();

    Complete code:

        var myActiveVar = $('#clickme');

        // Operate on myActiveVar

        var oldActiveVar = myActiveVar;
        myActiveVar = $('#clickme2');

        // Operate on myActiveVar, which is
        // the new element.
        // Old element stays visible

        oldActiveVar.off().remove();
        // Old element and all traces are gone

    Edit: Maybe the above code will work, but my problem goes beyond this; I just gave a simplified example. I am using Backbone events that are bound to the object. They need to be removed when I am done with the object.

    Read the article

  • Strange results about C++11 memory model (Relaxed ordering)

    - by Dancing_bunny
    I was testing the example on the memory model from Anthony Williams's book "C++ Concurrency in Action":

        #include <atomic>
        #include <thread>
        #include <cassert>

        std::atomic_bool x, y;
        std::atomic_int z;

        void write_x_then_y()
        {
            x.store(true, std::memory_order_relaxed);
            y.store(true, std::memory_order_relaxed);
        }

        void read_y_then_x()
        {
            while (!y.load(std::memory_order_relaxed));
            if (x.load(std::memory_order_relaxed))
            {
                ++z;
            }
        }

        int main()
        {
            x = false;
            y = false;
            z = 0;
            std::thread a(write_x_then_y);
            std::thread b(read_y_then_x);
            a.join();
            b.join();
            assert(z.load() != 0);
        }

    According to the explanation, relaxed operations on different variables (here x and y) can be freely reordered. However, I have been rerunning the program for more than several days and I have never seen the assertion (assert(z.load() != 0);) fire. I use the default optimization level and compile the code with

        g++ -std=c++11 -lpthread dataRaceAtomic.cpp

    Has anyone actually tried this and hit the assertion? Could anyone explain my test results? By the way, I also tried a version without the atomic types and got the same result. Currently, both programs are running healthily. Thanks.

    Read the article

  • Checked and Unchecked operators don't seem to be working when...

    - by flockofcode
    1) Is the UNCHECKED operator in effect only when the expression inside the UNCHECKED context uses an explicit cast (such as byte b1 = unchecked((byte)2000);) and when the conversion to the particular type could happen implicitly? I'm assuming this since the following expression throws a compile-time error:

        byte b1 = unchecked(2000); // compile-time error

    2) a) Do the CHECKED and UNCHECKED operators work only when the resulting value of an expression or conversion is of an integral type? I'm assuming this since in the first example (where a double is converted to an integral type) the CHECKED operator works as expected:

        double m = double.MaxValue;
        b = checked((byte)m); // reports an exception

    while in the second example (where a double is converted to a float) the CHECKED operator doesn't seem to be working, since it doesn't throw an exception:

        double m = double.MaxValue;
        float f = checked((float)m); // no exception thrown

    b) Why don't the two operators also work with expressions where the type of the resulting value is a floating-point type?

    3) The next quote is from Microsoft's site:

        The unchecked keyword is used to control the overflow-checking context for integral-type arithmetic operations and conversions

    I'm not sure I understand what exactly expressions and conversions such as unchecked((byte)(100+200)); have in common with integrals. Thank you
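
    A worked pair of examples for point 2: checked/unchecked affect only integral arithmetic and conversions, while floating-point conversions follow IEEE 754 rules, so the double-to-float case silently overflows to infinity instead of throwing:

        double m = double.MaxValue;

        float f = checked((float)m); // no exception: float.IsPositiveInfinity(f) is true,
                                     // because the target type is floating-point
        byte b = checked((byte)m);   // throws OverflowException: the target is integral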

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory-intensive and reads a large number of disk files; the total load can be more than 3 GB. A custom memory manager uses memory-mapped files to read this much data: files are mapped into the process's address space only when needed, and with this the process's memory stays well under control. What we observe, though, is that with memory mapping the system cache keeps growing until it occupies all available physical memory, which slows down the entire system. My question is how to prevent the system cache from hogging physical memory. I tried removing file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and application performance suffers. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the Windows XP caching behavior; any good links explaining it would also be helpful.
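
    One commonly cited mitigation, offered as a hedged sketch rather than a confirmed fix: calling VirtualUnlock on a range that is not locked is documented to remove those pages from the process working set, which keeps already-consumed views from lingering:

        #include <windows.h>

        // After finishing with a mapped view, trim it before unmapping.
        void releaseView(void* view, size_t bytes)
        {
            FlushViewOfFile(view, bytes); // write back any dirty pages first
            VirtualUnlock(view, bytes);   // returns FALSE with ERROR_NOT_LOCKED, but per
                                          // the docs it drops the pages from the working set
            UnmapViewOfFile(view);
        }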

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you Smalltalkers) to design the document-management architecture of a large word processor, spreadsheet, vector graphics or publishing package, or other office-productivity (non-database) application, with support for as many of the following as possible:

    - Any open source project would be ideal, and the language of implementation is unimportant since I am looking for design examples; however, an object-oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source.
    - The use of "interface" can loosely cover XPCOM or COM interfaces, .NET interfaces, or even pure-virtual (virtual + abstract) base classes in OOP languages that lack the ability to declare an interface distinct from a class.
    - I am mostly looking for a robust, thorough and flexible implementation of a document (IDocument), various document views (IDocumentView), and whatever operations make sense in that case.
    - I am particularly interested in cases where the product in question is a real-world product. For example, perhaps somebody familiar with OpenOffice can tell me whether its code contains a good sample design.

    In short, I am looking for design documentation that outlines the design of the interfaces for such an application. If, for example, the OpenOffice spreadsheet has such an interface design, that might be the best case, because it is a widely used real-world design with millions of users, rather than a textbook example, which is minimal and contrived. I know that the Mozilla platform uses XPCOM and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design than a web browser. I am particularly interested in the interfaces used to access data and metadata, such as markup (attributes like bold, italics and font size), and the ability to search and look up named entities within a document.
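
    To make the shape of the question concrete, this is roughly the document/view separation being asked about, as an illustrative C# sketch (not taken from any of the products mentioned):

        public interface IDocument
        {
            string Title { get; }
            bool IsDirty { get; }                    // unsaved changes?
            void Load(System.IO.Stream source);
            void Save(System.IO.Stream target);
        }

        public interface IDocumentView
        {
            IDocument Document { get; }
            void Refresh();                          // re-render after the model changes
        }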

    Read the article

  • How to convert a rectangle to TRBL CSS rect value?

    - by VLostBoy
    I'm not quite sure how to put this, but here goes... The CSS clip attribute is defined as rect(top, right, bottom, left). I'm exploring the use of a custom Rectangle 'class' to encapsulate some operations. The Rectangle class has the attributes height, width and x, y. The x and y values are encapsulated in a Point object, and the height and width in a Dimension object, the rectangle being a composite of a point (its top-left location) and a dimension (width and height). So far so good. I thought it would be pretty simple, given the rectangle's x, y, width and height values, to define the CSS rect attribute in terms of top, right, bottom, left, but I've become hopelessly confused. I've been googling for a while and I can't seem to find documentation on what the TRBL values actually are or what they represent. For example, should I be thinking in terms of co-ordinates? In that case, surely I can describe the rectangle as a CSS rect using the rectangle's x position for Top, x + width for Right, the rectangle's height + y for Bottom and its y position for Left... but that's a load of BS, surely? Also, surely rect is actually an inset, or have I just inverted my understanding of clip? I'd appreciate some advice. What I want to be able to do is:

    (i) Define a rectangle using x, y, width and height
    (ii) Express the rectangle in TRBL form so that I can manipulate a div's clipping behaviour
    (iii) Change x, y, width or height, recalculate in terms of TRBL, and go to (ii)

    I appreciate there are some other factors here, and some intermediary transforms to be done, but I've confused myself pretty badly. Can anyone give me some pointers?
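
    Assuming the usual CSS 2.1 clip semantics, all four values are offsets measured from the element's own top-left corner, so the conversion is direct: top = y, right = x + width, bottom = y + height, left = x. A sketch in JavaScript, using the Rectangle attributes described above:

        function toClipRect(r) {
            // r: {x, y, width, height} -> "rect(top, right, bottom, left)"
            return "rect(" + r.y + "px, " + (r.x + r.width) + "px, "
                           + (r.y + r.height) + "px, " + r.x + "px)";
        }

        // toClipRect({x: 10, y: 20, width: 100, height: 50})
        // -> "rect(20px, 110px, 70px, 10px)"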

    Read the article

  • What C# container is most resource-efficient for existence for only one operation?

    - by ccornet
    I often find myself in a situation where I need to perform an operation on a set of properties. The operation can be anything from checking whether a particular property matches anything in the set to a single iteration of actions. Sometimes the set is dynamically generated when the function is called, some are built with a simple LINQ statement, and other times it is a hard-coded set that will always remain the same. But one constant always exists: the set only exists for one single operation and has no use before or after it. My problem is that I have many points in my application where this is necessary, but I appear to be very, very inconsistent in how I store these sets. Some of them are arrays, some are lists, and just now I've found a couple of linked lists. Now, none of the operations I'm specifically concerned about have to care about indices, container size, order, or any other functionality bestowed by any of the individual container types. I picked resource efficiency because it's a better idea than flipping coins. I figured that since array size is fixed up front and it's a very elementary container, it might be my best choice, but I figure it is a better idea to ask around. Alternatively, if there's a choice that is better for this kind of situation for reasons other than resource efficiency, that would be nice to hear as well.
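
    If the set really is consumed exactly once, not materializing it at all is usually the cheapest option; a sketch using an iterator so nothing larger than one element is ever alive (the Widget type and its properties are hypothetical):

        // requires using System.Collections.Generic; using System.Linq;
        // Stream the properties instead of building an array or List
        // that exists only for a single pass.
        static IEnumerable<string> InterestingProperties(Widget w)
        {
            yield return w.Name;
            yield return w.Category;
            yield return w.Owner;
        }

        bool matches = InterestingProperties(widget).Any(p => p == target);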

    Read the article

  • MSI File/Registry failures on Windows Server 2008/Windows 7 (x64)

    - by Luca
    I'm trying to deploy an application on Windows Server 2008 (SP2 x64) and Windows 7 (x64), using a VS2005 Installer Project. The MSI version is (I think) 2.0. Everything works fine, except that some registry keys and some files are not copied to the target machine, and the MSI system reports nothing about it (I don't know whether MSI logs its operations). Are there incompatibilities between my MSI installer project and these newer OSes? It seems to me that the OS protects parts of itself from being modified. For example, I'm trying to set the registry key:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\WinLogon\SpecialAccounts\UserList\User

    but it is not created. Many other keys in the same installer are created as expected (as they always were before on Windows XP and Windows Server 2003). To give another example, I'm trying to install the file

        %SystemFolder%\oobe\info\backgrounds\backgroundDefault.jpg

    (where %SystemFolder% is typically "C:\Windows\System32"), but the file is not copied at all! What's going on? I've found that the backgroundDefault.jpg file ends up in another directory, %SystemRoot%\SysWOW64\oobe\info, even though I specified nothing about a 64-bit System folder. How can I copy the file to the right place?

    Read the article

  • Caching vector addition over changing collections

    - by DRMacIver
    I have the following setup: I have a largish number of uuids (currently about 10k but expected to grow unboundedly - they're user IDs) and a function f : id -> sparse vector with 32-bit integer values (no need to worry about precision). The function is reasonably expensive (not outrageously so, but probably on the order of a few 100 ms for a given id). The dimension of the sparse vectors should be assumed to be infinite, as new dimensions can appear over time, but in practice it is unlikely to ever exceed about 20k (and individual results of f are unlikely to have more than a few hundred non-zero values). I want to support the following operations efficiently:

    - add a new ID to the collection
    - invalidate an existing ID
    - retrieve the sum of f(id) over all valid IDs in O(changes since last retrieval)

    i.e. I want to cache the sum of the vectors in a way that is reasonable to maintain incrementally. One option would be to support a remove-ID operation and treat invalidation as a remove followed by an add. The problem with this is that it requires us to keep track of all the old values of f, which is expensive in space. I potentially need to use many instances of this sort of cached structure, so I would like to avoid that. The likely usage pattern is that new IDs are added at a fairly continuous rate and are frequently invalidated at first. IDs which have been invalidated recently are much more likely to be invalidated again than ones which have remained valid for a long time, but in principle an old ID can still be invalidated. Ideally I don't want to do this in memory (or at least I want a way to save the result to disk efficiently), so an idea that lets me piggyback off an existing DB implementation of some sort would be especially appreciated.
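
    A minimal sketch of the trade-off described above, in plain Python: keeping each id's current contribution is exactly what makes O(1) invalidation possible, and it is also exactly the per-id storage cost in question (persistence to disk is left aside):

        class CachedVectorSum:
            def __init__(self, f):
                self.f = f
                self.total = {}    # dimension -> summed value
                self.contrib = {}  # id -> that id's current sparse vector

            def _apply(self, vec, sign):
                for dim, val in vec.items():
                    self.total[dim] = self.total.get(dim, 0) + sign * val

            def add(self, uid):
                vec = self.f(uid)  # the expensive call
                self.contrib[uid] = vec
                self._apply(vec, +1)

            def invalidate(self, uid):
                self._apply(self.contrib[uid], -1)  # subtract the stale vector
                self.add(uid)                       # recompute and re-add

            def current_sum(self):
                return self.total  # maintained incrementally, O(changes)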

    Read the article

  • What's a good way to write batch scripts in C#?

    - by Scott Bilas
    I would like to write simple scripts in C#. Stuff I would normally use .bat or 4NT .btm files for. Copying files, parsing text, asking user input, and so on. Fairly simple but doing this stuff right in a batch file is really hard (no exceptions for example). I'm familiar with command line "scripting" wrappers like AxScript so that gets me part of the way there. What I'm missing is the easy file-manipulation framework. I want to be able to do cd(".."), copy(srcFile, destFile) type functionality. Tools I have tried: NANT, which we use in our build process. Not a good scripting tool. Insanely verbose XML syntax and to add a simple function you must write an extension assembly. Can't do it inline. PowerShell. Looks great, but I just haven't been able to switch over to this as my primary shell. Too many differences from 4NT. Whatever I do needs to run from an ordinary command prompt and not require a special shell to run it through. Can PowerShell be used as a script executor? Perl/Python/Ruby. Really hate learning an entirely new language and framework just to do batch file operations. Haven't been able to dedicate the time I need to do this. Plus, we're a 99% .NET shop for our toolchain and I really want to leverage our existing experience and codebase. Are there frameworks out there that are trying to solve this problem of "make a batch file in C#" that you have used? I want the power of C#/.NET with the immediate-mode type functionality of a typical cmd.exe shell language. Am I alone in wanting something like this?

    Read the article

  • Is this use of PreparedStatements in a Thread in JAVA correct?

    - by Gormcito
    I'm still an undergrad just working part-time, so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread spawns "task" threads (one for each db "task" record) which perform some operations and then update the record to say they have finished. I therefore needed a database connection object and PreparedStatement objects in, or available to, the ThreadedTask objects. This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions:

        Thread A: stmt.setInt();
        Thread B: stmt.setInt();
        Thread A: stmt.execute();
        Thread B: stmt.execute();

    A's version never gets executed. Is this thread-safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste?

        public class ThreadedTask implements Runnable {
            private final PreparedStatement taskCompleteStmt;

            public ThreadedTask() {
                //...
                taskCompleteStmt = Main.db.prepareStatement(...);
            }

            public void run() {
                //...
                taskCompleteStmt.executeUpdate();
            }
        }

        public class Main {
            public static final Connection db = DriverManager.getConnection(...);
        }
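
    One conservative alternative, sketched under the assumption that the driver promises nothing about sharing connections across threads (most don't): give each task its own Connection and PreparedStatement and close them in finally, trading some setup cost for guaranteed isolation (the SQL and the JDBC URL are made up):

        import java.sql.*;

        public class ThreadedTask implements Runnable {
            private final int taskId;

            public ThreadedTask(int taskId) { this.taskId = taskId; }

            public void run() {
                Connection conn = null;
                try {
                    conn = DriverManager.getConnection("jdbc:...");  // per-thread connection
                    PreparedStatement stmt =
                        conn.prepareStatement("UPDATE task SET done = 1 WHERE id = ?");
                    stmt.setInt(1, taskId);
                    stmt.executeUpdate();
                } catch (SQLException e) {
                    // log / flag the failure as appropriate
                } finally {
                    if (conn != null) try { conn.close(); } catch (SQLException ignored) {}
                }
            }
        }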

    Read the article

  • Better way to compare neighboring cells in matrix

    - by HyperCube
    Suppose I have a matrix of size 100x100 and I would like to compare each pixel to its direct neighbors (left, upper, right, lower) and then do some operations on the current matrix or a new one of the same size. A sample in Python/NumPy could look like the following (the comparison against 0.5 has no meaning; I just want to give a working example of some operation that compares neighbors):

        import numpy as np

        my_matrix = np.random.rand(100, 100)
        new_matrix = np.zeros((100, 100))  # was np.array((100,100)), which builds the
                                           # 2-element array [100, 100], not a 100x100 matrix
        my_range = np.arange(1, 99)

        for i in my_range:
            for j in my_range:
                if my_matrix[i, j+1] > 0.5:
                    new_matrix[i, j+1] = 1
                if my_matrix[i, j-1] > 0.5:
                    new_matrix[i, j-1] = 1
                if my_matrix[i+1, j] > 0.5:
                    new_matrix[i+1, j] = 1
                if my_matrix[i-1, j] > 0.5:
                    new_matrix[i-1, j] = 1
                if my_matrix[i+1, j+1] > 0.5:
                    new_matrix[i+1, j+1] = 1
                if my_matrix[i+1, j-1] > 0.5:
                    new_matrix[i+1, j-1] = 1
                if my_matrix[i-1, j+1] > 0.5:
                    new_matrix[i-1, j+1] = 1

    This can get really nasty if I want to step into one neighboring cell and start from it to do a similar task... Do you have suggestions for how this can be done more efficiently? Is that even possible?
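
    The usual NumPy answer is to replace the loops with shifted slices, turning each if into one whole-array boolean assignment; a sketch covering two of the directions (the rest follow the same pattern):

        import numpy as np

        my_matrix = np.random.rand(100, 100)
        new_matrix = np.zeros((100, 100))

        # if my_matrix[i, j+1] > 0.5: new_matrix[i, j+1] = 1, for i, j in 1..98
        new_matrix[1:-1, 2:][my_matrix[1:-1, 2:] > 0.5] = 1

        # if my_matrix[i+1, j] > 0.5: new_matrix[i+1, j] = 1
        new_matrix[2:, 1:-1][my_matrix[2:, 1:-1] > 0.5] = 1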

    Read the article

  • Creating simple calculator with bison & flex in C++ (not C)

    - by ak91
    Hey, I would like to create a simple C++ calculator using bison and flex. Please note I'm new to writing parsers. I've already found a few bison/flex examples, but they were all written in C. My goal is to create C++ code in which classes contain nodes for values, operations and functions - building an AST (evaluation would be done just after the whole AST is created, starting from the root and working forward). For example:

        my_var = sqrt(9 ** 2 - 32) + 4 - 20 / 5
        my_var * 3

    The first line would be parsed as:

                     =
                    / \
              my_var   +
                      / \
                  sqrt   -
                    |   / \
                    -  4   /
                   / \    / \
                 **  32  20  5
                / \
               9   2

    and the second AST would look like:

              *
             / \
       my_var   3

    The following pseudocode reflects the AST construction:

        ast_root = create_node('=', new_variable("my_var"), exp)

    where exp is:

        exp = create_node(OPERATOR, val1, val2)

    but NOT like this:

        $$ = $1 OPERATOR $3

    because that way I directly get the value of the operation instead of creating a Node. I believe the Node should contain the type (of operation), val1 (Node) and val2 (Node). In some cases val2 would be NULL, e.g. for the above-mentioned sqrt, which in the end takes one argument. Right? It would be nice if you could propose a C++ skeleton (without evaluation) for the above-described problem (including a *.y file creating the AST) to help me understand how to create and hold Nodes in the AST. Code can be snipped, just enough to get the idea across. I'll also be grateful if you point me to an existing (possibly simple) example if you know of any. Thank you all for your time and assistance!
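
    A sketch of the grammar side, under the assumption of a plain Node struct shaped like the trees above (the %union mechanics are standard bison; all names are made up):

        /* calc.y (fragment) */
        %{
        struct Node {
            char op;              /* '=', '+', '*', 'n' for a number leaf, ... */
            double value;         /* meaningful only for leaves */
            Node *left, *right;   /* right stays NULL for unary nodes like sqrt */
            Node(char o, Node* l, Node* r) : op(o), value(0), left(l), right(r) {}
        };
        %}

        %union {
            double num;
            struct Node* node;
        }

        %token <num> NUM
        %type  <node> exp
        %left '+' '-'
        %left '*' '/'

        %%
        exp : exp '+' exp { $$ = new Node('+', $1, $3); }  /* build a node, don't evaluate */
            | exp '*' exp { $$ = new Node('*', $1, $3); }
            | NUM         { $$ = new Node('n', NULL, NULL); $$->value = $1; }
            ;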

    Read the article

  • localhost + staging + production environments?

    - by Kentor
    Hello, I have a website, say www.livesite.com, which is currently running. I have been developing a new version of the website on my local machine with http://localhost, committing my changes with svn to www.testsite.com, where I test the site on the livesite.com server but under another domain (the same environment as the live site, just a different domain). Now I am ready to release the new version to livesite.com. Doing it the first time is easy: I could just copy everything from testsite.com to livesite.com (though I'm not sure that's the best way to do it). I want to keep testsite.com as a testing site where I push updates, test them and, once satisfied, move them to livesite.com, but I am not sure how to do that after the new site is launched. I don't think copying the whole directory is the right way to do it, and it would disrupt the current users of livesite.com. I also want to keep my svn history from testsite.com. What is the correct way of doing this with SVN? Thank you so much!
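
    A common shape for this in SVN, sketched with made-up URLs: keep one repository, deploy trunk to testsite.com, and make livesite.com a working copy that you switch to a tag when you release - history stays in one place and the live switchover only touches changed files:

        # promote what's currently on testsite.com (trunk) to a release tag
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/tags/release-1.1 \
                 -m "Release 1.1"

        # on the live server (a working copy), point it at the new tag
        cd /var/www/livesite.com
        svn switch http://svn.example.com/repo/tags/release-1.1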

    Read the article
