Search Results

Search found 6346 results on 254 pages for 'turn a'.


  • nedmalloc: where does mem<fm come from?

    - by Suma
    While integrating nedmalloc into my application, I frequently hit a situation where nedmalloc refuses to free a block of memory, claiming it did not allocate it. While debugging I have narrowed it down to one particular condition which fails while all the others (including the magic-number checks) succeed. The condition is this:

        if((size_t)mem-(size_t)fm>=(size_t)1<<(SIZE_T_BITSIZE-1)) return 0;

    On Win32 this seems to be equivalent to:

        if((int)((size_t)mem-(size_t)fm)<0) return 0;

    which in turn seems to be the same as:

        if((size_t)mem<(size_t)fm) return 0;

    In my case I really do see mem < fm. What I do not understand is where this condition comes from: I cannot find anything in the code that would guarantee fm <= mem. Yet, "select isn't broken": I doubt it is really a bug in nedmalloc; most likely I am doing something wrong somewhere, but I cannot find it. Once I turn the debugging features of nedmalloc on, the problem goes away. If someone here understands the inner workings of nedmalloc, could you please explain why fm <= mem is guaranteed?

    Read the article

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focussing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially. A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies goes there. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn relies on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough. Back to the thought, though. Is there any value in semi-integration tests, where a test-fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it outweighing the costs?
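
    For illustration only, here is a rough sketch of what such a semi-integration fixture could look like with xUnit.net and Rhino Mocks; the types (OrderProcessor, TaxCalculator, IPaymentGateway) are invented for the example and are not from the question. The idea is that only the boundary dependency is mocked, while the internal collaborator is the real wired-up implementation:

        using Rhino.Mocks;
        using Xunit;

        public interface IPaymentGateway { bool Charge(decimal amount); }

        // Concrete internal collaborator -- deliberately not mocked in this fixture.
        public class TaxCalculator
        {
            public decimal AddTax(decimal net) { return net * 1.2m; }
        }

        public class OrderProcessor
        {
            private readonly TaxCalculator _tax;
            private readonly IPaymentGateway _gateway;

            public OrderProcessor(TaxCalculator tax, IPaymentGateway gateway)
            {
                _tax = tax;
                _gateway = gateway;
            }

            public bool Process(decimal netAmount)
            {
                // The unit under test spans two concrete classes wired together.
                return _gateway.Charge(_tax.AddTax(netAmount));
            }
        }

        public class OrderProcessorSemiIntegrationTests
        {
            [Fact]
            public void Charges_the_gross_amount_through_the_gateway()
            {
                // Mock only the outer boundary; let OrderProcessor and TaxCalculator run for real.
                var gateway = MockRepository.GenerateMock<IPaymentGateway>();
                gateway.Stub(g => g.Charge(Arg<decimal>.Is.Anything)).Return(true);

                var processor = new OrderProcessor(new TaxCalculator(), gateway);
                processor.Process(100m);

                gateway.AssertWasCalled(g => g.Charge(120m));
            }
        }

    Whether such a fixture pays for itself is still the judgement call raised above; the sketch only shows that the mechanics stay the same and the mocking boundary simply moves outward.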

    Read the article

  • Help me understand entity framework 4 caching for lazy loading

    - by Chris
    I am getting some unexpected behaviour with Entity Framework 4.0 and I am hoping someone can help me understand it. I am using the Northwind database for the purposes of this question. I am also using the default code generator (not POCO or self-tracking entities). I am expecting that any time I query the context, the framework will only make a round trip if I have not already fetched those objects. I do get this behaviour if I turn off lazy loading. Currently in my application I am briefly turning on lazy loading and then turning it back off so I can get the desired behaviour. That pretty much sucks, so please help. Here is a good code example that demonstrates my problem:

        Public Sub ManyRoundTrips()
            context.ContextOptions.LazyLoadingEnabled = True
            Dim employees As List(Of Employee) = context.Employees.Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
            'makes unnecessary round trip to the database, I just loaded the employees'
            MessageBox.Show(context.Employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
            context.Orders.Execute(System.Data.Objects.MergeOption.AppendOnly)
            For Each emp As Employee In employees
                'makes unnecessary trip to database every time despite orders being pre loaded.'
                Dim i As Integer = emp.Orders.Count
            Next
        End Sub

        Public Sub OneRoundTrip()
            context.ContextOptions.LazyLoadingEnabled = True
            Dim employees As List(Of Employee) = context.Employees.Include("Orders").Execute(System.Data.Objects.MergeOption.AppendOnly).ToList()
            MessageBox.Show(employees.Where(Function(x) x.EmployeeID < 10).ToList().Count)
            For Each emp As Employee In employees
                Dim i As Integer = emp.Orders.Count
            Next
        End Sub

    Why is the first block of code making unnecessary round trips?
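
    For what it's worth, a minimal C# sketch of the distinction at play (NorthwindEntities stands in for the generated Northwind context, so this is an illustration rather than a drop-in snippet): any LINQ query written against the ObjectSet is translated to SQL and sent to the store, whereas a query over the already-materialized list runs purely in memory.

        using System;
        using System.Data.Objects;
        using System.Linq;

        class RoundTripDemo
        {
            static void Run(NorthwindEntities context) // NorthwindEntities: the generated ObjectContext (assumed)
            {
                // One store query that brings back employees together with their orders.
                var employees = context.Employees
                                       .Include("Orders")
                                       .Execute(MergeOption.AppendOnly)
                                       .ToList();

                // Hits the database: the query targets the ObjectSet, so it is always
                // translated to SQL regardless of what is already cached in the context.
                var fromStore = context.Employees.Where(e => e.EmployeeID < 10).ToList().Count;

                // Stays in memory: this queries the materialized List(Of Employee) instead.
                var fromMemory = employees.Where(e => e.EmployeeID < 10).Count();

                foreach (var emp in employees)
                {
                    // No lazy-load round trip here, because Include already loaded the orders.
                    Console.WriteLine(emp.Orders.Count);
                }
            }
        }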

    Read the article

  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

        Debugger: "Cygwin gdb Debugger"
        GDB debugger: gdb
        GDB command file: .gdbinit
        Protocol: Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: When attempting to terminate the application in debug mode, Eclipse displays the following error: "Target request failed: failed to interrupt." I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: When running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error: "Terminate failed." I can kill FaceDetector.exe from the task manager.

    Process Explorer: This is what it looks like in Process Explorer (debugging left, running right).

    Read the article

  • Word VBA - Find text between delimiters and convert to lower case

    - by jJack
    I would like to find text which is between the < and > characters, and then turn any found text into "normal" (proper) case, where the first letter of each word is capitalized. Here is what I have thus far:

        Function findTextBetweenCarots() As String
            Dim strText As String
            With Selection
                .Find.Text = "<" ' what about <[^0-9]+> ?
                .Find.Forward = True
                .Find.Wrap = wdFindContinue
            End With
            Selection.Find.Execute
            ' Application.Selection.
            ' how do I get the text between the other ">"?
            findCarotSymb = Application.Selection.Text
        End Function

    Or, is there a better way of doing this? I also approached the problem using the VBScript Regex 5.5 library, which worked on simple documents, but not on certain documents with complex tables. For example, trying to just bold the text (for simplicity):

        Sub BoldUpperCaseWords()
            Dim regEx, Match, Matches
            Dim rngRange As Range
            Set regEx = New RegExp
            regEx.Pattern = "<[^0-9]+>"
            regEx.IgnoreCase = False
            regEx.Global = True
            Set Matches = regEx.Execute(ActiveDocument.Range.Text)
            For Each Match In Matches
                ActiveDocument.Range(Match.FirstIndex, Match.FirstIndex + Len(Match.Value)).Bold = True
            Next
        End Sub

    would not work in a document with tables. In fact, it would not even bold the correct text (the text between the < and >). This leads me to believe I have a broader issue here that I am missing. Here is what a sample doc looks like; notice the wrong text is bold.

    Read the article

  • What is the best testing pattern for checking that parameters are being used properly?

    - by Joseph
    I'm using Rhino Mocks to try to verify that when I call a certain method, that method in turn will properly group items and then call another method. Something like this:

        //Arrange
        var bucketsOfFun = new BucketGame();
        var balls = new List<IBall>
        {
            new Ball { Color = Color.Red },
            new Ball { Color = Color.Blue },
            new Ball { Color = Color.Yellow },
            new Ball { Color = Color.Orange },
            new Ball { Color = Color.Orange }
        };

        //Act
        bucketsOfFun.HaveFunWithBucketsAndBalls(balls);

        //Assert ???

    Here is where the trouble begins for me. My method is doing something like this:

        public void HaveFunWithBucketsAndBalls(IList<IBall> balls)
        {
            //group all the balls together according to color
            var blueBalls = GetBlueBalls(balls);
            var redBalls = GetRedBalls(balls);
            // you get the idea

            HaveFunWithABucketOfBalls(blueBalls);
            HaveFunWithABucketOfBalls(redBalls);
            // etc etc with all the different colors
        }

        public void HaveFunWithABucketOfBalls(IList<IBall> colorSpecificBalls)
        {
            //doing some stuff here that i don't care about
            //for the test i'm writing right now
        }

    What I want to assert is that each time I call HaveFunWithABucketOfBalls, I'm calling it with the right group: 1 red ball, then 1 blue ball, then 1 yellow ball, then 2 orange balls. If I can assert that behavior then I can verify that the method is doing what I want it to do, which is grouping the balls properly. Any ideas of what the best testing pattern for this would be?
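
    One possible pattern, sketched here under the assumption that HaveFunWithABucketOfBalls can be made virtual (Rhino Mocks can only intercept virtual or interface members) and that IBall exposes the Color property, is a partial mock plus argument matchers: the real grouping code runs, but each call to the virtual method is recorded and can be asserted on.

        using System.Collections.Generic;
        using System.Drawing;
        using System.Linq;
        using Rhino.Mocks;
        using Xunit;

        public class BucketGameGroupingTests
        {
            [Fact]
            public void Balls_are_grouped_by_color_before_the_buckets_get_them()
            {
                // Partial mock: un-stubbed members (the grouping logic) run for real,
                // while calls to the virtual HaveFunWithABucketOfBalls are recorded.
                var bucketsOfFun = MockRepository.GeneratePartialMock<BucketGame>();

                var balls = new List<IBall>
                {
                    new Ball { Color = Color.Red },
                    new Ball { Color = Color.Blue },
                    new Ball { Color = Color.Yellow },
                    new Ball { Color = Color.Orange },
                    new Ball { Color = Color.Orange }
                };

                bucketsOfFun.HaveFunWithBucketsAndBalls(balls);

                // One assertion per expected group; Matches lets us inspect the argument.
                bucketsOfFun.AssertWasCalled(b => b.HaveFunWithABucketOfBalls(
                    Arg<IList<IBall>>.Matches(g => g.Count == 1 && g.All(x => x.Color == Color.Red))));
                bucketsOfFun.AssertWasCalled(b => b.HaveFunWithABucketOfBalls(
                    Arg<IList<IBall>>.Matches(g => g.Count == 2 && g.All(x => x.Color == Color.Orange))));
            }
        }

    If making the method virtual is not an option, the same assertions work just as well against an injected collaborator interface (for example, a hypothetical IBucket that the game hands each group to).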

    Read the article

  • Rails: constraint violation on create but not on update

    - by justinbach
    Note: This is a "railsier" (and more succinct) version of this question, which was getting a little long. I'm getting Rails behavior on a production server that I can't replicate on the development server. The codebases are identical save for credentials and caching settings, and both are powered by Oracle 10g databases with identical schema (but different data). My Rails application contains a user model, which has_one registration; registration in turn has_and_belongs_to_many company_ownerships through a registration_ownerships table. Upon registering, users fill out data pertinent to all three models, including a series of checkboxes indicating what registration_ownerships might apply to their account. On the dev server, the registration process is seamless, no matter what data is entered. On production, however, if users check off any of the company ownership fields before submitting their registration, Oracle complains about a constraint violation on the primary key of the company_ownerships table (which is a two-field key based on company_ownership_id and registration_id) and users get the standard Rails 500 error screen. In every case, I've verified that no conflicting record on these two fields exists in the production database, so I don't know why the constraint is getting violated. To further confuse things, if a user registers without listing any ownerships and later goes back and modifies their account to reflect ownership data (which is done through the same interface), the application happily complies with their request and Oracle is well-behaved (this is both on production and dev). I've spent the past couple days trying to figure out what might be causing this problem and am reaching the end of my wits. Any advice would be greatly appreciated!

    Read the article

  • Database access through collections

    - by Mike
    Hi all, I have a 3-tiered application where I need to get database results and populate the UI. I have a MessagesCollection class that deals with messages. I load my user from the database. On instantiation of a user (i.e. new User()), a MessageCollection Messages = new MessageCollection(this) is performed; MessageCollection accepts a user as a parameter.

        User user = User.LoadUser("bob");

        // I want to get the messages for Bob.
        user.Messages.GetUnreadMessages();

    GetUnreadMessages calls my business data provider, which in turn calls the data access layer. The business data provider returns a List<Message>. My question is about best practice here: if I keep a collection of messages in an array inside the MessagesCollection class, I could implement ICollection to provide GetEnumerator() and the ability to traverse the messages. But what happens if the messages change and the user has old messages loaded? What about big message collections? What if my user had 10,000 unread messages? I don't think accessing the database and returning 10,000 Message objects would be efficient.
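
    On the 10,000-message concern, one common shape for this (all names below are invented placeholders, not an existing API) is to page at the collection boundary, so the collection never materializes more than one page of Message objects at a time:

        using System.Collections.Generic;

        public class Message { /* ... */ }

        public class User
        {
            public int Id { get; set; }
        }

        // Boundary to the business/data layer; purely illustrative.
        public interface IMessageDataProvider
        {
            IList<Message> GetUnreadMessages(int userId, int skip, int take);
            int CountUnreadMessages(int userId);
        }

        public class MessageCollection
        {
            private readonly User _user;
            private readonly IMessageDataProvider _provider;

            public MessageCollection(User user, IMessageDataProvider provider)
            {
                _user = user;
                _provider = provider;
            }

            // Callers ask for a page rather than the whole mailbox, which also
            // sidesteps the stale-cache question: each call re-reads the store.
            public IList<Message> GetUnreadMessages(int pageIndex, int pageSize)
            {
                return _provider.GetUnreadMessages(_user.Id, pageIndex * pageSize, pageSize);
            }

            public int GetUnreadMessageCount()
            {
                return _provider.CountUnreadMessages(_user.Id);
            }
        }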

    Read the article

  • C# Inheritance: Choosing which repository to use based on the type of an inherited class

    - by Oskar Kjellin
    I have been making a program that downloads information about movies from the internet. I have a base class Title, which represents all titles; Movie, Serie and Episode inherit from it. To save them to the database I have two services, MovieService and SerieService. They in turn call repositories, but that is not important here. I have a method Save(Title title) which I am not very happy with: I check what type the title is and then call the correct service. I would perhaps like to write something like this:

        ITitleService service = title.GetService();
        title.GetSavedBy(service);

    So I would have an abstract method on Title that returns an ITitleSaver, giving back the correct service for the instance. My question is how I should implement ITitleSaver. If I make it accept Title, I will have to cast to the correct type before calling the correct overload, which leads right back to dealing with casting. What is the best approach to dealing with this? I would like to have the saving logic in the corresponding class.
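
    One approach worth sketching (borrowing the type names from the question, with the details assumed) is double dispatch: each Title subclass hands itself to a strongly-typed overload, so no cast is ever needed.

        public interface ITitleSaver
        {
            void Save(Movie movie);
            void Save(Serie serie);
            void Save(Episode episode);
        }

        public abstract class Title
        {
            // Each subclass resolves the correct overload at compile time.
            public abstract void SaveTo(ITitleSaver saver);
        }

        public class Movie : Title
        {
            public override void SaveTo(ITitleSaver saver) { saver.Save(this); }
        }

        public class Serie : Title
        {
            public override void SaveTo(ITitleSaver saver) { saver.Save(this); }
        }

        public class Episode : Title
        {
            public override void SaveTo(ITitleSaver saver) { saver.Save(this); }
        }

    MovieService and SerieService would then implement ITitleSaver (delegating to their repositories), and Save(Title title) collapses to a single title.SaveTo(saver) call, keeping the per-type logic next to the type instead of behind a type check.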

    Read the article

  • Flash "visible" issue

    - by justkevin
    I'm writing a tool in Flex that lets me design composite sprites using layered bitmaps and then "bake" them into a low-overhead single BitmapData. I've discovered a strange behavior I can't explain: toggling the "visible" property of my layers works twice for each layer (i.e., I can turn it off, then on again) and then never again for that layer -- the layer stays visible from that point on. If I override "set visible" on the layer as such:

        override public function set visible(value:Boolean):void
        {
            if(value == false)
                this.alpha = 0;
            else
                this.alpha = 1;
        }

    the problem goes away and I can toggle "visibility" as much as I want. Any ideas what might be causing this?

    Edit: Here is the code that makes the call:

        private function onVisibleChange():void
        {
            _layer.visible = layerVisible.selected;
            changed();
        }

    The changed() method "bakes" the bitmap:

        public function getBaked():BitmapData
        {
            var w:int = _composite.width + (_atmosphereOuterBlur * 2);
            var h:int = _composite.height + (_atmosphereOuterBlur * 2);
            var bmpData:BitmapData = new BitmapData(w,h,true,0x00000000);
            var matrix:Matrix = new Matrix();
            var bounds:Rectangle = this.getBounds(this);
            matrix.translate(w/2,h/2);
            bmpData.draw(this,matrix,null,null,new Rectangle(0,0,w,h),true);
            return bmpData;
        }

    Incidentally, while the layer is still visible, using the Flex debugger I can verify that the layer's visible value is "false".

    Read the article

  • How can I stop SQL Server Management Studio replacing 'SELECT *' with the column list?

    - by Ben McIntyre
    SQL Server Management Studio is driving me crazy. If I create a view and SELECT * from a table, it's all OK and I can save the view. Looking at the SQL for the view (e.g. by scripting a CREATE) reveals that the SELECT * really is saved to the view's SQL. But as soon as I reopen the view using the GUI (right click, Modify), SELECT * is replaced with a column list of all the columns in the table. How can I stop Management Studio from doing this? I want my SELECT * to remain just that. Perhaps it's just the difficulty of googling "SELECT *" that prevented me from finding anything remotely relevant to this (I did put it in double quotes). Please, I am highly experienced in Transact-SQL, so please DON'T give me a lecture on why I shouldn't be using SELECT *. I know all the pros and cons and I do use it at times. It's a language feature, and like all language features can be used for good or evil (I emphatically do NOT agree that it is never appropriate to use it).

    Edit: I'm giving Marc the answer, since it seems it is not possible to turn this behaviour off. The problem is considered closed. I note that Enterprise Manager did no similar thing. The workaround is to either edit the SQL as text, or go to a product other than Management Studio. Or constantly edit out the column list and replace the * every time you edit a view. Sigh.

    Read the article

  • Calling QAxWidget method outside of the GUI thread

    - by user304361
    I'm beginning to wonder if this is impossible, but I thought I'd ask in case there's a clever way to get around the problems I'm having. I have a Qt application that uses an ActiveX control. The control is held by a QAxWidget, and the QAxWidget itself is contained within another QWidget (I needed to add additional signals/slots to the widget, and I couldn't just subclass QAxWidget because the class doesn't permit that). When I need to interact with the ActiveX control, I call a method of the QWidget, which in turn calls the dynamicCall method of the QAxWidget in order to invoke the appropriate method of the ActiveX control. All of that is working fine. However, one method of the ActiveX control takes several seconds to return. When I call this method, my entire GUI locks up for a few seconds until the method returns. This is undesirable. I'd like the ActiveX control to go off and do its processing by itself and come back to me when it's done, without locking up the Qt GUI. I've tried a few things without success:

    - Creating a new QThread and calling QAxWidget::dynamicCall from the new thread
    - Connecting a signal to the appropriate slot method of the QAxWidget and calling the method using signals/slots instead of using dynamicCall
    - Calling QAxWidget::dynamicCall using QtConcurrent::run

    Nothing seems to affect the behavior. No matter how or where I use dynamicCall (or trigger the appropriate slot of the QAxWidget), the GUI locks until the ActiveX control completes its operation. Is there any way to detach this ActiveX processing from the Qt GUI thread so that the GUI doesn't lock up while the ActiveX control is running a method? Is there something clever I can do with QAxBase or QAxObject to get my desired results?

    Read the article

  • Convert a Unit Vector to a Quaternion

    - by Hmm
    So I'm very new to quaternions, but I understand the basics of how to manipulate things with them. What I'm currently trying to do is compare a known quaternion to two absolute points in space. I'm hoping I can simply convert the points into a second quaternion, giving me an easy way to compare the two. What I've done so far is to turn the two points into a unit vector. From there I was hoping I could directly plug the i, j, k components into the imaginary portion of the quaternion with a scalar of zero, i.e. [ 0 i j k ]. From there I could multiply one quaternion by the other's conjugate, resulting in a third quaternion. This third quaternion could be converted into an axis-angle form, giving me the angle by which the original two quaternions differ. Is this thought process correct? I may need to normalize the quaternion afterwards, but I'm not sure about that. I have a bad feeling that it's not a direct mapping from a vector to a quaternion. I tried looking at converting the unit vector to an axis-angle representation, but I'm not sure that would work, since I don't know what angle to give as an input.
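
    For reference, a small self-contained sketch (no particular math library assumed) of the textbook construction that turns a pair of unit vectors into the quaternion rotating one onto the other. Embedding a single unit vector as the pure quaternion (0, x, y, z), as described above, already has unit length, so no extra normalization is needed for that step:

        using System;

        struct Quat
        {
            public double W, X, Y, Z;
            public Quat(double w, double x, double y, double z) { W = w; X = x; Y = y; Z = z; }
        }

        static class RotationBetween
        {
            // Quaternion that rotates unit vector u onto unit vector v:
            // axis = normalize(u x v), angle = acos(u . v), q = (cos(a/2), axis * sin(a/2)).
            // Note: the antiparallel case (u = -v) needs special handling (any perpendicular axis works).
            public static Quat FromUnitVectors(double[] u, double[] v)
            {
                double dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
                double[] cross =
                {
                    u[1] * v[2] - u[2] * v[1],
                    u[2] * v[0] - u[0] * v[2],
                    u[0] * v[1] - u[1] * v[0]
                };
                double crossLen = Math.Sqrt(cross[0] * cross[0] + cross[1] * cross[1] + cross[2] * cross[2]);
                double angle = Math.Acos(Math.Max(-1.0, Math.Min(1.0, dot)));
                double scale = crossLen > 1e-12 ? Math.Sin(angle / 2) / crossLen : 0.0;
                return new Quat(Math.Cos(angle / 2), cross[0] * scale, cross[1] * scale, cross[2] * scale);
            }
        }

    The relative rotation between two unit quaternions q1 and q2 is then q2 multiplied by the conjugate of q1, and its rotation angle can be read off the scalar part (angle = 2 * acos(w) for a unit quaternion), which matches the comparison strategy described above.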

    Read the article

  • Returning a Value From the Bit.ly JS API Callback

    - by mtorbin
    Hey all, I am attempting to turn this "one shot" script into something more extensible. The problem is that I cannot figure out how to get the callback function to set a value outside of itself (please note that the references to the Bit.ly API and the prototype.js framework which are required have been left out because of login and apiKey information).

    CURRENTLY WORKING CODE:

        var setShortUrl = function(data) {
            var resultOBJ, myURL;
            for(x in data.results){resultOBJ = data.results[x];}
            for(key in resultOBJ){if(key == "shortUrl"){myURL = resultOBJ[key];}}
            alert(myURL);
        }

        BitlyClient.shorten('http://www.thinkgeek.com', 'setShortUrl');

    PROPOSED CHANGES:

        var setShortUrl = function(data) {
            var resultOBJ, myURL;
            for(x in data.results){resultOBJ = data.results[x];}
            for(key in resultOBJ){if(key == "shortUrl"){myURL = resultOBJ[key];}}
            alert(myURL);
            return myURL;
        }

        var myTEST = BitlyClient.shorten('http://www.thinkgeek.com', 'setShortUrl');
        alert(myTEST);

    As you can see, this doesn't work the way I'm expecting. If I could get a pointer in the right direction it would be most appreciated. Thanks, MT

    Read the article

  • Mocking an object that uses JNI using EasyMock

    - by Visage
    So my class under test has code that looks broadly like this:

        public void doSomething(int param) {
            Report report = new Report();
            // ...do some calculations
            report.someMethod(someData);
        }

    My intention was to extract the construction of the report into a protected method and override it to use a mock object, which I could then test to ensure that someMethod had been called with the right data. So far so good. But Report isn't under my control, and to make things worse it uses JNI to load a library at runtime. If I do

        Report report = EasyMock.createMock(Report.class);

    then EasyMock attempts to use reflection to find out the class members, but this causes an attempt to load the JNI library, which fails (the JNI libraries are only available on UNIX). I'm considering two things:

    a) Introduce a ReportWrapper interface with two implementations, one of which delegates calls to a real Report (so basically a proxy), and a second which basically uses a mock object; or

    b) instead of calling someMethod, call a protected method which in turn calls someMethod, which I can override in a testing subclass.

    Either way it seems nasty. Any better ways?

    Read the article

  • Saving a record in Authlogic table

    - by denniss
    I am using Authlogic to do my authentication. The current model that serves as the authentication model is the User model. I want to add a "belongs to" relationship to User, which means that I need a foreign key in the users table. Say the foreign key is called car_id in the User model. However, for some reason, when I do

        u = User.find(1)
        u.car_id = 1
        u.save!

    I get

        ActiveRecord::RecordInvalid: Validation failed: Password can't be blank

    My guess is that this has something to do with Authlogic. I do not have a validation on password in the User model. This is the migration for the users table:

        def self.up
          create_table :users do |t|
            t.string :email
            t.string :first_name
            t.string :last_name
            t.string :crypted_password
            t.string :password_salt
            t.string :persistence_token
            t.string :single_access_token
            t.string :perishable_token
            t.integer :login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
            t.integer :failed_login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_request_at # optional, see Authlogic::Session::MagicColumns
            t.datetime :current_login_at # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_login_at # optional, see Authlogic::Session::MagicColumns
            t.string :current_login_ip # optional, see Authlogic::Session::MagicColumns
            t.string :last_login_ip # optional, see Authlogic::Session::MagicColumns
            t.timestamps
          end
        end

    And later I added the car_id column to it:

        def self.up
          add_column :users, :user_id, :integer
        end

    Is there any way for me to turn off this validation?

    Read the article

  • asp.net doesn't render Sys.WebForms.PageRequestManager._initialize code

    - by ajitatif
    I'm using the ASP.NET 2.0 AJAX Extensions on a web site. As always, everything is fine locally, but the remote web site does not use AJAX calls. My local server has the ASP.NET AJAX Extensions installed but the remote one doesn't. I know that I should be able to use the AJAX Extensions without installing them, so I added the extensions' .dll among the web site's references, but still no luck. After further investigation, I found out that the local and remote pages have exactly the same HTML rendered, except that the local (working) one has these lines:

        //<![CDATA[
        Sys.WebForms.PageRequestManager._initialize('ctl00$ContentPlaceHolder1$ScriptManager1', document.getElementById('aspnetForm'));
        Sys.WebForms.PageRequestManager.getInstance()._updateControls(['tctl00$ContentPlaceHolder1$updReportArgs','tctl00$ContentPlaceHolder1$updReport'], ['ctl00$ContentPlaceHolder1$chkTumu','ctl00$ContentPlaceHolder1$btnGetir'], [], 90);
        //]]>

    Obviously, these are the lines of code that make callbacks possible. The question is: why doesn't ASP.NET render these lines? What could be missing? By the way, ScriptResource.axd and WebResource.axd don't return a 404 or anything; I can see their JS code via Firebug. And one more thing: I'm unsure whether it is related or not, but there are client-side ASP.NET validators on the page whose JS code is not rendered either. Again, those work fine locally. For further investigation you can see the remote site here: http://www.ajitatif.com/subdomains/nazer/Raporlar/danismanbasarim.aspx

    Read the article

  • Web-based game in Python + Django and client browser polling

    - by ty
    I am creating a text-based game that implements a basic model in which multiple (10+) players interact with data and one moderator watches them and sets certain environmental statistics that affect gameplay. Recently I have begun to familiarize myself with Django. It seems to me that it would be an excellent tool for creating a game quickly, particularly because the nature of my game depends largely on sets of data (which lends itself quite well to a database). I am wondering how to "push" changes made by the game moderator to the players (for example, the moderator can decide to display an image to all players). The game is turn-based, not real-time, but certain messages need to be pushed out in roughly real-time. My thoughts: I could have each player's browser poll a status periodically (say, every 30 seconds) to see if there is a message from a moderator. But this forces a lag and means different players might receive it at different times. And reducing this interval to <10 seems like a bad idea for the server. Is there a better way to inform clients of changes? Would you suggest something other than using a web framework like Django? Thanks!

    Read the article

  • Theory of formal languages - Automaton

    - by dader51
    Hi everybody! I'm wondering about formal languages. I have a kind of parser: it reads an XML-like serialized tree structure and turns it into a multidimensional array. I figured out that I need at least three variables to do the job:

        $tree = array();        // a new array
        $pTree = array(&$tree); // a new array whose first element points to $tree
        $deep = 0;

    plus the one containing the sentence split into words. My question is about the similarities between the algorithm being used and the different kinds of automata (state machines, Turing machines, stack/pushdown automata, ...). The $words variable is the "tape" of the automaton, the tests/conditions of the algorithm are the transitions, $deep is the state and $tree is the output. I cannot figure out what $pTree is. So the question is: which automaton am I implicitly using here, which family of formal languages does it correspond to, and what about recursion?

    Read the article

  • Is there a library that can decompile a method into an Expression tree, with support for CLR 4.0?

    - by Daniel Earwicker
    Previous questions have asked if it is possible to turn compiled delegates into expression trees, for example: http://stackoverflow.com/questions/767733/converting-a-net-funct-to-a-net-expressionfunct The sane answers at the time were: it's possible, but very hard and there's no standard library solution -- use Reflector! But fortunately there are some greatly-insane/insanely-great people out there who like reverse engineering things, and they make difficult things easy for the rest of us. Clearly it is possible to decompile IL to C#, as Reflector does it, and so you could in principle instead target CLR 4.0 expression trees, with support for all statement types. This is interesting because it wouldn't matter if the compiler's built-in special support for Expression<> lambdas is never extended to support building statement expression trees in the compiler; a library solution could fill the gap. We would then have a high-level starting point for writing aspect-like manipulations of code without having to mess with raw IL. As noted in the answers to the question linked above, there are some promising signs, but my searching hasn't turned up how much progress has been made since. So has anyone finished this job, or got very far with it? Note: CLR 4.0 is now released. Time for another look-see.
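
    For context, the statement-level trees in question are expressible in .NET 4's System.Linq.Expressions on the producing side; a decompiler library would essentially have to emit nodes like these from IL. A minimal hand-built example:

        using System;
        using System.Linq.Expressions;

        class StatementTreeDemo
        {
            static void Main()
            {
                // CLR 4.0 expression trees cover statements as well as expressions:
                // blocks, assignments, loops, conditionals, try/catch, and so on.
                var x = Expression.Parameter(typeof(int), "x");

                var body = Expression.Block(
                    new[] { x },                                     // declare a local variable
                    Expression.Assign(x, Expression.Constant(40)),   // x = 40;
                    Expression.AddAssign(x, Expression.Constant(2)), // x += 2;
                    x);                                              // the block's value

                var result = Expression.Lambda<Func<int>>(body).Compile()();
                Console.WriteLine(result); // prints 42
            }
        }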

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in .NET (e.g. "Out of Memory does not refer to physical memory", 597499), I thought I understood why a C# app would throw an OutOfMemoryException -- until I started experimenting with two servers. Both have 2.5 GB of RAM, run Windows Server 2003, and run identical programs. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage left consistently throws an out-of-memory error while the other performs consistently well. My app is a C# web application that processes hundreds of MB of string data. Why would this difference occur, given that the most likely reason for the out-of-memory issue is running out of contiguous virtual address space? What solutions do you propose, and what do you say about the following:

    1. Turn on the /3GB switch to increase the virtual address space.
    2. Instead of using one giant string object, break it up into smaller pieces and collect them in a jagged array (here I have to find a way to return the result to the caller in some other way, as right now the return type is a string).

    Thanks, SO
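
    A rough sketch of the spirit of option 2 (the names and the IEnumerable<string> source are placeholders for however the data is actually produced): stream the output to the caller in pieces instead of returning one contiguous multi-hundred-MB string, so no single allocation needs a huge contiguous block of virtual address space.

        using System.Collections.Generic;
        using System.IO;

        static class ChunkedOutput
        {
            // Write pieces straight to a TextWriter (e.g. HttpResponse.Output in ASP.NET)
            // rather than concatenating them into one giant string first.
            public static void WriteInChunks(TextWriter output, IEnumerable<string> pieces)
            {
                foreach (var piece in pieces)
                {
                    output.Write(piece);
                }
            }
        }

    The caller-facing signature then changes from "return a string" to "accept a writer (or return the pieces)", which is the part that needs rethinking either way.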

    Read the article

  • Can Haskell's Parsec library be used to implement a recursive descent parser with backup?

    - by Thor Thurn
    I've been considering using Haskell's Parsec parsing library to parse a subset of Java as a recursive descent parser, as an alternative to more traditional parser-generator solutions like Happy. Parsec seems very easy to use, and parse speed is definitely not a factor for me. I'm wondering, though, if it's possible to implement "backup" with Parsec, a technique which finds the correct production to use by trying each one in turn. For a simple example, consider the very start of the JLS Java grammar:

        Literal:
            IntegerLiteral
            FloatingPointLiteral

    I'd like a way to not have to figure out how I should order these two rules to get the parse to succeed. As it stands, a naive implementation like this:

        literal = do { x <- try (do { v <- integer; return (IntLiteral v)})
                            <|> (do { v <- float; return (FPLiteral v)});
                       return (Literal x) }

    will not work... inputs like "15.2" will cause the integer parser to succeed first, and then the whole thing will choke on the "." symbol. In this case, of course, it's obvious that you can solve the problem by re-ordering the two productions. In the general case, though, finding things like this is going to be a nightmare, and it's very likely that I'll miss some cases. Ideally, I'd like a way to have Parsec figure out stuff like this for me. Is this possible, or am I simply trying to do too much with the library? The Parsec documentation claims that it can "parse context-sensitive, infinite look-ahead grammars", so it seems like I should be able to do something here.

    Read the article

  • VB.NET vs. C#.NET?

    - by Onion-Knight
    Hello everyone, the company I work for has all of its legacy code ("legacy" being used rather liberally in this context) in VB.NET. They have about 6000+ lines of VB.NET code, so all of the developers are comfortable with it. We have started to develop a new product, and are finding that some modules are easier to complete in C# than in VB.NET, such as interprocess communication via WCF. The things our product will eventually need to do are as follows:

    - Communicate via IPC between Windows Services, Silverlight, and WinForms
    - Handle parallelization, and all the complexity that comes along with it
    - Windows Service and WinForms development
    - ASP.NET, AJAX, and Silverlight development
    - Database (SQL) access
    - Lots of event handling (sync and async events)

    My question is: given the type of work we will be doing to complete our product, are there features of one language that will make life easier that the other does not have? And if so, is it worth asking the developers to switch to a language they are less comfortable with? I was hoping to keep this as objective as possible by listing exactly what type of work we will be doing with the product. Please don't turn this into a VB/C# holy war. Thanks, Onion-Knight

    Read the article

  • MySQL can't access root account or reset with mysqladmin

    - by glumptious
    So if I type mysql -u root I'm supposedly logged in, however upon trying to create or access a database I get this lovely error:

        ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'test1'

    I haven't the foggiest idea why, after logging in as root, it's trying to access DBs as ''@'localhost', and it's driving me a bit crazy right now. Possibly related: when I try to set the root password I get the error

        mysqladmin: Can't turn off logging; error: 'Access denied; you need (at least one of) the SUPER privilege(s) for this operation'

    I've tried removing mysql-server by running apt-get purge mysql-server and then reinstalling, with no luck. This is running Ubuntu Server 12.10 64-bit and mysql is indeed running.

    --Edit-- I wonder if perhaps there is no root user. So I try to start MySQL with --skip-grant-tables and then create the root user, but then I'm given this:

        ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement

    Fun fun fun fun fun.

    Read the article

  • Split large repo into multiple subrepos and preserve history (Mercurial)

    - by Andrew
    We have a large base of code that contains several shared projects, solution files, etc. in one directory in SVN. We're migrating to Mercurial. I would like to take this opportunity to reorganize our code into several repositories to make cloning for branching have less overhead. I've already successfully converted our repo from SVN to Mercurial while preserving history. My question: how do I break all the different projects into separate repositories while preserving their history? Here is an example of what our single repository (OurPlatform) currently looks like:

        /OurPlatform
        ---- Core
        ---- Core.Tests
        ---- Database
        ---- Database.Tests
        ---- CMS
        ---- CMS.Tests
        ---- Product1.Domain
        ---- Product1.Stresstester
        ---- Product1.Web
        ---- Product1.Web.Tests
        ---- Product2.Domain
        ---- Product2.Stresstester
        ---- Product2.Web
        ---- Product2.Web.Tests
        ==== Product1.sln
        ==== Product2.sln

    All of those are folders containing VS projects, except for the solution files. Product1.sln and Product2.sln both reference all of the other projects. Ideally, I'd like to take each of those folders and turn them into separate Hg repos, and also add new repos for each product (they would act as parent repos). Then, if someone was going to work on Product1, they would clone the Product1 repo, which contained Product1.sln and subrepo references to ReferenceAssemblies, Core, Core.Tests, Database, Database.Tests, CMS, and CMS.Tests. So, it's easy to do this by just hg init'ing in the project directories. But can it be done while preserving history? Or is there a better way to arrange this?
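
    On the history question specifically: hg convert can also run Mercurial-to-Mercurial with a --filemap, which is one hedged way to carve a directory out into its own repo while keeping its history. A minimal sketch (the filemap file name is arbitrary; repeat per project):

        # filemap-core.txt
        include Core
        rename Core .

        # then, from the directory containing the converted OurPlatform repo:
        hg convert --filemap filemap-core.txt OurPlatform Core

    The resulting Core repo contains only the Core history; the parent Product1/Product2 repos can then reference it (and the other carved-out repos) via .hgsub entries.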

    Read the article
