Search Results

Search found 7522 results on 301 pages for 'itai bar haim'.

Page 58/301 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • How is it that json serialization is so much faster than yaml serialization in python?

    - by guidoism
    I have code that relies heavily on yaml for cross-language serialization and while working on speeding some stuff up I noticed that yaml was insanely slow compared to other serialization methods (e.g., pickle, json). So what really blows my mind is that json is so much faster than yaml when the output is nearly identical. >>> import yaml, cjson; d={'foo': {'bar': 1}} >>> yaml.dump(d, Dumper=yaml.SafeDumper) 'foo: {bar: 1}\n' >>> cjson.encode(d) '{"foo": {"bar": 1}}' >>> import yaml, cjson; >>> timeit("yaml.dump(d, Dumper=yaml.SafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000) 44.506911039352417 >>> timeit("yaml.dump(d, Dumper=yaml.CSafeDumper)", setup="import yaml; d={'foo': {'bar': 1}}", number=10000) 16.852826118469238 >>> timeit("cjson.encode(d)", setup="import cjson; d={'foo': {'bar': 1}}", number=10000) 0.073784112930297852 PyYAML's CSafeDumper and cjson are both written in C so it's not like this is a C vs Python speed issue. I've even added some random data to it to see if cjson is doing any caching, but it's still way faster than PyYAML. I realize that yaml is a superset of json, but how could the yaml serializer be 2 orders of magnitude slower with such simple input?
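    A minimal, reproducible version of the same comparison that needs nothing beyond PyYAML and the standard library (the stdlib json module stands in for cjson here; it assumes PyYAML was built with the libyaml bindings so that CSafeDumper is available, and timings will of course vary by machine):

    ```python
    import timeit

    yaml_time = timeit.timeit(
        "yaml.dump(d, Dumper=yaml.CSafeDumper)",
        setup="import yaml; d = {'foo': {'bar': 1}}",
        number=10000)

    json_time = timeit.timeit(
        "json.dumps(d)",
        setup="import json; d = {'foo': {'bar': 1}}",
        number=10000)

    print("CSafeDumper: %.4fs, json: %.4fs" % (yaml_time, json_time))
    ```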

    Read the article

  • FIFOs implementation

    - by nunos
    Consider the following code: writer.c mkfifo("/tmp/myfifo", 0660); int fd = open("/tmp/myfifo", O_WRONLY); char *foo, *bar; ... write(fd, foo, strlen(foo)*sizeof(char)); write(fd, bar, strlen(bar)*sizeof(char)); reader.c int fd = open("/tmp/myfifo", O_RDONLY); char buf[100]; read(fd, buf, ??); My question is: since it's not known beforehand how many bytes foo and bar will have, how can I know how many bytes to read in reader.c? Because if I, for example, read 10 bytes in the reader and foo and bar together are less than 10 bytes, I will have them both in the same variable, and that I do not want. Ideally I would have one read call for every variable, but again I don't know beforehand how many bytes the data will have. I thought about adding another write instruction in writer.c, between the writes for foo and bar, with a separator, and then I would have no problem decoding it in reader.c. Is this the way to go about it? Thanks.
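    A separator works, but a common alternative is to frame each message with its length, so the reader knows exactly how much to read and the payload can contain any byte. A minimal sketch (names are illustrative and error handling is mostly omitted):

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    /* writer side: send a 4-byte length, then the payload */
    static void write_msg(int fd, const char *s) {
        uint32_t len = (uint32_t)strlen(s);
        write(fd, &len, sizeof len);
        write(fd, s, len);
    }

    /* reader side: read the length, then exactly that many bytes */
    static ssize_t read_msg(int fd, char *buf, size_t bufsize) {
        uint32_t len;
        if (read(fd, &len, sizeof len) != (ssize_t)sizeof len || len >= bufsize)
            return -1;
        ssize_t n = read(fd, buf, len);   /* loop here if reads may come up short */
        if (n >= 0)
            buf[n] = '\0';
        return n;
    }
    ```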

    Read the article

  • Is Perl's flip-flop operator bugged? It has global state, how can I reset it?

    - by Evan Carroll
    I'm dismayed. Ok, so this was probably the most fun perl bug I've ever found. Even today I'm learning new stuff about perl. Essentially, the flip-flop operator .., which returns false until the left-hand side returns true, and then true until the right-hand side returns false, keeps global state (or that is what I assume). My question is: can I reset it (perhaps this would be a good addition to the perl4-esque, hardly ever used reset())? Or is there no way to use this operator safely? I also don't see this (the global context bit) documented anywhere in perldoc perlop; is this a mistake? Code use feature ':5.10'; use strict; use warnings; sub search { my $arr = shift; grep { !( /start/ .. /never_exist/ ) } @$arr; } my @foo = qw/foo bar start baz end quz quz/; my @bar = qw/foo bar start baz end quz quz/; say 'first shot - foo'; say for search \@foo; say 'second shot - bar'; say for search \@bar; Spoiler $ perl test.pl first shot foo bar second shot
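    One way to avoid the operator's hidden state altogether is to keep the flag in a lexical that is re-initialized on every call; a minimal sketch that filters the same way the search() above intends to:

    ```perl
    use strict;
    use warnings;

    sub search {
        my $arr  = shift;
        my $seen = 0;                       # reset on every call, unlike the .. operator's state
        return grep { $seen = 1 if /start/; !$seen } @$arr;
    }

    my @foo = qw/foo bar start baz end quz quz/;
    print "$_\n" for search(\@foo);         # foo bar
    print "$_\n" for search(\@foo);         # foo bar again, no leftover state
    ```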

    Read the article

  • Dynamic memory inside a struct

    - by Maximilien
    Hello, I'm editing a piece of code, that is part of a big project, that uses "const's" to initialize a bunch of arrays. Because I want to parametrize these const's I have to adapt the code to use "malloc" in order to allocate the memory. Unfortunately there is a problem with structs: I'm not able to allocate dynamic memory in the struct itself. Doing it outside would cause too much modification of the original code. Here's a small example: int globalx,globaly; struct bigStruct{ struct subStruct{ double info1; double info2; bool valid; }; double data; //subStruct bar[globalx][globaly]; subStruct ** bar=(subStruct**)malloc(globalx*sizeof(subStruct*)); for(int i=0;i<globalx;i++) bar[i]=(subStruct*)malloc(globaly*sizeof(subStruct)); }; int main(){ globalx=2; globaly=3; bigStruct foo; for(int i=0;i<globalx;i++) for(int j=0;j<globaly;j++){ foo.bar[i][j].info1=i+j; foo.bar[i][j].info2=i*j; foo.bar[i][j].valid=(i==j); } return 0; } Note: in the program code I'm editing, globalx and globaly were const's in a specified namespace. Now I removed the "const" so they can act as parameters that are set exactly once. Summarized: how can I properly allocate memory for the substruct inside the struct? Thank you very much! Max
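    A minimal sketch of the usual C++ answer: give the struct a constructor and let std::vector own the storage, so the runtime sizes are passed in rather than allocated by hand (the names mirror the question, not the real project):

    ```cpp
    #include <vector>

    struct bigStruct {
        struct subStruct {
            double info1;
            double info2;
            bool   valid;
        };
        double data;
        std::vector<std::vector<subStruct> > bar;

        bigStruct(int x, int y) : bar(x, std::vector<subStruct>(y)) {}
    };

    int main() {
        bigStruct foo(2, 3);
        foo.bar[1][2].info1 = 3.0;   // works for any runtime x and y
        return 0;
    }
    ```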

    Read the article

  • ipad tabbar rotation

    - by MaKo
    Hi, please help with this noob question, it's really making me go crazy. If I create a project from scratch (using a window-based app) for the iPad, and add a tab bar with firstviewController and secondviewController, it works fine and starts in landscape mode, but in info.plist I set it to Landscape (left home button), and really in the simulator it starts with the button on the right side! In FirstViewController.m: (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { if (interfaceOrientation == UIInterfaceOrientationLandscapeLeft || interfaceOrientation == UIInterfaceOrientationLandscapeRight) return YES; else { return NO; }} So it starts in landscape, and rotates as the simulator rotates. But if I create a template app for an iPhone tab bar, set the info.plist Initial interface orientation to Landscape (left home button) and add the code above, IT DOESN'T WORK!!! The simulator starts with the button at the left but the tab bar on the side, the same problem I had with an app that I'm porting from iPhone to iPad (landscape intended): I get to the landscape start mode, but the tab bar remains on the side! Also, the only way to make the old ported app show the simulator on the side was with UIInterfaceOrientation UIIntefaceOrientationLandscapeLeft (it didn't work with Initial interface orientation); it does not let me choose the value for the key, but it shows the simulator in landscape. So, what can I do to show the tab bar in landscape mode? The tab bar from scratch was made to see if the code would work, but it didn't. Why does it work in the tab bar made from a window-based app and not the tab bar app? I just want the tab bar to show in landscape. Ahhh, thanks

    Read the article

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: What do you use for configurable (and preferably captured) logging inside your C++ bits in a Python project? Details follow. Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I'm having a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking how to make it more flexible. A couple of choices that I see: One way to control it would be to have a module-wide configuration call. # foo/__init__.py import sys from _foo import bar, baz, configure_log configure_log(sys.stdout, WARNING) # tests/test_foo.py def test_foo(): # Maybe a custom context to change the logfile for # the module and restore it at the end. with CaptureLog(foo) as log: assert foo.bar() == 5 assert log.read() == "124.24 - foo - INFO - Bar returning 5" Have every compiled function that does logging accept optional log parameters. # foo.c int bar(PyObject* x, PyObject* logfile, PyObject* loglevel) { LoggerPtr logger = default_logger("foo"); if (logfile != Py_None) logger = file_logger(logfile, loglevel); ... } # tests/test_foo.py def test_foo(): with TemporaryFile() as logfile: assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5 assert logfile.read() == "124.24 - foo - INFO - Bar returning 5" Some other way? The second one seems to be somewhat cleaner, but it requires function signature alteration (or using kwargs and parsing them). The first one is probably somewhat awkward, but it sets up the entire module in one go and removes logic from each individual function. What are your thoughts on this? I'm all ears to alternative solutions as well. Thanks,

    Read the article

  • typename resolution in cases of ambiguity

    - by parapura rajkumar
    I was playing with Visual Studio and templates. Consider this code struct Foo { struct Bar { }; static const int Bar=42; }; template<typename T> void MyFunction() { typename T::Bar f; } int main() { MyFunction<Foo>(); return 0; } When I compile this in either Visual Studio 2008 or 11, I get the following error error C2146: syntax error : missing ';' before identifier 'f' Is Visual Studio correct in this regard? Is the code violating any standards? If I change the code to struct Foo { struct Bar { }; static const int Bar=42; }; void SecondFunction( const int& ) { } template<typename T> void MyFunction() { SecondFunction( T::Bar ); } int main() { MyFunction<Foo>(); return 0; } it compiles without any warnings. Is the data member Foo::Bar preferred over the type in case of conflicts?

    Read the article

  • SQL Server stored procedure return code oddity

    - by gbn
    Hello. The client that calls this code is restricted and can only deal with return codes from stored procs. So, we modified our usual contract to RETURN -1 on error and default to RETURN 0 if no error. If the code hits the inner catch block, then the RETURN code defaults to -4. Where does this come from, does anyone know...? IF OBJECT_ID('dbo.foo') IS NOT NULL DROP TABLE dbo.foo GO CREATE TABLE dbo.foo ( KeyCol char(12) NOT NULL, ValueCol xml NOT NULL, Comment varchar(1000) NULL, CONSTRAINT PK_foo PRIMARY KEY CLUSTERED (KeyCol) ) GO IF OBJECT_ID('dbo.bar') IS NOT NULL DROP PROCEDURE dbo.bar GO CREATE PROCEDURE dbo.bar @Key char(12), @Value xml, @Comment varchar(1000) AS SET NOCOUNT ON DECLARE @StartTranCount tinyint; BEGIN TRY SELECT @StartTranCount = @@TRANCOUNT; IF @StartTranCount = 0 BEGIN TRAN; BEGIN TRY --SELECT @StartTranCount = 'fish' INSERT dbo.foo (KeyCol, ValueCol, Comment) VALUES (@Key, @Value, @Comment); END TRY BEGIN CATCH IF ERROR_NUMBER() = 2627 --PK violation UPDATE dbo.foo SET ValueCol = @Value, Comment = @Comment WHERE KeyCol = @Key; ELSE RAISERROR ('Tits up', 16, 1); END CATCH IF @StartTranCount = 0 COMMIT TRAN; END TRY BEGIN CATCH IF @StartTranCount = 0 AND XACT_STATE() <> 0 ROLLBACK TRAN; RETURN -1 END CATCH --Without this, we'll send -4 if we hit the UPDATE CATCH block above --RETURN 0 GO --Run with RETURN 0 and fish line commented out DECLARE @rtn int EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar />', 'testing' SELECT @rtn; SELECT * FROM dbo.foo DECLARE @rtn int EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar2 />', 'testing2' --updated OK but we get @rtn = -4 SELECT @rtn; SELECT * FROM dbo.foo --uncomment fish line DECLARE @rtn int EXEC @rtn = dbo.bar 'abcdefghijkl', '<foobar />', 'testing' --Hit outer CATCH, @rtn = -1 as expected SELECT @rtn; SELECT * FROM dbo.foo

    Read the article

  • str_replace() and strpos() for arrays?

    - by Josh
    I'm working with an array of data in which I've changed the names of some array keys, but I want the data to stay the same basically... Basically I want to be able to keep the data that's in the array stored in the DB, but I want to update the array key names associated with it. Previously the array would have looked like this: $var_opts['services'] = array('foo-1', 'foo-2', 'foo-3', 'foo-4'); Now the array keys are no longer prefixed with "foo", but rather with "bar" instead. So how can I update the array variable to get rid of the "foos" and replace them with "bars" instead? Like so: $var_opts['services'] = array('bar-1', 'bar-2', 'bar-3', 'bar-4'); I'm already using if(isset($var_opts['services']['foo-1'])) { unset($var_opts['services']['foo-1']); } to get rid of the foos... I just need to figure out how to replace each foo with a bar. I thought I would use str_replace on the whole array, but to my dismay it only works on strings (go figure, heh) and not arrays.
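    If the goal is to rename the keys while keeping their stored values, one way is simply to rebuild the sub-array; a minimal sketch (it assumes every old key starts with the "foo-" prefix):

    ```php
    <?php
    $renamed = array();
    foreach ($var_opts['services'] as $key => $value) {
        // turn 'foo-1' into 'bar-1', keep the stored value untouched
        $renamed[str_replace('foo-', 'bar-', $key)] = $value;
    }
    $var_opts['services'] = $renamed;
    ```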

    Read the article

  • Inline form fields with labels placed on top

    - by rcourtna
    I can't believe I'm having to ask this, but I'm at my wit's end. I'm trying to display 2 form fields inline, but with the label for each field on top. In ascii art: Label 1 Label 2 --------- --------- | | | | --------- --------- Should be pretty simple. <label for=foo>Label 1</label> <input type=text name=foo id=foo /> <label for=bar>Label 2</label> <input type=text name=bar id=bar /> This will get me: --------- --------- Label 1 | | Label 2 | | --------- --------- To get the labels on top of the boxes, I add display:block: <label for=foo style="display:block">Label 1</label> <input type=text name=foo id=foo /> <label for=bar style="display:block">Label 2</label> <input type=text name=bar id=bar /> After I do this, the labels are where I want them, but the form fields are no longer inline: Label 1 --------- | | --------- Label 2 --------- | | --------- I've been unable to find a way to wrap my html so the fields display inline. Can anyone help?
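    One common fix is to wrap each label/field pair and make the wrapper inline-block, so the pairs sit side by side while each label still stacks above its own input; a minimal sketch:

    ```html
    <div style="display:inline-block">
      <label for="foo" style="display:block">Label 1</label>
      <input type="text" name="foo" id="foo" />
    </div>
    <div style="display:inline-block">
      <label for="bar" style="display:block">Label 2</label>
      <input type="text" name="bar" id="bar" />
    </div>
    ```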

    Read the article

  • Problem updating collection using JPA

    - by FarmBoy
    I have an entity class Foo foo that contains Collection<Bar> bars. I've tried a variety of ways, but I'm unable to successfully update my collection. One attempt: foo = em.find(key); foo.getBars().clear(); foo.setBars(bars); em.flush(); // commit, etc. This appends the new collection to the old one. Another attempt: foo = em.find(key); bars = foo.getBars(); for (Bar bar : bars) { em.remove(bar); } em.flush(); At this point, I thought I could add the new collection, but I find that the entity foo has been wiped out. Here are some annotations. In Foo: @OneToMany(cascade = { CascadeType.ALL }, mappedBy = "foo") private List<Bar> bars; In Bar: @ManyToOne(optional = false, cascade = { CascadeType.ALL }) @JoinColumn(name = "FOO_ID") private Foo foo; Has anyone else had trouble with this? Any ideas?
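    One pattern that usually works is to mutate the managed collection in place and keep both sides of the bidirectional relationship in sync; a hedged sketch (it assumes orphanRemoval = true is added to the @OneToMany if the old Bar rows should actually be deleted):

    ```java
    Foo foo = em.find(Foo.class, key);
    foo.getBars().clear();            // modify the managed collection, don't replace it

    for (Bar bar : newBars) {
        bar.setFoo(foo);              // Bar is the owning side: it holds the FOO_ID column
        foo.getBars().add(bar);
    }
    // flush/commit as before; with orphanRemoval = true the cleared Bars are removed
    ```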

    Read the article

  • Objective-C Objects Having Each Other as Properties

    - by mwt
    Let's say we have two objects. Furthermore, let's assume that they really have no reason to exist without each other. So we aren't too worried about re-usability. Is there anything wrong with them "knowing about" each other? Meaning, can each one have the other as a property? Is it OK to do something like this in a mythical third class: Foo *f = [[Foo alloc] init]; self.foo = f; [f release]; Bar *b = [[Bar alloc] init]; self.bar = b; [b release]; foo.bar = bar; bar.foo = foo; ...so that they can then call methods on each other? Instead of doing this, I'm usually using messaging, etc., but sometimes this seems like it might be a tidier solution. I hardly ever see it in example code (maybe never), so I've shied away from doing it. Can somebody set me straight on this? Thanks.
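    The usual caveat with this setup is a retain cycle: if both properties retain their target, neither object is ever deallocated. A minimal sketch of the common pre-ARC convention, where only one side retains and the back-reference is a plain assign:

    ```objc
    #import <Foundation/Foundation.h>

    @class Bar;

    // Foo retains Bar...
    @interface Foo : NSObject
    @property (nonatomic, retain) Bar *bar;
    @end

    // ...while Bar only points back without retaining, so the pair can be released
    @interface Bar : NSObject
    @property (nonatomic, assign) Foo *foo;
    @end
    ```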

    Read the article

  • How to configure multiple mappings using FluentHibernate?

    - by chris.baglieri
    First time rocking it with NHibernate/Fluent so apologies in advance if this is a naive question. I have a set of Models I want to map. When I create my session factory I'm trying to do all mappings at once. I am not using auto-mapping (though I may if what I am trying to do ends up being more painful than it ought to be). The problem I am running into is that it seems only the top map is taking. Given the code snippet below and running a unit test that attempts to save 'bar', it fails and checking the logs I see NHibernate is trying to save a bar entity to the foo table. While I suspect it's my mappings it could be something else that I am simply overlooking. Code that creates the session factory (note I've also tried separate calls into .Mappings): Fluently.Configure().Database(MsSqlConfiguration.MsSql2008 .ConnectionString(c => c .Server(@"localhost\SQLEXPRESS") .Database("foo") .Username("foo") .Password("foo"))) .Mappings(m => { m.FluentMappings.AddFromAssemblyOf<FooMap>() .Conventions.Add(FluentNHibernate.Conventions.Helpers .Table.Is(x => "foos")); m.FluentMappings.AddFromAssemblyOf<BarMap>() .Conventions.Add(FluentNHibernate.Conventions.Helpers .Table.Is(x => "bars")); }) .BuildSessionFactory(); Unit test snippet: using (var session = Data.SessionHelper.SessionFactory.OpenSession()) { var bar = new Bar(); session.Save(bar); Assert.NotNull(bar.Id); }

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem that is apparently not easy to solve with SQL, at least for me: Assume the following table structure (should work in SQLite, PostgreSQL, MySQL): CREATE TABLE a ( a_id INT PRIMARY KEY ); INSERT INTO a (a_id) VALUES (1), (2), (3), (4); CREATE TABLE b ( b_id INT PRIMARY KEY, a_id INT, name VARCHAR(255) NOT NULL ); INSERT INTO b (b_id, a_id, name) VALUES (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'), (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo'); Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to: a_id = 3 is missing name "bar" a_id = 4 is missing names "foo" and "bar" Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be: SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names FROM a LEFT JOIN b USING( a_id ) GROUP BY a.a_id HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%'; I have yet to measure the performance tomorrow, but I severely doubt it's any fast for tens of thousands of entries in a and thrice as many in b. Furthermore, we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. Thanks, good night.
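    A portable alternative to GROUP_CONCAT is to join against the list of required names and keep the combinations that have no match in b; this sketch should run unchanged on SQLite, PostgreSQL, and MySQL:

    ```sql
    SELECT a.a_id, req.name AS missing_name
    FROM a
    CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
    LEFT JOIN b ON b.a_id = a.a_id AND b.name = req.name
    WHERE b.b_id IS NULL;
    ```

    An index on b(a_id, name) should keep the LEFT JOIN cheap even for large tables.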

    Read the article

  • php / phpDoc - @return instance of $this class ?

    - by searbe
    How do I mark a method as "returns an instance of the current class" in my phpDoc? In the following example my IDE (NetBeans) will see that setSomething always returns a foo object. But that's not true if I extend the object - it'll return $this, which in the second example is a bar object, not a foo object. class foo { protected $_value = null; /** * Set something * * @param string $value the value * @return foo */ public function setSomething($value) { $this->_value = $value; return $this; } } $foo = new foo(); $out = $foo->setSomething(); So fine - setSomething returns a foo - but in the following example, it returns a bar: class bar extends foo { public function someOtherMethod(){} } $bar = new bar(); $out = $bar->setSomething(); $out->someOtherMethod(); // <-- Here, NetBeans will think $out // is a foo, so doesn't see this other // method in $out's code-completion ... it'd be great to solve this as for me, code completion is a massive speed-boost. Anyone got a clever trick, or even better, a proper way to document this with phpDoc?
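    Later phpDoc conventions accept "$this" (or "static") as the return type for exactly this fluent-interface case; a sketch, with the caveat that whether code completion honours it depends on the IDE version:

    ```php
    <?php
    class foo
    {
        protected $_value = null;

        /**
         * Set something.
         *
         * @param  string $value the value
         * @return $this  fluent interface: returns the instance it was called on
         */
        public function setSomething($value)
        {
            $this->_value = $value;
            return $this;
        }
    }
    ```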

    Read the article

  • Can AutoMapper create a map for an interface and then map with a derived type?

    - by TheCloudlessSky
    I have an interface IFoo: public interface IFoo { int Id { get; set; } } And then a concrete implementation: public class Bar : IFoo { public int Id { get; set; } public string A { get; set; } } public class Baz : IFoo { public int Id { get; set; } public string B { get; set; } } I'd like to be able to map all IFoo but specifying their derived type instead: Mapper.CreateMap<int, IFoo>().AfterMap((id, foo) => foo.Id = id); And then map (without explicitly creating maps for Bar and Baz): var bar = Mapper.Map<int, Bar>(123); // bar.Id == 123 var baz = Mapper.Map<int, Baz>(456); // baz.Id == 456 But this doesn't work in 1.1. I know I could specify all Bar and Baz but if there are 20 of these, I'd like to not have to manage them and rather just have what I did above for creating the map. Is this possible?

    Read the article

  • "render as JSON" is display JSON as text instead of returning it to AJAX call as expected

    - by typoknig
    I'm navigating to the index action of MyController. Somewhere on the index page I'm making an AJAX call back to myAction in MyController. I expect the myAction action to return some data as JSON to my AJAX call so I can do something with the data client side, but instead of returning the data as JSON like I want, the data is being displayed as text. Example of my Grails controller: class MyController { def index() { render( view: "myView" ) } def myAction() { def mapOfStuff = [ "foo": "foo", "bar": "bar" ] render mapOfStuff as JSON } } Example of my JavaScript: $( function() { function callMyAction() { $.ajax({ dataType: 'json', url: base_url + '/myController/myAction', success: function( data ) { $(function() { if( data.foo ) { alert( data.foo ); } if( data.bar ) { alert( data.bar ); } }); } }); } }); What I expect is that my page will render, then my JavaScript will be called, then two alerts will display. Instead the JSON array is displayed as text in my browser window: {"foo":"foo","bar":"bar"} At this point the last segment of the URL in my address bar is myAction and not index. Now if I manually enter the URL of the index page and press refresh, all works as expected. I have half a dozen AJAX calls I do the exact same way and none of them are having problems. What is the deal here? UPDATE: I have noticed something. When I set a break point in the index action of MyController and another one in the myAction action, the break point in myAction gets hit BEFORE the break point in index, even though I am navigating to the index. This is obviously closer to the root cause of my problem, but why is it happening?

    Read the article

  • clear explanation sought: throw() and stack unwinding

    - by Jerry Gagelman
    I'm not a programmer but have learned a lot watching others. I am writing wrapper classes to simplify things with a really technical API that I'm working with. Its routines return error codes, and I have a function that converts those to strings: static const char* LibErrString(int errno); For uniformity I decided to have members of my classes throw an exception when an error is encountered. I created a class: struct MyExcept : public std::exception { const char* errstr_; const char* what() const throw() {return errstr_;} MyExcept(const char* errstr) : errstr_(errstr) {} }; Then, in one of my classes: class Foo { public: void bar() { int err = SomeAPIRoutine(...); if (err != SUCCESS) throw MyExcept(LibErrString(err)); // otherwise... } }; The whole thing works perfectly: if SomeAPIRoutine returns an error, a try-catch block around the call to Foo::bar catches a standard exception with the correct error string in what(). Then I wanted the member to give more information: void Foo::bar() { char adieu[128]; int err = SomeAPIRoutine(...); if (err != SUCCESS) { std::strcpy(adieu,"In Foo::bar... "); std::strcat(adieu,LibErrString(err)); throw MyExcept((const char*)adieu); } // otherwise... } However, when SomeAPIRoutine returns an error, the what() string returned by the exception contains only garbage. It occurred to me that the problem could be due to adieu going out of scope once the throw is called. I changed the code by moving adieu out of the member definition and making it an attribute of the class Foo. After this, the whole thing worked perfectly: a try-catch block around a call to Foo::bar that catches an exception has the correct (expanded) string in what(). Finally, my question: what exactly is popped off the stack (in sequence) when the exception is thrown in the if-block and the stack "unwinds"? As I mentioned above, I'm a mathematician, not a programmer. I could use a really lucid explanation of what goes onto the stack (in sequence) when this C++ gets converted into running machine code.
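    The garbage comes from adieu being a local array: the pointer stored in the exception keeps pointing at stack memory that is reused once bar() exits during unwinding. Having the exception own a copy of the text avoids the problem entirely; a minimal sketch using std::string (std::runtime_error already stores one), with the helper function name being illustrative:

    ```cpp
    #include <stdexcept>
    #include <string>

    const char* LibErrString(int err);   // from the wrapped API, as in the question

    struct MyExcept : public std::runtime_error {
        explicit MyExcept(const std::string& msg) : std::runtime_error(msg) {}
    };

    void report(int err) {
        // the text is copied into the exception object, so nothing dangles
        throw MyExcept(std::string("In Foo::bar... ") + LibErrString(err));
    }
    ```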

    Read the article

  • Remove Clutter from the Opera Speed Dial Page

    - by Asian Angel
    Do you want to clean up the Speed Dial page in Opera so that only the thumbnails are visible? Today we show you a couple of tweaks that will make it happen. Speed Dial Page The search bar and text at the bottom take up room and add clutter to the look and feel of Opera’s Speed Dial page. Changing the Settings Two small tweaks to the config settings will clean it all up. To get started type opera:config into the address bar and press enter. Type “speed” into the quick find bar and look for the Speed Dial State entry. Change the 1 to 2 and click save. You will see the following message concerning the changes…click OK. Next type “search” into the quick find bar and look for the Speed Dial Search Type entry. Remove all of the text in the blank and click save. Once again you will see a message about the latest change that you have made. At this point you may need to restart Opera for both changes to take full effect. There will be a noticeable difference in how the Speed Dial page looks afterwards and is much cleaner without the Search bar and text field. You will also still be able to access the right click context menu just like before. Conclusion If you have been looking to get a cleaner and less cluttered Speed Dial page in Opera, then these two little hacks will get the job done!

    Read the article

  • JOGL Double Buffering

    - by Bar
    What is the proper way to implement double buffering in JOGL (Java OpenGL)? I am trying to do that with the following code: ... /** Creating canvas. */ GLCapabilities capabilities = new GLCapabilities(); capabilities.setDoubleBuffered(true); GLCanvas canvas = new GLCanvas(capabilities); ... /** Function display(…), which draws a white Rectangle on a black background. */ public void display(GLAutoDrawable drawable) { drawable.swapBuffers(); gl = drawable.getGL(); gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT); gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f); gl.glColor3f(1.0f, 1.0f, 1.0f); gl.glBegin(GL.GL_POLYGON); gl.glVertex2f(-0.5f, -0.5f); gl.glVertex2f(-0.5f, 0.5f); gl.glVertex2f(0.5f, 0.5f); gl.glVertex2f(0.5f, -0.5f); gl.glEnd(); } ... /** Other functions are empty. */ Questions: — When I'm resizing the window, I usually get flickering. As I see it, I have a mistake in my double buffering implementation. — I have a doubt about where I must place the swapBuffers call — before or after (as many sources say) the drawing? As you noticed, I call drawable.swapBuffers() before drawing the rectangle. Otherwise, I'm getting noise after a resize. So what is an appropriate way to do that? Including or omitting the line capabilities.setDoubleBuffered(true) does not have any effect.
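    For what it's worth, GLCanvas swaps the buffers for you after display() unless auto-swap has been turned off, so calling swapBuffers() by hand (and before drawing) tends to show the previous frame's back buffer and produce exactly this kind of flicker. A sketch of display() with the manual swap removed (it assumes the default auto-swap mode, see GLAutoDrawable.setAutoSwapBufferMode):

    ```java
    public void display(GLAutoDrawable drawable) {
        GL gl = drawable.getGL();
        gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);

        gl.glColor3f(1.0f, 1.0f, 1.0f);
        gl.glBegin(GL.GL_POLYGON);
        gl.glVertex2f(-0.5f, -0.5f);
        gl.glVertex2f(-0.5f,  0.5f);
        gl.glVertex2f( 0.5f,  0.5f);
        gl.glVertex2f( 0.5f, -0.5f);
        gl.glEnd();
        // no drawable.swapBuffers() here: GLCanvas swaps automatically after display()
    }
    ```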

    Read the article

  • UIView Animation: PartialCurl ...bug during rotate?

    - by itai alter
    Hello all, a short question. I've created an app for the iPad, much like a utility app for the iPhone (one mainView, one flipSideView). The animation between them is UIModalTransitionStylePartialCurl. shouldAutorotateToInterfaceOrientation is returning YES. If I rotate the device BEFORE entering the FlipSide, everything is okay and the PartialCurl is displayed okay. But if I enter the FlipSide and then rotate the device, while the UIElements do rotate and position themselves just fine, the actual "page curl" stays with its initial orientation. it just won't budge :) Is it a known issue? am I doing something wrong? Thanks!

    Read the article

  • Toolbar items in sub-nib

    - by roe
    This question has probably been asked before, but my google-fu must be inferior to everybody else's, cause I can't figure this out. I'm playing around with the iPhone SDK, and I'm building a concept app I've been thinking about. If we have a look at the skeleton generated with a navigation based app, the MainWindow.xib contains a navigation controller, and within that a root-view controller (and a navigation bar and toolbar if you play around with it a little). The root-view controller has the RootViewController-nib associated with it, which loads the table-view. So far so good. To add content to the tool bar and to the navigation bar, I'm supposed to add those to in the hierarchy below the Root View Controller (which works, no problem). However, what I can't figure out is, this is all still within the MainWindow.xib (or, at runtime, nib). How would I define a xib in order for it to pick up tool bar items from that? I want to do (the equivalent of, just reusing the name here) RootViewController *controller = [[RootViewController alloc] initWithNibName:nil bundle:nil]; [self.navigationController pushViewController:controller animated:YES]; [controller release]; and have the navigation controller pick-up on the tool bar items defined in that nib. The logical place to put it would be in the hierarchy under File's Owner (which is of type RootViewController), but it doesn't appear to be possible. Currently, I'm assigning these (navigationItem and toolbarItems) manually in the viewDidLoad method, or define them in the MainWindow.xib directly to be loaded when the app initializes. Any ideas? Edit I guess I'll try to explain with a picture. This is the Interface Builder of the main window, pretty much as it comes out of the wizard to create a navigation based project. I've added a toolbar item for clarity though. You can see the navigation controller, with a toolbar and a navigation bar, and the root view controller. Basically, the Root View Controller has a bar button item and a navigation item as you can see. The thing is, it's also got a nib associated with it, which, when loaded will instantiate a view, and assign it to the view outlet of the controller (which in that nib is File's Owner, of type RootViewController, as should be). How can I get the toolbar item, and the navigation item, into the other nib, the RootViewController.nib so I can remove them here. The RootViewController.nib adds everything else to the Root View Controller, why not these items? The background for this is that I want to simply instantiate RootViewController, initialize it with its own nib (i.e. initWithNibName:nil shown above), and push it onto the navigation controller, without having to add the navigation/toolbar items in coding (as I do it now).

    Read the article

  • How to honor/inherit user's language settings in WinForm app

    - by msorens
    I have worked with globalization settings in the past but not within the .NET environment, which is the topic of this question. What I am seeing is most certainly due to knowledge I have yet to learn so I would appreciate illumination on the following. Setup: My default language setting is English (en-us specifically). I added a second language (Danish) on my development system (WinXP) and then opened the language bar so I could select either at will. I selected Danish on the language bar then opened Notepad and found the language reverted to English on the language bar. I understand that the language setting is per application, so it seemed that Notepad set the default back to English. (I found that strange since Windows and thus Notepad is used all over the world.) Closing Notepad returned the setting on the language bar to Danish. I then launched my open custom WinForm application--which I know does not set the language--and it also reverted from English to Danish when opened, then back to Danish when terminated! Question #1A: How do I get my WinForm application upon launch to inherit the current setting of the language bar? My experiment seems to indicate that each application starts with the system default and requires the user to manually change it once the app is running--this would seem to be a major inconvenience for anyone that wants to work with more than one language! Question #1B: If one must, in fact, set the language manually in a multi-language scenario, how do I change my default system language (e.g. to Danish) so I can test my app's launch in another language? I added a display of the current language in my application for this next experiment. Specifically I set a MouseEnter handler on a label that set its tooltip to CultureInfo.CurrentCulture.Name so each time I mouse over I thought I should see the current language setting. Since setting the language before I launch my app did not work, I launched it then set the language to Danish. I found that some things (like typing in a TextBox) did honor this Danish setting. But mousing over the instrumented label still showed en-us! Question #2A: Why does CultureInfo.CurrentCulture.Name not reflect the change from my language bar while other parts of my app seem to recognize the change? (Trying CultureInfo.CurrentUICulture.Name produced the same result.) Question #2B: Is there an event that fires upon changes on the language bar so I could recognize within my app when the language setting changes?
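    On questions #2A/#2B: the language bar changes the keyboard input language, which WinForms exposes through InputLanguage rather than CultureInfo.CurrentCulture (the latter stays at the thread's culture, hence the en-us tooltip). A hedged C# sketch of reading the current input language and reacting when it changes:

    ```csharp
    using System.Windows.Forms;

    public class MainForm : Form
    {
        public MainForm()
        {
            // current language-bar selection for this application
            Text = InputLanguage.CurrentInputLanguage.Culture.Name;

            // raised when the user switches languages while this form is active
            InputLanguageChanged += (sender, e) =>
                Text = e.InputLanguage.Culture.Name;
        }
    }
    ```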

    Read the article

  • Dragging a Sprite (Cocos2D) while Chipmunk is simulating

    - by itai alter
    Hello all! I have a simple project built with Cocos2D and Chipmunk. So far it's just a Ball (body, shape & sprite) bouncing on the Ground (a static line segment at the bottom of the screen). I implemented the ccTouchesBegan/Moved/Ended methods to drag the ball around. I've tried both: cpBodySlew(ballBody, touchPoint, 1.0/60.0f); and ballBody->p = cgPointMake(touchPoint.x,touchPoint.y); and while the Ball does follow my dragging, it's still being affected by gravity and it tries to go down (which causes velocity problems and others). Does anyone know of the preferred way to Drag an active Body while the physics simulation is going on? Do I need somehow to stop the simulation and turn it back on afterwards? Thanks!

    Read the article

  • $1 vs \1 in Perl regex substitutions

    - by Mr Foo Bar
    I'm debugging some code and wondered if there is any practical difference between $1 and \1 in Perl regex substitutions. For example: my $package_name = "Some::Package::ButNotThis"; $package_name =~ s{^(\w+::\w+)}{$1}; print $package_name; # Some::Package The following line seems functionally equivalent: $package_name =~ s{^(\w+::\w+)}{\1}; Are there subtle differences between these two statements? Do they behave differently in different versions of Perl?
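    In the replacement part the two forms usually interpolate the same capture, but $1 is the documented one; with warnings enabled, perl typically flags the backslash form ("\1 better written as $1"). A small sketch to see it, with a trailing .* added so the substitution actually truncates the string:

    ```perl
    use strict;
    use warnings;

    my $name = "Some::Package::ButNotThis";

    (my $with_dollar    = $name) =~ s{^(\w+::\w+).*}{$1};    # preferred form
    (my $with_backslash = $name) =~ s{^(\w+::\w+).*}{\1};    # works, but warns

    print "$with_dollar\n$with_backslash\n";                 # both print "Some::Package"
    ```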

    Read the article
