Search Results

Search found 24201 results on 969 pages for 'andrew case'.

  • What are the elegant ways to do MixIns in Python?

    - by Slava Vishnyakov
    I need to find an elegant way to do 2 kinds of MixIns. First: class A(object): def method1(self): do_something() Now, a MixInClass should make method1 do this: do_other() - A.method1() - do_smth_else() - i.e. basically "wrap" the older function. I'm pretty sure there must exist a good solution to this. Second: class B(object): def method1(self): do_something() do_more() In this case, I want MixInClass2 to be able to inject itself between do_something() and do_more(), i.e.: do_something() - MixIn.method1 - do_more(). I understand that this would probably require modifying class B - that's ok, I'm just looking for the simplest ways to achieve this. These are pretty trivial problems and I actually solved them, but my solution is tainted. The first one I solved by using self._old_method1 = self.method1; self.method1 = self._new_method1; and writing a _new_method1() that calls _old_method1(). Problem: multiple MixIns will all rename to _old_method1, and it is inelegant. The second one was solved by creating a dummy method call_mixin(self): pass, injecting it between the calls, and defining self.call_mixin(). Again inelegant, and it will break with multiple MixIns. Any ideas?
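
    For the first, "wrap the older function" case, one common Python idiom is cooperative super() calls: a mixin placed before the base class in the MRO runs its own code before and after delegating to the original method. A minimal sketch under that approach (the class names and printed messages are illustrative, not taken from the question):

        class Base(object):
            def method1(self):
                print("do_something")

        class WrapMixin(object):
            # WrapMixin comes before Base in A's MRO, so its method1 runs first
            # and can add behaviour both before and after the original call.
            def method1(self):
                print("do_other (before)")
                super(WrapMixin, self).method1()
                print("do_smth_else (after)")

        class A(WrapMixin, Base):
            pass

        A().method1()
        # do_other (before)
        # do_something
        # do_smth_else (after)

    Stacking several such mixins works the same way - each super() call forwards to the next class in the MRO - so nothing ever has to be renamed to _old_method1.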

  • Stupid newbie C++ two-dimensional array problem.

    - by paulson scott
    I've no idea if this is too newbie or generic for Stack Overflow. Apologies if that's the case, I don't intend to waste time. I've just started working through C++ Primer Plus and I've hit a little stumbling block. This is probably super laughable but: for (int year = 0; year < YEARS; year++) { cout << "Year " << year + 1 << ":"; for (int month = 0; month < MONTHS; month++) { absoluteTotal = (yearlyTotal[year][year] += sales[year][month]); } cout << yearlyTotal[year][year] << endl; } cout << "The total number of books sold over a period of " << YEARS << " years is: " << absoluteTotal << endl; I wish to display the total of all 3 years. The rest of the code works fine: input is fine, individual yearly output is fine, but I just can't get the 3 years added together for one final total. I did have the total working at one point, but I didn't have the individual totals working. I messed with it and reversed the situation. I've been messing with it for God knows how long. Any ideas, guys? Sorry if this isn't the place!
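
    Leaving the C++ aside, the bookkeeping being attempted is one subtotal per year plus a running grand total that is never reset. A tiny sketch of just that accumulation logic, in Python with made-up sales figures (not an answer to the indexing itself):

        # Made-up data: 3 years x 12 months of sales.
        sales = [
            [12, 15, 10, 8, 9, 11, 14, 13, 10, 9, 12, 16],
            [14, 12, 11, 9, 10, 13, 15, 12, 11, 10, 13, 18],
            [16, 14, 12, 10, 11, 15, 17, 14, 12, 11, 15, 20],
        ]

        absolute_total = 0
        for year, months in enumerate(sales, start=1):
            yearly_total = sum(months)       # fresh subtotal for each year
            absolute_total += yearly_total   # keeps growing across all years
            print("Year %d: %d" % (year, yearly_total))

        print("Total over %d years: %d" % (len(sales), absolute_total))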

  • Unusual heap size limitations in VS2003 C++

    - by Shane MacLaughlin
    I have a C++ app that uses large arrays of data, and have noticed while testing that it is running out of memory while there is still plenty of memory available. I have reduced the code to a sample test case as follows: void MemTest() { size_t Size = 500*1024*1024; // 512mb if (Size > _HEAP_MAXREQ) TRACE("Invalid Size"); void * mem = malloc(Size); if (mem == NULL) TRACE("allocation failed"); } If I create a new MFC project, include this function, and run it from InitInstance, it works fine in debug mode (memory allocated as expected), yet fails in release mode (malloc returns NULL). Single stepping through release into the C run-times, my function gets inlined and I get the following // malloc.c void * __cdecl _malloc_base (size_t size) { void *res = _nh_malloc_base(size, _newmode); RTCCALLBACK(_RTC_Allocate_hook, (res, size, 0)); return res; } Calling _nh_malloc_base void * __cdecl _nh_malloc_base (size_t size, int nhFlag) { void * pvReturn; // validate size if (size > _HEAP_MAXREQ) return NULL; ... And (size > _HEAP_MAXREQ) returns true, and hence my memory doesn't get allocated. Putting a watch on size comes back with the expected 512MB, which suggests the program is linking into a different run-time library with a much smaller _HEAP_MAXREQ. Grepping the VC++ folders for _HEAP_MAXREQ shows the expected 0xFFFFFFE0, so I can't figure out what is happening here. Anyone know of any CRT changes or versions that would cause this problem, or am I missing something way more obvious?

  • What file format can I use to output a formatted text file straight from a program without having the markup be too complicated?

    - by Matt
    Premise: I am parsing a file that is quite nearly XML, but not quite. From this file I would like to extract data and output it in a file that a user could open up in some program and read. To make the data reasonable, I would almost certainly need to format the text. In case it matters, I will probably be using Java to write the program. Problem: I cannot find a file format that supports formatting without having terribly complex rules and encoding problems. Attempts: I looked into a basic .txt extension first, but it does not have enough formatting advantage. I then tried a .rtf extension, but the rules for outputting text seem to be terribly complicated. It was then suggested that I use XML, but I do not understand how this file would be viewed. This appears to be probably the best solution, but I don't understand much about it. Perhaps somebody could shed some light here. In Other Words: Could somebody suggest an easy-to-use file format and/or shed some light on how to use XML for text formatting and viewing?
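
    To shed a little light on the XML route: XML stores the structure of the data, not its presentation, so "viewing" it formatted means either pairing the file with a stylesheet (XSLT or CSS) or feeding it to a tool that knows the schema; opened raw in a browser it just shows as a collapsible tree. A small illustrative sketch of emitting such a file (Python standard library here purely for brevity; the element names and data are made up):

        import xml.etree.ElementTree as ET

        # Made-up extracted data.
        records = [("alpha", 3), ("beta", 7)]

        root = ET.Element("report")
        for name, value in records:
            item = ET.SubElement(root, "item", attrib={"name": name})
            item.text = str(value)

        # Writes a well-formed XML file; a stylesheet or viewer decides the formatting.
        ET.ElementTree(root).write("report.xml", encoding="utf-8", xml_declaration=True)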

  • How to calculate the normal of points on a 3D cubic Bézier curve given normals for its start and end points?

    - by Robert
    I'm trying to render a "3D ribbon" using a single 3D cubic Bézier curve to describe it (the width of the ribbon is some constant). The first and last control points have a normal vector associated with them (which are always perpendicular to the tangents at those points, and describe the surface normal of the ribbon at those points), and I'm trying to smoothly interpolate the normal vector over the course of the curve. For example, given a curve which forms the letter 'C', with the first and last control points both having surface normals pointing upwards, the ribbon should start flat, parallel to the ground, slowly turn, and then end flat again, facing the same way as the first control point. To do this "smoothly", it would have to face outwards half-way through the curve. At the moment (for this case), I've only been able to get all the surfaces facing upwards (and not outwards in the middle), which creates an ugly transition in the middle. It's quite hard to explain, I've attached some images below of this example with what it currently looks like (all surfaces facing upwards, sharp flip in the middle) and what it should look like (smooth transition, surfaces slowly rotate round). Silver faces represent the front, black faces the back. Incorrect, what it currently looks like: Correct, what it should look like: All I really need is to be able to calculate this "hybrid normal vector" for any point on the 3D cubic bézier curve, and I can generate the polygons no problem, but I can't work out how to get them to smoothly rotate round as depicted. Completely stuck as to how to proceed!
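
    One approach that gives the smooth rotation described above is a rotation-minimising frame: sample the curve, parallel-transport the start normal from sample to sample so it can never flip, then spread whatever angular mismatch remains against the end normal as a gradual twist along the whole curve. The sketch below is a from-scratch illustration in plain Python (3-tuples as vectors, every helper name made up), not a library routine:

        import math

        def sub(a, b):    return tuple(x - y for x, y in zip(a, b))
        def scale(a, s):  return tuple(x * s for x in a)
        def dot(a, b):    return sum(x * y for x, y in zip(a, b))
        def cross(a, b):  return (a[1]*b[2] - a[2]*b[1],
                                   a[2]*b[0] - a[0]*b[2],
                                   a[0]*b[1] - a[1]*b[0])
        def normalize(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

        def bezier_tangent(p0, p1, p2, p3, t):
            # Unit tangent: derivative of the cubic Bezier at parameter t.
            u = 1.0 - t
            terms = (scale(sub(p1, p0), 3 * u * u),
                     scale(sub(p2, p1), 6 * u * t),
                     scale(sub(p3, p2), 3 * t * t))
            return normalize(tuple(sum(v[i] for v in terms) for i in range(3)))

        def rotate_about(v, axis, angle):
            # Rodrigues rotation of v about a unit axis.
            c, s = math.cos(angle), math.sin(angle)
            cv = cross(axis, v)
            return tuple(v[i] * c + cv[i] * s + axis[i] * dot(axis, v) * (1 - c)
                         for i in range(3))

        def ribbon_normals(p0, p1, p2, p3, n_start, n_end, steps=64):
            ts = [i / float(steps) for i in range(steps + 1)]
            tangents = [bezier_tangent(p0, p1, p2, p3, t) for t in ts]
            # Parallel transport: re-project the previous normal so it stays
            # perpendicular to each new tangent without sudden flips.
            normals = [normalize(sub(n_start, scale(tangents[0], dot(n_start, tangents[0]))))]
            for tg in tangents[1:]:
                prev = normals[-1]
                normals.append(normalize(sub(prev, scale(tg, dot(prev, tg)))))
            # Twist correction: measure how far the transported frame ends up
            # from the desired end normal, then distribute that twist evenly.
            target = normalize(sub(n_end, scale(tangents[-1], dot(n_end, tangents[-1]))))
            angle = math.acos(max(-1.0, min(1.0, dot(normals[-1], target))))
            if dot(cross(normals[-1], target), tangents[-1]) < 0:
                angle = -angle
            return [rotate_about(n, tg, angle * t)
                    for n, tg, t in zip(normals, tangents, ts)]

    Each sampled point can then be offset by plus or minus half the ribbon width along cross(tangent, normal) to build the quad strip.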

  • Simplification / optimization of GPS track

    - by GreyCat
    I've got a GPS track, produced by gpxlogger(1) (supplied as a client for gpsd). The GPS receiver updates its coordinates every 1 second; gpxlogger's logic is very simple: it writes down the location (lat, lon, ele) and a timestamp (time) received from GPS every n seconds (n = 3 in my case). After writing down several hours' worth of track, gpxlogger saves a several-megabyte-long GPX file that includes several thousand points. Afterwards, I try to plot this track on a map and use it with OpenLayers. It works, but several thousand points make using the map a sloppy and slow experience. I understand that having several thousand points is suboptimal. There are myriads of points that can be deleted without losing almost anything: when there are several points making up roughly a straight line and we're moving at the same constant speed between them, we can just keep the first and the last point and throw away everything else. I thought of using gpsbabel for such a track simplification / optimization job, but, alas, its simplification filter works only with routes, i.e. it analyzes only the geometrical shape of the path, without timestamps (i.e. not checking that the speed was roughly constant). Is there some ready-made utility / library / algorithm available to optimize tracks? Or maybe I'm missing some clever option with gpsbabel?
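
    The purely geometric half of that idea is the Ramer-Douglas-Peucker algorithm: recursively drop every point that stays within some tolerance of the chord joining the points that are kept. A minimal sketch (plain Python on already-projected x/y pairs; it deliberately ignores the constant-speed/timestamp refinement mentioned above):

        def rdp(points, tolerance):
            # Keep the endpoints; recurse on the point farthest from the chord.
            if len(points) < 3:
                return list(points)
            (x1, y1), (x2, y2) = points[0], points[-1]
            dx, dy = x2 - x1, y2 - y1
            chord = (dx * dx + dy * dy) ** 0.5 or 1e-12

            def dist(p):  # perpendicular distance from p to the chord
                return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / chord

            index, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                              key=lambda pair: pair[1])
            if dmax <= tolerance:
                return [points[0], points[-1]]
            return rdp(points[:index + 1], tolerance)[:-1] + rdp(points[index:], tolerance)

        # A nearly straight run collapses to its two endpoints.
        track = [(0, 0), (1, 0.01), (2, -0.01), (3, 0.02), (4, 0)]
        print(rdp(track, 0.1))   # [(0, 0), (4, 0)]

    A speed criterion could be layered on top by only collapsing runs whose point-to-point speeds (computed from the GPX timestamps) stay within some tolerance of each other.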

  • Linq2Sql: query - subquery optimisation

    - by Budda
    I have the following query: IList<InfrStadium> stadiums = (from sector in DbContext.sectors where sector.Type=typeValue select new InfrStadium(sector.TeamId) ).ToList(); and the InfrStadium class constructor: private InfrStadium(int teamId) { IList<Sector> teamSectors = (from sector in DbContext.sectors where sector.TeamId==teamId select sector) .ToList(); ... work with data } The current implementation performs 1+n queries, where n is the number of records fetched the first time. I want to optimize that. Here is another version I would love to write using the 'group' operator, like this: IList<InfrStadium> stadiums = (from sector in DbContext.sectors group sector by sector.TeamId into team_sectors select new InfrStadium(team_sectors.Key, team_sectors) ).ToList(); with the appropriate constructor: private InfrStadium(int iTeamId, IEnumerable<InfrStadiumSector> eSectors) { IList<Sector> teamSectors = eSectors.ToList(); ... work with data } But attempting to run the query causes the following error: Expression of type 'System.Int32' cannot be used for constructor parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' Question 1: Could you please explain what is wrong here? I don't understand why 'team_sectors' is applied as 'System.Int32'. I've tried to change the query a little (replacing IEnumerable with IQueryable): IList<InfrStadium> stadiums = (from sector in DbContext.sectors group sector by sector.TeamId into team_sectors select new InfrStadium(team_sectors.Key, team_sectors.AsQueryable()) ).ToList(); with the appropriate constructor: private InfrStadium(int iTeamId, IQueryable<InfrStadiumSector> eSectors) { IList<Sector> teamSectors = eSectors.ToList(); ... work with data } In this case I've received another but similar error: Expression of type 'System.Int32' cannot be used for parameter of type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' of method 'System.Linq.IQueryable`1[InfrStadiumSector] AsQueryable[InfrStadiumSector]' Question 2: Actually, the same question: I can't understand at all what is going on here... P.S. I have another idea to optimize the query (described here: Linq2Sql: query optimisation), but I would love to find a solution with one request to the DB.

  • Using Dispose on a Singleton to Cleanup Resources

    - by ImperialLion
    The question I have might be more to do with semantics than with the actual use of IDisposable. I am working on implementing a singleton class that is in charge of managing a database instance that is created during the execution of the application. When the application closes, this database should be deleted. Right now I have this delete being handled by a Cleanup() method of the singleton that the application calls when it is closing. As I was writing the documentation for Cleanup() it struck me that I was describing what a Dispose() method should be used for, i.e. cleaning up resources. I had originally not implemented IDisposable because it seemed out of place in my singleton: I didn't want anything to dispose the singleton itself. There isn't currently, but there might in the future be a reason for Cleanup() to be called while the singleton still needs to exist. I think I can include GC.SuppressFinalize(this); in the Dispose method to make this feasible. My question therefore is multi-parted: 1) Is implementing IDisposable on a singleton fundamentally a bad idea? 2) Am I just mixing semantics here by having a Cleanup() instead of a Dispose(), and since I'm disposing resources should I really use a Dispose()? 3) Will implementing Dispose() with GC.SuppressFinalize(this); make it so my singleton is not actually destroyed in the case where I want it to live on after a call to clean up the database?

  • How can I execute an ANTLR parser action for each item in a rule that can match more than one item?

    - by Chris Farmer
    I am trying to write an ANTLR parser rule that matches a list of things, and I want to write a parser action that can deal with each item in the list independently. Some example input for these rules is: $(A1 A2 A3) I'd like this to result in an evaluator that contains a list of three MyIdentEvaluator objects -- one for each of A1, A2, and A3. Here's a snippet of my grammar: my_list returns [IEvaluator e] : { $e = new MyListEvaluator(); } '$' LPAREN op=my_ident+ { /* want to do something here for each 'my_ident'. */ /* the following seems to see only the 'A3' my_ident */ $e.Add($op.e); } RPAREN ; my_ident returns [IEvaluator e] : IDENT { $e = new MyIdentEvaluator($IDENT.text); } ; I think my_ident is defined correctly, because I can see the three MyIdentEvaluators getting created as expected for my input string, but only the last my_ident ever gets added to the list (A3 in my example input). How can I best treat each of these elements independently, either through a grammar change or a parser action change? It also occurred to me that my vocabulary for these concepts is not what it should be, so if it looks like I'm misusing a term, I probably am. EDIT in response to Wayne's comment: I tried to use op+=my_ident+. In that case, the $op in my action becomes an IList (in C#) that contains Antlr.Runtime.Tree.CommonTree instances. It does give me one entry per matched token in $op, so I see my three matches, but I don't have the MyIdentEvaluator instances that I really want. I was hoping I could then find a rule attribute in the ANTLR docs that might help with this, but nothing seemed to help me get rid of this IList. Result... Based on chollida's answer, I ended up with this which works well: my_list returns [IEvaluator e] : { $e = new MyListEvaluator(); } '$' LPAREN (op=my_ident { $e.Add($op.e); } )+ RPAREN ; The Add method gets called for each match of my_ident.

  • What algorithm can I use to determine points within a semi-circle?

    - by khayman218
    I have a list of two-dimensional points and I want to obtain which of them fall within a semi-circle. Originally, the target shape was a rectangle aligned with the x and y axis. So the current algorithm sorts the pairs by their X coord and binary searches to the first one that could fall within the rectangle. Then it iterates over each point sequentially. It stops when it hits one that is beyond both the X and Y upper-bound of the target rectangle. This does not work for a semi-circle as you cannot determine an effective upper/lower x and y bounds for it. The semi-circle can have any orientation. Worst case, I will find the least value of a dimension (say x) in the semi-circle, binary search to the first point which is beyond it and then sequentially test the points until I get beyond the upper bound of that dimension. Basically testing an entire band's worth of points on the grid. The problem being this will end up checking many points which are not within the bounds.
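
    The membership test itself is cheap once a semi-circle is described by the centre of its flat edge, its radius, and a unit vector pointing from that edge towards the curved side: a point is inside if it is within the radius and on the curved-side half-plane. A small sketch of just that test (plain Python; the coarse pre-filtering band is left out):

        def in_semicircle(point, centre, radius, facing):
            # `facing` is the unit vector from the flat edge towards the arc.
            dx, dy = point[0] - centre[0], point[1] - centre[1]
            inside_circle = dx * dx + dy * dy <= radius * radius
            on_curved_side = dx * facing[0] + dy * facing[1] >= 0
            return inside_circle and on_curved_side

        # Radius-2 semicircle centred at the origin, bulging towards +x.
        print(in_semicircle((1, 0.5), (0, 0), 2, (1, 0)))    # True
        print(in_semicircle((-1, 0.5), (0, 0), 2, (1, 0)))   # False - behind the flat edge

    The bounding box of the semi-circle (centre plus or minus the radius, tightened on the flat side) can still drive the existing sort-and-scan pre-filter.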

  • Many-to-many and integrity when saving null

    - by user369759
    I have two classes, Software and Tag - they are related by ManyToMany. We CANNOT create a Software without attaching a Tag to it. I want to write a test which checks that: @Test public void createSoftwareWithNullTags() { List<Tag> tags = null; Software software = new Software("software1", "description1", tags); try { software.save(); fail("tags should not be null"); } catch (Exception ex) { // NEVER COME HERE !!! } } So, this test fails. I guess it is not a good test for this case - because it does not even try to SAVE data to the SOFTWARE_TAG table. Of course I could do the validation manually, but I want to implement it with Hibernate, using some annotation or something. Is it possible? Or how would you do this? My entities: @Entity public class Tag extends Model { public String title; public Tag(String title) { this.title = title; } @ManyToMany( cascade = {CascadeType.ALL}, mappedBy = "tags", targetEntity = Software.class ) public List<Software> softwares; } @Entity public class Software extends Model { public String title; public String description; @ManyToOne(optional = false) public Author author; @ManyToMany(cascade = CascadeType.ALL) @JoinTable( name = "SOFTWARE_TAG", joinColumns = @JoinColumn(name = "Software_id"), inverseJoinColumns = @JoinColumn(name = "Tag_id") ) public List<Tag> tags; public Software(String title, String description, Author author, List<Tag> tags) { this.title = title; this.description = description; this.author = author; this.tags = tags; } }

  • Spawning a thread in Python

    - by morpheous
    I have a series of 'tasks' that I would like to run in separate threads. The tasks are to be performed by separate modules, each containing the business logic for processing its task. Given a tuple of tasks, I would like to be able to spawn a new thread for each module as follows. from foobar import alice, bob, charles data = getWorkData() # these are enums (which I just found Python doesn't support natively) :( tasks = (alice, bob, charles) for task in tasks # Ok, just found out Python doesn't have a switch - @#$%! # yet another thing I'll need help with then ... switch case alice: #spawn thread here - how ? alice.spawnWorker(data) No prizes for guessing I am still thinking in C++. How can I write this in a Pythonic way using Pythonic 'enums' and 'switch'es, and be able to run a module in a new thread? Obviously, the modules will all have a class that is derived from an ABC (abstract base class) called Plugin. The spawnWorker() method will be declared on the Plugin interface and defined in the classes implemented in the various modules. Maybe there is a better (i.e. Pythonic) way of doing all this? I'd be interested in knowing.
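
    In idiomatic Python neither the enum nor the switch is needed: since every plugin exposes the same spawnWorker() method, the loop can just hand each task object to threading.Thread. A minimal sketch under that assumption (Alice and Bob stand in for the real modules):

        import threading

        class Plugin(object):                 # stand-in for the ABC mentioned above
            def spawnWorker(self, data):
                raise NotImplementedError

        class Alice(Plugin):
            def spawnWorker(self, data):
                print("alice processing", data)

        class Bob(Plugin):
            def spawnWorker(self, data):
                print("bob processing", data)

        def run_all(tasks, data):
            threads = [threading.Thread(target=task.spawnWorker, args=(data,))
                       for task in tasks]
            for t in threads:
                t.start()
            for t in threads:
                t.join()

        run_all((Alice(), Bob()), {"work": 42})

    Where a genuine dispatch on identity is still wanted, a plain dict mapping each task to a handler function is the usual Python substitute for a switch.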

  • Can I use foreign key restrictions to return meaningful UI errors with PHP

    - by Shane
    I want to start by saying that I am a big fan of using foreign keys and have a tendency to use them even on small projects to keep my database from being filled with orphaned data. On larger projects I end up with gobs of keys which end up covering upwards of 8 - 10 layers of data. I want to know if anyone could suggest a graceful way of handling 'expected errors' from the MySQL database in a way that I can construct meaningful messages for the end user. I will explain 'expected errors' with an example. Let's say I have a set of tables used for basic discussions: discussion, questions, responses, users. Hierarchically they would probably look something like this: -users --discussion ---questions ----responses When I attempt to delete a user the FKs will check discussions, and if any discussions exist the deletion is restricted; deleting a discussion checks questions, deleting questions checks responses. An 'expected error' in this case would be attempting to delete a user - unless they are newly created, I can anticipate that one or more foreign keys will fail, causing an error. What I WANT to do is to catch that error on deletion and be able to tell the end user something like 'We're sorry, but all discussions must be removed before you can delete this user...'. Now I know I can keep and maintain matching arrays in PHP and map specific errors to messages, but that is messy and prone to becoming stagnant, or I could manually run a set of selects prior to attempting the deletion, but then I am doing just as much work as without using FKs. Any help here would be greatly appreciated, or if I am just looking at this completely wrong then please let me know. On a side note, I generally use CodeIgniter for my application development, so if that would open up an avenue through that framework please consider that in your answers. Thanks in advance.

  • How to re-prompt after a trap return in bash?

    - by verbose
    I have a script that is supposed to trap SIGTERM and SIGTSTP. This is what I have in the main block: trap 'killHandling' TERM And in the function: killHandling () { echo received kill signal, ignoring return } ... and similar for SIGINT. The problem is one of user interface. The script prompts the user for some input, and if the SIGTERM or SIGINT occurs when the script is waiting for input, it's confusing. Here is the output in that case: Enter something: # SIGTERM received received kill signal, ignoring # shell waits at blank line for user input, user gets confused # user hits "return", which then gets read as blank input from the user # bad things happen because of the blank input I have definitely seen scripts which handle this more elegantly, like so: Enter something: # SIGTERM received received kill signal, ignoring Enter something: # re-prompts user for user input, user is not confused What is the mechanism used to accomplish the latter? Unfortunately I can't simply change my trap code to do the re-prompt as the script prompts the user for several things and what the prompt says is context-dependent. And there has to be a better way than writing context-dependent trap functions. I'd be very grateful for any pointers. Thanks!

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4xAMD Opteron 6348, 2.8 Ghz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel): // Compile with: gcc scaling.c -std=c99 -fopenmp -O3 #include <stdio.h> #include <stdint.h> int main(){ const uint64_t umin=1; const uint64_t umax=10000000000LL; double sum=0.; #pragma omp parallel for reduction(+:sum) for(uint64_t u=umin; u<umax; u++) sum+=1./u/u; printf("%e\n", sum); } I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s for the code to run with 1 thread. Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19~20x speedup and the ideal 48x speedup. To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program. What's preventing linear scaling with this simple program?

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow... I am using Dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestions for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be important. For instance I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java )

  • Is there a term for this concept, and does it exist in a static-typed language?

    - by Strilanc
    Recently I started noticing a repetition in some of my code. Of course, once you notice a repetition, it becomes grating. Which is why I'm asking this question. The idea is this: sometimes you write different versions of the same class: a raw version, a locked version, a read-only facade version, etc. These are common things to do to a class, but the translations are highly mechanical. Surround all the methods with lock acquires/releases, etc. In a dynamic language, you could write a function which did this to an instance of a class (eg. iterate over all the functions, replacing them with a version which acquires/releases a lock.). I think a good term for what I mean is 'reflected class'. You create a transformation which takes a class, and returns a modified-in-a-desired-way class. Synchronization is the easiest case, but there are others: make a class immutable [wrap methods so they clone, mutate the clone, and include it in the result], make a class readonly [assuming you can identify mutating methods], make a class appear to work with type A instead of type B, etc. The important part is that, in theory, these transformations make sense at compile-time. Even though an ActorModel<T> has methods which change depending on T, they depend on T in a specific way knowable at compile-time (ActorModel<T> methods would return a future of the original result type). I'm just wondering if this has been implemented in a language, and what it's called.
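
    The dynamic-language version alluded to above really is just a function from class to class. A small Python sketch of the "synchronized" transformation - it returns a subclass in which every public method of the argument class is wrapped in one shared lock (all names illustrative):

        import threading
        from functools import wraps

        def synchronized_class(cls):
            lock = threading.RLock()

            def locked(method):
                @wraps(method)
                def wrapper(*args, **kwargs):
                    with lock:
                        return method(*args, **kwargs)
                return wrapper

            overrides = {name: locked(attr)
                         for name, attr in vars(cls).items()
                         if callable(attr) and not name.startswith("_")}
            return type("Locked" + cls.__name__, (cls,), overrides)

        class Counter(object):
            def __init__(self):
                self.n = 0
            def bump(self):
                self.n += 1

        LockedCounter = synchronized_class(Counter)
        c = LockedCounter()
        c.bump()          # runs while holding the lock
        print(c.n)        # 1

    The immutable and read-only-facade variants have the same shape - wrap or drop methods and return a new type - which is exactly the transformation the question asks a static type system to express at compile time.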

  • Is there a Scala version of .irbrc or another way to define some default libraries for REPL use?

    - by Tom Morris
    I've written a little library that uses implicits to add functionality that one only needs when using the REPL in Scala. Ruby has libraries like this - for things like pretty printing, firing up text editors (like the interactive_editor gem which invokes Vim from irb - see this post), debuggers and the like. The library I am trying to write adds some methods to java.lang.Class and java.lang.reflect classes using the 'pimp my library' implicit conversion process to help you go and find documentation (initially, with Google, then later possibly with a JavaDoc/ScalaDoc viewer, and maybe the StackOverflow API eventually!). It's an itch-scratching library: I spend so much time copying and pasting classnames into Google that I figured I may as well automate the process. It is the sort of functionality that developers will want to add to their system for use only in the REPL - they shouldn't really be adding it to projects (partly because it may not be something that their fellow developers want, but also because if you are doing some exploratory development, it may be with just a Scala REPL that's not being invoked by an IDE or build tool). In my case, I want to include a few classes and set up some implicits - include a .jar on the CLASSPATH and import it, basically. In Ruby, this is the sort of thing that you'd add to your .irbrc file. Other REPLs have similar ways of setting options and importing libraries. Is there a similar file or way of doing this for the Scala REPL?

  • How do I create an inheritable Semaphore in .NET?

    - by pauldoo
    I am trying to create a Win32 Semaphore object which is inheritable. This means that any child processes I launch may automatically have the right to act on the same Win32 object. My code currently looks as follows: Semaphore semaphore = new Semaphore(0, 10); Process process = Process.Start(pathToExecutable, arguments); But the semaphore object in this code cannot be used by the child process. The code I am writing is a port of come working C++. The old C++ code achieves this by the following: SECURITY_ATTRIBUTES security = {0}; security.nLength = sizeof(security); security.bInheritHandle = TRUE; HANDLE semaphore = CreateSemaphore(&security, 0, LONG_MAX, NULL); Then later when CreateProcess is called the bInheritHandles argument is set to TRUE. (In both the C# and C++ case I am using the same child process (which is C++). It takes the semaphore ID on command line, and uses the value directly in a call to ReleaseSemaphore.) I suspect I need to construct a special SemaphoreSecurity or ProcessStartInfo object, but I haven't figured it out yet.

  • Enum exceeding the 65535-byte limit of the static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large Enum of so-called Descriptors that I wanted to use as a reference list in my model. But now I've come across a compiler/VM limit for the first time, and so I'm looking for the best solution to handle this. Here is my error: The code for the static initializer is exceeding the 65535 bytes limit It is clear where this comes from - my Enum simply has far too many elements. But I need those elements - there is no way to reduce that set. Initially I had planned to use a single Enum because I want to make sure that all elements within the Enum are unique. It is used in a Hibernate persistence context where the reference to the Enum is stored as a String value in the database. So this must be unique! The content of my Enum can be divided into several groups of elements belonging together. But splitting the Enum would remove the uniqueness safety I get at compile time. Or can this be achieved with multiple Enums in some way? My only current idea is to define some interface called Descriptor and code several Enums implementing it. This way I hope to be able to use the Hibernate Enum mapping as if it were a single Enum. But I'm not even sure if this will work. And I lose the uniqueness safety. Any ideas on how to handle this case?

  • I tried everything, I just can't put my UIView behind my status bar...

    - by gotye
    Hey guys, I have been trying every tip you already gave to others who were having the same issue, but I just can't manage to do it... My app isn't even special; it is a basic app with a tab bar and nav bar... I currently have something like this in the controller I push (that I want to be fullscreen): - (void)viewDidLoad { [super viewDidLoad]; self.view.autoresizingMask = 0; self.view.superview.autoresizingMask = 0; [[UIApplication sharedApplication] setStatusBarStyle: UIStatusBarStyleBlackTranslucent]; //[[UIApplication sharedApplication] setStatusBarHidden:YES animated:YES]; [self.navigationController.navigationBar setBarStyle:UIBarStyleBlackTranslucent]; CGRect test = CGRectMake(0.0,0.0,320,44); //[self.navigationController.navigationBar setFrame:test]; //[self setWantsFullScreenLayout:YES]; [self.view.superview setBounds:CGRectMake(0,-20.0,320,480)]; //[self.view setBounds:[[UIScreen mainScreen] bounds]]; [self.view setBounds:CGRectMake(0,-20.0,320,480)]; //CGRect rect4 = CGRectMake(0.0,0.0,320.0,500.0); //[self.view setFrame:rect4]; [self.view setBackgroundColor:[UIColor redColor]]; } As you can see I tried a lot of stuff :D but my red background isn't going under my status bar... I am using the iPhone Simulator (in case it has anything to do with it). Thank you for any help ;) Gotye.

  • PHP paging and the use of the LIMIT clause

    - by Average Joe
    Imagine you have a 1M-record table and you want to limit the search results down to, say, 10,000 and not more than that. So what do I use for that? Well, the answer is to use the limit clause. Example: select recid from mytable order by recid asc limit 10000 This is going to give me the last 10,000 records entered into this table. So far no paging. But the limit phrase is already in use. That brings the question to the next level. What if I want to page through this particular record set 100 recs at a time? Since the limit phrase is already part of the original query, how do I use it again, this time to take care of the paging? If the original query did not have a limit clause to begin with, I'd adjust it as limit 0,100, then 100,100, then 200,100 and so on while the paging takes its course. But at this time, I cannot. You almost think you'd want to use two limit phrases one after the other - which is not gonna work: limit 10000 limit 0,1000 would surely error out. So what's the solution in this particular case?
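
    One common route (assuming MySQL here) is to keep the cap in a derived table and page over it with a second limit on the outer query, along the lines of select * from (select recid from mytable order by recid asc limit 10000) as capped limit 200,100. The paging arithmetic itself is just a clamped window; a quick sketch of that bookkeeping (shown in Python only to make the clamping explicit):

        def page_window(page, per_page=100, cap=10000):
            # Offset/row-count pair for the LIMIT clause, never exceeding the cap.
            offset = page * per_page
            if offset >= cap:
                return None                       # past the capped result set
            return offset, min(per_page, cap - offset)

        for page in (0, 1, 99, 100):
            window = page_window(page)
            if window:
                print("LIMIT %d,%d" % window)
            else:
                print("no more pages")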

  • Flash: Closest point to MovieClip

    - by LiraNuna
    I need to constrain a point inside a DisplayObject given to me by the artist. I got it working, but only for the occasions where the cursor is still inside the bounds. The limited object is called limited. function onSqMouseMove(event:MouseEvent) { if(bounds.hitTestPoint(event.stageX, event.stageY, true)) { limited.x = event.stageX; limited.y = event.stageY; } else { /* Find closest point in the Sprite */ } } limited.addEventListener(MouseEvent.MOUSE_DOWN, function(event:MouseEvent) { stage.addEventListener(MouseEvent.MOUSE_MOVE, onSqMouseMove); }); limited.addEventListener(MouseEvent.MOUSE_UP, function(event:MouseEvent) { stage.removeEventListener(MouseEvent.MOUSE_MOVE, onSqMouseMove); }); How do I go about implementing the other half of the function? I am aware that Sprite's startDrag accepts arguments, where the second one is the constraint rectangle, but in my case the bounds are an arbitrary shape. When the object is dragged outside the bounds, I want to calculate the closest point from the cursor to the bounds polygon. Just to note that the bounds can have 'holes'.
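
    The usual way to snap to an arbitrary polygon boundary is to project the cursor onto every edge segment and keep the nearest candidate; holes just mean running the same loop over each ring and taking the overall minimum. A language-neutral sketch of the projection (written in Python here; the vertex list and names are illustrative):

        def closest_point_on_polygon(p, vertices):
            best, best_d2 = None, float("inf")
            n = len(vertices)
            for i in range(n):
                (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
                dx, dy = x2 - x1, y2 - y1
                length2 = dx * dx + dy * dy or 1e-12
                # Projection parameter of p onto this edge, clamped to the segment.
                t = max(0.0, min(1.0, ((p[0] - x1) * dx + (p[1] - y1) * dy) / length2))
                cx, cy = x1 + t * dx, y1 + t * dy
                d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
                if d2 < best_d2:
                    best, best_d2 = (cx, cy), d2
            return best

        # A cursor to the right of a unit square snaps to the nearest edge point.
        print(closest_point_on_polygon((2.0, 0.5), [(0, 0), (1, 0), (1, 1), (0, 1)]))  # (1.0, 0.5)

    In the onSqMouseMove handler the same projection would run only when hitTestPoint fails, moving limited to the returned point instead of the raw cursor position.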

  • Should I close sockets from both ends?

    - by Roman
    I have the following problem. My client program monitors the availability of a server on the local network (using Bonjour, but it does not really matter). As soon as a server is "noticed" by the client application, the client tries to create a socket: Socket(serverIP,serverPort);. At some point the client can lose the server (Bonjour says that the server is not visible in the network anymore). So, the client decides to close the socket, because it is not valid anymore. At some moment the server appears again. So, the client tries to create a new socket associated with this server. But! The server can refuse to create this socket since it (the server) already has a socket associated with the client IP and client port. It happens because the socket was closed by the client, not by the server. Can it happen? And if it is the case, how can this problem be solved? Well, I understand that it is unlikely that the client will try to connect to the server from the same port (client port), since the client selects its ports randomly. But it still can happen (just by chance). Right?

  • When should we use Views, Temporary Tables and Direct Queries ? What are the Performance issues in a

    - by Shantanu Gupta
    I want to know the performance of using views, temp tables and direct queries in a stored procedure. I have a table that gets created every time a trigger gets fired. I know this trigger will be fired very rarely, and only once at the time of setup. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes in that table, i.e. it is a read-only table. I have to join this table's data with multiple other tables and fetch results for further queries, say select * from triggertable. By using a temp table: select ... into #tx from triggertable join t2 join t3 and so on select a,b,c from #tx --do something select d,e,f from #tx --do something --and so on --around 6-7 queries in a row in a stored procedure. By using a view: create view viewname ( select ... from triggertable join t2 join t3 and so on ) select a,b,c from viewname --do something select d,e,f from viewname --do something --and so on --around 6-7 queries in a row in a stored procedure. This view can be used in other places as well, so I will be creating it at the database level rather than in the SP. By using direct queries: select a,b,c from select ... into #tx from triggertable join t2 join t3 join ... --do something select a,b,c from select ... into #tx from triggertable join t2 join t3 join ... --do something . . --and so on --around 6-7 queries in a row in a stored procedure. Now I can use a view, a temporary table, or direct queries for all upcoming queries. Which would be the best to use in this case?
