Search Results

Search found 85548 results on 3422 pages for 'placement new'.


  • Ubuntu 11.10 Gnome Shell new window focus problem

    - by grafthez
    I've been using GNOME Shell with the new Ubuntu for a few days now and have run into really annoying behaviour with new windows. Sometimes when I'm working in another window and press e.g. Ctrl+Alt+T to open a new terminal window, the new window is not brought to the front. Instead I get a notification at the bottom that "New terminal window is ready to use". The same happens with Pidgin integrated with GNOME Shell (via extension): every time I get a new message, the window pops up but doesn't show. I have to either Alt+Tab to it or click the notification. Is there any way to have new windows always brought to the front, and to get rid of those annoying "Window is ready" notifications?

    Read the article

  • Sometimes new windows don't come to the front when launched

    - by grafthez
    I've been using GNOME Shell with the new Ubuntu for a few days now and have run into really annoying behaviour with new windows. Sometimes when I'm working in another window and press e.g. Ctrl+Alt+T to open a new terminal window, the new window is not brought to the front. Instead I get a notification at the bottom that "New terminal window is ready to use". The same happens with Pidgin integrated with GNOME Shell (via extension): every time I get a new message, the window pops up but doesn't show. I have to either Alt+Tab to it or click the notification. Is there any way to have new windows always brought to the front, and to get rid of those annoying "Window is ready" notifications?
    UPDATE - output of gconftool-2 --search-key focus_new_windows (as severin asked):

        /schemas/apps/metacity/general/focus_new_windows = Schema (type: `string' list_type: '*invalid*' car_type: '*invalid*' cdr_type: '*invalid*' locale: `C')
        /apps/metacity/general/focus_new_windows = smart

    Read the article

  • What is "top new free" on GooglePlay

    - by Lumis
    On Android Market, i.e. Google Play, there used to be a page with the latest new games, so every game had a chance to get noticed and make its way up, especially if it was good. But now I see a "top new free" page and no longer the latest apps. I don't understand how something can be "top new". Does anybody know how this works? If there are no longer pages with the very latest uploaded games, then new apps will barely be seen to exist even if they are excellent, and new developers have very little chance of getting noticed. Any good advice on how to promote a new Android app these days?

    Read the article

  • New features of C# 4.0

    This article covers the new features of C# 4.0. The article is divided into the following sections:
    - Introduction
    - Dynamic Lookup
    - Named and Optional Arguments
    - Features for COM interop
    - Variance
    - Relationship with Visual Basic
    - Resources
    Other interesting reading: 22 New Features of Visual Studio 2008 for .NET Professionals, 50 New Features of SQL Server 2008, IIS 7.0 New Features.

    Introduction
    It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out.
    Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release.
    The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

    C# 4.0
    The major theme for C# 4.0 is dynamic programming. Increasingly, objects are "dynamic" in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include:
    a. objects from dynamic programming languages, such as Python or Ruby
    b. COM objects accessed through IDispatch
    c. ordinary .NET types accessed through reflection
    d. objects with changing structure, such as HTML DOM objects
    While C# remains a statically typed language, we aim to vastly improve the interaction with such objects.
    A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set.
    The new features in C# 4.0 fall into four groups:
    Dynamic lookup: Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime.
    Named and optional parameters: Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
    COM specific interop features: Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than it is today. On top of that, however, we are adding a number of other small features that further improve the interop experience.
    Variance: It used to be that an IEnumerable<string> wasn't an IEnumerable<object>. Now it is – C# embraces type safe "co- and contravariance" and common BCL types are updated to take advantage of that.

    Dynamic Lookup
    Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object.
    This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn't so. Oftentimes this will be no loss, because the object wouldn't have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

    The dynamic type
    C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can "do things to it" that are resolved only at runtime:

        dynamic d = GetDynamicObject(…);
        d.M(7);

    The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to "call M with an int" on it.
    The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, "suspending belief" until runtime. Conversely, there is an "assignment conversion" from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

        dynamic d = 7;   // implicit conversion
        int i = d;       // assignment conversion

    Dynamic operations
    Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

        dynamic d = GetDynamicObject(…);
        d.M(7);                 // calling methods
        d.f = d.P;              // getting and setting fields and properties
        d["one"] = d["two"];    // getting and setting through indexers
        int i = d + 3;          // calling operators
        string s = d(5,7);      // invoking as a delegate

    The role of the C# compiler here is simply to package up the necessary information about "what is being done to d", so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler's job to runtime. The result of any dynamic operation is itself of type dynamic.

    Runtime lookup
    At runtime a dynamic operation is dispatched according to the nature of its target object d:
    COM objects: If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don't have a Primary Interop Assembly (PIA), and relying on COM features that don't have a counterpart in C#, such as indexed properties and default properties.
    Dynamic objects: If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object's properties using property syntax.
    Plain objects: Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# "runtime binder" which implements C#'s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to "finish the work" on dynamic operations that was deferred by the static compiler.

    Example
    Assume the following code:

        dynamic d1 = new Foo();
        dynamic d2 = new Bar();
        string s;
        d1.M(s, d2, 3, null);

    Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the "payload") is essentially equivalent to: "Perform an instance method call of M with the following arguments:
    1. a string
    2. a dynamic
    3. a literal int 3
    4. a literal object null"
    At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder steps in to finish the overload resolution job based on runtime type information, proceeding as follows:
    1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
    2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.
    3. If the method is found it is invoked; otherwise a runtime exception is thrown.

    Overload resolution with dynamic arguments
    Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

        Foo foo = new Foo();
        dynamic d = new Bar();
        var result = foo.M(d);

    The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

    The Dynamic Language Runtime
    An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
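    (A note from the editor, not part of the whitepaper: the IDynamicObject interface described here is the CTP-era name; in the .NET 4.0 that eventually shipped, the extension point is IDynamicMetaObjectProvider, and System.Dynamic.DynamicObject is the convenient base class. At the simple end of that spectrum, a type that redefines dynamic member access could look roughly like this sketch:)

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        // A minimal dynamic property bag: member names are resolved at runtime
        // against a dictionary instead of statically declared members.
        class Bag : DynamicObject
        {
            private readonly Dictionary<string, object> values = new Dictionary<string, object>();

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                return values.TryGetValue(binder.Name, out result);
            }

            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                values[binder.Name] = value;
                return true;
            }
        }

        class BagDemo
        {
            static void Main()
            {
                dynamic bag = new Bag();
                bag.Title = "C# 4.0";         // routed through TrySetMember at runtime
                Console.WriteLine(bag.Title); // routed through TryGetMember at runtime
            }
        }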
    For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of e.g. a library representing an inherently dynamic domain.

    Open issues
    There are a few limitations and things that might work differently than you would expect.
    · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn't have syntax to support this.
    · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
    · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. "understand") an anonymous function without knowing what type it is converted to.
    One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

        dynamic collection = …;
        var result = collection.Select(e => e + 5);

    If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

    Named and Optional Arguments
    Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list.
    Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters, in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

    Optional parameters
    A parameter is declared optional simply by providing a default value for it:

        public void M(int x, int y = 5, int z = 7);

    Here y and z are optional parameters and can be omitted in calls:

        M(1, 2, 3); // ordinary call of M
        M(1, 2);    // omitting z – equivalent to M(1, 2, 7)
        M(1);       // omitting both y and z – equivalent to M(1, 5, 7)

    Named and optional arguments
    C# 4.0 does not permit you to omit arguments between commas as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

        M(1, z: 3);       // passing z by name

    or

        M(x: 1, z: 3);    // passing both x and z by name

    or even

        M(z: 3, x: 1);    // reversing the order of arguments

    All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors.
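    (Editor's sketch, not from the whitepaper: a small self-contained program, with invented types and values, that exercises the forms just listed, including the constructor case mentioned at the end, and compiles under C# 4.0:)

        using System;

        class Connection
        {
            // Optional parameters on a constructor: callers may omit any of them.
            public Connection(string host = "localhost", int port = 8080, bool useTls = false)
            {
                Console.WriteLine("host={0} port={1} tls={2}", host, port, useTls);
            }
        }

        class NamedOptionalDemo
        {
            static void M(int x, int y = 5, int z = 7)
            {
                Console.WriteLine("x={0} y={1} z={2}", x, y, z);
            }

            static void Main()
            {
                M(1);                 // -> x=1 y=5 z=7
                M(1, z: 3);           // skip y, pass z by name -> x=1 y=5 z=3
                M(z: 3, x: 1);        // order is free with names -> x=1 y=5 z=3

                new Connection(port: 443, useTls: true);  // host keeps its default
            }
        }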
    Overload resolution
    Named and optional arguments affect overload resolution, but the changes are relatively simple: A signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred.

        M(string s, int i = 1);
        M(object o);
        M(int i, string s = "Hello");
        M(int i);

        M(5);

    Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn't convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int).

    Features for COM interop
    Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

    Dynamic import
    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

        excel.Cells[1, 1].Value = "Hello";

    instead of

        ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

    and

        Excel.Range range = excel.Cells[1, 1];

    instead of

        Excel.Range range = (Excel.Range)excel.Cells[1, 1];

    Compiling without PIAs
    Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

    Omitting ref
    Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
    It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

    Open issues
    A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

    Variance
    An aspect of generics that often comes across as surprising is that the following is illegal:

        IList<string> strings = new List<string>();
        IList<object> objects = strings;

    The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

        objects[0] = 5;
        string s = strings[0];

    allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety.
    However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

        IEnumerable<object> objects = strings;

    there is no way we can put the wrong kind of thing into strings through objects, because objects doesn't have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

    Covariance
    In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

        public interface IEnumerable<out T> : IEnumerable
        {
            IEnumerator<T> GetEnumerator();
        }

        public interface IEnumerator<out T> : IEnumerator
        {
            bool MoveNext();
            T Current { get; }
        }

    The "out" in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes "covariant" in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also e.g. a sequence of objects. This is useful e.g. in many LINQ methods. Using the declarations above:

        var result = strings.Union(objects); // succeeds with an IEnumerable<object>

    This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

    Contravariance
    Type parameters can also have an "in" modifier, restricting them to occur only in input positions. An example is IComparer<T>:

        public interface IComparer<in T>
        {
            int Compare(T left, T right);
        }

    The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.
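    (Editor's sketch, not from the whitepaper: both directions side by side, assuming you compile against the released .NET 4.0 where IEnumerable<T> and IComparer<T> carry the out/in annotations shown above:)

        using System;
        using System.Collections.Generic;

        class VarianceDemo
        {
            static void Main()
            {
                // Covariance: a sequence of strings may be treated as a sequence of objects.
                IEnumerable<string> strings = new List<string> { "a", "bb", "ccc" };
                IEnumerable<object> objects = strings;              // legal in C# 4.0
                foreach (object o in objects) Console.WriteLine(o);

                // Contravariance: a comparer of objects may be used where a comparer
                // of strings is expected.
                IComparer<object> generalComparer = Comparer<object>.Default;
                IComparer<string> stringComparer = generalComparer; // legal in C# 4.0

                var names = new List<string> { "Charlie", "Alice", "Bob" };
                names.Sort(stringComparer);
                Console.WriteLine(string.Join(", ", names));        // Alice, Bob, Charlie
            }
        }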
    A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

        public delegate TResult Func<in TArg, out TResult>(TArg arg);

    Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.

    Limitations
    Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

    COM Example
    Here is a larger Office automation example that shows many of the new C# features in action.

        using System;
        using System.Diagnostics;
        using System.Linq;
        using Excel = Microsoft.Office.Interop.Excel;
        using Word = Microsoft.Office.Interop.Word;

        class Program
        {
            static void Main(string[] args)
            {
                var excel = new Excel.Application();
                excel.Visible = true;
                excel.Workbooks.Add();                       // optional arguments omitted
                excel.Cells[1, 1].Value = "Process Name";    // no casts; Value dynamically
                excel.Cells[1, 2].Value = "Memory Usage";    // accessed

                var processes = Process.GetProcesses()
                    .OrderByDescending(p => p.WorkingSet)
                    .Take(10);

                int i = 2;
                foreach (var p in processes)
                {
                    excel.Cells[i, 1].Value = p.ProcessName; // no casts
                    excel.Cells[i, 2].Value = p.WorkingSet;  // no casts
                    i++;
                }

                Excel.Range range = excel.Cells[1, 1];       // no casts
                Excel.Chart chart = excel.ActiveWorkbook.Charts.
                    Add(After: excel.ActiveSheet);           // named and optional arguments
                chart.ChartWizard(
                    Source: range.CurrentRegion,
                    Title: "Memory Usage in " + Environment.MachineName); // named+optional
                chart.ChartStyle = 45;
                chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
                    Excel.XlCopyPictureFormat.xlBitmap,
                    Excel.XlPictureAppearance.xlScreen);

                var word = new Word.Application();
                word.Visible = true;
                word.Documents.Add();                        // optional arguments
                word.Selection.Paste();
            }
        }

    The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument; something which C# does not understand. However the argument is optional. Since the access is dynamic, it goes through the runtime COM binder which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

    Relationship with Visual Basic
    A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:
    · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
    · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
    · NoPIA and variance are both being introduced to VB and C# at the same time.
    VB in turn is adding a number of features that have hitherto been a mainstay of C#.
    As a result future versions of C# and VB will have much better feature parity, for the benefit of everyone.

    Resources
    All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Your Job Search Should be More Than Just a New Year's Resolution

    - by david.talamelli
    I love the beginning of a new year; it is a great chance to refocus and either re-evaluate the goals you are working towards or set new ones. I don't have any statistics to measure this, but I am sure that one of the more popular new year's resolutions in the general workforce is to either get a new job or work to further develop one's career. I think this is a good idea: in today's competitive workforce people should have a plan of what they want to do, what role they are after and how to get there. One common mistake I think many people make, though, is treating a career plan as a once-a-year thought.
    When people finish the holiday season with their new year's resolution to find a new job fresh in their mind, you can see the enthusiasm and motivation a person has to make something happen. Emails are sent, calls are made, applications are made, networking is happening, etc. Finding the right role that you are after, however, can be difficult; while it would be great if that dream role was available just at the time you happened to be looking for it, in reality this is not always the case. Job seekers need to keep reminding themselves that while sometimes the dream job they are after is available at the same time they are looking, a job search can also be a difficult and long process. Many people who set out with the best of intentions in January to find a new job can soon lose interest in a job search if they do not immediately find a role. Just like the Christmas decorations are put away and the photos from New Year's are stored away, a job seeker's motivation may slowly decrease until that person finds themselves 12 months later in the same situation, in the same role, and looking for that new opportunity again.
    Rather than just "going for it" and looking for a role in the month of January, a person's job search or career plan should be an ongoing activity and thought process that is constantly updated and evaluated over the course of the year. It can be hard to stay motivated over an extended period of time, especially when you are newly motivated and ready for that new role and the results are not immediate. Rather than letting your job search fall down the priority list and into the "too hard basket", here are a few ideas that may keep your enthusiasm fresh:
    - Update your resume every 6 months, even if you are not looking for a job. It is easy to forget what you have accomplished if you don't keep your details updated. Also, it is good to be prepared and have a resume ready to go in case you do get an unexpected phone call for that "dream job" you have been hoping for.
    - Work out what you want out of your next role before you begin your job search. Rather than aimlessly searching job ads or talking to people, think of the organisations or type of role you would like before you search. If you know what you are looking for, it will be much easier to work out how to get there than if you do not know what you want.
    - Don't expect immediate results once you decide to look for another job; things don't always fall into place. Timing and delivery can be important pieces of being selected for a role, and companies don't hire every role in January.
    - Have an open mind. People you meet or talk to may not produce immediate results for your job search, but every connection may help you get a bit closer to what you are after.
    These actions will not guarantee a positive result, but in today's competitive workforce every little bit of extra preparation and planning helps.
    All the best for 2011, and I hope your career plan, whatever it may be, is a success.

    Read the article

  • jQuery, insertBefore, validation, Placement of error message

    - by Steve
    I am using insertBefore inside errorPlacement to add validation messages, one below the other, as I want the messages to read from top to bottom. I am using a hidden input at the very bottom of a parent element in order to supply an element for the insertBefore argument. Is there a way to insert from the bottom without using the dummy hidden element? If I use absolute positioning, there is a possibility of white space: for example, with 3 stacked messages where the middle one is missing, since there is no collapsing with absolute positioning.

    Read the article

  • contentEditable javascript caret placement in div

    - by Ben Mc
    I have a contentEditable div. Let's say the user clicks a button that inserts HTML into the editable area. So, they click a button and the following is added to the innerHTML of the contentEditable div: <div id="outside"><div id="inside"></div></div> How do I automatically place the cursor (i.e. caret) IN the "inside" div? Worse: how can I make this work in both IE and FF?

    Read the article

  • Core Location Best Placement and User Interruption

    - by b.dot
    Hi All,
    My application uses Core Location in three different views, and it's working perfectly. In my first view, I subclass CLLocationManager and use protocol methods to deliver location updates to my calling class. Before I install the framework and code in my other classes, I was wondering:
    - Is the protocol method the best way?
    - What happens to the Core Location execution if the user exits the view or quits the app while it's trying to get a location fix? Is the location task terminated, with the GPS system turned off immediately?
    - If the user simply switches to another view, is it OK to assume that I can start Core Location in the next view without regard to the last?
    - Where should the first update-location call be placed? Should the application delegate instantiate the CLLocationManager class using a protocol so that it can update any of the views chosen, or should each class instantiate the manager?
    Any feedback would be appreciated. Thanks.

    Read the article

  • UINavigationController placement problem

    - by Chonch
    Hey, I'm having this problem and I can't seem to figure out why. I want to use a UINavigationController, but I don't want to use it throughout the entire application. So, in the UIViewController I want to add it to, I define:

        UINavigationController *navigationController;

    along with the @property, @synthesize and dealloc. In the viewDidLoad method, I add:

        navigationController = [[UINavigationController alloc] init];
        navigationController.navigationBar.barStyle = UIBarStyleBlack;

    and when the relevant UIViewController is loaded, I add:

        [navigationController pushViewController:viewController animated:NO];

    and make the old view disappear and the UINavigationController's view appear. Everything works fine except that the navigation bar appears about 20 pixels too low (dragging the entire view down and out of the screen). If I set the UINavigationController's view frame parameter manually to start at 0,0, it jumps there, but only after about a second, so you can see the navigation bar starting at a certain location and only then jumping to another (and it is ugly!). I have done the exact same thing in the past and had no problem with it. Does anyone know what may cause this? Thanks,

    Read the article

  • Cocos2D, UIScrollView, and initial placement of a scene

    - by diatrevolo
    Hello: I am using a UIScrollView to forward touches to Cocos2D as outlined in http://getsetgames.com/2009/08/21/cocos2d-and-uiscrollview/ Everything works great after a few days of working with it, except one thing: when the initial view appears on the screen, the background appears to be scrolled to the center. As soon as I try to scroll around, the image jumps to 0,0, and everything works as normal, except the touches are offset by half the width and height of the background image. Am I overlooking something basic? I can't think of a useful portion of the code that illustrates the issue, as I can't track it down, but would be happy to post code if anyone has any ideas. Thanks in advance, -Roberto

    Read the article

  • Letter placement error with GD2, TTF and PHP

    - by Javier Parra
    Some time ago I made a script that takes some text and returns it as an image, and it worked flawlessly. But at some point (I'm not sure when) a weird bug started to happen. The letters that have a (my apologies to the font geeks) "glyph" on the left get pushed to the right so that the letter starts on it, but space is left only for the main letter. I think an example will show it best. The expected result is:
    The "bad" one was generated, obviously, by my script, located here: http://www.esbasura.com/images/text.php?txt=The%20quick%20brown%20fox%20jumps%20over%20the%20lazy%20dog.&fnt=1&size=23&bg=lightgrey
    And the good one was generated by dafont here: http://img.dafont.com/preview.php?text=The%20quick%20brown%20fox%20jumps%20over%20the%20lazy%20dog.&ttf=bleeding_cowboys0&ext=1&size=23&psize=m&y=46
    I'm not doing anything fancy in the script; here is the relevant part:

        imagefilledrectangle($im, 0, 0, $width, $height, $$bg);
        imagettftext($im, $size, 0, (-1*$textsize[6]), (-1*$textsize[7]), $$color, $font, $text);
        // imagefttext($im, $size, 0, (-1*$textsize[6]), (-1*$textsize[7]), $$color, $font, $text); same results using imagefttext
        imagecolortransparent($im, $$bg);
        header("Cache-Control: public"); // HTTP/1.1
        header("Content-type: image/png");
        imagepng($im);
        imagedestroy($im);

    I'm kind of surprised, because, as I said, it used to work flawlessly. Maybe my host changed my machine. (Here's my phpinfo: http://www.work4bandwidth.com/info.php) Relevant bit:

        gd
        GD Support: enabled
        GD Version: bundled (2.0.34 compatible)
        FreeType Support: enabled
        FreeType Linkage: with freetype
        FreeType Version: 2.2.1
        GIF Read Support: enabled
        GIF Create Support: enabled
        JPG Support: enabled
        PNG Support: enabled
        WBMP Support: enabled
        XBM Support: enabled

    Read the article

  • CSS placement of text

    - by Josh
    Does anyone know how I can get "Posted by AUTHOR on April 3rd, 2010 | 0 Comments" underneath the headline of the news links WITHOUT it being a part of the A CLASS? I want it to look, read, and function like:
        [IMAGE]
        HEADLINE
        Posted by AUTHOR on April 3rd, 2010 | 0 Comments
    All of that in the initial looking field; then the user can click and it expands further. I can obviously get it to work if I put it in the A CLASS tag, but that's the problem: I can't have it there.

    Read the article

  • Best placement for javascript in Asp.net MVC app that heavily uses partial views

    - by KallDrexx
    What is the best place for javascript that is specific to a partial view? For example, if I have a partial view (loaded via an ajax call) with some divs, and I want to turn those divs into an accordion, would it be better to put the $("#section").accordion() call in script tags inside the partial view, or in a .js file in the function that retrieves that partial view and inserts it into the DOM? Obviously, common methods I will be keeping in a .js file; I am asking more about javascript very specific to the partial view itself. Most things I find on the net seem to say to put all javascript into a separate .js file, but nothing addresses the idea of partial views.

    Read the article

  • C# / Visual Studio: production and test code placement

    - by Patrick Linskey
    Hi, In JavaLand, I'm used to creating projects that contain both production and test code. I like this practice because it simplifies testing of internal code without artificially exposing the internals in a project's published API. So far, in my experiences with C# / Visual Studio / ReSharper / NUnit, I've created separate projects (i.e., separate DLLs) for production and test code. Is this the idiom, or am I off base? If this is idiomatically correct, what's the right way to deal with exposing classes and methods for test purposes? Thanks, -Patrick
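    (Editor's note, a sketch rather than an authoritative answer: the usual way in C# to keep tests in a separate assembly while still reaching internal members is the InternalsVisibleTo attribute; the assembly names below are hypothetical.)

        // In the production project, e.g. in Properties/AssemblyInfo.cs of MyApp.Core
        // (assembly names invented for illustration):
        using System.Runtime.CompilerServices;

        // Lets the NUnit test assembly see this assembly's internal types and members,
        // so nothing needs to be made public just for testing.
        [assembly: InternalsVisibleTo("MyApp.Core.Tests")]

    If the production assembly is strong-named, the attribute must also include the test assembly's public key.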

    Read the article

  • Placement of axis labels at minor breaks with ggplot2

    - by JAShapiro
    I am using ggplot2 to do some plotting of genomic data, so the basic format is that there is a chromosome and a position along it. I convert the positions to be on a continuous scale, then put the breaks at the boundaries of the chromosomes with: scale_x_continuous("Genome Position", breaks = c(0, cumsum(chromosome_length))) That looks great, as far as the actual plotting is concerned, but the labels are then put at the start and end of the chromosomes. I would like them to be centered along each chromosome, at the position where the minor break is drawn by default. Is this possible?

    Read the article

  • JQuery: Specify placement of error messages inline using Metadata and Validate plugins

    - by jalperin
    I'm doing form validation using jQuery with the validate and metadata plugins. I'm using the metadata plugin so I can specify the rules and messages inline in the html form instead of in the javascript on the page. I'm getting an error when I try to specify the location of the error message using errorPlacement. (If I specify it in the <head> section it works fine, but not if I specify it inline.) Here's what my html looks like:

        <input name="list" id="list1" type="checkbox"
            validate="{required:true, minlength:1,
                messages:{required:'Please select at least one newsletter.',
                          minlength:'Please select at least one newsletter.'},
                errorPlacement: function(error, element) { error.appendTo('#listserror'); }
            }">

    As reported by the validate debug function, the error is: "error.appendTo is not a function." It works fine if I specify it in the <head> section like this:

        $().ready(function() {
            $("#subscribeForm").validate({
                errorPlacement: function(error, element) {
                    if (element.attr("name") == "list")
                        error.appendTo("#listserror");
                    else
                        error.insertAfter(element);
                }
            });
        });

    Read the article

  • Placement of a call to the parent method

    - by Alejandro
    I have a class that has a method. That class has a child class that overrides that method. The child's method has to call the parent's method. In all the OO code that I've seen or written, calls to the parent's version of the same method were on the first line of the method. On a project that I am working on, circumstances call for placing that method call at the end of the method. Should I be worried? Is that a code smell? Is this code inherently bad?

        class Parent {
            function work() {
                // stuff
            }
        }

        class Child extends Parent {
            function work() {
                // do thing 1
                // do thing 2
                parent::work(); // is this a bad practice?
                                // should I call the parent's work() method before
                                // I do anything in this method?
            }
        }

    Read the article

  • Placement of service methods

    - by mhp
    Let's assume I have two service classes with the following methods:

        GroupService
            createGroup()
            deleteGroup()
            updateGroup()
            findGroup()

        UserService
            createUser()
            deleteUser()
            updateUser()
            findUser()

    Now, I am thinking about the aesthetics of these classes. Imagine we want to implement a method which searches for all users of a specific group. Which service class is responsible for such a method? I mean, the return value is a user (or maybe a collection of users) but the parameter (which means the name of the group) is a group. So which service class is the better place to put this method in?
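    (Editor's sketch, not implied by the question, with invented C# names, just to put the two candidate placements side by side:)

        using System.Collections.Generic;

        // Hypothetical domain types for illustration only.
        class User  { public string Name; }
        class Group { public string Name; }

        // Option 1: the query lives with the type it returns.
        interface IUserService
        {
            User FindUser(string name);
            IList<User> FindUsersByGroup(Group group);
        }

        // Option 2: the query lives with the type it takes as a parameter.
        interface IGroupService
        {
            Group FindGroup(string name);
            IList<User> FindGroupMembers(Group group);
        }

    Many codebases settle on the first option, reasoning that callers go looking on the service of the entity they want back; the more important point is to pick one rule and apply it consistently.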

    Read the article

  • Lights off effect and jquery placement on wordpress

    - by Alexander Santiago
    I'm trying to implement a lights on/off effect on single posts of my WordPress theme. I know that I have to put this code in my css, which I did already:

        #the_lights {
            background-color: #000;
            height: 1px;
            width: 1px;
            position: absolute;
            top: 0;
            left: 0;
            display: none;
        }
        #standout {
            padding: 5px;
            background-color: white;
            position: relative;
            z-index: 1000;
        }

    Now this is the code that I'm having trouble with:

        function getHeight() {
            if ($.browser.msie) {
                var $temp = $("").css("position", "absolute")
                    .css("left", "-10000px")
                    .append($("body").html());
                $("body").append($temp);
                var h = $temp.height();
                $temp.remove();
                return h;
            }
            return $("body").height();
        }

        $(document).ready(function () {
            $("#the_lights").fadeTo(1, 0);
            $("#turnoff").click(function () {
                $("#the_lights").css("width", "100%");
                $("#the_lights").css("height", getHeight() + "px");
                $("#the_lights").css({'display': 'block'});
                $("#the_lights").fadeTo("slow", 1);
            });
            $("#soft").click(function () {
                $("#the_lights").css("width", "100%");
                $("#the_lights").css("height", getHeight() + "px");
                $("#the_lights").css("display", "block");
                $("#the_lights").fadeTo("slow", 0.8);
            });
            $("#turnon").click(function () {
                $("#the_lights").css("width", "1px");
                $("#the_lights").css("height", "1px");
                $("#the_lights").css("display", "block");
                $("#the_lights").fadeTo("slow", 0);
            });
        });

    I think it's jQuery. Where do I place it and how do I call its function? Been stuck on this thing for 6 hours now and any help would be greatly appreciated...

    Read the article

  • Optimizing a bin-placement algorithm

    - by user258651
    Alright, I've got two collections, and I need to place elements from collection1 into the bins (elements) of collection2, based on whether their value falls within a given bin's range. For a concrete example, assume I have a sorted collection of objects (bins) which each have an int range ([1...4], [5..10], etc.). I need to determine the range an int falls in, and place it in the appropriate bin.

        foreach (element n in collection1)
        {
            foreach (bin m in collection2)
            {
                if (m.inRange(n))
                {
                    m.add(n);
                    break;
                }
            }
        }

    So the obvious NxM complexity algorithm is there, but I really would like to see Nxlog(M). To do this I'd like to use BinarySearch in place of the inner foreach loop. To use BinarySearch, I need to implement an IComparer class to do the searching for me. The problem I'm running into is that this approach would require me to make an IComparer.Compare function that compares two different types of objects (an element to its bin), and that doesn't seem possible or correct. So I'm asking, how should I write this algorithm?
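    (Editor's sketch of one way to get the Nxlog(M) behaviour without comparing mixed types: since the bins are sorted and non-overlapping, binary-search an array of their lower bounds and map the result back to a bin. The Bin type and member names below are invented to match the shape described in the question.)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical bin type, matching the description in the question.
        class Bin
        {
            public int Low;
            public int High;
            public List<int> Items = new List<int>();
            public bool InRange(int n) { return n >= Low && n <= High; }
        }

        static class BinPlacer
        {
            public static void Place(IEnumerable<int> values, IList<Bin> bins)
            {
                // Precompute the sorted lower bounds once: O(M).
                int[] lowerBounds = bins.Select(b => b.Low).ToArray();

                foreach (int n in values) // N iterations, each O(log M)
                {
                    int index = Array.BinarySearch(lowerBounds, n);
                    if (index < 0)
                        index = ~index - 1; // index of the last lower bound <= n

                    if (index >= 0 && bins[index].InRange(n))
                        bins[index].Items.Add(n);
                    // else: n falls outside every bin (e.g. in a gap); skip or handle as needed
                }
            }
        }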

    Read the article

  • Resource placement (optimal strategy)

    - by blackened
    I know that this is not exactly the right place to ask this question, but maybe a wise guy comes across and has the solution. I'm trying to write a computer game and I need an algorithm to solve this question: The game is played between 2 players. Each side has 1,000 dollars. There are three "boxes" and each player writes down the amount of money he is going to place into each of those boxes. Then these amounts are compared. Whoever placed more money in a box scores 1 point (half a point each if it is a draw). Whoever scores more points wins his opponent's 1,000 dollars. Example game: Player A: [500, 500, 0]; Player B: [333, 333, 334]. Player A wins because he won Box A and Box B (but lost Box C). Question: What is the optimal strategy to place the money? I have more questions to ask (algorithm related, not math related) but I need to know the answer to this one first. Update (1): After some more research I've learned that these types of problems/games are called Colonel Blotto games. I did my best and found a few (highly technical) documents on the subject. Cutting it short, the problem I have (as described above) is called a simple Blotto game (only three battlefields with symmetric resources). The difficult ones are the ones with, say, 10+ battlefields with non-symmetric resources. All the documents I've read say that the simple Blotto game is easy to solve. The thing is, none of them actually says what that "easy" solution is.

    Read the article

  • Icon placement relative to list items

    - by Ronedog
    When I show a hidden div that is stored inside an <li> tag, the icons are pushed down to the bottom of the <li>. How can I prevent this? Here's the HTML:

        <ul>
          <li>Utah
            <ul>
              <li>Park City
                <ul>
                  <li>Park Cat 1
                    <div><img class="portf_edit" /></div>
                    <div><img class="portf_archive" /></div>
                    <div><img class="portf_delete" /></div>
                  </li>
                  <li>Skiing
                    <div><img class="portf_edit" /></div>
                    <div><img class="portf_archive" /></div>
                    <div><img class="portf_delete" /></div>
                  </li>
                </ul>
              </li>
            </ul>
          </li>
        </ul>

    Here's the li css:

        li {
            list-style-type: none;
            vertical-align: top;
            list-style-image: none;
            left: 0px;
            text-align: left;
            clear: both;
        }
        .portf_edit {
            float: right;
            position: relative;
            right: 50px;
            display: block;
        }
        .portf_archive {
            float: right;
            position: relative;
            right: -5px;
            display: block;
        }
        .portf_delete {
            float: right;
            position: relative;
            right: -60px;
            display: block;
        }

    Here's a screen shot prior to expanding, which shows the icons lined up how I want them:
    Here's the screen shot after expanding, which shows where the icons are being pushed to:

    Read the article

  • Ajax Control Toolkit July 2011 Release and the New HTML Editor Extender

    - by Stephen Walther
    I'm happy to announce the July 2011 release of the Ajax Control Toolkit which includes important bug fixes and a completely new HTML Editor Extender control. You can download the July 2011 Release by visiting the Ajax Control Toolkit CodePlex site at: http://AjaxControlToolkit.CodePlex.com

    Using the New HTML Editor Extender Control
    You can use the new HTML Editor Extender to extend any standard ASP.NET TextBox control so that it supports rich formatting such as bold, italics, bulleted lists, numbered lists, typefaces and different foreground and background colors. The following code illustrates how you can extend a standard ASP.NET TextBox control with the HtmlEditorExtender:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Simple.aspx.cs" Inherits="WebApplication1.Simple" %>
        <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Simple</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ToolkitScriptManager runat="Server" />
                <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="60" Rows="8" runat="server" />
                <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server" />
            </form>
        </body>
        </html>

    This page has the following three controls:
    - ToolkitScriptManager – The ToolkitScriptManager renders all of the scripts required by the Ajax Control Toolkit.
    - TextBox – The TextBox control is a standard ASP.NET TextBox which is set to display multiple lines (a TextArea instead of an Input element).
    - HtmlEditorExtender – The HtmlEditorExtender is set to extend the TextBox control.
    You can use the standard TextBox Text property to read the rich text entered into the TextBox control on the server.

    Lightweight and HTML5
    The HTML Editor Extender works on all modern browsers including the most recent versions of Mozilla Firefox (Firefox 5), Google Chrome (Chrome 12), and Apple Safari (Safari 5). Furthermore, the HTML Editor Extender is compatible with Microsoft Internet Explorer 6 and newer.
    The HTML Editor Extender is very lightweight. It takes advantage of the HTML5 ContentEditable attribute so it does not require an iframe or complex browser workarounds. If you select View Source in your browser while using the HTML Editor Extender, we hope that you will be pleasantly surprised by how little markup and script is generated by the HTML Editor Extender.

    Customizable Toolbar Buttons
    Depending on the web application that you are building, you will want to display different toolbar buttons with the HTML Editor Extender. One of the design goals of the HTML Editor Extender was to make it very easy for you to customize the toolbar buttons. Imagine, for example, that you want to use the HTML Editor Extender when accepting comments on blog posts. In that case, you might want to restrict the type of formatting that a user can display. You might want to enable a user to format text as bold or italic but you do not want the user to make any other formatting changes.
    The following page illustrates how you can customize the HTML Editor Extender toolbar:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CustomToolbar.aspx.cs" Inherits="WebApplication1.CustomToolbar" %>
        <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
        <html>
        <head runat="server">
            <title>Custom Toolbar</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ToolkitScriptManager Runat="server" />
                <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="10" Text="Hello <b>world!</b>" Runat="server" />
                <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server">
                    <Toolbar>
                        <asp:Bold />
                        <asp:Italic />
                    </Toolbar>
                </asp:HtmlEditorExtender>
            </form>
        </body>
        </html>

    Notice that the HTML Editor Extender in the page above has a Toolbar subtag. You can list the toolbar buttons which you want to appear within the subtag. In the case above, only Bold and Italic buttons are displayed.
    Here is a complete list of the Toolbar buttons currently supported by the HTML Editor Extender: Undo, Redo, Bold, Italic, Underline, StrikeThrough, Subscript, Superscript, JustifyLeft, JustifyCenter, JustifyRight, JustifyFull, InsertOrderedList, InsertUnorderedList, CreateLink, UnLink, RemoveFormat, SelectAll, UnSelect, Delete, Cut, Copy, Paste, BackgroundColorSelector, ForeColorSelector, FontNameSelector, FontSizeSelector, Indent, Outdent, InsertHorizontalRule, HorizontalSeparator.
    Of course the HTML Editor Extender was designed to be extensible. You can create your own buttons and add them to the control.

    Compatible with the AntiXSS Library
    When using the HTML Editor Extender on a public facing website, we strongly recommend that you use the HTML Editor Extender with the AntiXSS Library. If you allow users to submit arbitrary HTML, and you don't take any action to strip out malicious markup, then you are opening your website to Cross-Site Scripting Attacks (XSS attacks).
    The HTML Editor Extender uses the Provider Model to support different Sanitizer Providers. The July 2011 release of the Ajax Control Toolkit ships with a single Sanitizer Provider which uses the AntiXSS library (see http://AntiXss.CodePlex.com). A Sanitizer Provider is responsible for sanitizing HTML markup by removing any malicious elements, attributes, and attribute values. For example, the AntiXss Sanitizer Provider will take the following block of HTML:

        <b><a href=""javascript:doEvil()"">Visit Grandma</a></b> <script>doEvil()</script>

    And return the following sanitized block of HTML:

        <b><a href="">Visit Grandma</a></b>

    Notice that the JavaScript href and <SCRIPT> tag are both stripped out. Be aware that there are a depressingly large number of ways to sneak evil markup into your HTML. You definitely want a Sanitizer as a safety net.
    Before you can use the AntiXSS Sanitizer Provider, you must add three assemblies to your web application: AntiXSSLibrary.dll, HtmlSanitizationLibrary.dll, and SanitizerProviders.dll. All three assemblies are included with the CodePlex download of the Ajax Control Toolkit in the SanitizerProviders folder.
    Here's how you modify your web.config file to use the AntiXSS Sanitizer Provider:

        <configuration>
          <configSections>
            <sectionGroup name="system.web">
              <section name="sanitizer" requirePermission="false"
                type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/>
            </sectionGroup>
          </configSections>
          <system.web>
            <compilation targetFramework="4.0" debug="true"/>
            <sanitizer defaultProvider="AntiXssSanitizerProvider">
              <providers>
                <add name="AntiXssSanitizerProvider"
                  type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add>
              </providers>
            </sanitizer>
          </system.web>
        </configuration>

    You can detect whether the HTML Editor Extender is using the AntiXSS Sanitizer Provider by checking the HtmlEditorExtender SanitizerProvider property like this:

        if (MyHtmlEditorExtender.SanitizerProvider == null)
        {
            throw new Exception("Please enable the AntiXss Sanitizer!");
        }

    When the SanitizerProvider property has the value null, you know that a Sanitizer Provider has not been configured in the web.config file. Because the AntiXSS library requires Full Trust, you cannot use the AntiXSS Sanitizer Provider with most shared website hosting providers. Because most shared hosting providers only support Medium Trust and not Full Trust, we do not recommend using the HTML Editor Extender with a public website hosted with a shared hosting provider.

    Why a New HTML Editor Control?
    The Ajax Control Toolkit now includes two HTML Editor controls. Why did we introduce a new HTML Editor control when there was already an existing HTML Editor? We think you will like the new HTML Editor much more than the previous one. We had several goals with the new HTML Editor Extender:
    - Lightweight – We wanted to leverage HTML5 to create a lightweight HTML Editor. The new HTML Editor generates much less markup and script than the previous HTML Editor.
    - Secure – We wanted to make it easy to integrate the AntiXSS library with the HTML Editor. If you are creating a public facing website, we strongly recommend that you use the AntiXSS Provider.
    - Customizable – We wanted to make it easy for users to customize the toolbar buttons displayed by the HTML Editor.
    - Compatibility – We wanted to ensure that the HTML Editor will work with the latest versions of the most popular browsers (including Internet Explorer 6 and higher).
    The old HTML Editor control is still included in the Ajax Control Toolkit and continues to live in the AjaxControlToolkit.HTMLEditor namespace. We have not modified the control and you can continue to use the control in the same way as you have used it in the past. However, we hope that you will consider migrating to the new HTML Editor Extender for the reasons listed above.

    Summary
    We've introduced a new Ajax Control Toolkit control with this release. I want to thank the developers and testers on the Superexpert team for the huge amount of work which they put into this control. It was a non-trivial task to build an entirely new control which has the complexity of the HTML Editor in less than 6 weeks. Please let us know what you think! We want to hear your feedback. If you discover issues with the new HTML Editor Extender control, or you have questions about the control, or you have ideas for how it can be improved, then please post them to this blog. Tomorrow starts a new sprint.

    Read the article

  • What’s New in JD Edwards EnterpriseOne Release 9.1

    - by Breanne Cooley
    Oracle JD Edwards EnterpriseOne 9.1 offers customers significant updates to usability and business processes to improve productivity and bolster business value. This release addresses critical user needs, while delivering key enhancements in several areas, including:

    New User Experience
    Release 9.1 offers significant enhancements to the user experience. New Web 2.0 features reduce task time and enable you to access meaningful information.

    Enhanced Reporting
    Oracle's JD Edwards EnterpriseOne One View Reporting is a breakthrough solution that allows business users to create interactive reports - without IT support.

    Industry Specific Functionality
    This new release delivers key enhancements for the consumer goods, real estate management and manufacturing and distribution industries.

    Enhanced Support for Global Operations
    JD Edwards EnterpriseOne 9.1 supports global operations with several new features, including enhancements that consider the entire ERP business process associated with managing country of origin requirements.

    Productivity Features
    This new release offers new, more tightly integrated business processes and other productivity advancements like improved data access and enhanced financial controls.

    Want to find out more?
    - Bookmark the JD Edwards EnterpriseOne web page
    - Listen to the Podcast: Announcing JD Edwards EnterpriseOne 9.1
    - Watch the Demo: JD Edwards EnterpriseOne 9.1 Features Demo
    - Watch the Demo: JD Edwards EnterpriseOne One View Reporting Demo
    - Review the Data Sheet: JD Edwards EnterpriseOne Tools 9.1

    New Training
    JD Edwards EnterpriseOne 9.1 training through Oracle University is designed to help you leverage these usability and productivity enhancements. The curriculum is aligned to the JD Edwards EnterpriseOne products and will teach your team how to efficiently and effectively implement and use your applications. Get started today! View available training and schedules.

    -Jim Vonick, Oracle University Market Development Manager

    Read the article
