Search Results

  • Convert latitude and longitude into northings and eastings

    - by Rippo
    Hi, I have the UK postcode DY8 3XT and know that its latitude and longitude are 54.452772, -2.156082. I also know that the eastings/northings for the postcode are 389490, 283880. However, I am struggling to find the equation that converts lat/long to northings and eastings, and I would prefer to have it in both JScript and C# (I am being greedy)! Can anyone help? EDIT Thanks for your help so far guys, I am starting to learn something here, esp. the terminology... Some more info: if you click on this link you can see the results I am looking for. The postcode I entered projects to lat/lng using WGS84, and the grid ref projects to OSGB. So my question is: how is this done? WHAT I LEARNT Thanks to all who answered, I finally got led to here, which I can confirm works great.
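
    For reference, a hedged C# sketch of the projection step those lookup sites perform. The constants are the published Airy 1830 / National Grid parameters; the code assumes the input has already been datum-shifted from WGS84 to OSGB36 (a Helmert transformation this sketch omits), so treat it as an illustration of the maths rather than a drop-in converter:

        using System;

        static class OsgbProjection
        {
            // Airy 1830 ellipsoid and National Grid constants (published OS values).
            const double a = 6377563.396, b = 6356256.909;        // semi-major/minor axes
            const double F0 = 0.9996012717;                       // central meridian scale
            static readonly double Lat0 = Deg(49), Lon0 = Deg(-2); // true origin 49N 2W
            const double N0 = -100000, E0 = 400000;               // grid offsets of true origin

            static double Deg(double d) => d * Math.PI / 180.0;

            // OSGB36 lat/lon (degrees) -> National Grid easting/northing (metres).
            public static void ToGrid(double latDeg, double lonDeg,
                                      out double easting, out double northing)
            {
                double lat = Deg(latDeg), lon = Deg(lonDeg);
                double e2 = 1 - (b * b) / (a * a);
                double n = (a - b) / (a + b), n2 = n * n, n3 = n2 * n;

                double sinLat = Math.Sin(lat), cosLat = Math.Cos(lat);
                double tan2 = Math.Pow(Math.Tan(lat), 2), tan4 = tan2 * tan2;

                double nu = a * F0 / Math.Sqrt(1 - e2 * sinLat * sinLat);
                double rho = a * F0 * (1 - e2) / Math.Pow(1 - e2 * sinLat * sinLat, 1.5);
                double eta2 = nu / rho - 1;

                // Meridional arc (series in n), from the OS projection guide.
                double M = b * F0 * (
                      (1 + n + 1.25 * n2 + 1.25 * n3) * (lat - Lat0)
                    - (3 * n + 3 * n2 + 2.625 * n3) * Math.Sin(lat - Lat0) * Math.Cos(lat + Lat0)
                    + (1.875 * n2 + 1.875 * n3) * Math.Sin(2 * (lat - Lat0)) * Math.Cos(2 * (lat + Lat0))
                    - (35.0 / 24.0) * n3 * Math.Sin(3 * (lat - Lat0)) * Math.Cos(3 * (lat + Lat0)));

                double I = M + N0;
                double II = nu / 2 * sinLat * cosLat;
                double III = nu / 24 * sinLat * Math.Pow(cosLat, 3) * (5 - tan2 + 9 * eta2);
                double IIIA = nu / 720 * sinLat * Math.Pow(cosLat, 5) * (61 - 58 * tan2 + tan4);
                double IV = nu * cosLat;
                double V = nu / 6 * Math.Pow(cosLat, 3) * (nu / rho - tan2);
                double VI = nu / 120 * Math.Pow(cosLat, 5)
                            * (5 - 18 * tan2 + tan4 + 14 * eta2 - 58 * tan2 * eta2);

                double dLon = lon - Lon0;
                northing = I + II * dLon * dLon + III * Math.Pow(dLon, 4) + IIIA * Math.Pow(dLon, 6);
                easting = E0 + IV * dLon + V * Math.Pow(dLon, 3) + VI * Math.Pow(dLon, 5);
            }
        }

    Fed true OSGB36 coordinates this should match published grid references; for real postcodes the skipped WGS84-to-OSGB36 shift matters, since the two datums differ by tens of metres.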

  • Seeking Functional Programming Lexicon

    - by Randall Schulz
    Hi, Knowing the argot of a field helps me a lot, especially since it allows me to converse intelligently with those who know a lot more than I, so I would like to find a good lexicon of Functional Programming terms. E.g., I repeatedly encounter these: Functor, Arrow, Category, Kleisli, Monad, Monoid, a veritable zoo of Morphisms, etc. I also notice many of these appear with prefixes such as "covariant", "co-", "endo-" etc. Of these, I can say I actually understand Monoid and Covariant and sort of get Monad, but the rest are still gibberish to me. (Note that I don't mean this list as exhaustive and I'm not looking to have these defined or described for me here, I'm looking for learning resources.) Can someone point me towards an FP lexicon? It need not be on-line, as long as it's possible to find it (and it's not a rare volume for which I'd have to pay many tens of dollars).

  • Wrapping a C# service in a console app to debug it.

    - by Jack Smit
    I want to debug a service written in C#, and the old-fashioned way just takes too long. I have to stop the service, start my application that uses the service in debug mode (Visual Studio 2008), start the service, attach to the service process and then navigate in my ASP.NET application to trigger the service. I basically have the service running in the background, waiting for a task. The web application will trigger a task to be picked up by the service. What I would like to do is to have a console application that fires the service, in an effort for me to debug. Is there any simple demo that anybody knows about? Thank you Jack
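
    A minimal sketch of the usual pattern (the MyService name and StartWork/StopWork methods are invented for illustration): route the service's start/stop logic through internal methods, run them directly from Main when the process is interactive, and fall back to ServiceBase.Run under the Service Control Manager.

        using System;
        using System.ServiceProcess;

        public class MyService : ServiceBase   // hypothetical service class
        {
            protected override void OnStart(string[] args) { StartWork(args); }
            protected override void OnStop() { StopWork(); }

            // Shared entry points so a console host can drive the same logic.
            internal void StartWork(string[] args) { /* kick off the background task */ }
            internal void StopWork() { /* shut the worker down */ }

            static void Main(string[] args)
            {
                var service = new MyService();
                if (Environment.UserInteractive)
                {
                    // Launched from a console (e.g. F5 in Visual Studio):
                    // breakpoints hit immediately, no attach step required.
                    service.StartWork(args);
                    Console.WriteLine("Service running; press Enter to stop.");
                    Console.ReadLine();
                    service.StopWork();
                }
                else
                {
                    // Launched by the Service Control Manager as a real service.
                    ServiceBase.Run(service);
                }
            }
        }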

  • Where does the ObjectDataSource cache data?

    - by Jeremy
    I'm considering using an ObjectDataSource as an intermediary between my page controls and my data access layer & object model. Traditionally I have manually created the object and populated it via a series of FindControl statements when I need to insert/update data in the database. I'm hoping that I can use the ObjectDataSource to marshal data between my object and my controls, eliminating that manual code, as long as the ObjectDataSource doesn't come with a lot of overhead. I noticed the EnableCaching property; where does the caching occur? Is it in view state?
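
    For what it's worth, the caching knobs look like this (a minimal sketch; the page, type and method names are invented). EnableCaching stores the Select results in the server-side ASP.NET data cache, not in view state:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public class ProductsPage : Page   // hypothetical page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                var ods = new ObjectDataSource
                {
                    ID = "ProductsSource",
                    TypeName = "MyApp.ProductRepository",   // hypothetical DAL type
                    SelectMethod = "GetProducts",
                    EnableCaching = true,                   // cached on the server
                    CacheDuration = 60,                     // seconds
                    CacheExpirationPolicy = DataSourceCacheExpiry.Sliding
                };
                Controls.Add(ods);
            }
        }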

  • Performance Testing Versus Unit Testing

    - by Mystagogue
    I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind: Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time. Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test). Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit). My question is: do my findings / leanings correspond with the thoughts of the community?
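
    If it helps frame the question: one common compromise is to keep timing checks in the same NUnit project but fence them off with a category, so the fast unit run can exclude them (a sketch; the threshold and names are arbitrary):

        using System;
        using System.Diagnostics;
        using NUnit.Framework;

        [TestFixture]
        public class PerformanceTests
        {
            // Tagged so the runner can exclude it from the quick unit-test pass.
            [Test, Category("Performance")]
            public void Sort_LargeArray_StaysWithinBudget()
            {
                var rng = new Random(42);          // fixed seed: repeatable data
                var data = new int[2000000];
                for (int i = 0; i < data.Length; i++) data[i] = rng.Next();

                var sw = Stopwatch.StartNew();
                Array.Sort(data);
                sw.Stop();

                // A generous budget catches order-of-magnitude regressions
                // without flaking on ordinary machine-load noise.
                Assert.Less(sw.ElapsedMilliseconds, 2000);
            }
        }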

  • My C#.NET application is running slower when the exe is located on the network

    - by leo
    Hi, my C#.NET application runs much slower when the exe is located on the network, and I'm talking about everything; even the graphical display is slower. For example: when a form is already loaded, if I unplug my network cable and minimize and maximize the window, it takes a very long time to redraw itself. I'm using .NET Framework 3.5 SP1. Any idea on the cause? My hypotheses so far: I'm missing some options when building the app? My corporate antivirus checks more stuff because the exe is on the network? The cache of Windows XP SP3 doesn't work the same way when the exe is on the network? The server is a Novell server: maybe this changes something? Thanks for your help! Leo

  • Python web server

    - by Ben
    Hi, I'd like to get suggestions on the best way to serve Python scripts up as web pages. Typically I'd like a way for me and my colleagues to write simple web pages with minimal effort, i.e. we focus on the business logic, e.g. creating simple forms etc. Possibly with some way to manage sessions, but this is a nice-to-have. It doesn't have to be WYSIWYG as they are developers, but we are busy and don't want to spend long turning an idea into reality. It's for internal use, so appearances are not paramount. The software required to enable this should be easy to set up and configure, e.g. adding new directories and Python lib dirs should be easy. My first instinct is Apache or Tomcat with mod_python. Any comments / suggestions welcome. Thanks in advance.

  • jQuery UI Slider - Animation of handle and scroll content not in sync

    - by Mayko
    I'm having trouble getting the jQuery UI Slider customized for my purposes. On load, the slider and its content should automatically animate to a certain position. Ideally it should animate to the very right, then stop and then animate back (as a loop) as long as the user doesn't hover over the scroll content or slider. Following is my default slider setup (http://jsfiddle.net/mayko/j6WuE/1/):

        var scrollbar = $("#slider").slider({
            animate: 3000,
            min: 0,
            max: $("#timeline_content .items").width(),
            change: handleSlider,
            slide: handleSlider
        });

        function handleSlider(e, ui) {
            $("#timeline_content").stop().animate(
                {scrollLeft: ui.value},
                scrollbar.slider("option", "animate"));
        }

    If I now try to set the value like this:

        $('#slider').slider({'value': 1000});

    the scroll content nicely animates, but the handle just jumps to the new position. Even if I click on the slider track itself, the animation of the scroll content and the slider handle are not in sync. Does anyone have a solution?

  • Eclipse warning: "<methodName> has non-API return type <parameterizedType>"

    - by Tenner
    My co-worker and I have come across this warning message a couple of times recently. For the code below:

        package com.mycompany.product.data;

        import com.mycompany.product.dao.GenericDAO;

        public abstract class EntityBean {
            public abstract GenericDAO<Object, Long> getDAO();
            //              ^^^^^^ <-- WARNING OCCURS HERE
        }

    the warning appears in the listed spot as

        EntityBean.getDAO() has non-API return type GenericDAO<T, ID>

    A Google search for "has non-API return type" only shows instances where this message appears in problem lists. I.e., there's no public explanation for it. What does this mean? We can create a usage problem filter in Eclipse to make the message go away, but we don't want to do this if our usage is a legitimate problem. Thanks!

  • MOSS 2007 authentication

    - by Dante
    Hi, I have a MOSS web site configured with Windows Integrated Authentication. I added a couple of local users on the server, added them to SharePoint groups, and I can log into my site (as long as the local user is part of the administrators group... odd). If I add a domain user to the Owners group, I can't access the site with it. Does anybody know what must be done to open access to domain users on a site configured with Windows Authentication or Basic Authentication? Thanks in advance

  • Array Vs. Linked List

    - by Onorio Catenacci
    I apologize--this question may be a bit open-ended, but I think there are probably definite, quantifiable answers to it, so I'll post it anyway. A person I know is trying to learn C++ and software development (+1 to him) and he asked me why someone would want to use a linked list in preference to an array. Coding a linked list is, no doubt, a bit more work than using an array, and he wondered what would justify the additional effort. I gave him the answer I know: insertion of new elements is trivial in a linked list, but it's a major chore in an array. But then I got to thinking about it a bit more. Besides the ease of insertion of a new element into a linked list, are there other advantages to using a linked list to store a set of data vs. storing it in an array? As I said, I'm not meaning to start a long and drawn-out discussion. I'm just looking for other reasons that a developer might prefer a linked list to an array.
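
    A small C# illustration of the core trade-off (sizes are arbitrary): inserting at the front of an array-backed list shifts every existing element, while a linked list just rewires node pointers, at the cost of O(n) indexing and poorer cache locality.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class InsertionDemo
        {
            static void Main()
            {
                const int n = 100000;
                var array = new List<int>();          // contiguous storage
                var linked = new LinkedList<int>();   // one heap node per element

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) array.Insert(0, i);   // O(n) shift each time
                Console.WriteLine("List<int> front inserts:       " + sw.Elapsed);

                sw = Stopwatch.StartNew();
                for (int i = 0; i < n; i++) linked.AddFirst(i);   // O(1) pointer rewiring
                Console.WriteLine("LinkedList<int> front inserts: " + sw.Elapsed);

                // The flip side: array[k] is O(1), while reaching the k-th
                // linked node means walking the chain: O(n).
            }
        }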

  • How Can I List a TDictionary in Alphabetical Order by Key in Delphi 2009?

    - by lkessler
    How can I use a TEnumerator to go through my TDictionary in sorted order by key? I've got something like this:

        var
          Dic: TDictionary<string, string>;
        begin
          Dic := TDictionary<string, string>.Create;
          Dic.Add('Tired', 'I have been working on this too long');
          Dic.Add('Early', 'It is too early in the morning to be working on this');
          Dic.Add('HelpMe', 'I need some help');
          Dic.Add('Dumb', 'Yes I know this example is dumb');

          { The following is what I don't know how to do }
          for each string1 (ordered by string1) in Dic do
            some processing with the current Dic element

          Dic.Free;
        end;

    So I would like to process my dictionary in the order: Dumb, Early, HelpMe, Tired. Unfortunately the Delphi help is very minimal in describing how TEnumerator works and gives no examples that I can find. There is also very little written on the web about using enumerators with generics in Delphi.

  • MFC CTreeCtrl max visible item text length

    - by Steven smethurst
    Hello, I have an application that outputs large amounts of text data to an MFC tree control. When I call SetItemText() with a long string (1000+ chars), only the first ~250 chars are displayed in the control, but when I call GetItemText() on the item, the entire string is returned (1000+ chars). My questions are: Is there a MAX visible string length for an MFC tree control? Is there any way to increase the visible limit? I have included example test code below:

        // In header
        CTreeCtrl m_Tree;

        // In .cpp file
        void CTestDlg::OnDiagnosticsDebug()
        {
            CString csText;
            CString csItemText;
            csText.Format(_T("0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"));
            for (int i = 0; i < 10; i++)
            {
                csItemText += csText;
            }
            bool b = m_Tree.SetItemText(m_Tree.GetRootItem(), csItemText);
            return;
        }

  • How to check whether a server connection is available in Android

    - by Kalai Selvan.G
    Testing the network connection can be done with the following method:

        public boolean isNetworkAvailable() {
            ConnectivityManager cm = (ConnectivityManager)
                    getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo networkInfo = cm.getActiveNetworkInfo();
            return networkInfo != null && networkInfo.isConnected();
        }

    But I don't know how to check the server connection. I tried this method:

        public boolean isConnectedToServer(String url, long timeout) {
            try {
                URL myUrl = new URL(url);
                URLConnection connection = myUrl.openConnection();
                connection.setConnectTimeout((int) timeout); // note: takes an int, not a long
                connection.connect();
                return true;
            } catch (Exception e) {
                // handle your exceptions
                return false;
            }
        }

    It doesn't work... any ideas, guys?

  • How is the iPhone SMS compose view implemented?

    - by erotsppa
    Regarding the SMS compose view as shown in the picture below, I have two questions: 1) How is the text entry box implemented? There is no standard control from the API for it, and the box is smart enough to resize when you press enter OR when the text is too long. Also, the bar resizes with it. How is this done with the least coding? 2) How do you code it such that when the keyboard shows up, the whole view shifts up? Typically when the keyboard shows, it goes over your current view.

  • Binding a Viewbox to a Canvas

    - by Bjarne
    I'm trying to bind a Viewbox to a Canvas that is created dynamically, like so:

        <ListBox.ItemTemplate>
            <DataTemplate>
                <DockPanel>
                    <Viewbox>
                        <ContentPresenter Content="{Binding Canvas}"/>
                    </Viewbox>
                </DockPanel>
            </DataTemplate>
        </ListBox.ItemTemplate>

    This works fine as long as the Canvas doesn't have any children, but as soon as the Canvas has children, it isn't shown. What am I missing here?

  • OnPaint events (invalidated) changing execution order after a period of normal operation (runtime)

    - by Luke Mcneice
    I have 3 data graphs that are painted via their paint events. When I have data that I need to insert into a graph, I call the control's Invalidate() method. The first control's paint event actually creates a bitmap buffer for the other 2 graphs, to avoid repeating a long loop, so the Invalidate calls are made in a specific order (1, 2, 3). This works well; however, when the graphed data reaches the end of the graph window (PictureBox), where the data would normally start scrolling, the paint events begin firing in the wrong order (2, 3, 1). Has anyone come across this before? Why might this be happening?
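
    For what it's worth, the usual way to sidestep the ordering dependency is to build the shared buffer when the data arrives rather than inside any Paint handler, so the paint order stops mattering. A rough WinForms sketch (the control and method names are invented):

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        public class GraphPanel : Panel
        {
            Bitmap sharedBuffer;       // rebuilt per data update, not per paint

            public GraphPanel()
            {
                DoubleBuffered = true; // reduce flicker during scrolling
            }

            public void OnNewData()
            {
                if (sharedBuffer != null) sharedBuffer.Dispose();
                sharedBuffer = RenderBackground();  // the long loop, run exactly once
                Invalidate();          // now any paint order is fine
            }

            Bitmap RenderBackground()
            {
                var bmp = new Bitmap(Math.Max(1, Width), Math.Max(1, Height));
                using (var g = Graphics.FromImage(bmp))
                    g.Clear(Color.Black);           // placeholder for the real drawing
                return bmp;
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                if (sharedBuffer != null)
                    e.Graphics.DrawImageUnscaled(sharedBuffer, 0, 0);
            }
        }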

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives.

    As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform.

    Archive Format Details

    Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris:

    - Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n".
    - The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in /usr/include/ar.h. Immediately following the header comes the data for the member.
    - Members must be padded at the end with newline characters so that they have even length.

    The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members: 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format.

    The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course: nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld).
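    To make the header layout concrete, here is a small C# sketch of a member lister. The field widths follow the standard SVR4 ar_hdr (name[16], date[12], uid[6], gid[6], mode[8], size[10], fmag[2]); special-member handling and error checking are deliberately minimal:

        using System;
        using System.IO;
        using System.Text;

        static class ArReader
        {
            // SVR4 ar_hdr: name[16] date[12] uid[6] gid[6] mode[8] size[10] fmag[2]
            const int HeaderSize = 60;

            public static void List(string path)
            {
                using (var f = File.OpenRead(path))
                {
                    var magic = new byte[8];
                    if (f.Read(magic, 0, 8) != 8 ||
                        Encoding.ASCII.GetString(magic) != "!<arch>\n")
                        throw new InvalidDataException("not an archive");

                    var hdr = new byte[HeaderSize];
                    while (f.Read(hdr, 0, HeaderSize) == HeaderSize)
                    {
                        string name = Encoding.ASCII.GetString(hdr, 0, 16).TrimEnd();
                        long size = long.Parse(
                            Encoding.ASCII.GetString(hdr, 48, 10).Trim()); // ASCII decimal
                        Console.WriteLine("{0,-18} {1,12} bytes", name, size);

                        // Member data is padded to even length with a newline.
                        f.Seek(size + (size & 1), SeekOrigin.Current);
                    }
                }
            }
        }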
    Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides them, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

    When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little-endian host such as x86.

    The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

    The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

    Factors That Limit Solaris Archive Size

    As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

    1. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
       As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs, which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

    2. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

    3. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB limit on the size of any individual member in an archive.

    In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system, unless the size of the archive exceeds 4GB.

    Archive Format Differences Among Unix Variants

    While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

    The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

    In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross-platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

    AIX

    AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

    - These formats use a different magic number than the standard one used by Solaris and other Unix variants.
    - They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
    - They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though.
    - Their member headers are doubly linked, containing offsets to both the previous and next members.

    Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

    BSD

    BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

    GNU/Linux

    The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

    HP-UX

    HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

    IRIX

    IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

    Tru64 Unix (Digital/Compaq/HP)

    Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

    Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

    The Implementation of 64-bit Solaris Archives

    As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
    This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that.

    Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf.

    As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
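
    To round the discussion out, here is a hedged C# sketch of a reader for the classic 32-bit symbol table member described above: a big-endian count, that many big-endian member-header offsets, then the NUL-terminated symbol names in the same order. Per the article, the 64-bit variant is the same structure widened to 8-byte words; treating the word size as a parameter is an assumption about how a reader could accept both, not a statement of the exact Solaris layout.

        using System;
        using System.Text;

        static class ArSymtab
        {
            // data: the raw bytes of the symbol table member.
            // wordSize: 4 for the classic table; 8 assumed for the widened form.
            public static void Dump(byte[] data, int wordSize)
            {
                long count = ReadBE(data, 0, wordSize);
                long namePos = (count + 1) * wordSize;   // names follow the offsets

                for (long i = 0; i < count; i++)
                {
                    long memberOffset = ReadBE(data, (i + 1) * wordSize, wordSize);
                    int end = Array.IndexOf(data, (byte)0, (int)namePos);
                    string name = Encoding.ASCII.GetString(
                        data, (int)namePos, end - (int)namePos);
                    Console.WriteLine("{0} -> member header at byte {1}", name, memberOffset);
                    namePos = end + 1;
                }
            }

            // Symbol table integers are big-endian regardless of host, as noted above.
            static long ReadBE(byte[] b, long off, int width)
            {
                long v = 0;
                for (int i = 0; i < width; i++) v = (v << 8) | b[off + i];
                return v;
            }
        }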

  • NSLog crashing app using 3.1.3 software

    - by Matt Facer
    Hi guys - the other day I had a bug submitted for my app from a user on an iPod touch with 3.1.3 software. It was a strange bug, as no one else has submitted it yet. Long story short, it appears that anywhere I have NSLog() in code, it will actually crash the app. I tried stripping out ALL the code other than NSLog(@"hello") and running on my iPhone (3.1.3); it indeed did crash. I removed the NSLog and it worked. Has anyone else had this problem? PS) I am now aware that we shouldn't release an app with NSLog still in use... so they've all gone now!

  • Avoiding EXC_BAD_ACCESS when using the delegate pattern

    - by Kenny Winker
    I have a view controller, and it creates a "downloader" object, which has a reference to the view controller (as a delegate). The downloader calls back the view controller if it successfully downloads the item. This works fine as long as you stay on the view, but if you navigate away before the download is complete I get EXC_BAD_ACCESS. I understand why this is happening, but is there any way to check if an object is still allocated? I tried to test using delegate != nil and [delegate respondsToSelector:], but it chokes.

        if (!self.delegate ||
            ![self.delegate respondsToSelector:@selector(downloadComplete:)]) {
            // delegate is gone, go away quietly
            [self autorelease];
            return;
        } else {
            // delegate is still around
            [self.delegate downloadComplete:result];
        }

    I know I could, a) have the downloader objects retain the view controller, or b) keep an array of downloaders in the view controller, and set their delegate values to nil when I deallocate the view controller. But I wonder if there is an easier way, where I just test if the delegate address contains a valid object?

  • iPad Orientation Paradigm

    - by JustinXXVII
    I'm not a super awesome designer so this new paradigm has me a little cranky. The iPad is not supposed to have a standard orientation, and should/shall display screen contents at whichever orientation the user decides. This has me sort of stumped. I can keep my UI designed the way I want it in landscape mode, but switching to portrait, I just can't determine the best way to present app content. I know it's all speculation at this point, but what are the chances we can override the autoRotateToOrientation to only include the orientation of our choice? Apple ignored the HIG on a lot of issues for iPhone, including splash screens, saving state, etc. I know we can't really argue with Apple, but doesn't it sound slightly ridiculous to reject an app because it won't rotate to portrait? I've come a long way porting some code to iPad and it works great in landscape mode. I guess only time will tell. What do you all think?

  • Page Footer not showing in Crystal Report

    - by Mike C.
    Hello, I am using Crystal Reports in Visual Studio 2008. I have about 5 pages worth of static text that needs to appear at the top of my report, so I put it in the report header section. I have a page footer section on the page that shows the page number. This does not show, and I suspect it has something to do with the long report header. How can I make the page footer show with a large report header? Edit: The Page Footer is actually appearing once on the last page. The Report Header takes up 5 pages and there isn't a page footer on any of those pages.

  • How to display an image from a DB in a datagrid?

    - by Saverio Tedeschi
    I hope this hasn't been answered yet, but I've googled for quite a long time and haven't been able to get it working. In brief, I have a SL4 page with a DataGrid filled according to parameters passed from the previous page, so I fill the context in code, with a RIA query (with parameters as well). So far, so good. I get the XAML-declared columns as expected, except for the Photo templated column, which holds just an image control. I think I can use the LoadingRow event, but can someone give me some code on how to achieve my goal (the image is in the context from the DB)? Thanks in advance

  • Cross Platform build

    - by Neeraj
    I have an application in which we use a hand-made build system. The reason for this is portability; the application should be portable across Linux/Mac/Windows. There are some port-specific files that are not updated by the default build system. What I do now is update the files manually or have a script do this. However, I am thinking of switching to a cross-platform build system like CMake or SCons. Are there associated problems w.r.t. portability? Will it pay off in the long run? And if so, what should be my choice: "cmake", "scons", or some other (if any)? Thanks,

  • ruby rails loop causes server freeze

    - by Darkerstar
    Hi all: I am working on a Ruby on Rails project on Windows. I have Ruby 1.8.6 and Rails 2.3.5 installed. Everything was fine until I tried to implement a comet process. I have the following code written to respond to a long-poll JavaScript request, but every time this function is called, it hangs the whole Rails server; no second request can get in until the timeout. (I know there is Juggernaut, but I'd like to implement one myself first :) Is this due to my server setup? The project will be deployed on a Linux server with an Nginx and Passenger setup; will it suffer the same problem?

        def comet_hook
          timeout(5) do
            while true do
              key = 'station_' + station_id.to_s + '_message_lastwrite'
              if Rails.cache.exist?(key)
                @cache_time = DateTime.parse(Rails.cache.read(key))
                if @cache_time > hook_start
                  @messages = @station.messages_posted_after(hook_start)
                  hook_start = @cache_time
                  break
                end
              end
            end
            ...
          end
        end

    Also, with the Rails memory store cache I keep getting a "cannot modify frozen object" error, so the above script only worked for me when I switched to the file cache. :(
