Search Results

Search found 18151 results on 727 pages for 'upside down'.

Page 577/727

  • Trouble with __VA_ARGS__

    - by Noah Roberts
    The question "C++ preprocessor __VA_ARGS__ number of arguments" has an accepted answer that doesn't work for me. I've tried with MSVC++ 10 and g++ 3.4.5. I also crunched the example down into something smaller and started trying to get some information printed out to me in the error: template < typename T > struct print; #include <boost/mpl/vector_c.hpp> #define RSEQ_N 10,9,8,7,6,5,4,3,2,1,0 #define ARG_N(_1,_2,_3,_4,_5,_6,_7,_8,_9,_10,N,...) N #define ARG_N_(...) ARG_N(__VA_ARGS__) #define XXX 5,RSEQ_N #include <iostream> int main() { print< boost::mpl::vector_c<int, ARG_N_( XXX ) > > g; // ARG_N doesn't work either. } It appears to me that the argument for ARG_N ends up being 'XXX' instead of 5,RSEQ_N, much less 5,10,...,0. The error output of g++ more specifically says that only one argument is supplied. I have trouble believing that the answer would be proposed and then accepted if it totally failed to work, so what am I doing wrong? Why is XXX being interpreted as the argument and not being expanded? In my own messing around everything works fine until I try to pass __VA_ARGS__ off to a macro containing some named parameters followed by ..., like so: #define WTF(X,Y,...) X , Y , __VA_ARGS__ #define WOT(...) WTF(__VA_ARGS__) WOT(52,2,5,2,2) I've tried both with and without () in the various macros that take no input.

    Read the article

  • Custom back button click event on pushed view controller

    - by TechFusion
    Hello, I have a pushed view controller and I programmatically load a WebView and a custom rounded-rect button in the bottom-right corner of the view. -(void)loadView { CGRect frame = CGRectMake(0.0, 0.0, 480, 320); WebView = [[[UIWebView alloc] initWithFrame:frame] autorelease]; WebView.backgroundColor = [UIColor whiteColor]; WebView.scalesPageToFit = YES; WebView.autoresizingMask = (UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleTopMargin); WebView.autoresizesSubviews = YES; WebView.exclusiveTouch = YES; WebView.clearsContextBeforeDrawing = YES; self.roundedButtonType = [[UIButton buttonWithType:UIButtonTypeRoundedRect] retain]; self.roundedButtonType.frame = CGRectMake(416.0, 270.0, 44, 19); [self.roundedButtonType setTitle:@"Back" forState:UIControlStateNormal]; self.roundedButtonType.backgroundColor = [UIColor grayColor]; [self.roundedButtonType addTarget:self action:@selector(back:) forControlEvents:UIControlEventTouchUpInside]; self.view = WebView; [self.view addSubview: self.roundedButtonType ]; [WebView release]; } This is the action that I have added as the back button for navigation. -(void)back:(id)sender{ [self.navigationController popViewControllerAnimated:YES]; } -(void)viewDidUnload{ self.WebView = nil; self.roundedButtonType = nil; } -(void)dealloc{ [roundedButtonType release]; [super dealloc]; } Here, when the Back button is clicked it shows the previous view, but the application gets stuck in that view and GDB shows a "Program received signal: EXC_BAD_ACCESS" message. How do I resolve this issue? Thanks,

    Read the article

  • Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to

    - by Simon B.
    I have a folder with ~10 000 subfolders. Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to set up inotify for each folder? (i.e. I lose out if I have to do this for 10k+ folders). I guess yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345 My problem: I need to keep folders time-sorted with most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are ok. Opening a file (typically MS Excel) and closing it again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later do I go and look for the newest file in its folder, since the filedate of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also in case several files from the same folder are used/created, it makes sense to buffer timestamping of that folder's parents to at least get a bunch of updates collapsed into one delayed update. I'm looking for a linux solution. I have some code that can be run on a windows server, most of the queuing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb Available APIs: The relative of inotify on windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx Samba? Patching samba source is a possibility, but perhaps there are already hooks available? Other possibilities, like client side (various windows versions) and spying on file activities in order to update folders recursively?
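
    A rough sketch of one possible approach (not from the question) follows, using the third-party Python "watchdog" package, which drives inotify for a whole subtree through a single recursive observer instead of one hand-registered watch per folder. The /SharedRoot path, the 60-second settle delay and the "bubble the newest mtime up to the root" behaviour are assumptions made for illustration; a real version would also want a lock around the shared queue.

        # Sketch: one recursive observer for the whole subtree, a dict of folders
        # with recent activity, and a delayed pass that bubbles the newest file
        # mtime up to /SharedRoot once a folder has gone quiet.
        import os
        import time
        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        ROOT = "/SharedRoot"          # assumption
        SETTLE_SECONDS = 60           # assumption: wait until Excel et al. are done

        pending = {}  # folder path -> time of the last change seen in it

        class QueueFolder(FileSystemEventHandler):
            def on_any_event(self, event):
                # (re)queue the folder; repeated events just push the time forward
                pending[os.path.dirname(event.src_path)] = time.time()

        observer = Observer()
        observer.schedule(QueueFolder(), ROOT, recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(5)
                now = time.time()
                for folder, seen in list(pending.items()):
                    if now - seen < SETTLE_SECONDS:
                        continue
                    del pending[folder]
                    files = [os.path.join(folder, f) for f in os.listdir(folder)]
                    times = [os.path.getmtime(f) for f in files if os.path.isfile(f)]
                    if not times:
                        continue
                    newest = max(times)
                    parent = folder
                    while parent.startswith(ROOT):   # bubble up to the root
                        os.utime(parent, (newest, newest))
                        parent = os.path.dirname(parent)
        finally:
            observer.stop()
            observer.join()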

    Read the article

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my Transaction log the other day and it was something crazy like 15GB. I ran the following code: USE mydb GO BACKUP LOG mydb WITH TRUNCATE_ONLY GO DBCC SHRINKFILE(mydb_log,8) GO Which worked fine, shrank it down to 8MB...but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task, and hooking it onto my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan: Full backups every night Transaction log backups once a day, late morning (maybe hook the Log shrinking onto this...it doesn't need to be shrunk every day though) Or maybe I just run it once a week, after I run a full backup task? What do you all think?
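
    If a SQL Agent job or maintenance-plan task is ruled out, the same two statements the question already runs can be fired on a schedule from any scripting host. A minimal Python/pyodbc sketch is below; the connection string, database name and once-a-day interval are assumptions, and whether shrinking this often is wise at all is a separate question.

        # Assumed sketch: fire the question's own two statements once a day from
        # Python via pyodbc. BACKUP cannot run inside a user transaction, hence
        # autocommit=True on the connection.
        import time
        import pyodbc

        CONN_STR = ("DRIVER={SQL Server};SERVER=localhost;"
                    "DATABASE=mydb;Trusted_Connection=yes")

        def backup_and_shrink_log():
            conn = pyodbc.connect(CONN_STR, autocommit=True)
            try:
                cur = conn.cursor()
                cur.execute("BACKUP LOG mydb WITH TRUNCATE_ONLY")
                cur.execute("DBCC SHRINKFILE(mydb_log, 8)")
            finally:
                conn.close()

        if __name__ == "__main__":
            while True:
                backup_and_shrink_log()
                time.sleep(24 * 60 * 60)   # once a day, e.g. after the full backup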

    Read the article

  • HttpWebRequest.BeginGetResponse() does not return the second time

    - by evilfred
    Hi, I make one HttpWebRequest and call GetResponse() on it to get a synchronous response. Then after processing that response, I make a new HttpWebRequest and call BeginGetResponse() on it. Since BeginGetResponse() is an asynchronous call I expect it to return right away, but it doesn't! Why not? Here is some stripped down sample code: HttpWebRequest request = RequestFactory.MakeSessionCreationRequest(); try { // Get the response from the server. using (WebResponse response = request.GetResponse()) { using (Stream responseStream = response.GetResponseStream()) { ; // Get the response. } } ; // Process the response. } catch (WebException e) { Logger("Caught WebException when attempting to connect: " + e); return; } // Make the second, asynchronous request. HttpWebRequest msgRequest = RequestFactory.MakeMessageRequest(); IAsyncResult result = msgRequest.BeginGetResponse( new AsyncCallback(HandleResponse), msgRequest); // PROBLEM: This line is never reached!!! Logger("Message send started");

    Read the article

  • GIT clone repo across local file system

    - by Jon
    Hi all, I am a complete Noob when it comes to GIT. I have just been taking my first steps over the last few days. I set up a repo on my laptop, pulled down the Trunk from an SVN project (had some issues with branches, haven't got them working), but all seems ok there. I now want to be able to pull or push from the laptop to my main desktop. The reason being the laptop is handy on the train as I spend 2 hours a day travelling and can get some good work done. But my main machine at home is great for development. So I want to be able to push / pull from the laptop to the main computer when I get home. I thought the simplest way of doing this would be to just have the code folder shared out across the LAN and do: git clone file://192.168.10.51/code Unfortunately this doesn't seem to be working for me: I open a Git Bash prompt in C:\code (the shared folder for both machines), type the above command, and this is what I get back: Initialized empty Git repository in C:/code/code/.git/ fatal: 'C:/Program Files (x86)/Git/code' does not appear to be a git repository fatal: The remote end hung up unexpectedly How can I share the repository between the two machines in the simplest way possible? There will be other locations that will be official storage points and places where the other devs and CI server etc will pull from; this is just so that I can work on the same repo across two machines. Thanks

    Read the article

  • jQuery ajax not preloading images

    - by George Wiscombe
    I have a list of galleries; when you click on the title of a gallery it pulls in the contents (HTML with images). When the content is pulled in it preloads the HTML but not the images. Any ideas? This is the JavaScript I'm using: $('#ajax-load').ajaxStart(function() { $(this).show(); }).ajaxStop(function() { $(this).hide();}); // PORTFOLIO SECTION // Hide project details on load $('.project > .details').hide(); // Slide details up / down on click $('.ajax > .header').click(function () { if ($(this).siblings(".details").is(":hidden")) { var detailUrl = $(this).find("a").attr("href"); var $details = $(this).siblings(".details"); $.ajax({ url: detailUrl, data: "", type: "GET", success: function(data) { $details.empty(); $details.html(data); $details.find("ul.project-nav").tabs($details.find(".pane"), {effect: 'fade'}); $details.slideDown("slow"); }}); } else {$(this).siblings(".details").slideUp();} return false; }); You can see this demonstrated at http://www.georgewiscombe.com Thanks in advance!

    Read the article

  • Deleting objects with FK constraints in Spring/Hibernate

    - by maxdj
    This seems like such a simple scenario to me, yet I cannot for the life of me find a solution online or in print. I have several objects like so (trimmed down): @Entity public class Group extends BaseObject implements Identifiable<Long> { private Long id; private String name; private Set<HiringManager> managers = new HashSet<HiringManager>(); private List<JobOpening> jobs; @ManyToMany(fetch=FetchType.EAGER) @JoinTable( name="group_hiringManager", joinColumns=@JoinColumn(name="group_id"), inverseJoinColumns=@JoinColumn(name="hiringManager_id") ) public Set<HiringManager> getManagers() { return managers; } @OneToMany(mappedBy="group", fetch=FetchType.EAGER) public List<JobOpening> getJobs() { return jobs; } } @Entity public class JobOpening extends BaseObject implements Identifiable<Long> { private Long id; private String name; private Group group; @ManyToOne @JoinColumn(name="group_id", updatable=false, nullable=true) public Group getGroup() { return group; } } @Entity public class HiringManager extends User { @ManyToMany(mappedBy="managers", fetch=FetchType.EAGER) public Set<Group> getGroups() { return groups; } } Say I want to delete a Group object. Now there are dependencies on it in the JobOpening table and in the group_hiringManager table, which cause the delete function to fail. I don't want to cascade the delete, because the managers have other groups, and the JobOpenings can be groupless. I have tried overriding the remove() function of my GroupManager to remove the dependencies, but it seems like no matter what I do they persist, and the delete fails! What is the right way to remove this object?

    Read the article

  • How can I find "People's Contacts" folders via Outlook's object model?

    - by Dennis Palmer
    I have some code that locates all the contact folders that a user has access to by iterating through the Application.Session.Stores collection. This works for the user's contacts and also all the public contacts folders. It also finds all the contacts folders in additional mailbox accounts that the user has added via the Tools - Account Settings... menu command. However, this requires the user to have full access to the other person's account. When a user only has access to another person's contacts, then that person's contacts show up under the "People's Contacts" group in the Contacts view. How do I find those contact folders that don't show up under Session.Stores? In order to see the other user's contacts folder without adding access to their full mailbox, click File - Open - Other User's Folder... from the Outlook menu. In the dialog box, enter the other user's name and select Contacts from the Folder type drop down list. Here's the code (minus the error checking and logging) I'm using to find a list of all the user's Outlook contact folders. I know this can (and maybe should) be done using early binding to the Outlook.Application type, but that doesn't affect the results. EnumerateFolders is recursive so that it searches all sub folders. Dim folderList = New Dictionary(Of String, String) Dim outlookApp = CreateObject(Class:="Outlook.Application") For Each store As Object In outlookApp.Session.Stores EnumerateFolders(folderList, store.GetRootFolder) Next Private Sub EnumerateFolders(ByRef folderList As Dictionary(Of String, String), ByVal folder As Object) Try If folder.DefaultItemType = 2 Then folderList.Add(folder.EntryID, folder.FolderPath.Substring(2)) End If For Each subFolder As Object In folder.Folders EnumerateFolders(folderList, subFolder) Next Catch ex As Exception End Try End Sub
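
    One avenue worth noting (an assumption about the goal, sketched in Python/pywin32 rather than the question's VB.NET, since the Outlook object-model calls are the same): folders opened via File - Open - Other User's Folder... do not hang off Session.Stores, but NameSpace.GetSharedDefaultFolder can open another user's default Contacts folder directly by resolved name. The user name below is a placeholder.

        # Python/pywin32 sketch of opening another user's Contacts folder by name;
        # "Jane Doe" is a placeholder for the person who shared their contacts.
        import win32com.client

        olFolderContacts = 10  # OlDefaultFolders enumeration value

        outlook = win32com.client.Dispatch("Outlook.Application")
        ns = outlook.Session

        recipient = ns.CreateRecipient("Jane Doe")
        if recipient.Resolve():
            contacts = ns.GetSharedDefaultFolder(recipient, olFolderContacts)
            print("%s (%d items)" % (contacts.FolderPath, contacts.Items.Count))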

    Read the article

  • Launch command on remote Windows machine, given admin credentials

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. I want to run commands to download, install and configure apps on it, all without having to log on even once. I have figured out how to do this to a remote, domain-joined computer using WMI. I can even use psexec to get what I want, as long as the remote computer is part of the domain. However, I have NOT been able to do this for a remote computer on EC2. Here are some specific restrictions: 1) The remote computer is not part of my domain, hence no Kerberos 2) The remote computer does not have a cert I trust, or vice versa I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
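
    A sketch of the call shape (not a verified fix for the EC2 case) using Tim Golden's "wmi" package, which lets WMI connect with explicit local-account credentials over DCOM; whether DCOM/RPC actually gets through to a non-domain EC2 instance depends on its firewall and local-account policy. Host, account and the bootstrap command are placeholders.

        # Sketch only: connect to the remote box with explicit credentials and
        # spawn one process via WMI. Host, account and command are placeholders.
        import wmi

        conn = wmi.WMI("ec2-host-or-ip",
                       user="Administrator",
                       password="secret")

        process_id, return_value = conn.Win32_Process.Create(
            CommandLine=r"cmd /c powershell -File C:\bootstrap.ps1")

        print("pid=%s return=%s" % (process_id, return_value))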

    Read the article

  • Returning large collections from a WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF Service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds 100s of places bumping the maxMessageSize up to the multi-gigabyte range, which seems foolish). But, when I'm returning a collection of objects (~150 items) I am exceeding the default 64k. I'm almost to the point of returning my own class which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize so that I can implement paging on the client side -- this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great, it's just returning large collections where I'm having issues. For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications, .NET console apps, and maybe some other non-UI apps. For the UI applications, I would like to keep them responsive and keep the messageSize low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.

    Read the article

  • Record Disappeared from Mysql Table, How Can I Find Out What Happened?

    - by Jascha
    I got the fire alarm phone call, AIM messages and email today from a client stating "The site is down! WTF happened?!" Well, after a little digging, it turns out one of the records in a table had been wiped clean, but without removing the row itself. So, I had the representation of data, but a bunch of empty fields. (Needless to say, I need to write a catch for this into my code.) My real question is: where can I figure out what happened? I've got access to phpmyadmin and that's about it. I found some access logs in the root directory of my server, but that just tells me the client was in the admin area I built editing that record; I'd like to know specifically what they did that made all of the data go away (what query was run, etc.). Is it possible without real server admin access? Is there a neat little PHP-to-MySQL class that returns data like this? Thanks in advance. -Jascha

    Read the article

  • Passing data between ViewControllers versus doing local Fetch in each VC

    - by Tofrizer
    Hi All, I'm developing an iPhone app using Core Data and I'm looking for some general advice and recommendations on whether its acceptable to pass data between ViewControllers versus doing a local fetch in each ViewController as you navigate to it. Ordinarily I would say it all depends on various factors (e.g. performance etc) but the passing data approach is so prevalent in my app and I'm spooked by all the stories about Apple rejecting apps because of not conforming to their standard guidelines. So let me put another way -- is it non-standard to pass data between VC's? The reason I pass data so much is because each ViewController is just another view on to data present in my object model / graph. Once I have a handle on my first object in the first view controller (which I of course do have to fetch), I can use the existing object composition / relationships to drill down into the next level of detail into data and so I just pass these objects to the next VC. Separately, one possible downside with this passing-data-to-each-VC approach is I don't benefit from (what I perceive to be) the optimisation/benefits that NSFetchedResultsController provides in terms of efficient memory usage and section handling. My app is read-only but I do have one table with 5000 rows and I'm curious if I am missing out on NSFetchedResultsController benefits. Any thoughts on this as well? Can I somehow still benefit from NSFetchedResultsController goodness without having to do a full fetch (as I would have already passed in the data from my previous VC)? Thanks a lot.

    Read the article

  • Using Silverlight for Views in ASP.Net MVC - a bad idea?

    - by bplus
    I'm currently writing a small application for use internally at my office. I started out teaching myself some MVC (I've been a C# dev for 3 years). One of the main requirements is editable grids - I quickly realised that Silverlight (I have zero Silverlight experience) could be a big help in this. I've managed to create a proof of concept of getting MVC and Silverlight to talk back and forth by combining these two techniques: Creating a Rest API using MVC MVC SilverLight I also got some help on stackoverflow: silverlight-grids-mvc-http-post Essentially all I'm doing is embedding a Silverlight object in a view, serializing the Model data as JSON and passing it to Silverlight (using init params written into the response). The Silverlight object can post data back to the controller as JSON. So far this seems like it could work quite well. However I am a bit concerned that I could be painting myself into a corner with this approach, as I don't have much experience with either technology, so I'm worried I'm going to get hit with something further down the line that I won't be able to work around. Has anybody else tried doing this? Any advice would be much appreciated!

    Read the article

  • Dynamic columns/rows

    - by Fuego DeBassi
    Wondering: does anyone know of any good articles explaining the CSS technique that allows multiple instances of a class to flow down the page relative to the items above it? I'm not explaining it that well. Veerle Pieters does it on this page: http://veerle.duoh.com/belgiangraphicdesign Although I'm not sure I want to use a technique like hers that requires entering the height per element via her EE installation. I made a little graphic of what I am trying to achieve: http://cl.ly/71163510ce9d294f9f33 The key is I need a robust technique for doing it. Something where the markup could be as simple as: <div class="box"> Number 1 </div> <div class="box"> Number 2 </div> <div class="box"> Number 3 </div> <div class="box"> Number 4 </div> <div class="box"> Number 5 </div> ... Would love any pointers in the right direction.

    Read the article

  • Same route nested in multiple resources ember.js

    - by Daniel Upton
    I'm building an ember.js app which has a model called "Programme". A user can drill down to a programme by going: Genre > Subgenre > Programme or Folder > List > Programme Here's my router: this.resource('mylists', { path: '/' }, function() { this.resource('folder', { path: '/folder/:folder_id' }, function() { this.resource('list', { path: '/list/:list_id' }, function() { this.resource('programme', { path: '/programme/:programme_id' }); }); }); }); this.resource('catalogue', function() { this.resource('genre', { path: '/genre/:genre_id' }, function() { this.resource('subgenre', { path: '/subgenre/:subgenre_id' }, function() { this.resource('programme', { path: '/programme/:programme_id' }); }); }); }); The UI needs to be deeply nested (the genre view renders in the outlet of the catalogue template, the subgenre in the outlet of the genre template... and so forth). The problem I have is that, since both generated routes are called ProgrammeRoute, when I linkTo the programme route inside the list template it actually goes to the programme route nested in the subgenre route. What should I be doing here? To work around it I've named one route ListProgrammeRoute and the other SubgenreProgrammeRoute, but that leads to some duplication.

    Read the article

  • Better way of looping to detect change.

    - by Dremation
    As of now I'm using a while(true) method to detect changes in memory. The problem with this is that it kills the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without causing a huge performance loss. Anyone have ideas on this? memScan = new Thread(ScanMem); public static void ScanMem() { int i = addy.Length; while (true) { Thread.Sleep(30000); //I do this to cut down on cpu usage for (int j = 0; j < i; j++) { string[] values = addy[j].Split(new char[] { Convert.ToChar(",") }); //MessageBox.Show(values[2]); try { if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString()) { //Ok, it changed lets do our work //work if (Globals.Working) return; SomeFunction("Results: " + values[2].ToString(), "Memory"); Globals.Working = true; }//end if }//end try catch { } }//end for }//end while }//end void

    Read the article

  • Where can I find a professional image gallery built on a javascript framework?

    - by user278457
    I'm looking to find a galleria replacement, hopefully using jQuery but other javascript frameworks such as prototype or mootools are fine too. I used galleria a while back, and I need a similar product now. Unfortunately, the devkick.com domain seems to have disappeared in the meantime and I'm wary of using products that aren't actively maintained. I'm willing to pay up to $50 per site for licensing costs, if the product meets my needs. I'm specifically looking for a gallery with the following features: Every image in the gallery preloads asap, not as the user clicks "next" Minimalist default css to keep my subsequent styling headaches down, preferably a "darkroom" style by default, much as galleria looks Each element that constructs the image gallery should be simple and logical to reference with CSS As easy to install as adding a css class to a single unordered list No dependencies other than the core jQuery/other library, including "easing" and other effects must be optional Works on browsers back to IE6, Firefox 3, Safari (and iPhone), Chrome, Opera Has a javascript API that lets me trigger callback functions on common events such as "user clicks next" or "image loads" degrades gracefully without javascript, either displays images as a list, or just displays the first image in the list bonus: The gallery can display other content, such as video or external sites, like the modal boxes at shadowbox-js.com well documented minimal bandwidth requirement - .js file should be ~10kb minified bonus: The gallery source is hosted on a reliable CDN like google's bonus: Thumbnails for images do not appear until the main image has loaded bonus: includes ability to set parameters with JSON to change common behaviours, such as slide/fade transitions or automatic image switch every X seconds

    Read the article

  • Script to add user to MediaWiki

    - by Marquis Wang
    I'm trying to write a script that will create a user in MediaWiki, so that I can run a batch job to import a series of users. I'm using mediawiki-1.12.0. I got this code from a forum, but it doesn't look like it works with 1.12 (it's for 1.13) $name = 'Username'; #Username (MUST start with a capital letter) $pass = 'password'; #Password (plaintext, will be hashed later down) $email = 'email'; #Email (automatically gets confirmed after the creation process) $path = "/path/to/mediawiki"; putenv( "MW_INSTALL_PATH={$path}" ); require_once( "{$path}/includes/WebStart.php" ); $pass = User::crypt( $pass ); $user = User::createNew( $name, array( 'password' => $pass, 'email' => $email ) ); $user->confirmEmail(); $user->saveSettings(); $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 ); $ssUpdate->doUpdate(); Thanks!

    Read the article

  • Starting a process in one HTTP call and getting results in another

    - by KillianDS
    Hi, I'm writing a very simple testing framework for my application; the design isn't perfect, but I don't have time to write something more complex. Essentially, I have a client and a server application; on my server I want a small Python web server to start the server application with given test sequences on a GET or POST call. Also, the application prints some test data to stderr which I'd like to catch and return in another HTTP call. At the moment I have this: from subprocess import Popen, PIPE from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer p = None class MyHandler(BaseHTTPRequestHandler): def do_GET(self): global p if self.path.endswith("start/"): p = Popen(["./bin/Release/simplex264","BBB-360","127.0.0.1"], stderr=PIPE) print 'started' return elif self.path.endswith("getResults/"): self.wfile.write(p.stderr.read()) return self.send_error(404,'File Not Found: %s' % self.path) def main(): try: server = HTTPServer(('localhost', 9876), MyHandler) print 'Started server...' server.serve_forever() except KeyboardInterrupt: print 'Shutting down...' server.socket.close() if __name__ == '__main__': main() Which 'works', except for one part: when I try to open http://localhost:9876/start/, it does not return until the process has ended. However, the 'started' appears in my shell immediately (I added this because I thought the Popen call would only return after execution). However, I do not know the inner workings of Popen and BaseHTTPRequestHandler perfectly and do not really know where it goes wrong. Is there any way to make this work asynchronously?
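
    A sketch of the handler in the same Python 2 / BaseHTTPServer style, with two changes that are assumptions about the hang: close_fds=True keeps the child process from inheriting (and holding open) the HTTP connection's socket, and each branch writes headers plus a Content-Length so the client is not left waiting for the connection to close. Note that p.stderr.read() still blocks until the child closes stderr.

        # Sketch: same structure as the question's handler, Python 2 style.
        from subprocess import Popen, PIPE
        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        p = None

        class MyHandler(BaseHTTPRequestHandler):
            def _reply(self, text):
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(text)))
                self.end_headers()
                self.wfile.write(text)

            def do_GET(self):
                global p
                if self.path.endswith("start/"):
                    # close_fds=True: the child must not inherit the request socket
                    p = Popen(["./bin/Release/simplex264", "BBB-360", "127.0.0.1"],
                              stderr=PIPE, close_fds=True)
                    self._reply("started\n")
                elif self.path.endswith("getResults/"):
                    self._reply(p.stderr.read())  # still blocks until the child exits
                else:
                    self.send_error(404, "File Not Found: %s" % self.path)

        if __name__ == "__main__":
            HTTPServer(("localhost", 9876), MyHandler).serve_forever()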

    Read the article

  • Returning searched results in an array in Java without ArrayList

    - by Crystal
    I started down this path of implementing a simple search in an array for a hw assignment without knowing we could use ArrayList. I realized it had some bugs in it and figured I'd still try to know what my bug is before using ArrayList. I basically have a class where I can add, remove, or search from an array. public class AcmeLoanManager { public void addLoan(Loan h) { int loanId = h.getLoanId(); loanArray[loanId - 1] = h; } public Loan[] getAllLoans() { return loanArray; } public Loan[] findLoans(Person p) { //Loan[] searchedLoanArray = new Loan[10]; // create new array to hold searched values searchedLoanArray = this.getAllLoans(); // fill new array with all values // Looks through only valid array values, and if Person p does not match using Person.equals() // sets that value to null. for (int i = 0; i < searchedLoanArray.length; i++) { if (searchedLoanArray[i] != null) { if (!(searchedLoanArray[i].getClient().equals(p))) { searchedLoanArray[i] = null; } } } return searchedLoanArray; } public void removeLoan(int loanId) { loanArray[loanId - 1] = null; } private Loan[] loanArray = new Loan[10]; private Loan[] searchedLoanArray = new Loan[10]; // separate array to hold values returned from search } When testing this, I thought it worked, but I think I am overwriting my member variable after I do a search. I initially thought that I could create a new Loan[] in the method and return that, but that didn't seem to work. Then I thought I could have two arrays. One that would not change, and the other just for the searched values. But I think I am not understanding something, like shallow vs deep copying???....
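
    The likely culprit is that searchedLoanArray = this.getAllLoans() copies only the reference, so nulling entries in the "search result" also nulls them in loanArray. The same aliasing can be shown in a few lines of Python (an illustration of the concept, not Java code); the Java equivalent of the copy would be something like looping the elements into a fresh Loan[] before filtering.

        # Aliasing vs. copying, in miniature (Python stands in for the Java arrays)
        loans = ["loan-1", "loan-2", "loan-3"]

        alias = loans        # no copy: a second name for the SAME list object
        alias[1] = None      # "filter out" an entry in the search result...
        print(loans)         # ['loan-1', None, 'loan-3']  the original is damaged

        loans = ["loan-1", "loan-2", "loan-3"]
        copied = list(loans) # a new list holding references to the same items
        copied[1] = None
        print(loans)         # ['loan-1', 'loan-2', 'loan-3']  original untouched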

    Read the article

  • Is this trivial function silly?

    - by Chas. Owens
    I came across a function today that made me stop and think. I can't think of a good reason to do it: sub replace_string { my $string = shift; my $regex = shift; my $replace = shift; $string =~ s/$regex/$replace/gi; return $string; } The only possible value I can see to this is that it gives you the ability to control the default options used with a substitution, but I don't consider that useful. My first reaction upon seeing this function get called is "what does this do?". Once I learn what it does, I am going to assume it does that from that point on. Which means if it changes, it will break any of my code that needs it to do that. This means the function will likely never change, or changing it will break lots of code. Right now I want to track down the original programmer and beat some sense into him or her. Is this a valid desire, or am I missing some value this function brings to the table?

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement data syncing. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience? For me, I'm thinking of something like this: 1) Store Last Modified Date in iPhone 2) Upon launching, send a message like getNewData.php?lastModifiedDate=... 3) Server will process and send back only modified data from last time. 4) This data is formatted as so: <+><data id="..."></data></+> // add this to SQLite/CoreData <-><data id="..."></data></-> // remove this <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value I don't want to make <+, <-, <%... for each attribute as well, because it would be too complicated, so probably when I receive a <% field, I would just remove the data with the specified id and then add it again (assuming id here is not some auto-incremented field). 5) Once everything is downloaded and updated, I will update the Last Modified Date field. The main problem with this strategy is: if the network goes down while I am updating something, the Last Modified Date is not yet updated, so the next time I relaunch the app I will have to go through the same thing again. Not to mention potential inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data change), the user has to wait a long time until new data is available. Should I use Last-Modified-Date for each of the data fields and update data gradually?
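
    One way to blunt the "network dies mid-update" problem described above: apply the whole delta and the new Last Modified Date inside a single database transaction, so either everything lands or nothing does. A rough sketch of the shape in Python + sqlite3 (a stand-in for the phone-side store, not iPhone code); the table names, column names and delta structure are assumptions.

        # Sketch: adds, removes, updates and the new Last Modified Date commit
        # together, or not at all. Table/column names are assumptions.
        import sqlite3

        def apply_delta(db_path, delta, new_last_modified):
            conn = sqlite3.connect(db_path)
            try:
                with conn:  # one transaction: commit on success, roll back on error
                    for row in delta["added"]:
                        conn.execute(
                            "INSERT OR REPLACE INTO data (id, value) VALUES (?, ?)",
                            (row["id"], row["value"]))
                    for row_id in delta["removed"]:
                        conn.execute("DELETE FROM data WHERE id = ?", (row_id,))
                    for row in delta["modified"]:
                        conn.execute("UPDATE data SET value = ? WHERE id = ?",
                                     (row["value"], row["id"]))
                    conn.execute("UPDATE sync_state SET last_modified = ?",
                                 (new_last_modified,))
            finally:
                conn.close()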

    Read the article

  • Using a UITableViewController with a small-sized table?

    - by rpj
    When using a UITableViewController, the initWithStyle: method automatically creates the underlying UITableView with - according to the documentation - "the correct dimensions". My problem is that these "correct dimensions" seem 320x460 (the iPhone's screen size), but I'm pushing this TableView/Controller pair into a UINavigationController which is itself contained in a UIView, which itself is about half the height of the screen. No frame or bounds wrangling I can come up with seems to correctly reset the table's size, and as such it's "too long", meaning there are a collection of rows that are pushed off the bottom of the screen and are not visible nor reachable by scrolling. So my question comes down to: what is the proper way to tell a UITableViewController to resize its component UITableView to a specified rectangle? Thanks! Update I've tried all the techniques suggested here to no avail, but I did find one interesting thing: if I eschew the UINavigationController altogether (which I'm not yet willing to do for production, but as an experiment), and add the table view as a direct subview of the enclosing view I mentioned, the frame size given is respected. The very moment I re-introduce the UINavigationController into the mix, no matter if it is added as a subview before or after the table view, and no matter if alloc/init it before or after the table view is added as a subview, the result is the same as it was before. I'm beginning to suspect UINavigationController isn't much of a team player... Update 2 The suggestion to check frame size after the table view on screen was a good one: turns out that the navigation controller is in fact resizing it some time in between load and display. My solution, hacky at best, has been to cache the frame given on load and to reset it if changed at the beginning of tableView:cellForRowAtIndexPath:. Why there you ask? Because it's the one place I found that worked, that's why! I don't consider this a solution as it's obviously improper, but for the benefit of anyone else reading, it does seem to work.

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broader (in terms of the columns) and deeper (in terms of number of records). We observed that DataSet performs much better in terms of performance but runs into constraints: as the number of records increases, say to 100K+, we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run with memoryLimit set to 85. Since this is a multi-threaded app, there could be multiple threads processing different queries and building different DataSets, so we run into the exception sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints; if there is some sort of disconnect it has to start over again or leaves open connections on the DB side, and in the worst case it takes down the service completely, etc. So, we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
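
    The "hybrid" being asked about can be phrased as: keep a forward-only reader open against the server, but hand the processing code one bounded, disconnected chunk at a time instead of the whole result set. A language-neutral sketch of that shape using Python's DB-API fetchmany is below; the connection, query and 1000-row chunk size are assumptions, and the same pattern maps onto paging a SqlDataReader into small DataTables.

        # Sketch: stream with a forward-only cursor, materialise one bounded
        # chunk at a time. Connection, query and chunk size are assumptions.
        import sqlite3

        def fetch_in_chunks(conn, query, params=(), chunk_size=1000):
            cur = conn.cursor()
            cur.execute(query, params)
            while True:
                rows = cur.fetchmany(chunk_size)   # one small, disconnected batch
                if not rows:
                    break
                yield rows                         # caller processes it, then it is freed

        if __name__ == "__main__":
            conn = sqlite3.connect("workflow.db")
            for chunk in fetch_in_chunks(conn, "SELECT id, payload FROM work_items"):
                for record in chunk:
                    pass  # per-record processing goes here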

    Read the article
