Search Results

Search found 11991 results on 480 pages for 'cedric copy'.

Page 410 of 480

  • Read from plist instead of code array

    - by BoSoud
    Hi guys, I'm using the "UITableView - Searching table view" tutorial (http://www.iphonesdkarticles.com/2009/01/uitableview-searching-table-view.html). It's a really nice, easy tutorial, but I'm having a hard time trying to read the data from a plist instead. Here is what I changed:

        - (void)viewDidLoad {
            [super viewDidLoad];

            //Initialize the array.
            listOfItems = [[NSMutableArray alloc] init];
            TableViewAppDelegate *AppDelegate = (TableViewAppDelegate *)[[UIApplication sharedApplication] delegate];
            listOfItems = [AppDelegate.data objectForKey:@"Countries"];

            //Initialize the copy array.
            copyListOfItems = [[NSMutableArray alloc] init];

            //Set the title
            self.navigationItem.title = @"Countries";

            //Add the search bar
            self.tableView.tableHeaderView = searchBar;
            searchBar.autocorrectionType = UITextAutocorrectionTypeNo;

            searching = NO;
            letUserSelectRow = YES;
        }

    This is how I read the plist in my AppDelegate.m:

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            NSString *Path = [[NSBundle mainBundle] bundlePath];
            NSString *DataPath = [Path stringByAppendingPathComponent:@"Data.plist"];
            NSDictionary *tempDict = [[NSDictionary alloc] initWithContentsOfFile:DataPath];
            self.data = tempDict;
            [tempDict release];

            // Configure and show the window
            [window addSubview:[navigationController view]];
            [window makeKeyAndVisible];
        }

    And this is my plist:

        <plist version="1.0">
        <dict>
            <key>Countries</key>
            <array>
                <array>
                    <string>USA</string>
                </array>
                <dict/>
            </array>
        </dict>
        </plist>

    I get this error:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '*** -[NSCFArray objectForKey:]: unrecognized selector sent to instance 0x1809dc0'

    Help please, I'm stuck with this.
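
    The exception suggests that something later in the tutorial's table code calls objectForKey: on what is now a plain NSArray (the tutorial treats each element of listOfItems as a dictionary). A minimal sketch of one way out, assuming the table code is adjusted to treat listOfItems as a flat array of country names; it also fixes the leak and missing retain on the assignment in viewDidLoad:

        <!-- Data.plist: a flat array of names under the Countries key -->
        <plist version="1.0">
        <dict>
            <key>Countries</key>
            <array>
                <string>USA</string>
                <string>Canada</string>
            </array>
        </dict>
        </plist>

        // viewDidLoad: keep a retained, mutable copy instead of overwriting
        // the freshly alloc'ed array with an unretained pointer
        listOfItems = [[NSMutableArray alloc]
            initWithArray:[AppDelegate.data objectForKey:@"Countries"]];

        // cellForRowAtIndexPath: then reads a row with, for example:
        // NSString *country = [listOfItems objectAtIndex:indexPath.row];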

    Read the article

  • Ruby on Rails: Uploading a modified site

    - by Califer
    I'm having a heck of a time getting a site I modified to work correctly. I didn't set the site up originally, and since the person who set it up no longer works with me, I had to learn Ruby just to make some changes. I made all the changes on the development server and everything worked fine. Then I did a diff between production and development and moved only my changes over. Unfortunately, when I loaded my changes onto the production server I got a lot of errors. I've changed all of the permissions to 755, which took care of being able to access anything at all, but after that I started getting a lot of 500 errors. Nothing shows up in the production.log file. I really have no clue what's going wrong, except that perhaps the server is not noticing the new changes. I moved the old site to a backup folder, and the new site crashes whenever it goes to anything that I've changed. In particular, I added a link to a new setup with an extra controller/model/view group. It works fine on development, but in production it gives me a 404. Yes, I did copy all the files up. I even put everything back how it was, but the website is still showing the broken version. I checked the tmp/cache folder but it was empty. Running dispatch.fcgi shows the old site (which I expected), but it still shows the flawed new site when I connect through a browser. I've been tearing my hair out trying to get this to work. Any ideas as to how I can get this working?
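
    One likely explanation (an assumption, since the setup isn't fully described) is that production runs with config.cache_classes = true, so the long-lived FastCGI processes keep serving the old code until they are restarted. A rough sketch of the usual steps on the production box, with example paths; under Passenger a simple touch tmp/restart.txt would do the same job:

        cd /var/www/mysite                      # application root (example path)
        RAILS_ENV=production rake tmp:clear     # clear cached sessions/pages/sockets
        tail -n 50 log/production.log           # confirm errors are actually logged here

        # plain Apache + FastCGI: recycle the dispatchers so new code is loaded
        pkill -f dispatch.fcgi
        sudo apachectl graceful

    If the 500 errors still leave production.log empty, Apache's own error_log is the next place to look, since the failure may be happening before Rails is reached.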

    Read the article

  • More efficient R / Sweave / TeXShop work-flow?

    - by user594795
    I've now got everything working properly on my Mac OS X 10.6 machine so that I can create decent-looking LaTeX documents with Sweave that include snippets of R code, output, and LaTeX formatting together. Unfortunately, I feel like my workflow is a bit clunky and inefficient. Using TextWrangler, I write LaTeX code and R code (each R code chunk surrounded by <<= above and @ below) together in one .Rnw file. After saving changes, I call the .Rnw file from R using the Sweave command:

        Sweave(file="/Users/mymachine/Documents/Assign4.Rnw", syntax="SweaveSyntaxNoweb")

    In response, R outputs the following message:

        You can now run LaTeX on 'Assign4.tex'

    So then I find the .tex file (Assign4.tex) in the R directory and copy it over to the folder (~/Documents/) where the .Rnw file is sitting, to keep everything in one place. Then I open the .tex file in TeXShop and compile it there into PDF format. It is only at this point that I get to see any changes I have made to the document and whether it 'looks nice'. Is there a way that I can compile everything with one button click? Specifically, it would be nice to call Sweave / R directly from either TextWrangler or TeXShop. I suspect it might be possible to script this in Terminal, but I have no experience with Terminal. Please let me know if there's anything else I can do to streamline or improve my workflow.
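
    One single-step option is a small shell script (a sketch; it assumes R and pdflatex are on the PATH and the file name is just an example). Saved as an executable .engine file in ~/Library/TeXShop/Engines, TeXShop will run it from the Typeset button and pass the .Rnw file as the first argument, keeping the .tex and .pdf next to the .Rnw:

        #!/bin/bash
        # sweave-and-pdf.sh — usage: ./sweave-and-pdf.sh Assign4.Rnw
        set -e
        RNW="$1"
        TEX="${RNW%.Rnw}.tex"
        R CMD Sweave "$RNW"      # writes the .tex in the current folder
        pdflatex "$TEX"          # compile the result to PDF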

    Read the article

  • Are there any less costly alternatives to Amazon's Relational Database Services (RDS)?

    - by swapnonil
    Hi all, I have the following requirement. I have a database containing the contact and address details of at least 2000 members of my school alumni organization. We want to store all that information in a relational model so that:

        1. The data can be created and edited on demand.
        2. The data is always backed up and is simple to restore in case the master copy becomes unusable.
        3. All sensitive personal information residing in this database is guaranteed to be available only to authorized users.

    This database won't be online in the first 6 months; it will go online only after a website is built on top of it. I am not a DBA and I don't want to spend time doing things like backups. I thought Amazon's RDS, with its automatic backup facility, was the perfect solution for our needs. The only problem is that, being a voluntary organization, we cannot spare the monthly $100 to $150 fees this service demands. So my question is: are there any less costly alternatives to Amazon's RDS?

    Read the article

  • No event is fired when a custom data-bound control is placed in a DataRepeater control in Windows Forms

    - by Remo
    Hi, custom events in a custom data-bound control are not firing inside a DataRepeater control. When I debugged it, I found that the DataRepeater control recreates the control using Activator.CreateInstance and copies the properties and events; in my case, copying the events doesn't copy the custom events that I hooked up. For example:

        public class MyClass : Control
        {
            public event EventHandler MyEvent;

            protected virtual void OnMyEvent()
            {
                if (this.MyEvent != null)
                {
                    this.MyEvent(this, EventArgs.Empty);
                }
            }

            private int selectedIndex = -1;
            public int SelectedIndex
            {
                get { return this.selectedIndex; }
                set
                {
                    if (this.selectedIndex != value)
                    {
                        this.selectedIndex = value;
                        this.OnMyEvent();
                    }
                }
            }

            // DataBinding stuff goes here
        }

        public Form1()
        {
            InitializeComponent();

            ArrayList list = new ArrayList();
            list.Add("one");
            this.dataRepeater1.DataSource = list;   // one repeater

            MyClass test = new MyClass();
            test.DataSource = GetDataTable();
            this.dataRepeater1.ItemTemplate.Controls.Add(test);
            test.MyEvent += new EventHandler(test_MyEvent);
        }

        // This event should fire when the selected index of the DataTable changes.
        // It fires when the control is placed directly on the form, but not when
        // it is placed inside the DataRepeater control.
        private void test_MyEvent(object sender, EventArgs e)
        {
            // this event is never fired
        }

        private DataTable GetDataTable()
        {
            // create a DataTable and return it
        }

    Any help appreciated. Thanks,

    Read the article

  • PHP sql with foreach loop variable problem

    - by anthony
    This is really getting frustrating. I have a text file that I'm reading for a list of part numbers, which goes into an array. I'm using the following foreach loop to search a database for matching numbers:

        $file = file('parts_array.txt');
        foreach ($file as $newPart) {
            $sql = "SELECT products_sku FROM products WHERE products_sku='" . $newPart . "'";
            $rs = mysql_query($sql);
            $num_rows = mysql_num_rows($rs);
            echo $num_rows;
            echo "<br />";
        }

    The problem is that I'm getting 0 rows returned from mysql_num_rows. I can type the SQL statement without the variable and it works perfectly. I can even echo the SQL statement from this script, copy and paste it from the browser, and it works. But for some reason I'm not getting any records when I use the variable. I've used variables in SQL statements tons of times, but this really has me stumped.
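
    The usual culprit here (an assumption, since the input file isn't shown) is that file() keeps the trailing newline on every element, so each $newPart ends with "\n" and never matches an SKU exactly. A minimal sketch:

        <?php
        // Drop the newline that file() leaves on each element (and skip blank lines).
        $file = file('parts_array.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

        foreach ($file as $newPart) {
            $newPart = trim($newPart);                       // belt and braces
            $sku     = mysql_real_escape_string($newPart);   // escape before interpolating
            $rs      = mysql_query("SELECT products_sku FROM products WHERE products_sku='$sku'");
            echo mysql_num_rows($rs), "<br />";
        }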

    Read the article

  • Uploading an XML file directly to FTP

    - by Joshua Maerten
    I put this directly below a button:

        XmlDocument doc = new XmlDocument();
        XmlElement root = doc.CreateElement("Login");

        XmlElement id = doc.CreateElement("id");
        id.SetAttribute("userName", usernameTxb.Text);
        id.SetAttribute("passWord", passwordTxb.Text);

        XmlElement name = doc.CreateElement("Name");
        name.InnerText = nameTxb.Text;
        XmlElement age = doc.CreateElement("Age");
        age.InnerText = ageTxb.Text;
        XmlElement Country = doc.CreateElement("Country");
        Country.InnerText = countryTxb.Text;

        id.AppendChild(name);
        id.AppendChild(age);
        id.AppendChild(Country);
        root.AppendChild(id);
        doc.AppendChild(root);

        // Get the object used to communicate with the server.
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create("ftp://users.skynet.be");
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.UsePassive = false;

        // This example assumes the FTP site uses anonymous logon.
        request.Credentials = new NetworkCredential("fa490002", "password");

        // Copy the contents of the file to the request stream.
        StreamReader sourceStream = new StreamReader();
        byte[] fileContents = Encoding.UTF8.GetBytes(sourceStream.ReadToEnd());
        sourceStream.Close();
        request.ContentLength = fileContents.Length;

        Stream requestStream = request.GetRequestStream();
        requestStream.Write(fileContents, 0, fileContents.Length);
        requestStream.Close();

        FtpWebResponse response = (FtpWebResponse)request.GetResponse();
        response.Close();

        MessageBox.Show("Created SuccesFully!");
        this.Close();

    But I always get an error from the StreamReader about the path. What do I need to put there? The idea is that the user creates an account, and when I press the button an XML file is saved to ftp://users.skynet.be/testxml/ with a file name of usernameTxb.Text + ".xml".
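
    There is no file on disk here, so the StreamReader isn't needed at all: the in-memory XmlDocument can be turned into bytes directly, and the target file name has to be part of the request URI. A sketch along those lines (the host, folder and credentials are taken from the question; error handling omitted):

        // Build the target URI including the folder and file name.
        string uri = "ftp://users.skynet.be/testxml/" + usernameTxb.Text + ".xml";
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(uri);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.UsePassive = false;
        request.Credentials = new NetworkCredential("fa490002", "password");

        // Serialize the XmlDocument straight to bytes — no StreamReader needed.
        byte[] fileContents = Encoding.UTF8.GetBytes(doc.OuterXml);
        request.ContentLength = fileContents.Length;

        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(fileContents, 0, fileContents.Length);
        }

        using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
        {
            MessageBox.Show("Created successfully: " + response.StatusDescription);
        }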

    Read the article

  • Exporting database from external server (without SSH access)

    - by Derek Carlisle
    Our current website is hosted by the design agency who originally built it, but we are bringing development of the website in house and therefore need to export the database from their server and import it into ours. We have FTP and phpMyAdmin access, but we don't have SSH access to the server. I was hoping to run a PHP script that would mysqldump the database, compress it and then copy it across to our server using scp:

        $backupFile = $_SERVER['DOCUMENT_ROOT'].'/backup' . date("Y-m-d-H-i-s") . '.gz';
        system("mysqldump -h DB_HOST -u DB_USER -pDB_PASS DB_NAME | gzip > $backupFile");
        exec("sshpass -p PASSWORD scp -r -P PORT_NUMBER $backupFile [email protected]:/path/to/directory/");

    I have run this locally from the command line and it worked fine, although I had to install sshpass (the hosting server might not have it installed). I was also hoping to run it from the browser, as I don't have command-line access on the hosting server, but it didn't work and produced no errors. Can anyone recommend how I can export from the server that I don't have SSH access to and import it into my server? Thanks
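
    Since FTP access is available, one workaround (a sketch; it assumes system()/exec() are not disabled on the host, which is worth confirming with phpinfo(), and the file name is just an example) is to let the PHP script only write the dump into the web root and then pull the file from your own server instead of pushing it with scp:

        # on your own server, after the PHP script has written the dump:
        DUMP=backup2014-01-01-00-00-00.gz        # example name produced by the script
        wget "ftp://FTP_USER:[email protected]/$DUMP"
        gunzip "$DUMP"
        mysql -u LOCAL_USER -p local_db < "${DUMP%.gz}"

    If exec()/system() turn out to be disabled (a common reason for the silent failure from the browser), phpMyAdmin's Export tab with gzip compression is the low-tech fallback for a database of this size.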

    Read the article

  • Script says Undefined

    - by user1058887
    I have this script that lets the user input some text, which then gets translated into something else. It works only when the input is a single letter; when there is more than one letter it says "undefined". Here is the script:

        function copyit(theField) {
            var tempval = eval("document." + theField);
            tempval.focus();
            tempval.select();
            therange = tempval.createTextRange();
            therange.execCommand("Copy");
        }

        function results() {
            var behavior = "form";
            var text = document.csrAlpha.csrresult2.value;
            var ff22 = text.toLowerCase();

            var Words = new Array();
            Words["b"] = "Dadada";
            Words["bob"] = "Robert";
            Words["flower"] = "Banana";
            Words["brad"] = "Chair";

            var trans = "";
            var regExp = /[\!@#$%^&*(),=";:\/]/;
            var stringCheck = regExp.exec(ff22);

            if (!stringCheck) {
                if (ff22.length > 0) {
                    for (var i = 0; i < ff22.length; i++) {
                        var thisChar = ff22.charAt(i);
                        trans += Words[thisChar] + " ";
                    }
                } else {
                    trans += "Please write something.";
                }
            } else {
                trans += "You entered invalid characters. Remove them and try again.";
            }
            document.csrAlpha.csrresult.value = trans;
        }

    Please insert your text below:
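
    The loop walks the input one character at a time (charAt), so only single-letter keys like "b" ever match the Words table; every other character looks up a missing key and produces undefined. A sketch that looks up whole words instead (the same Words table from the question is assumed to be in scope):

        function results() {
            var text = document.csrAlpha.csrresult2.value.toLowerCase();
            var words = text.split(/\s+/);          // split the input into words
            var trans = "";
            for (var i = 0; i < words.length; i++) {
                var w = words[i];
                if (w === "") continue;
                // fall back to the original word when there is no translation
                trans += (Words[w] !== undefined ? Words[w] : w) + " ";
            }
            document.csrAlpha.csrresult.value = trans;
        }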

    Read the article

  • jQuery question

    - by OM The Eternity
    I am using the jQuery library to copy the text entered in one text field to another text field when a checkbox is clicked, as follows:

        <html>
        <head>
            <script src="js/jquery.js"></script>
        </head>
        <body>
            <form>
                <input type="text" name="startdate" id="startdate" value=""/>
                <input type="text" name="enddate" id="enddate" value=""/>
                <input type="checkbox" name="checker" id="checker" />
            </form>
            <script>
                $(document).ready(function(){
                    $("input#checker").bind("click", function(o){
                        if ($("input#checker:checked").length) {
                            $("#enddate").val($("#startdate").val());
                        } else {
                            $("#enddate").val("");
                        }
                    });
                });
            </script>
        </body>
        </html>

    Now I want the checkbox to be selected by default, so that the data entered in the start date field gets copied automatically while the checkbox is checked. Which event should I use in the jQuery script? Please help me resolve this issue.
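
    One way to do it (a sketch; it assumes the copy should happen as the user types while the box is ticked) is to mark the checkbox checked in the HTML and listen on the start-date field as well as the checkbox:

        <input type="checkbox" name="checker" id="checker" checked="checked" />

        <script>
        $(document).ready(function () {
            function syncDates() {
                if ($("#checker").is(":checked")) {
                    $("#enddate").val($("#startdate").val());
                } else {
                    $("#enddate").val("");
                }
            }
            // copy while typing and whenever the checkbox is toggled
            $("#startdate").bind("keyup change", syncDates);
            $("#checker").bind("change", syncDates);
            syncDates();   // run once on load, since the box starts checked
        });
        </script>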

    Read the article

  • Google App Engine error: Object Manager has been closed

    - by newbie
    I got the following error from Google App Engine when I was trying to iterate over a list in a JSP page with EL:

        Object Manager has been closed

    I worked around the problem with the following code, but I don't think it is a very good solution:

        public List<Item> getItems() {
            PersistenceManager pm = getPersistenceManager();
            Query query = pm.newQuery("select from " + Item.class.getName());
            List<Item> items = (List<Item>) query.execute();

            List<Item> items2 = new ArrayList<Item>();  // this line solved my problem
            Collections.copy(items, items2);            // and this also

            pm.close();
            return (List<Item>) items;
        }

    When I tried to use pm.detachCopyAll(items) it gave the same error. I understood that detachCopyAll() should do the same thing I did, and since that method is part of DataNucleus it should be used instead of my own workaround. So why doesn't detachCopyAll() work at all?
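
    A sketch of the usual pattern (an assumption about the intent: the detached copies, not the original lazy result list, are what should leave the method, and they have to be detached before pm.close()):

        public List<Item> getItems() {
            PersistenceManager pm = getPersistenceManager();
            try {
                Query query = pm.newQuery("select from " + Item.class.getName());
                @SuppressWarnings("unchecked")
                List<Item> items = (List<Item>) query.execute();

                // Return the *detached* copies so they can still be read by the
                // JSP/EL after the PersistenceManager has been closed.
                return new ArrayList<Item>(pm.detachCopyAll(items));
            } finally {
                pm.close();
            }
        }

    If nested fields still come back empty after detaching, the fetch group or fetch depth may need widening before the detachCopyAll call.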

    Read the article

  • Redirect requests only if the file is not found?

    - by ZenBlender
    I'm hoping there is a way to do this with mod_rewrite and Apache, but maybe there is another way to consider too. On my site, I have directories set up for re-skinned versions of the site for clients. If the web root is /home/blah/www, a client directory would be /home/blah/www/clients/abc. When you access the client directory via a web browser, I want it to use any requested files in the client directory if they exist. Otherwise, I want it to use the file in the web root. For example, let's say the client does not need their own index.html. Therefore, some code would determine that there is no index.html in /home/blah/www/clients/abc and will instead use the one in /home/blah/www. Keep in mind that I don't want to redirect the client to the web root at any time, I just want to use the web root's file with that name if the client directory has not specified its own copy. The web browser should still point to /clients/abc whether the file exists there or in the root. Likewise, if there is a request for news.html in the client directory and it DOES exist there, then just serve that file instead of the web root's news.html. The user's experience should be seamless. I need this to work for requests on any filename. If I need to, for example, add a new line to .htaccess for every file I might want to redirect, it rather defeats the purpose as there is too much maintenance needed, and a good chance for errors given the large number of files. In your examples, please indicate whether your code goes in the .htaccess file in the client directory, or the web root. Web root is preferred. Thanks for any suggestions! :)
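
    A sketch of one way to do this with mod_rewrite, placed in an .htaccess file inside each client directory (e.g. /home/blah/www/clients/abc/.htaccess); it assumes AllowOverride permits rewrite rules there. Requests for files that don't exist in the client folder are internally rewritten to the same path in the web root, so the browser keeps showing /clients/abc/...:

        RewriteEngine On

        # If the requested file or directory is not present in this client folder...
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d

        # ...serve the copy from the web root instead (internal rewrite, not a redirect).
        RewriteRule ^(.*)$ /$1 [L]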

    Read the article

  • How to "wrap" implementation in C#

    - by igor
    Hello, I have these classes in C# (.NET Framework 3.5), described below:

        public class Base
        {
            public int State { get; set; }
            public virtual int Method1() { }
            public virtual string Method2() { }
            ...
            public virtual void Method10() { }
        }

        public class B : Base
        {
            // some implementation
        }

        public class Proxy : Base
        {
            private B _b;

            public Proxy(B b)
            {
                _b = b;
            }

            public override int Method1()
            {
                if (State == Running)
                    return _b.Method1();
                else
                    return base.Method1();
            }

            public override string Method2()
            {
                if (State == Running)
                    return _b.Method2();
                else
                    return base.Method2();
            }

            public override void Method10()
            {
                if (State == Running)
                    _b.Method10();
                else
                    base.Method10();
            }
        }

    I want to get to something like this:

        public Base GetStateDependentImplementation()
        {
            if (State == Running)   // may be some other rule
                return _b;
            else
                return base;        // compile error
        }

    so that my Proxy implementation becomes:

        public class Proxy : Base
        {
            ...
            public override int Method1()
            {
                return GetStateDependentImplementation().Method1();
            }

            public override string Method2()
            {
                return GetStateDependentImplementation().Method2();
            }
            ...
        }

    Of course, I could aggregate a plain Base implementation:

        public class RepeaterOfBase : Base   // no overrides, just inheritance
        {
        }

        public class Proxy : Base
        {
            private B _b;
            private RepeaterOfBase _base;

            public Proxy(B b, RepeaterOfBase aBase)
            {
                _b = b;
                _base = aBase;
            }

            ...
            public Base GetStateDependentImplementation()
            {
                if (State == Running)
                    return _b;
                else
                    return _base;
            }
            ...
        }

    But an instance of the Base class is very large, and I have to avoid keeping an additional copy of it in memory. So I have to:

        simplify my code
        "wrap" the implementation
        avoid code duplication
        avoid aggregating any additional instance of the Base class (duplication)

    Is it possible to reach these goals?
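
    One possibility (a sketch, not the only pattern; State and Running are the symbols from the question, and System.Func from .NET 3.5 is assumed) is a single generic dispatch helper that receives the base implementation as a delegate. A method-group conversion on base.MethodX binds non-virtually to Base's code, so no second Base instance is needed:

        public class Proxy : Base
        {
            private readonly B _b;

            public Proxy(B b) { _b = b; }

            // One place that decides which implementation to run.
            private T Dispatch<T>(Func<B, T> viaB, Func<T> viaBase)
            {
                return State == Running ? viaB(_b) : viaBase();
            }

            public override int Method1()
            {
                return Dispatch(b => b.Method1(), base.Method1);
            }

            public override string Method2()
            {
                return Dispatch(b => b.Method2(), base.Method2);
            }
        }

    Method10 returns void, so a second Dispatch overload taking Action delegates would be needed alongside the Func one; the duplication per method shrinks to a single forwarding line either way.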

    Read the article

  • Hosting an Access DB

    - by Mitciv
    Hey, I'm inexperienced in hosting DBs and have always had the luxury of someone else getting the DB set up. I'm going to help a friend get a web page set up; I've got experience with ASP.NET MVC, so I'm going with that. They want a search page that queries a DB and displays the results. My question is about getting the DB set up and hosted. They currently just have an Access DB on a local computer, and there is basically only one table that would need to be queried for the search. What is the best approach to making this table/DB accessible? They would like to keep the main copy of the DB on the local machine, so copying the entire DB over to the hosted site would be time consuming; could just the one table that's needed be copied to the host? Should I try to convince them to make changes on the hosted DB and just make copies of that for their local machines? Any suggestions are welcome. Again, I'm a total noob when it comes to hosting databases. Thanks

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use to implement multiple 'Worker' PCs co-ordinated from a central queue manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and for which I have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the workers from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio. At present, I have a script that makes a web-service call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the co-ordinator can be Windows or Linux. In my initial searches I have come across the following, and I know that some are cross-dependent:

        Celery (with RabbitMQ)
        Twisted
        Django

    Using a framework, rather than home-brewing, seems to make more sense to me right now, and I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide, so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further for developing a server/worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).
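
    For the queue-manager/worker split described here, a Celery-style sketch might look like the following (the broker URL, task name and conversion call are all placeholders; the same worker command runs on each Windows box):

        # tasks.py — shared by the coordinator and every worker
        from celery import Celery

        app = Celery('audio',
                     broker='amqp://guest@queue-manager-host//',
                     backend='rpc://')       # so workers can report results back

        def run_existing_conversion(source_url, config_url):
            # placeholder for the existing standalone conversion routine
            return 0.0

        @app.task
        def convert(source_url, config_url):
            """Fetch the source, run the conversion, report details back."""
            runtime = run_existing_conversion(source_url, config_url)
            return {'source': source_url, 'runtime_seconds': runtime}

        # Each worker runs:   celery -A tasks worker
        # Coordinator side:
        #   result = convert.delay('ftp://files/ep01.wav', 'http://api/jobs/123')
        #   print(result.get(timeout=3600))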

    Read the article

  • Scanning, checking, converting, and copying values in C: how to?

    - by ZaZu
    Hi there, it's been a while now and I'm still trying to get a certain piece of code to work. I asked some questions about different commands before, but now I hope this is the final one (combining all questions in one piece of code). I basically want to:

        Scan an input (should it be a character?)
        Check if it's a number
        If not, return an error
        Convert that character into a float number
        Copy the value to another variable (I called it imp here)

    Here is what I came up with:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>

        main() {
            int digits;
            float imp = 0;
            int alpha;

            do {
                printf("Enter input\n\n");
                scanf("\n%c", &alpha);

                digits = isdigit(alpha);
                if (digits == 0) {
                    printf("error\n\n");
                }
                imp = atof(alpha);
            } while (digits == 0);
        }

    The problem is that this code does not work at all. It tells me that atof must take a const char, and whenever I try changing things around it keeps failing. I'm frustrated and forced to ask here, because I believe I have tried a lot and I keep failing, but I won't be relieved until I get it to work. So I really need your help. Please tell me why this code isn't working and what I am doing wrong. I am still learning C and really appreciate your help.
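
    A sketch of one way to do the whole sequence safely (it reads a whole line rather than a single character, so multi-digit and fractional numbers work too, which goes a little beyond the single-char approach in the question):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char line[64];
            float imp = 0.0f;

            for (;;) {
                printf("Enter input\n");
                if (fgets(line, sizeof line, stdin) == NULL)
                    return 1;                      /* EOF or read error */

                char *end;
                double value = strtod(line, &end); /* convert the leading number */

                /* valid if something was converted and only whitespace follows */
                if (end != line && strspn(end, " \t\r\n") == strlen(end)) {
                    imp = (float)value;            /* copy the value into imp */
                    break;
                }
                printf("error\n");
            }

            printf("imp = %f\n", imp);
            return 0;
        }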

    Read the article

  • Using Java classes (a whole module with Spring/Hibernate dependencies) in Grails

    - by Sitaram
    I have a Java/Spring/Hibernate application with a payment module. The payment module has some domain classes for payment subscriptions and transactions etc., and the corresponding Hibernate mapping files are there. This module uses applicationContext.xml for some of the configuration it needs. Also, this module has a PaymentService which uses a PaymentDAO to do all database-related work. Now, I want to use this module as it is (with no or minimal rewriting) in my other application (a Grails application). I want to bring in the payment module as a jar, or copy the source files into the src/java folder in Grails. With that background, I have the following queries:

        1. Will the module's existing applicationContext.xml Spring configuration work as it is in Grails? Does it merge with the rest of Grails's Spring config?
        2. Where do I put the applicationContext.xml? On the classpath? Should src/java work?
        3. Can I bundle the applicationContext.xml in the jar (if I use the jar option) and overwrite it in Grails if anything needs to be changed? Are there multiple-bean-definition problems in that case?
        4. Will PaymentService be recognized as a regular service? Will it be auto-injected in controllers and/or other services?
        5. Will PaymentDAO use the datasource configuration of Grails?
        6. Where do I put the hbm files of this module? Can I bundle the hbm files in the jar and overwrite them in Grails if anything needs to be changed? Which hbm files are picked up, or will there be problems with that?

    Too many questions! :) All these concerns are before actually trying anything; I am going to try this in the next few days (busy currently). Any help is appreciated. Thanks. Sitaram Meena
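
    On the first two questions, Grails can pull an existing Spring XML context into its own application context; a sketch (assuming a Grails version whose BeanBuilder supports importBeans, and with the resource path as an example) in grails-app/conf/spring/resources.groovy:

        // grails-app/conf/spring/resources.groovy
        beans = {
            // Merge the payment module's existing Spring XML definitions into the
            // Grails application context. The XML can sit in the module jar
            // (classpath:) or under grails-app/conf.
            importBeans('classpath:applicationContext.xml')
        }

    Beans imported this way live in the same context as Grails's own, so a bean named paymentService can be injected into controllers and services with a plain "def paymentService" property, and a name clash with a Grails artefact would indeed surface as a duplicate bean definition.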

    Read the article

  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, we've got a process which redeploys ASP.NET websites; the code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem that's arisen is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent to the point it didn't matter, but on some of our higher-traffic sites it now happens the majority of the time. I'm wondering if anyone has a workaround or alternative approach that I haven't thought of. Currently my ideas are:

        1. Simply retry each file until it works. That's going to cause errors for a short time, though, which isn't really good.
        2. Deploy to a new folder and update IIS's webroot to the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy.

    Does anyone know the best way to do this, or whether it's possible to do #2 without running the publishing application as a user who has admin access? (I'm willing to grant it special privileges, but I'd prefer to stop short of administrator.)
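
    One common trick (a sketch; the paths and helper method are examples, and it trades the errors for a brief "down for maintenance" page) is to drop an app_offline.htm into the target site's root: ASP.NET then serves that page and unloads the application, which usually releases the locks the worker process holds, until the file is removed again.

        // inside the deployment app, before copying files
        string webRoot = @"C:\inetpub\wwwroot\targetsite";          // example path
        string offline = Path.Combine(webRoot, "app_offline.htm");

        File.WriteAllText(offline, "<html><body>Updating, back in a moment.</body></html>");
        try
        {
            // give in-flight requests a moment to drain, then copy as before
            Thread.Sleep(TimeSpan.FromSeconds(5));
            CopyDeploymentFiles(sourceFolder, webRoot);   // the existing copy loop
        }
        finally
        {
            File.Delete(offline);                         // brings the site back up
        }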

    Read the article

  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform on top of Nucleus RTOS. It uses the Nucleus threading system, but it has no support for explicit thread-local storage, i.e. the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs, and the platform doesn't have user-space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How can the __thread keyword be mapped to explicit thread-local storage? I have read many articles, but nothing clearly answers these basic questions:

        Is a __thread variable different for each thread?
        How do I write to it and read from it?
        Does each thread have exactly one copy of the variable?

    The following is the pthread-based implementation:

        pthread_key_t m_key;

        struct Data : Noncopyable {
            Data(T* value, void* owner) : value(value), owner(owner) {}
            int* value;
        };

        inline ThreadSpecific()
        {
            int error = pthread_key_create(&m_key, destroy);
            if (error)
                CRASH();
        }

        inline ~ThreadSpecific()
        {
            pthread_key_delete(m_key); // Does not invoke destructor functions.
        }

        inline T* get()
        {
            Data* data = static_cast<Data*>(pthread_getspecific(m_key));
            return data ? data->value : 0;
        }

        inline void set(T* ptr)
        {
            ASSERT(!get());
            pthread_setspecific(m_key, new Data(ptr, this));
        }

    How can I make the above code use __thread to set and get the specific value? Where and when do the create and delete happen? If this is not possible, how do I write custom pthread_setspecific / pthread_getspecific style APIs? I tried using a C++ global map indexed uniquely for each thread and retrieving the data from it, but it didn't work well.
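
    For the three basic questions, a minimal sketch of __thread semantics (GCC-style; the variable name is an example). On compilers and runtimes that support it, each thread gets its own independent copy, and it is read and written like an ordinary variable:

        // One instance of this pointer exists *per thread*; every thread starts
        // with the initializer value and only ever sees its own writes.
        static __thread void* t_current_value = 0;

        void set_current_value(void* p) { t_current_value = p; }    // plain write
        void* get_current_value()       { return t_current_value; } // plain read

        // Roughly equivalent to the pthread version's set()/get(), but without
        // pthread_key_create: the "key" is the variable itself, and there is no
        // destructor callback on thread exit — any per-thread cleanup has to be
        // done explicitly at the end of the thread function.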

    Read the article

  • Is it okay for multiple objects to retain the same object in Objective-C/Cocoa?

    - by Andrew Arrow
    Say I have a table view class that lists 100 Foo objects. It has:

        @property (nonatomic, retain) NSMutableArray* fooList;

    and I fill it up with Foos like this:

        self.fooList = [NSMutableArray array];
        while (something) {
            Foo* foo = [[Foo alloc] init];
            [fooList addObject:foo];
            [foo release];
        }

    First question: because the NSMutableArray property is marked retain, does that mean all the objects inside it are retained too? Am I correctly adding the foo and releasing the local reference after it has been added to the array, or am I missing a retain call?

    Then, if the user selects one specific row in the table and I want to display a detail Foo view, I call:

        FooView* localView = [[FooView alloc] initWithFoo:[self.fooList objectAtIndex:indexPath.row]];
        [self.navigationController pushViewController:localView animated:YES];
        [localView release];

    Now the FooView class has:

        @property (nonatomic, retain) Foo* theFoo;

    so now BOTH the array and the FooView are holding on to that Foo. But that seems okay, right? When the user hits the back button, dealloc is called on FooView and [theFoo release] is called. Then another back button is hit, dealloc is called on the table view class, and [fooList release] is called.

    You might argue that the FooView class should have:

        @property (nonatomic, assign) Foo* theFoo;

    instead of retain. But sometimes the FooView class is called with a Foo that is not also in an array. So I wanted to make sure it is okay to have two objects holding on to the same other object.

    Read the article

  • NHibernate / multiple sessions and nested objects

    - by bernhardrusch
    We are using NHibernate in a rich client application. It is a pretty open application (the user searches for a dataset or creates a new one, changes the data and saves the dataset). We leave the session open, because sometimes we have to lazy-load some properties of the object (nested object structure). This creates one big problem: if we leave the session open, the DB (MySQL) closes the connection, we have no way to find this out, and an exception (socket communication error) is thrown when the database is next accessed. (We are thinking about testing the DB connection before accessing the object, but this is not really optimal either; the other option would be to raise the timeout of the DB connection, but that just doesn't seem right either.) So: is it possible to reconnect the session to a new database connection? Another problem: is it possible to get an object from one session and then re-attach it to another session? (I often hear that session.Lock should work for this, but it doesn't work so well in our application, so I ended up getting a "fresh" object from the new session and copying the data over manually, which is a little bit cumbersome.) Any ideas for this?

    Read the article

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering moving from ClearCase to Subversion, and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue here is that we have lots of projects (1000+) and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it would take far too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out the relevant folders for each project doesn't seem ideal, because it results in a separate working copy for each static library folder, and we also cannot do an atomic commit if a change spans the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to first check out a full trunk. I wonder if there is some advice on:

        1. a better repository organization, or
        2. a way to merge branch changes into a trunk working copy that is sparse?
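
    On point 2, a sparse trunk working copy can be built once and then deepened only where needed, and svn merge can be run at its root; a sketch with example URLs and folders (Subversion 1.5+ --depth/--set-depth options):

        # start with a shallow trunk checkout
        svn checkout --depth immediates http://svn.example.com/repo/trunk trunk-wc
        cd trunk-wc

        # pull down only the folders this DLL needs
        svn update --set-depth infinity project1 staticlib1 staticlib2

        # merge the branch at the trunk root
        svn merge http://svn.example.com/repo/branches/project1-fix .
        svn commit -m "Merge project1-fix back to trunk"

    One caveat: changes in the branch that touch folders not present in the sparse working copy are reported as skipped, so the merge output needs to be reviewed to make sure nothing outside the populated areas was missed.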

    Read the article

  • How to use a TFileStream to read 2D matrices into a dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run time. I've never read raw data like this, and don't know the IEEE formats, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time. I need to be able to read as many of these formats as possible:

        16-bit two's complement binary integer
        32-bit two's complement binary integer
        64-bit two's complement binary integer
        IEEE single-precision floating point

    For 32-bit two's complement, I'm thinking of something like the code below; changing to Int64 and Int16 should be straightforward. How can I read the IEEE format? Am I on the right track? Any suggestions on this code, or on how to elegantly extend it for all four data types above? Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one below, but perhaps there's an elegant way to use RTTI, or buffers and then Move()s, so that the same code works for all four data types. Thanks!

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;  // not known until run time
          NumberOfCols := 100; // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows do
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            FileStream.ReadBuffer(Matrix[CurRow], NumberOfCols * SizeOf(Int32));
          end;
        end;
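
    Two details worth checking in that sketch (assumptions based on standard Delphi semantics): the loop runs one row too far (0 to NumberOfRows covers NumberOfRows + 1 rows), and ReadBuffer should be given the first element of the row, not the dynamic-array variable itself, or it overwrites the array's internal pointer. The IEEE single-precision case then only changes the element type to Single, as in this sketch:

        type
          TSingleRow    = array of Single;        // IEEE 754 single precision
          TSingleMatrix = array of TSingleRow;

        procedure ReadSingleMatrix(FileStream: TFileStream;
          NumberOfRows, NumberOfCols: Cardinal; out Matrix: TSingleMatrix);
        var
          CurRow: Integer;
        begin
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows - 1 do   // note: - 1, not NumberOfRows
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            // Pass the first element so ReadBuffer fills the row's data,
            // not the pointer held by the dynamic-array variable.
            FileStream.ReadBuffer(Matrix[CurRow][0], NumberOfCols * SizeOf(Single));
          end;
        end;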

    Read the article

  • HTML table manipulation using jQuery

    - by Daniel
    I'm building a site using CakePHP and am currently working on adding data to two separate tables at the same time, which is not a problem. The problem I have is that I'm looking to dynamically alter the form that accepts the input values, allowing the click of a button/link to add an additional row of form fields. At the moment I have a table that looks something like this:

        <table>
            <thead>
                <tr>
                    <th>Campus</th>
                    <th>Code</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>
                        <select id="FulltimeCourseCampusCode0CampusId" name="data[FulltimeCourseCampusCode][0][campus_id]">
                            <option value=""></option>
                            <option value="1">Evesham</option>
                            <option value="2">Malvern</option>
                        </select>
                    </td>
                    <td>
                        <input type="text" id="FulltimeCourseCampusCode0CourseCode" name="data[FulltimeCourseCampusCode][0][course_code]">
                    </td>
                </tr>
            </tbody>
        </table>

    What I need is for the row within the tbody to be replicated, with the minor change of having all the zeros (as in FulltimeCourseCampusCode0CampusId and data[FulltimeCourseCampusCode][0][campus_id]) incremented. I'm very new to jQuery, having done a few minor bits but nothing this advanced (mostly just copy/paste stuff). Can anyone help? Thank you.
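
    A sketch of one way to do it (the #addRow button id is an example): clone the last row, bump the index in the name and id attributes, clear the values, and append the clone.

        $(document).ready(function () {
            var rowIndex = 1;   // row 0 already exists in the markup

            $("#addRow").click(function () {
                var $newRow = $("table tbody tr:last").clone();

                // Rewrite the CakePHP-style indices in both name and id.
                $newRow.find("select, input").each(function () {
                    var $el = $(this);
                    $el.attr("name", $el.attr("name").replace(/\[\d+\]/, "[" + rowIndex + "]"));
                    $el.attr("id", $el.attr("id").replace(/\d+/, rowIndex));
                    $el.val("");   // start the new row empty
                });

                $("table tbody").append($newRow);
                rowIndex++;
                return false;
            });
        });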

    Read the article
