Search Results

Search found 8190 results on 328 pages for 'separate'.

Page 246/328 | < Previous Page | 242 243 244 245 246 247 248 249 250 251 252 253  | Next Page >

  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:
    pagegroup (pagegroupid, name): has 3600 rows
    page (pageid, pagegroupid, data): references pagegroup; has 10000 rows; can have anything between 1-700 rows per pagegroup; the data column is of type mediumtext and contains 100k - 200k bytes of data per row
    userdata (userdataid, pageid, column1, column2 ... column9): references page; has about 300,000 rows; can have about 1-50 rows per page
    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow even though I have indexed all columns that should be indexed. The time needed to run a query for such a join (userdata inner join page inner join pagegroup) exceeds 3 minutes. This is terribly slow considering the fact that I am not selecting the data column at all. Example of the query that takes too long:
        SELECT userdata.column1, pagegroup.name FROM userdata INNER JOIN page USING( pageid ) INNER JOIN pagegroup USING( pagegroupid )
    Please help by explaining why it takes so long and what I can do to make it faster.
    Edit #1 - Explain returns the following gibberish:
        id select_type table type possible_keys key key_len ref rows Extra
        1 SIMPLE userdata ALL pageid 372420
        1 SIMPLE page eq_ref PRIMARY,pagegroupid PRIMARY 4 topsecret.userdata.pageid 1
        1 SIMPLE pagegroup eq_ref PRIMARY PRIMARY 4 topsecret.page.pagegroupid 1
    Edit #2
        SELECT u.field2, p.pageid FROM userdata u INNER JOIN page p ON u.pageid = p.pageid; /* 0.07 sec execution, 6.05 sec fetch */
        id select_type table type possible_keys key key_len ref rows Extra
        1 SIMPLE u ALL pageid 372420
        1 SIMPLE p eq_ref PRIMARY PRIMARY 4 topsecret.u.pageid 1 Using index
        SELECT p.pageid, g.pagegroupid FROM page p INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid; /* 9.37 sec execution, 60.0 sec fetch */
        id select_type table type possible_keys key key_len ref rows Extra
        1 SIMPLE g index PRIMARY PRIMARY 4 3646 Using index
        1 SIMPLE p ref pagegroupid pagegroupid 5 topsecret.g.pagegroupid 3 Using where
    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.

    Read the article

  • Forms Auth: have different credentials for a subdirectory?

    - by Fyodor Soikin
    My website has forms authentication, and all is well. Now I want to create a subdirectory and have it also password-protected, but! I need the subdirectory to use a completely different set of logins/passwords than the whole website uses. Say, for example, I have users for the website stored in the "Users" table in a database. But for the subdirectory, I want the users to be taken from the "SubdirUsers" table, which probably has a completely different structure. Consequently, I need the logins to be completely parallel, as in:
    - Logging into the whole website does not make you logged into the subdirectory as well
    - Clicking "logout" on the whole website does not nullify your login in the subdirectory
    - And vice versa
    I do not want to create a separate virtual application for the subdirectory, because I want to share all libraries, user controls, as well as application state and cache. In other words, it has to be the same application. I also do not want to just add a flag to the "Users" table indicating whether this is a whole-website user or a subdirectory user. User lists have to come from different sources. For now, the only option that I see is to roll my own Forms Auth for the subdirectory. Can anybody propose a better alternative?
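    One way to get independent logins without splitting off a second application is to issue a second, manually created forms-auth ticket whose cookie is scoped to the subdirectory, and check it yourself there. A minimal sketch, assuming a subdirectory called /subdir and a cookie name "SubdirAuth" (both illustrative; the SubdirUsers lookup is left as a placeholder):

        using System;
        using System.Web;
        using System.Web.Security;

        public static class SubdirAuth
        {
            // Call after validating the user against the SubdirUsers table (not shown here).
            public static void SignIn(HttpResponse response, string userName)
            {
                var ticket = new FormsAuthenticationTicket(
                    1, userName,
                    DateTime.Now, DateTime.Now.AddMinutes(30),
                    false,              // not persistent
                    "SubdirUsers",      // userData: marks which user store issued the ticket
                    "/subdir");         // cookie path, so it never travels with main-site requests

                var cookie = new HttpCookie("SubdirAuth", FormsAuthentication.Encrypt(ticket))
                {
                    Path = "/subdir",
                    HttpOnly = true
                };
                response.Cookies.Add(cookie);
            }

            // Check this from the subdirectory's pages or an HttpModule.
            public static bool IsSignedIn(HttpRequest request)
            {
                HttpCookie cookie = request.Cookies["SubdirAuth"];
                if (cookie == null) return false;
                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
                return ticket != null && !ticket.Expired;
            }
        }

    Because the site-wide FormsAuthentication cookie and this one are separate cookies with separate paths, logging in or out of one has no effect on the other, and everything stays inside the same application.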

    Read the article

  • Problems pulling data from NSMutableArray for MapKit?

    - by Graeme
    Hi, I have a class (DataImporter) which has the code to download an RSS feed. I also have a view and separate class (TableView) which displays the data in a UITableView and starts the parsing process, storing parsed information in an NSMutableArray (items). Now I wish to add a UIMapView which displays the items in the items NSMutableArray. I'm struggling to transfer the data into the new class (mapView) to show it in the map - and I preferably don't want to have to create a new class to download the data again for the map. Is there a way I can transfer the information from the NSMutableArray (items) to the mapView? Thanks.
    Code for viewDidLoad in the MapKit view:
        Data *data = nil;
        NSString *ilocation = [data locations];
        NSString *ilocation2 = @"New Zealand";
        NSString *inewlString;
        inewlString = [ilocation stringByAppendingString:ilocation2];
        NSLog(@"inewlString=%@", inewlString);
        if (forwardGeocoder == nil) {
            forwardGeocoder = [[BSForwardGeocoder alloc] initWithDelegate:self];
        }
        // Forward geocode!
        [forwardGeocoder findLocation: inewlString];
    Code for parsing data into the NSMutableArray:
        - (void)beginParsing {
            NSLog(@"Parsing has begun");
            //self.navigationItem.rightBarButtonItem.enabled = NO;
            // Allocate the array for song storage, or empty the results of previous parses
            if (incidents == nil) {
                NSLog(@"Grabbing array");
                self.datas = [NSMutableArray array];
            } else {
                [datas removeAllObjects];
                [self.tableView reloadData];
            }
            // Create the parser, set its delegate, and start it.
            self.parser = [[DataImporter alloc] init];
            parser.delegate = self;
            [parser start];
        }

    Read the article

  • Full Text Search in SQL Server 2008 shows wrong display_item for Thai language

    - by ensecoz
    I am working with SQL Server 2008. My task is to investigate the issue where FTS cannot find the right result for Thai. First, I have the table which enables the FTS on the column 'ItemName' which is nvarchar. The catalog is created with the Thai language. Note that Thai is one of the languages that doesn't separate words with spaces, so '????' '???' '????' are written like this in a sentence: '???????????' In the table, there are many rows that include the word (????); for example row#1 (ItemName: '???????????') On the webpage, I try to search for '????' but SQL Server cannot find it. So I try to investigate it by trying the following query in SQL Server:
        select * from sys.dm_fts_parser(N'"???????????"', 1054, 0, 0)
    ...to see how the words are broken. The first parameter is the text to be broken. The second parameter specifies that we're using Thai (the word breaker, and so on). Here is the result:
        row#1 (display_item: '????', source_item: '???????????')
        row#2 (display_item: '????', source_item: '???????????')
        row#3 (display_item: '??', source_item: '???????????')
    Notice that the first and second rows display the wrong display_item: the '?' in '????' isn't even a Thai character. The '?' in '????' is not a Thai character either. So the question is where did those alien characters come from? I guess this is why I cannot search for '????': the word breaker is broken and is keeping the wrong characters in the indexes. Please help!

    Read the article

  • LINQ to SQL and DataPager

    - by Jonathan S.
    I'm using LINQ to SQL to search a fairly large database and am unsure of the best approach to perform paging with a DataPager. I am aware of the Skip() and Take() methods and have those working properly. However, I'm unable to use the count of the results for the datapager, as they will always be the page size as determined in the Take() method. For example: var result = (from c in db.Customers where c.FirstName == "JimBob" select c).Skip(0).Take(10); This query will always return 10 or fewer results, even if there are 1000 JimBobs. As a result, the DataPager will always think there's a single page, and users aren't able to navigate across the entire result set. I've seen one online article where the author just wrote another query to get the total count and called that. Something like: int resultCount = (from c in db.Customers where c.FirstName == "JimBob" select c).Count(); and used that value for the DataPager. But I'd really rather not have to copy and paste every query into a separate call where I want to page the results for obvious reasons. Is there an easier way to do this that can be reused across multiple queries? Thanks.
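    One way to avoid copy-pasting the query is to build the filtered IQueryable once and run both the count and the page from it - a sketch (the extension method, the page-size values and the CustomerID ordering are illustrative, not from the post):

        using System.Collections.Generic;
        using System.Linq;

        public static class PagingExtensions
        {
            // Issues two commands against the same query definition:
            // a COUNT(*) for the DataPager and a paged SELECT for the grid.
            public static List<T> GetPage<T>(this IQueryable<T> source,
                                             int pageIndex, int pageSize, out int totalCount)
            {
                totalCount = source.Count();
                return source.Skip(pageIndex * pageSize)
                             .Take(pageSize)
                             .ToList();
            }
        }

        // usage (illustrative):
        //   var jimBobs = db.Customers
        //                   .Where(c => c.FirstName == "JimBob")
        //                   .OrderBy(c => c.CustomerID);   // Skip/Take needs a stable order
        //   int total;
        //   var page = jimBobs.GetPage(0, 10, out total);  // bind page to the grid, total to the DataPager

    The predicate is written once; the extra round trip for Count() is unavoidable, but it is a cheap COUNT query rather than a second copy of the filter.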

    Read the article

  • Replacing .NET WebBrowser control with a better browser, like Chrome?

    - by Sylverdrag
    Is there any relatively easy way to insert a modern browser into a .NET application? As far as I understand, the WebBrowser control is a wrapper for IE, which wouldn't be a problem except that it looks like it is a very old version of IE, with all that entails in terms of CSS screw-ups, potential security risks (if the rendering engine wasn't patched, can I really expect the zillion buffer overflow problems to be fixed?), and other issues. I am using Visual Studio C# (Express edition - does it make any difference here?). I would like to integrate a good web browser in my applications. In some, I just use it to handle the user registration process, interface with some of my website's features and other things of that order, but I have another application in mind that will require more err... control. I need:
    - A browser that can integrate inside a window of my application (not a separate window)
    - Good support for CSS, JS and other web technologies, on par with any modern browser
    - Basic browser functions like "navigate", "back", "reload"...
    - Liberal access to the page code and output.
    I was thinking about Chrome, since it comes under the BSD license, but I would be just as happy with a recent version of IE. As much as possible, I would like to keep things simple. The best would be if one could patch the existing WebBrowser control, which already does about 70% of what I need, but I don't think that's possible. I have found an ActiveX control for Mozilla (http://www.iol.ie/~locka/mozilla/control.htm) but it looks like it's an old version, so it's not necessarily an improvement. I am open to suggestions.
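    For comparison, one route that has since become common is embedding Chromium through the CefSharp wrapper (this is a suggestion of mine, not something from the post, and it assumes the CefSharp.WinForms package is referenced). A minimal WinForms sketch:

        using System.Windows.Forms;
        using CefSharp;
        using CefSharp.WinForms;

        public class BrowserForm : Form
        {
            private readonly ChromiumWebBrowser browser;

            public BrowserForm()
            {
                Cef.Initialize(new CefSettings());            // once per process, typically at startup
                browser = new ChromiumWebBrowser("https://example.com")
                {
                    Dock = DockStyle.Fill                     // embeds in this window, not a separate one
                };
                Controls.Add(browser);
            }

            // the basic browser functions asked for above
            public void Navigate(string url) { browser.Load(url); }
            public void GoBack()             { browser.Back(); }
            public void Reload()             { browser.Reload(); }

            // page output can be read back with browser.GetSourceAsync()
        }

    The rendering engine is a current Chromium rather than the old IE document mode the stock WebBrowser control defaults to, which addresses the CSS and patching concerns.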

    Read the article

  • How to detect back button or forward button navigation in a silverlight navigation application

    - by parapura rajkumar
    When a page is navigated to in Silverlight you can override this method:
        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            base.OnNavigatedTo(e);
        }
    The NavigationEventArgs has a NavigationMode enumeration which is defined as
        public enum NavigationMode
        {
            New = 0,
            Back = 1,
            Forward = 2,
            Refresh = 3,
        }
    But calling e.NavigationMode always throws a NotImplementedException. Is there a way in Silverlight to detect that a page is being navigated to because the user hit the forward/back browser button? What I am trying to achieve is some kind of state that can be preserved when the user hits the back button. For example, assume you have a customer page which is showing a list of customers in a datagrid. The user can select a customer and there is a detail view which shows all the orders for that customer. Now within an order item you can click a hyperlink that takes you to the shipping history of the order, which is a separate page. When the user hits the back button I want to go back to the customers page and automatically select the customer he was viewing. Is this possible at all? I also tried out the fragment navigation feature
        NavigationService.Navigate(new Uri("#currentcustomerid=" + customer.Id.ToString(), UriKind.Relative));
    when the customer selection changes, but this adds too many items to the history when the user clicks various customers on the customer page.
    EDIT: There is also a method you can override:
        protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
        {
        }
    which is the same as handling the NavigationService.Navigating event as indicated by BugFinder's answer. In this method e.NavigationMode always returns New when you hit the Back or Forward button. The only time this method returns Back is when you explicitly call NavigationService.GoBack().
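    Since NavigationMode can't be relied on here, one workaround is to ignore how the page was reached and simply stash the last selection when leaving the page, restoring it whenever the page is navigated to again. A rough sketch (the static holder and the two helper methods are hypothetical, not from the post):

        using System.Windows.Controls;
        using System.Windows.Navigation;

        // crude application-wide slot for the last selection
        public static class CustomerPageState
        {
            public static int? LastCustomerId { get; set; }
        }

        public partial class CustomersPage : Page
        {
            protected override void OnNavigatedTo(NavigationEventArgs e)
            {
                base.OnNavigatedTo(e);
                if (CustomerPageState.LastCustomerId.HasValue)
                    SelectCustomer(CustomerPageState.LastCustomerId.Value);   // hypothetical helper
            }

            protected override void OnNavigatedFrom(NavigationEventArgs e)
            {
                base.OnNavigatedFrom(e);
                CustomerPageState.LastCustomerId = GetSelectedCustomerId();   // hypothetical helper
            }

            private void SelectCustomer(int id) { /* re-select the row in the datagrid */ }
            private int? GetSelectedCustomerId() { return null; /* read the datagrid selection */ }
        }

    This restores the selection for back-button navigation as well as fresh visits, which in this scenario is usually what the user expects, and it avoids polluting the browser history the way the fragment-navigation attempt did.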

    Read the article

  • Bind postback data from a strongly typed view of type List<T>

    - by Robert Koritnik
    I have a strongly typed view of type List<List<MyViewModelClass>>. The outer list will always have two lists of List<MyViewModelClass>. For each of the two outer lists I want to display a group of checkboxes. Each set can have an arbitrary number of choices. My view model class looks similar to this:
        public class MyViewModelClass
        {
            public Area Area { get; set; }
            public bool IsGeneric { get; set; }
            public string Code { get; set; }
            public bool IsChecked { get; set; }
        }
    So the final view will look something like:
        Please select those that apply:
        First set of choices:
        x Option 1
        x Option 2
        x Option 3
        etc.
        Second set of choices:
        x Second Option 1
        x Second Option 2
        x Second Option 3
        x Second Option 4
        etc.
    Checkboxes should display MyViewModelClass.Area.Name, and their value should be related to MyViewModelClass.Area.Id. Checked state is of course related to MyViewModelClass.IsChecked.
    Question: I wonder how I should use the Html.CheckBox() or Html.CheckBoxFor() helper to display my checkboxes? I have to get these values back to the server on a postback of course. I would like to have my controller action like one of these:
        public ActionResult ConsumeSelections(List<List<MyViewModelClass>> data)
        {
            // process data
        }
        public ActionResult ConsumeSelections(List<MyViewModelClass> first, List<MyViewModelClass> second)
        {
            // process data
        }
    If it makes things simpler, I could make a separate view model type like:
        public class Options
        {
            public List<MyViewModelClass> First { get; set; }
            public List<MyViewModelClass> Second { get; set; }
        }
    as well as changing my first version of the controller action to:
        public ActionResult ConsumeSelections(Options data)
        {
            // process data
        }
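    For what it's worth, the default model binder can rebuild those lists on postback as long as each input is rendered with an indexed name (First[0].IsChecked, First[0].Area.Id, and so on), which is what Html.CheckBoxFor and Html.HiddenFor produce when given expressions like m => m.First[i].IsChecked inside a for loop. A sketch of the wrapper type and the receiving action, reusing the post's MyViewModelClass (the controller name and the LINQ inside the action are illustrative):

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public class Options
        {
            public List<MyViewModelClass> First { get; set; }
            public List<MyViewModelClass> Second { get; set; }
        }

        public class SelectionsController : Controller
        {
            [HttpPost]
            public ActionResult ConsumeSelections(Options data)
            {
                // inputs named First[i].IsChecked / First[i].Area.Id bind back into the lists
                var selectedAreaIds = data.First
                    .Concat(data.Second)
                    .Where(c => c.IsChecked)
                    .Select(c => c.Area.Id)
                    .ToList();

                // process selectedAreaIds ...
                return RedirectToAction("Index");
            }
        }

    In the view, each row renders a hidden field for Area.Id (so the value survives the postback) plus a CheckBoxFor for IsChecked, with Area.Name written out as the label text.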

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection. Shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please? Many thanks, Bill, billpg.com (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can look for which thread each incoming connection is for and pass the reference to that thread. The alternative for an ASP.NET driven service, would be to have the ASPX code pick up the state from a database, and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
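    For what it's worth, Mono's HttpListener is a fully managed implementation (no Http.sys involved), so the same code runs unchanged on both runtimes. A minimal sketch of the loop described above, handing each incoming request to the thread pool (port and response text are placeholders):

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class TinyService
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/");       // assumed port
                listener.Start();
                Console.WriteLine("Listening on port 8080...");

                while (true)
                {
                    // GetContext blocks until a request arrives
                    HttpListenerContext context = listener.GetContext();
                    ThreadPool.QueueUserWorkItem(_ => Handle(context));
                }
            }

            static void Handle(HttpListenerContext context)
            {
                byte[] body = Encoding.UTF8.GetBytes("hello from " + context.Request.Url.AbsolutePath);
                context.Response.ContentType = "text/plain";
                context.Response.ContentLength64 = body.Length;
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.Close();
            }
        }

    Routing a long-running conversation to its owning thread is then just a matter of looking at something in the request (a cookie or query parameter) before handing the context over.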

    Read the article

  • How to define an index over several columns in a Hibernate entity?

    - by foobar
    Morning. I need to add an index to a Hibernate entity. As far as I know it is possible to use the @Index annotation to specify an index for a separate column, but I need an index over several fields of the entity. I've googled and found the @Table annotation (org.hibernate.annotations.Table), which allows this (according to the specification). But (I don't know why) this functionality doesn't work. Maybe the JBoss version is lower than necessary, or maybe I don't understand how to use this annotation, but... the composite index is not created. Why might the index not be created? JBoss version: 4.2.3.GA. Entity example:
        package somepackage;

        import org.hibernate.annotations.Index;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;

        @Entity
        @org.hibernate.annotations.Table(appliesTo = House.TABLE_NAME,
            indexes = { @Index(name = "IDX_XDN_DFN", columnNames = {House.XDN, House.DFN}) })
        public class House {
            public final static String TABLE_NAME = "house";
            public final static String XDN = "xdn";
            public final static String DFN = "dfn";

            @Id
            @GeneratedValue
            private long Id;

            @Column(name = XDN)
            private long xdn;

            @Column(name = DFN)
            private long dfn;

            @Column
            private String address;

            public long getId() { return Id; }
            public void setId(long id) { this.Id = id; }
            public long getXdn() { return xdn; }
            public void setXdn(long xdn) { this.xdn = xdn; }
            public long getDfn() { return dfn; }
            public void setDfn(long dfn) { this.dfn = dfn; }
            public String getAddress() { return address; }
            public void setAddress(String address) { this.address = address; }
        }
    When JBoss/Hibernate tries to create the "house" table it throws the following exception:
        Reason: org.hibernate.AnnotationException: @org.hibernate.annotations.Table references an unknown table: house

    Read the article

  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua, I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place-- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good, C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global lua object? What's the "current" best way to do this, as of Lua 5.1.4?

    Read the article

  • 4.0/WCF: Best approach for a bi-directional message bus?

    - by TomTom
    Just a technology update, now that .NET 4.0 is out. I write an application that communicates to the server through what is basically a message bus (instead of method calls). This is based on the internal architecture of the application (which is multi-threaded, passing the messages around). There are a limited number of messages going from the client to the server, and quite a lot more from the server to the client. Most of those can be handled via a separate specialized mechanism, but at the end we talk of possibly 10-100 small messages per second going from the server to the client. The client is supposed to operate under "internet conditions". This means possibly home end users behind standard NAT devices (i.e. typical DSL routers) - a firewalled, secure and thus "open" network cannot be assumed. I want to have as little latency and as little overhead for the communication as possible. What is the technologically best way to handle the message bus callback? I have no problem regularly calling the server for message delivery if something needs to be sent... ...but what are my options to handle the messages from the server to the client? How does WsDualHttp work, especially under a NAT scenario? Just as a note: polling is most likely out - the main problem here is that I would have either a significant overhead OR a significant delay, and neither is really wanted. Technically I would love some sort of streaming approach, where the server can write messages to a stream while it generates them and they get sent to the client as they come. Not sure this is doable with WCF, though (if not, I may actually decide to handle the whole message part outside of WCF and just do control / login / setup / destruction via WCF).
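    For reference, the duplex shape in WCF looks like the sketch below; the catch, as suspected above, is that wsDualHttpBinding opens a second, inbound HTTP connection from server to client for the callbacks, which typically dies at the client's NAT router, whereas a net.tcp duplex channel (or the Silverlight-style polling duplex binding) pushes callbacks back over the client-initiated connection. Contract and class names here are illustrative:

        using System.ServiceModel;

        [ServiceContract(CallbackContract = typeof(IBusCallback))]
        public interface IMessageBus
        {
            [OperationContract(IsOneWay = true)]
            void Send(string message);                  // client -> server
        }

        public interface IBusCallback
        {
            [OperationContract(IsOneWay = true)]
            void Deliver(string message);               // server -> client
        }

        // Client side: the callback instance is registered when the channel is created,
        // so the server can push its 10-100 messages/sec back over the same connection.
        public class BusCallback : IBusCallback
        {
            public void Deliver(string message) { /* hand off to the app's internal message bus */ }
        }

        public static class BusClientFactory
        {
            public static IMessageBus Connect(string address)
            {
                var context = new InstanceContext(new BusCallback());
                var factory = new DuplexChannelFactory<IMessageBus>(
                    context, new NetTcpBinding(), new EndpointAddress(address));
                return factory.CreateChannel();
            }
        }

    NetTcpBinding needs only one outbound TCP connection from the client, which home NAT routers allow, so it usually behaves better "under internet conditions" than wsDualHttpBinding.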

    Read the article

  • Is it appropriate to use Django signals within the same app?

    - by Alex Lebedev
    Trying to add email notification to my app in the cleanest way possible. When certain fields of a model change, the app should send a notification to a user. Here's my old solution:
        from django.contrib.auth import User

        class MyModel(models.Model):
            user = models.ForeignKey(User)
            field_a = models.CharField()
            field_b = models.CharField()

            def save(self, *args, **kwargs):
                old = self.__class__.objects.get(pk=self.pk) if self.pk else None
                super(MyModel, self).save(*args, **kwargs)
                if old and old.field_b != self.field_b:
                    self.notify("b-changed")
                # Several more events here
                # ...

            def notify(self, event):
                subj, text = self._prepare_notification(event)
                send_mail(subj, text, settings.DEFAULT_FROM_EMAIL, [self.user.email], fail_silently=True)
    This worked fine while I had one or two notification types, but after that it just felt wrong to have so much code in my save() method. So, I changed the code to be signal-based:
        from django.db.models import signals

        def remember_old(sender, instance, **kwargs):
            """pre_save handler to save a clean copy of the original record into the `old` attribute"""
            instance.old = None
            if instance.pk:
                try:
                    instance.old = sender.objects.get(pk=instance.pk)
                except ObjectDoesNotExist:
                    pass

        def on_mymodel_save(sender, instance, created, **kwargs):
            old = instance.old
            if old and old.field_b != instance.field_b:
                instance.notify("b-changed")
            # Several more events here
            # ...

        signals.pre_save.connect(remember_old, sender=MyModel, dispatch_uid="mymodel-remember-old")
        signals.post_save.connect(on_mymodel_save, sender=MyModel, dispatch_uid="mymodel-on-save")
    The benefit is that I can separate event handlers into a different module, reducing the size of models.py, and I can enable/disable them individually. The downside is that this solution is more code, and the signal handlers are separated from the model itself, so an unknowing reader can miss them altogether. So, colleagues, do you think it's worth it?

    Read the article

  • python NameError: name '<anything>' is not defined (but it is!)

    - by BenjaminGolder
    Note: Solved. It turned out that I was importing a previous version of the same module. It is easy to find similar topics on StackOverflow, where someone ran into a NameError. But most of the questions deal with specific modules and the solution is often to update the module. In my case, I am trying to import a function from a module that I wrote myself. The module is named InfraPy, and it is definitely on sys.path. One particular function (called listToText) in InfraPy returns a NameError, but only when I try to import it into another script. Inside InfraPy, under if __name__=='__main__':, the listToText function works just fine. From InfraPy I can import other functions with no problems. Including from InfraPy import * in my script does not return any errors until I try to use the listToText function. How can this occur? How can importing one particular function return a NameError, while importing all the other functions in the same module works fine? Using python 2.6 on MacOSX 10.6; I also encountered the same error running the script on Windows 7, using IronPython 2.6 for .NET 4.0. Thanks. If there are other details you think would be helpful in solving this, I'd be happy to provide them. As requested, here is the function definition inside of InfraPy:
        def listToText(inputList, folder=None, outputName='list.txt'):
            '''
            Creates a text file from a list (with each list item on a separate line).
            May be placed in any given folder, but will otherwise be created in the
            working directory of the python interpreter.
            '''
            fname = outputName
            if folder != None:
                fname = folder+'/'+fname
            f = open(fname, 'w')
            for file in inputList:
                f.write(file+'\n')
            f.close()
    This function is defined above and outside of if __name__=='__main__':. I've tried moving InfraPy around in relation to the script. The most baffling situation is that when InfraPy is in the same folder as the script, and I import using from InfraPy import listToText, I receive this error: NameError: name 'listToText' is not defined. Again, the other functions import fine; they are all defined outside of if __name__=='__main__': in InfraPy.

    Read the article

  • Speed up SQL Server Fulltext Index through Text Duplication of Non-Indexed Columns

    - by Alex
    1) I have the text fields FirstName, LastName, and City. They are fulltext indexed. 2) I also have the FK int fields AuthorId and EditorId, not fulltext indexed. A search on FirstName = 'abc' AND AuthorId = 1 will first search the entire fulltext index for 'abc', and then narrow the resultset for AuthorId = 1. This is bad because it is a huge waste of resources as the fulltext search will be performed on many records that won't be applicable. Unfortunately, to my knowledge, this can't be turned around (narrow by AuthorId first and then fulltext-search the subset that matches) because the FTS process is separate from SQL Server. Now my proposed solution that I seek feedback on: Does it make sense to create another computed column which will be included in the fulltext search which will identify the Author as text (e.g. AUTHORONE). That way I could get rid of the AuthorId restriction, and instead make it part of my fulltext search (a search for 'abc' would be 'abc' and 'AUTHORONE' - all executed as part of the fulltext search). Is this a good idea or not? Why?

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I thus need to only work with a subset of the data. Said subset of data can range between 12 000 and 120 000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems as SQL Server was re-running the Select statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete whereas the CTE version only took 40 minutes. I believe the table variable is slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options in order for my UDF to perform quickly? Thanks a lot in advance.

    Read the article

  • Multiple Ruby versions on one webserver?

    - by Legion
    The Ideal: Using rvm, it would be awesome to be able to have multiple Rubies on one webserver, and through some sort of server configuration, be able to assign Ruby versions to different Rails/Sinatra/etc apps on a per-project basis. I am aware, from rvm's documentation, that Passenger only works with one Ruby at a time. :(
    The Compromise: Failing that, it would be nice to at least be able to concoct a way to be able to assign projects to a Ruby 1.8 or a Ruby 1.9 interpreter. I've read that using Nginx as a reverse proxy allows running Apache and Nginx on the same box. Would it then be possible to have Apache+Passenger using one Ruby, and Nginx+Passenger using a different one? Maybe use something other than Passenger with Nginx?
    Am I Barking Up the Wrong Tree? Am I missing a good solution to this issue? Am I walking into a nightmare configuration situation? Is what I want even viable, or is it necessary to run another box to run a separate Ruby version?

    Read the article

  • EAGLContext, EAGLSharegroups, RenderBuffers, FrameBuffers, oh my!

    - by quixoto
    Hi all, I'm trying to wrap my head around the OpenGL object model on iPhone OS. I'm currently rendering into a few different UIViews (built on CAEAGLLayers) on the screen. I currently have each of these using a separate EAGLContext, each of which has a color renderbuffer and a framebuffer. I'm rendering similar things in them, and I'd like to share textures between these instances to save memory overhead. My current understanding is that I could use the same setup (some number of contexts, each with an FBO/RBO) but if I spawn the later ones using the EAGLSharegroup of the first one, then I can simply use the texture names (GLuints) from the first one in the later ones. Is this accurate? If this is the case, I guess the followup question is: what's the benefit to having it be a "sharegroup"? Could I just reuse the same context, and attach multiple FBOs/RBOs to that context? I think I'm struggling with the abstraction layer of a sharegroup, which seems to share "objects" (textures and other named things) but not "state" (matrices, enabled/disabled states), which are owned by the context. What's the best way to think of this? Thanks for any enlightenment!

    Read the article

  • Data Mappers, Models and Images

    - by James
    Hi, I've seen and read plenty of blog posts and forum topics talking about and giving examples of Data Mapper / Model implementations in PHP, but I've not seen any that also deal with saving files/images. I'm currently working on a Zend Framework based project and I'm doing some image manipulation in the model (which is being passed a file path), and then I'm leaving it to the mapper to save that file to the appropriate location - is this common practice? But then, how do you deal with creating, say, 3 different sized images from the one passed in? At the moment I have a "setImage($path_to_tmp_name)" which checks the image type, resizes and then saves back to the original filename. A call to "getImagePath()" then returns the current file path which the data mapper can use and then change with a call to "setImagePath($path)" once it's saved it to the appropriate location, say "/content/my_images". Does this sound practical to you? Also, how would you deal with getting the URL to that image? Do you see that as being something that the model should be providing? It seems to me that the model shouldn't worry about where the images are being stored or ultimately how they're accessed through a browser, so I'm inclined to put that in the ini file and just pass the URL prefix to the view through the controller. Does that sound reasonable? I'm using GD for image manipulation - not that that's of any relevance. UPDATE: I've been wondering if the image resizing should be done in the model at all. The model could require that it's provided a "main" image and a "thumb" image, both of certain dimensions. I've thought about creating a "getImageSpecs()" function in the model that would return something that defines the required sizes, then a separate image manipulation class could carry out the resizing (perhaps in the controller?) and just pass the final paths in to the model using something like "setImagePaths($images)". Any thoughts much appreciated :) James.

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (apologies ahead of time, I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about)
    I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have actionscript classes that extend each of the graphical assets in the SWC. For example:
    external.swc:
        (some symbol defined in the Library and exported for actionscript in frame 1)
        class: com.foo.WidgetGraphic
        base: flash.display.Sprite
    main.fla:
        Widget.as:
            package com.foo {
                public class Widget extends WidgetGraphic {
                    ...
                }
            }
    This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish?
    Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article

  • R: disentangling scopes

    - by rescdsk
    Hi, Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R'), and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined the first time through the program are still defined the second time functions*.R are sourced, and so the functions get a whole bunch of extra variables defined. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging! So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to e.g. Python, where a source file is automatically a separate namespace. Any tips? Thank you!

    Read the article

  • Improving data extraction from a text file in Java

    - by owca
    I have a CSV file with sample data in this form:
        220 30 255 0 0 Javascript
        200 20 0 255 128 Thinking in java
    where the first column is height, the second thickness, the next three are RGB values for color and the last one is the title. All need to be treated as separate variables. I have already written my own solution for this, but I'm wondering if there are better/easier/shorter ways of doing it. Extracted data will then be used to create Book objects; every Book is added to an array of books, which is printed with Swing. Here's the code:
        private static Book[] addBook(Book b, Book[] bookTab) {
            Book[] tmp = bookTab;
            bookTab = new Book[tmp.length + 1];
            for (int i = 0; i < tmp.length; i++) {
                bookTab[i] = tmp[i];
            }
            bookTab[tmp.length] = b;
            return bookTab;
        }

        public static void main(String[] args) {
            Book[] books = new Book[0];
            try {
                BufferedReader file = new BufferedReader(new FileReader("K:\\books.txt"));
                String s;
                while ((s = file.readLine()) != null) {
                    int hei, thick, R, G, B;
                    String tit;
                    hei = Integer.parseInt(s.substring(0, 3).replaceAll(" ", ""));
                    thick = Integer.parseInt(s.substring(4, 6).replaceAll(" ", ""));
                    R = Integer.parseInt(s.substring(10, 13).replaceAll(" ", ""));
                    G = Integer.parseInt(s.substring(14, 17).replaceAll(" ", ""));
                    B = Integer.parseInt(s.substring(18, 21).replaceAll(" ", ""));
                    tit = s.substring(26);
                    System.out.println(tit + hei + thick + R + G + B);
                    books = addBook(new Book(hei, thick, R, G, B, tit), books);
                }
                file.close();
            } catch (IOException e) {
                // do nothing
            }
        }

    Read the article

  • Multiple Table Inheritance vs. Single Table Inheritance in Ruby on Rails

    - by Tony
    I have been struggling for the past few hours thinking about which route I should go. I have a Notification model. Up until now I have used a notification_type column to manage the types, but I think it will be better to create separate classes for the types of notifications as they behave differently. Right now, there are 3 ways notifications can get sent out: SMS, Twitter, Email.
    Each notification would have: id, subject, message, valediction, sent_people_count, deliver_by, geotarget, event_id, list_id, processed_at, deleted_at, created_at, updated_at.
    Seems like STI is a good candidate, right? Of course Twitter/SMS won't have a subject, and Twitter won't have a sent_people_count or valediction. I would say in this case they share most of their fields. However, what if I add a "reply_to" field for Twitter and a boolean for DM? My point here is that right now STI makes sense, but is this a case where I may be kicking myself in the future for not just starting with MTI? To further complicate things, I want a Newsletter model which is sort of a notification, but the difference is that it won't use event_id or deliver_by. I could see all subclasses of notification using about 2/3 of the notification base class fields. Is STI a no-brainer, or should I use MTI? Thanks!

    Read the article

  • NHibernate: Mapping multiple classes from a single table row

    - by Michael Kurtz
    I couldn't find an answer to this specific question. I am trying to keep my domain model object-oriented and re-use objects where possible. I am having an issue determining how to provide a mapping to multiple classes from a single row. Let me explain with an example: I have a single table, call it Customer. A customer has several attributes; but, for brevity, assume it has Id, Name, Address, City, State, ZipCode. I would like to create a Customer and Address class that look like this:
        public class Customer
        {
            public virtual long Id { get; set; }
            public virtual string Name { get; set; }
            public virtual Address Address { get; set; }
        }

        public class Address
        {
            public virtual string Address { get; set; }
            public virtual string City { get; set; }
            public virtual string State { get; set; }
            public virtual string ZipCode { get; set; }
        }
    What I am having trouble with is determining what the mapping would be for the Address class within the Customer class. There is no Address table and there isn't a "set" of addresses associated with a Customer. I just want a more object-oriented view of the Customer table in code. There are several other tables that have address information in them and it would be nice to have a reusable Address class to deal with them. Addresses are not shared so breaking all addresses into a separate table with foreign keys seems to be overkill and, actually, more painful since I would need foreign keys to multiple tables. Can someone enlighten me on this type of mapping? Please provide an example if you can. Thanks for any insights! -Mike
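    What's described is exactly what NHibernate calls a component: the Address class is mapped as a value type whose columns live in the owning entity's row, so there is no Address table and no foreign key. A hedged sketch using Fluent NHibernate (the classic hbm.xml <component> element expresses the same thing; column names are assumed to match the property names):

        using FluentNHibernate.Mapping;

        public class CustomerMap : ClassMap<Customer>
        {
            public CustomerMap()
            {
                Table("Customer");
                Id(x => x.Id);
                Map(x => x.Name);

                // Address is not an entity of its own: its properties map onto
                // columns of the Customer row, so the class stays reusable elsewhere.
                Component(x => x.Address, part =>
                {
                    part.Map(a => a.Address).Column("Address");
                    part.Map(a => a.City).Column("City");
                    part.Map(a => a.State).Column("State");
                    part.Map(a => a.ZipCode).Column("ZipCode");
                });
            }
        }

    Other tables with embedded address columns get their own Component(...) mapping against the same Address class, which is the reuse being asked for.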

    Read the article

  • Membership Provider users in different tables

    - by Mike
    I have an existing database with users and administrators in different tables. I am rewriting an existing website in ASP.NET and need to decide - should I merge the two tables into one users table and just have one provider, OR leave the tables separated and have two different providers? Administrators need the ability to create, edit and delete users. I am thinking that the membership/profile provider way of editing users, i.e.
        System.Web.Profile.ProfileBase pro = System.Web.Profile.ProfileBase.Create("User1");
        pro.Initialize("User1", true);
        txtEmail.Text = pro["SecondaryEmail"].ToString();
    is the best way to edit users because the provider handles it. But can you even use this if you have two separate providers (because they are both looking at different tables)? Or should I make a whole lot of methods to edit the users for the administrators?
    UPDATE: Making a custom membership provider look at both tables is fine, but then what about the profile provider? The profile provider's GetPropertyValues and SetPropertyValues would be operating on the same set of properties for users and admins. Mike

    Read the article

< Previous Page | 242 243 244 245 246 247 248 249 250 251 252 253  | Next Page >