Search Results

Search found 14297 results on 572 pages for 'self tracking entities'.

Page 82/572 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • Track mass email campaigns

    - by daeliur
    Litmus released an email analytics service last month (May 2010). See here: http://litmusapp.com/email-analytics They boast very cool "read rate" tracking: they can track normal reads, skims, and glanced/deleted. How can they track skims and glanced/deleted? This to me seems impossible :) They also track forwards and prints. Prints are easy (they include a CSS @media print query with a bg image). But forwards? I think this might be a combination of subsequent opens and different IPs/referring URLs. However, this means that if I open my mail and re-read it from another computer, it counts as a forward. Any ideas on this one? To summarize: Litmus Email Analytics says they can track email reads, skims, glanced/deleted, prints and forwards. How do they do it (skims, glanced/deleted and forwards)?
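
    A rough sketch, in Python, of the guesswork described above. The engagement thresholds and the forward heuristic are assumptions for illustration only, not how Litmus actually classifies opens:

```python
def classify_open(seconds_engaged):
    # Hypothetical cutoffs: engagement time would come from how long the
    # tracking image connection stays open; the real thresholds aren't public.
    if seconds_engaged < 2:
        return "glanced/deleted"
    if seconds_engaged < 8:
        return "skimmed"
    return "read"

def looks_like_forward(first_open, later_open):
    # Heuristic from the question: a later open of the same tracked message
    # from a different IP *and* a different user agent is more likely a forward
    # than a re-read from another device using the same mail client.
    return (later_open["ip"] != first_open["ip"]
            and later_open["user_agent"] != first_open["user_agent"])
```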

    Read the article

  • Blob ID matching over multiple frames in C++ (image analysis)

    - by pollux
    Dear reader, I'm working on a blob matching and tracking library in C++. Currently I'm using OpenCV to detect blobs and try to match blobs in a new frame by checking the position, velocity and size of the blob. This works quite well and I'm getting a high blob match rate (95% or higher). Sometimes blobs fall out of the image or new blobs appear. Now I need to give matched blobs the same ID as they had before. I'm wondering if there are typical or commonly used techniques for doing this, or even some keywords I can use to search for. Thanks
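
    For the ID carry-over, a common baseline is greedy nearest-neighbour assignment against each blob's predicted position (useful search keywords: data association, Hungarian algorithm, Kalman filter). A minimal Python sketch of the idea, using made-up blob dicts rather than OpenCV structures:

```python
import itertools
import math

_new_ids = itertools.count(1000)   # IDs handed out to blobs that just appeared

def predict(blob):
    # Predicted centre for the next frame: last position plus last velocity.
    return (blob["x"] + blob["vx"], blob["y"] + blob["vy"])

def match_ids(tracked, detections, max_dist=50.0):
    """Greedy nearest-neighbour data association; unmatched detections get new IDs."""
    unmatched = list(detections)
    for blob in tracked:
        if not unmatched:
            break
        px, py = predict(blob)
        best = min(unmatched, key=lambda d: math.hypot(d["x"] - px, d["y"] - py))
        if math.hypot(best["x"] - px, best["y"] - py) <= max_dist:
            best["id"] = blob["id"]          # same blob as last frame: keep its ID
            unmatched.remove(best)
    for det in unmatched:                    # a blob that just entered the image
        det["id"] = next(_new_ids)
    return detections
```

    When blobs cross each other, a globally optimal assignment (the Hungarian algorithm, e.g. scipy.optimize.linear_sum_assignment) is the usual upgrade over the greedy loop.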

    Read the article

  • MS Sync framework - Identity crisis resolution by partitioning the primary key.

    - by user326136
    Hello, we are implementing an offline feature in an existing application. We have implemented the sync with SQL Server internal change tracking, over WCF, using the MS Sync Framework (http://msdn.microsoft.com/en-us/sync/default.aspx). All of our tables have an integer primary key; we cannot move to GUIDs. So, as you might guess, we will have identity collisions between applications. We decided to go the way merge replication does (http://msdn.microsoft.com/en-us/library/aa179416(SQL.80).aspx) and partition the primary key range. Below is the example scenario - Server Table A - ID Range - 0 to 100 Client 1 Table A - ID Range - 101 to 200 Client 2 Table A - ID Range - 201 to 300 How do I implement this? I know we can use DBCC CHECKIDENT (yourtable, RESEED, value) and CHECK (([ID]<=(100))), but this does not solve the issue.... Merge replication provides a "Not for replication" option (http://msdn.microsoft.com/en-us/library/aa237102(SQL.80).aspx) to accept inserts from clients and still maintain the set range.. Can I use that somehow here? Please help...

    Read the article

  • Returning a 1x1 .gif as a response in Rails

    - by Avishai
    Hi, I'm building a Rails app that does conversion tracking on outside sites. I'd like to allow users to paste an image tag in their conversion pages (like AdWords), and whenever that image is requested, a conversion registers in my app. respond_to do |format| if @conversion.save flash[:notice] = 'Conversion was successfully created.' format.html { redirect_to(@conversion) } format.xml { render :xml => @conversion, :status => :created, :location => @conversion } format.js { render :json => @conversion, :status => :created } format.gif { head :status => :ok } else format.html { render :action => "new" } format.xml { render :xml => @conversion.errors, :status => :unprocessable_entity } end end This way the browser gets an empty response rather than an actual .gif image. Is there a better way to do this?
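
    For comparison, here is a language-agnostic sketch of the same idea in Python's standard library: record the hit, then return real image bytes so the <img> tag neither 404s nor renders as broken. The pixel payload is the commonly circulated 1x1 transparent GIF; verify it decodes to a valid image before relying on it, and treat record_conversion as a hypothetical hook.

```python
import base64
from wsgiref.simple_server import make_server

# Commonly circulated 1x1 transparent GIF (check it before trusting it).
PIXEL = base64.b64decode(b"R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

def record_conversion(environ):
    # Hypothetical hook: persist the conversion however your app does it.
    print("conversion from", environ.get("REMOTE_ADDR"), environ.get("QUERY_STRING"))

def app(environ, start_response):
    record_conversion(environ)
    start_response("200 OK", [("Content-Type", "image/gif"),
                              ("Content-Length", str(len(PIXEL))),
                              ("Cache-Control", "no-cache, no-store")])
    return [PIXEL]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```

    In Rails the equivalent move is usually send_data with a stored pixel and type 'image/gif' rather than head :ok.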

    Read the article

  • How does [self.tableView reloadData] know what data to reload?

    - by Scott Pendleton
    It bugs me to death that my view controller, which happens to be a table view controller, knows without being told that its NSArray or NSDictionary property holds the data that is to be loaded into the table for display. Seems like I should explicitly say something like: [self.tableView useData:self.MyArray]; Because, THAT'S WHAT I WANT TO DO! I want to have more than one array inside my tableViewController, and switch between one and the other programmatically. Please do not give me an answer like: "Never mind what you want to do, just set up two different tableViewControllers." No. I want to do it my way. I notice that when a tableViewController makes use of a searchViewController, you can do this: if (tableView == self.searchDisplayController.searchResultsTableView) { YES! I have even been able to do this: self.tableView = self.searchDisplayController.searchResultsTableView; [self.tableView reloadData]; But nowhere can I find how to set self.tableView back to the main data source! I'm so tired of sifting, sifting through code for incomplete nuggets of enlightenment. Allow me to understate that Apple's documentation is grossly inadequate. It's astounding that so many slick iPhone programs came to be created. People said that once I went Mac, I'd never go back to Windows. Well, I'm anything but a convert.

    Read the article

  • Adding functionality to any TextReader

    - by strager
    I have a Location class which represents a location somewhere in a stream. (The class isn't coupled to any specific stream.) The location information will be used to match tokens to locations in the input in my parser, to allow for nicer error reporting to the user. I want to add location tracking to a TextReader instance. This way, while reading tokens, I can grab the location (which is updated by the TextReader as data is read) and give it to the token during the tokenization process. I am looking for a good approach to accomplishing this goal. I have come up with several designs. Manual location tracking Every time I need to read from the TextReader, I call AdvanceString on the Location object of the tokenizer with the data read. Advantages Very simple. No class bloat. No need to rewrite the TextReader methods. Disadvantages Couples location tracking logic to the tokenization process. Easy to forget to track something (though unit testing helps with this). Bloats existing code. Plain TextReader wrapper Create a LocatedTextReaderWrapper class which surrounds each method call, tracking a Location property. Example: public class LocatedTextReaderWrapper : TextReader { private TextReader source; public Location Location { get; set; } public LocatedTextReaderWrapper(TextReader source) : this(source, new Location()) { } public LocatedTextReaderWrapper(TextReader source, Location location) { this.Location = location; this.source = source; } public override int Read(char[] buffer, int index, int count) { int ret = this.source.Read(buffer, index, count); if(ret >= 0) { this.Location.AdvanceString(string.Concat(buffer.Skip(index).Take(count))); } return ret; } // etc. } Advantages Tokenization doesn't know about Location tracking. Disadvantages User needs to create and dispose a LocatedTextReaderWrapper instance, in addition to their TextReader instance. Doesn't allow different types of tracking or different location trackers to be added without layers of wrappers. Event-based TextReader wrapper Like LocatedTextReaderWrapper, but decoupled from the Location object by raising an event whenever data is read. Advantages Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. Can have multiple, independent Location objects (or other methods of tracking) tracking at once. Disadvantages Requires boilerplate code to enable location tracking. User needs to create and dispose the wrapper instance, in addition to their TextReader instance. Aspect-oriented approach Use AOP to achieve the same effect as the event-based wrapper approach. Advantages Can be reused for other types of tracking. Tokenization doesn't know about Location tracking or other tracking. No need to rewrite the TextReader methods. Disadvantages Requires external dependencies, which I want to avoid. I am looking for the best approach in my situation. I would like to: Not bloat the tokenizer methods with location tracking. Not require heavy initialization in user code. Not have any/much boilerplate/duplicated code. (Perhaps) not couple the TextReader with the Location class. Any insight into this problem and possible solutions or adjustments are welcome. Thanks! (For those who want a specific question: What is the best way to wrap the functionality of a TextReader?) I have implemented the "Plain TextReader wrapper" and "Event-based TextReader wrapper" approaches and am displeased with both, for reasons mentioned in their disadvantages.
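
    The shape of the "Plain TextReader wrapper" option is easy to see in miniature outside C#. A rough Python analogue, with invented names, just to illustrate the trade-off being discussed:

```python
import io

class Location:
    """Line/column position, advanced by whatever text was read."""
    def __init__(self, line=1, column=1):
        self.line, self.column = line, column

    def advance(self, text):
        for ch in text:
            if ch == "\n":
                self.line, self.column = self.line + 1, 1
            else:
                self.column += 1

class LocatedReader:
    """Plain-wrapper approach: delegate reads, update a Location as a side effect."""
    def __init__(self, source, location=None):
        self.source = source
        self.location = location if location is not None else Location()

    def read(self, size=-1):
        data = self.source.read(size)
        self.location.advance(data)
        return data

reader = LocatedReader(io.StringIO("ab\ncd"))
reader.read(4)
print(reader.location.line, reader.location.column)   # 2 2
```

    The event-based variant replaces the hard-wired Location with a callback the wrapper invokes on every read, which is what buys the "multiple independent trackers" advantage at the cost of the wiring boilerplate.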

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non-Oracle platforms, and it’s not a mandatory requirement to use Exadata as the base platform. Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT costs and complexity, and also to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven, best-in-class consolidation platform, we are now seeing more and more customers evolve from an Exadata based platform into an agile, self service driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry’s most comprehensive and integrated solution to transform from a typical silo’ed environment into an enterprise class database cloud with self service, rapid elasticity and pay-per-use capabilities. In today’s post, I’ll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are based on a recent DBaaS implementation from a real customer engagement - Project Planning - The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes all together, so ensure you engage your actual end users and stakeholders early on in the project, right from the scoping and planning stage. Setup your EM 12c Cloud Control Site – Once project plan approval and sign-off from stakeholders is achieved, refer to the EM 12c Install guide; these are some important tips to follow during the site setup phase - Review the new EM 12c Sizing paper before you get started with the install. The Cloud, Chargeback and Trending, and Exadata plug-ins should be selected for deployment during install. Refer to the EM 12c Administrator’s guide for High Availability, Security, and Network/Firewall best practices and options. Your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up. Setup Roles and Users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR) and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu –> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation. Configure Software Library – The Cloud Administrator logs in and in this step configures the software library via the Enterprise menu –> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the ‘local store’. Setup Self Update – Self Update is one of the most innovative and cool new features in the EM 12c framework. Self Update can be accessed via the Setup -> Extensibility option by the Super Administrator and is the unified delivery mechanism to get all new and updated entities (Agent software, plug-ins, connectors, gold images, provisioning bundles etc.) in EM 12c. Deploy Agents on all Compute nodes, and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets. Configure Privilege Delegation Settings – This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations. Provision Grid Infrastructure with RAC Database on Compute Nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud. Customize the Create Database Deployment Procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables (those marked as locked with ‘Y’ in this table). Setup Self Service Portal – This step involves setting up zones, user quotas, service templates and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have an option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator’s guide. Final Checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model, and make sure SSA administrators are familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework. GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project! In future posts, we will cover these additional useful topics around the database cloud – DBaaS implementation tips and tricks, right from setup to self service to managing the cloud lifecycle; ‘How to’ enable copies of real production databases in DBaaS with rapid provisioning in the database cloud; and a case study of a customer who recently achieved success with their transformational journey from a traditional silo’ed environment onto an Exadata based database cloud using Enterprise Manager 12c. More Information – Podcast on Database as a Service using Oracle Enterprise Manager 12c; Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide; DBaaS Cookbook; Exadata Discovery Cookbook; Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal; Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • Performance due to entity update

    - by Rizzo
    I always think about 2 ways to code the global Step() function, both with pros and cons. Please note that AIStep is just there to provide one more step for whoever wants it. // Approach 1 step foreach( entity in entities ) { entity.DeltaStep( delta_time ); if( time_for_fixed_step ) entity.FixedStep(); if( time_for_AI_step ) entity.AIStep(); ... // all kind of updates you want } PRO: you just have to iterate once over all entities. CON: fidelity could be lower in some scenarios, since entity.FixedStep() isn't run for all entities at the same time. // Approach 2 step foreach( entity in entities ) entity.DeltaStep( delta_time ); if( time_for_fixed_step ) foreach( entity in entities ) entity.FixedStep(); if( time_for_AI_step ) foreach( entity in entities ) entity.AIStep(); // all kind of updates you want SEPARATED PRO: fidelity on FixedStep is higher; there shouldn't be much time between all the entities' updates, unlike Approach 1, where you may have to wait for other updates until FixedStep() comes. CON: you iterate once for each kind of update. Also, a third approach could be a hybrid between both of them, something in the way of foreach( entity in entities ) { entity.DeltaStep( delta_time ); if( time_for_AI_step ) entity.AIStep(); // all kind of updates you want BUT FixedStep() } if( time_for_fixed_step ) { foreach( entity in entities ) { entity.FixedStep(); } } Just two loops, not caring about time fidelity in anything other than FixedStep(). Any thoughts on this matter? Does it really matter whether all steps happen at once, or am I thinking about problems that don't exist?
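
    For reference, a small Python sketch of Approach 2 combined with the usual fixed-timestep accumulator; the entity method names mirror the pseudocode above and are assumptions:

```python
FIXED_DT = 1.0 / 60.0   # assumed fixed-step rate

def step(entities, delta_time, state):
    # Variable-timestep pass: every entity, every frame.
    for entity in entities:
        entity.delta_step(delta_time)

    # Fixed pass: run zero or more whole steps, all entities together, so no
    # entity's FixedStep lags behind the others (Approach 2's PRO).
    state["accumulator"] += delta_time
    while state["accumulator"] >= FIXED_DT:
        for entity in entities:
            entity.fixed_step(FIXED_DT)
        state["accumulator"] -= FIXED_DT
```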

    Read the article

  • Are elements returned by a Linq-to-Entities query streamed from the DB one at a time or are they retrieved all at once?

    - by carewithl
    Are elements returned by a Linq-to-Entities query streamed from the database one at a time (as they are requested), or are they retrieved all at once? SampleContext context = new SampleContext(); // SampleContext derives from ObjectContext var search = context.Contacts; foreach (var contact in search) { Console.WriteLine(contact.ContactID); // is each Contact retrieved from the DB // only when foreach requests it? } Thank you in advance
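
    This says nothing about what the Entity Framework provider does internally, but the distinction being asked about is the same one visible with any database cursor, e.g. in Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO contacts (id) VALUES (?)", [(i,) for i in range(1, 6)])

cursor = conn.execute("SELECT id FROM contacts")
for (contact_id,) in cursor:       # rows are pulled from the cursor as the loop asks for them
    print(contact_id)

all_rows = conn.execute("SELECT id FROM contacts").fetchall()   # materializes every row up front
print(len(all_rows))
```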

    Read the article

  • Why is Tracking SEO So Critical For Every Business?

    When you have a business and are using SEO marketing to help promote it, you need to understand the different options available for tracking SEO. There are many ways to make sure your SEO marketing efforts are paying off like they should.

    Read the article

  • Need to override drawRect: if your UIView is merely a container?

    - by Greg Maletic
    According to Apple's docs, "Subclasses need not override -[UIView drawRect:] if the subclass is a container for other views." I have a custom UIView subclass that is indeed merely a container for other views. Yet the contained views aren't getting drawn. Here's the pertinent code that sets up the custom UIView subclass: - (id)initWithFrame:(CGRect)frame { if ((self = [super initWithFrame:frame])) { // Consists of both an "on" light and an "off" light. We flick between the two depending upon our state. self.onLight = [[[LoyaltyCardNumberView alloc] initWithFrame:frame] autorelease]; self.onLight.backgroundColor = [UIColor clearColor]; self.onLight.on = YES; [self addSubview:self.onLight]; self.offLight = [[[LoyaltyCardNumberView alloc] initWithFrame:frame] autorelease]; self.offLight.backgroundColor = [UIColor clearColor]; self.offLight.on = NO; [self addSubview:self.offLight]; self.on = NO; } return self; } When I run the code that displays this custom UIView, nothing shows up. But when I add a drawRect method... - (void)drawRect:(CGRect)rect { [self.onLight drawRect:rect]; [self.offLight drawRect:rect]; } ...the subviews display. (Clearly, this isn't the right way to be doing this, not only because it's contrary to what the docs say, but because it -always- displays both subviews, completely ignoring some other code in my UIView that sets the hidden property of one of the views, it ignores the z-ordering, etc.) Anyway, the main question: why don't my subviews display when I'm not overriding drawRect:? Thanks!

    Read the article

  • Circular reference error when outputting LINQ to SQL entities with relationships as JSON in an ASP.N

    - by roosteronacid
    Here's a design-view screenshot of my dbml-file. The relationships are auto-generated by foreign keys on the tables. When I try to serialize a query-result into JSON I get a circular reference error..: public ActionResult Index() { return Json(new DataContext().Ingredients.Select(i => i)); } But if I create my own collection of "bare" Ingredient objects, everything works fine..: public ActionResult Index() { return Json(new Entities.Ingredient[] { new Entities.Ingredient(), new Entities.Ingredient(), new Entities.Ingredient() }); } ... Also; serialization works fine if I remove the relationships on my tables. How can I serialize objects with relationships, without having to turn to a 3rd-party library? I am perfectly fine with just serializing the "top-level" objects of a given collection.. That is; without the relationships being serialized as well.
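
    The underlying problem is generic: a serializer that walks an object graph will loop forever on back-references unless you cut them or serialize a projection. A tiny Python illustration of both the failure and the projection fix (the EF-side options, such as mapping to DTOs or anonymous types, have the same shape):

```python
import json

flour = {"name": "flour"}
bread = {"name": "bread", "ingredients": [flour]}
flour["used_in"] = bread                      # relationship back to the parent: a cycle

try:
    json.dumps(bread)
except ValueError as err:
    print(err)                                # "Circular reference detected"

# Serializing a projection of just the fields you need breaks the cycle:
print(json.dumps({"name": bread["name"],
                  "ingredients": [i["name"] for i in bread["ingredients"]]}))
```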

    Read the article

  • Escape key event problem in wxPython?

    - by MA1
    Hi All The following key event is not working. Any idea? class Frame(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, -1, title='testing', size=(300,380), style= wx.MINIMIZE_BOX|wx.SYSTEM_MENU |wx.CAPTION|wx.CLOSE_BOX|wx.CLIP_CHILDREN) self.tree = HyperTreeList(self, style = wx.TR_DEFAULT_STYLE | wx.TR_FULL_ROW_HIGHLIGHT | wx.TR_HAS_VARIABLE_ROW_HEIGHT | wx.TR_HIDE_ROOT) # create column self.tree.AddColumn("coll") self.Bind(wx.EVT_KEY_DOWN, self.OnKeyDown) def OnKeyDown(self, event): keycode = event.GetKeyCode() print "keycode ", keycode if keycode == wx.WXK_ESCAPE: print "closing" self.Close() Regards,
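
    A likely cause is that key events are delivered to the focused child (the tree), not to the frame that bound EVT_KEY_DOWN. One hedged, standalone sketch of frame-level key handling with EVT_CHAR_HOOK, which reaches the frame even while a child has focus:

```python
import wx

class Frame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, -1, title="testing", size=(300, 380))
        # EVT_CHAR_HOOK fires on the frame even when a child widget has the focus.
        self.Bind(wx.EVT_CHAR_HOOK, self.OnKey)

    def OnKey(self, event):
        if event.GetKeyCode() == wx.WXK_ESCAPE:
            self.Close()
        else:
            event.Skip()   # let every other key reach the focused widget

if __name__ == "__main__":
    app = wx.App(False)
    Frame().Show()
    app.MainLoop()
```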

    Read the article

  • In Python epoll, can I avoid errno.EWOULDBLOCK and errno.EAGAIN?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found the performance is not ideal when sending large payloads. I looked into the code and found there are actually a LOT of errors like Traceback (most recent call last): File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now num_bytes = self.sock.send(self.response) error: [Errno 35] Resource temporarily unavailable Previously I silenced it, as the documentation suggests, so my sending function was written this way: def send_now(self): '''send message at once''' st = time.time() times = 0 while self.response != '': try: num_bytes = self.sock.send(self.response) l.info('msg wrote %s %d : %r size %r',self.ip,self.port,self.response[:num_bytes],num_bytes) self.response = self.response[num_bytes:] except socket.error,e: if e[0] in (errno.EWOULDBLOCK,errno.EAGAIN): #here I printed it, but normally I silence it #print 'would block, again %r',tb.format_exc() break else: l.warning('%r %r socket error %r',self.ip,self.port,tb.format_exc()) #must break or cause dead loop break except: #other exceptions l.warning('%r %r msg write error %r',self.ip,self.port,tb.format_exc()) break times += 1 et = time.time() I googled it, and it says this is caused by the network send buffer temporarily running out of space. So how can I manually and efficiently detect this condition instead of letting it reach the exception path? Raising and handling the exception costs too much time.
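
    One way to avoid paying for the exception on every full buffer is to ask epoll when the socket is writable and only send then. A minimal sketch, assuming a platform where select.epoll exists (the traceback above is from a Mac, where select.select or kqueue would play the same role):

```python
import errno
import select
import socket

def send_all(sock, data, timeout=5.0):
    """Wait for writability with epoll instead of spinning into EAGAIN."""
    sock.setblocking(False)
    ep = select.epoll()
    ep.register(sock.fileno(), select.EPOLLOUT)
    try:
        while data:
            if not ep.poll(timeout):           # no EPOLLOUT event: buffer still full
                raise socket.timeout("send buffer stayed full")
            try:
                sent = sock.send(data)
                data = data[sent:]
            except socket.error as e:
                if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
                    raise                      # a stray EAGAIN just loops back to poll()
    finally:
        ep.unregister(sock.fileno())
        ep.close()
```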

    Read the article

  • How do I implement a dispatch table in a Perl OO module?

    - by Iain
    I want to put some subs that are within an OO package into an array - also within the package - to use as a dispatch table. Something like this package Blah::Blah; use fields 'tests'; sub new { my($class )= @_; my $self = fields::new($class); $self->{'tests'} = [ $self->_sub1 ,$self->_sub2 ]; return $self; } sub _sub1 { ... } sub _sub2 { ... } I'm not entirely sure on the syntax for this? $self->{'tests'} = [ $self->_sub1 ,$self->_sub2 ]; or $self->{'tests'} = [ \&{$self->_sub1} ,\&{$self->_sub2} ]; or $self->{'tests'} = [ \&{_sub1} ,\&{_sub2} ]; I don't seem to be able to get this to work within an OO package, whereas it's quite straightforward in a procedural fashion, and I haven't found any examples for OO. Any help is much appreciated, Iain
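
    The same "store the code, call it later" idea is easier to see in Python, where a bound method is already a value you can put in a list; in Perl the analogous move is storing code references (for example sub { $self->_sub1(@_) } wrappers, or what $self->can('_sub1') returns) rather than the result of calling the subs:

```python
class Blah:
    def __init__(self):
        # Store the bound methods themselves (no parentheses), not their return values.
        self.tests = [self._sub1, self._sub2]

    def _sub1(self):
        return "one"

    def _sub2(self):
        return "two"

    def run(self):
        return [test() for test in self.tests]   # dispatch: call them later, via the table

print(Blah().run())   # ['one', 'two']
```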

    Read the article

  • Unicode issue in Django

    - by Kave
    I seem to have a unicode problem with the deal_instance_name in the Deal model. It says: coercing to Unicode: need string or buffer, __proxy__ found The exception happens on this line: return smart_unicode(self.deal_type.deal_name) + _(u' - Set No.') + str(self.set) The line works if I remove smart_unicode(self.deal_type.deal_name) but why? Back then in Django 1.1 someone had the same problem on Stackoverflow I have tried both the unicode() as well as the new smart_unicode() without any joy. What could I be missing please? class Deal(models.Model): def __init__(self, *args, **kwargs): super(Deal, self).__init__(*args, **kwargs) self.deal_instance_name = self.__unicode__() deal_type = models.ForeignKey(DealType) deal_instance_name = models.CharField(_(u'Deal Name'), max_length=100) set = models.IntegerField(_(u'Set Number')) def __unicode__(self): return smart_unicode(self.deal_type.deal_name) + _(u' - Set No.') + str(self.set) class Meta: verbose_name = _(u'Deal') verbose_name_plural = _(u'Deals') Dealtype: class DealType(models.Model): deal_name = models.CharField(_(u'Deal Name'), max_length=40) deal_description = models.TextField(_(u'Deal Description'), blank=True) def __unicode__(self): return smart_unicode(self.deal_name) class Meta: verbose_name = _(u'Deal Type') verbose_name_plural = _(u'Deal Types')

    Read the article

  • Querying a Cassandra column family for rows that have not been updated in X days

    - by knorv
    I'm moving an existing MySQL-based application over to Cassandra. So far finding the equivalent Cassandra data model has been quite easy, but I've stumbled on the following problem for which I'd appreciate some input: Consider a MySQL table holding millions of entities: CREATE TABLE entities ( id INT AUTO_INCREMENT NOT NULL, entity_information VARCHAR(...), entity_last_updated DATETIME, PRIMARY KEY (id), KEY (entity_last_updated) ); The table is regularly queried for entities that need to be updated: SELECT id FROM entities WHERE entity_last_updated IS NULL OR entity_last_updated < DATE_ADD(NOW(), INTERVAL -7*24 HOUR) ORDER BY entity_last_updated ASC; The entities returned by this query are then updated using the following query: UPDATE entities SET entity_information = ?, entity_last_updated = NOW() WHERE id = ?; What would be the corresponding Cassandra data model that would allow me to store the given information and effectively query the entities table for entities that need to be updated (that is: entities that have not been updated in the last seven days)?

    Read the article

  • Perl: implementing a dispatch table in an OO module?

    - by Iain
    I want to put some subs that are within an OO package into an array - also within the package - to use as a dispatch table. Something like this package Blah::Blah; use fields 'tests'; sub new { my($class )= @_; my $self = fields::new($class); $self->{'tests'} = [ $self->_sub1 ,$self->_sub2 ]; return $self; } sub _sub1 { ... } sub _sub2 { ... } I'm not entirely sure on the syntax for this? $self->{'tests'} = [ $self->_sub1 ,$self->_sub2 ]; or $self->{'tests'} = [ \&{$self->_sub1} ,\&{$self->_sub2} ]; or $self->{'tests'} = [ \&{_sub1} ,\&{_sub2} ]; I don't seem to be able to get this to work within an OO package, whereas it's quite straightforward in a procedural fashion, and I haven't found any examples for OO. Any help is much appreciated, Iain

    Read the article

  • ObjectContext ConnectionString Sqlite

    - by codegarten
    I need to connect to a database in Sqlite, so I downloaded and installed System.Data.SQLite and dragged all my tables in with the designer. The designer created a .cs file with public class Entities : ObjectContext and 3 constructors: 1st public Entities() : base("name=Entities", "Entities") this one loads the connection string from App.config and works fine. App.config <connectionStrings> <add name="Entities" connectionString="metadata=res://*/Db.TracModel.csdl|res://*/Db.TracModel.ssdl|res://*/Db.TracModel.msl;provider=System.Data.SQLite;provider connection string=&quot;data source=C:\Users\Filipe\Desktop\trac.db&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> 2nd public Entities(string connectionString) : base(connectionString, "Entities") 3rd public Entities(EntityConnection connection) : base(connection, "Entities") Here is the problem: I have already tried many configurations and already used EntityConnectionStringBuilder to make the connection string, with no luck. Can you please point me in the right direction? EDIT(1): How can I construct a valid connection string?

    Read the article

  • What are appropriate ways to represent relationships between people in a database table?

    - by Emilio
    I've got a table of people - an ID primary key and a name. In my application, people can have 0 or more real-world relationships with other people, so Jack might "work for" Jane and Tom might "replace" Tony and Bob might "be an employee of" Rob and Bob might also "be married to" Mary. What's the best way to represent this in the database? A many to many intersect table? A series of self joins? A relationship table with one row per relationship pair and type, where I insert records for the relationship in both directions?

    Read the article

  • Making a Python iterator go backwards?

    - by uberjumper
    Is there any way to make a Python list iterator go backwards? Basically I have this class IterTest(object): def __init__(self, data): self.data = data self.__iter = None def all(self): self.__iter = iter(self.data) for each in self.__iter: mtd = getattr(self, type(each).__name__) mtd(each) def str(self, item): print item next = self.__iter.next() while isinstance(next, int): print next next = self.__iter.next() def int(self, item): print "Crap i skipped C" if __name__ == '__main__': test = IterTest(['a', 1, 2,3,'c', 17]) test.all() Running this code results in the output: a 1 2 3 Crap i skipped C I know why it gives me the output; however, is there a way I can step backwards in the str() method, by one step?
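
    Plain iterators cannot rewind, but wrapping one so that an item can be pushed back and re-read covers the "step backwards by one" need in a tokenizer-style loop like this. A small sketch (Python 3 method names, with a next alias matching the Python 2 code above):

```python
class PushbackIterator:
    """Wraps an iterator so items can be pushed back and re-read."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._pushed = []

    def __iter__(self):
        return self

    def __next__(self):
        if self._pushed:
            return self._pushed.pop()
        return next(self._it)

    next = __next__                # Python 2 spelling

    def push_back(self, item):
        self._pushed.append(item)

it = PushbackIterator(['a', 1, 2, 3, 'c', 17])
first = next(it)
it.push_back(first)                # "step back": the next call yields 'a' again
print(next(it))                    # a
```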

    Read the article

  • Optimizing Haskell code

    - by Masse
    I'm trying to learn Haskell and after an article in reddit about Markov text chains, I decided to implement Markov text generation first in Python and now in Haskell. However I noticed that my python implementation is way faster than the Haskell version, even Haskell is compiled to native code. I am wondering what I should do to make the Haskell code run faster and for now I believe it's so much slower because of using Data.Map instead of hashmaps, but I'm not sure I'll post the Python code and Haskell as well. With the same data, Python takes around 3 seconds and Haskell is closer to 16 seconds. It comes without saying that I'll take any constructive criticism :). import random import re import cPickle class Markov: def __init__(self, filenames): self.filenames = filenames self.cache = self.train(self.readfiles()) picklefd = open("dump", "w") cPickle.dump(self.cache, picklefd) picklefd.close() def train(self, text): splitted = re.findall(r"(\w+|[.!?',])", text) print "Total of %d splitted words" % (len(splitted)) cache = {} for i in xrange(len(splitted)-2): pair = (splitted[i], splitted[i+1]) followup = splitted[i+2] if pair in cache: if followup not in cache[pair]: cache[pair][followup] = 1 else: cache[pair][followup] += 1 else: cache[pair] = {followup: 1} return cache def readfiles(self): data = "" for filename in self.filenames: fd = open(filename) data += fd.read() fd.close() return data def concat(self, words): sentence = "" for word in words: if word in "'\",?!:;.": sentence = sentence[0:-1] + word + " " else: sentence += word + " " return sentence def pickword(self, words): temp = [(k, words[k]) for k in words] results = [] for (word, n) in temp: results.append(word) if n > 1: for i in xrange(n-1): results.append(word) return random.choice(results) def gentext(self, words): allwords = [k for k in self.cache] (first, second) = random.choice(filter(lambda (a,b): a.istitle(), [k for k in self.cache])) sentence = [first, second] while len(sentence) < words or sentence[-1] is not ".": current = (sentence[-2], sentence[-1]) if current in self.cache: followup = self.pickword(self.cache[current]) sentence.append(followup) else: print "Wasn't able to. Breaking" break print self.concat(sentence) Markov(["76.txt"]) -- module Markov ( train , fox ) where import Debug.Trace import qualified Data.Map as M import qualified System.Random as R import qualified Data.ByteString.Char8 as B type Database = M.Map (B.ByteString, B.ByteString) (M.Map B.ByteString Int) train :: [B.ByteString] -> Database train (x:y:[]) = M.empty train (x:y:z:xs) = let l = train (y:z:xs) in M.insertWith' (\new old -> M.insertWith' (+) z 1 old) (x, y) (M.singleton z 1) `seq` l main = do contents <- B.readFile "76.txt" print $ train $ B.words contents fox="The quick brown fox jumps over the brown fox who is slow jumps over the brown fox who is dead."

    Read the article

  • how to handle multiple profiles per user?

    - by Scott Willman
    I'm doing something that doesn't feel very efficient. From my code below, you can probably see that I'm trying to allow for multiple profiles of different types attached to my custom user object (Person). One of those profiles will be considered a default and should have an accessor from the Person class. Can this be done better? from django.db import models from django.contrib.auth.models import User, UserManager class Person(User): public_name = models.CharField(max_length=24, default="Mr. T") objects = UserManager() def save(self): self.set_password(self.password) super(Person, self).save() def _getDefaultProfile(self): def_teacher = self.teacher_set.filter(default=True) if def_teacher: return def_teacher[0] def_student = self.student_set.filter(default=True) if def_student: return def_student[0] def_parent = self.parent_set.filter(default=True) if def_parent: return def_parent[0] return False profile = property(_getDefaultProfile) def _getProfiles(self): # Inefficient use of QuerySet here. Tolerated because the QuerySets should be very small. profiles = [] if self.teacher_set.count(): profiles.append(list(self.teacher_set.all())) if self.student_set.count(): profiles.append(list(self.student_set.all())) if self.parent_set.count(): profiles.append(list(self.parent_set.all())) return profiles profiles = property(_getProfiles) class BaseProfile(models.Model): person = models.ForeignKey(Person) is_default = models.BooleanField(default=False) class Meta: abstract = True class Teacher(BaseProfile): user_type = models.CharField(max_length=7, default="teacher") class Student(BaseProfile): user_type = models.CharField(max_length=7, default="student") class Parent(BaseProfile): user_type = models.CharField(max_length=7, default="parent")
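
    One small cleanup of the _getProfiles pattern shown above: itertools.chain removes the count() round-trips and the list-of-lists shape. This is only a sketch against the manager names used in the question (teacher_set and friends); whether a single profile model with a type field is the better design is a separate question:

```python
from itertools import chain

def get_profiles(person):
    # One flat list of profile objects instead of nested lists, and no
    # separate .count() queries just to test for emptiness.
    return list(chain(person.teacher_set.all(),
                      person.student_set.all(),
                      person.parent_set.all()))
```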

    Read the article

  • PyQt: How to Know the Progress of a Process Running in the Background

    - by krishnanunni
    Hello there. I'm really confused by the ProgressBar mechanisms, and now I need help with this: can we know the percentage completion, or the time remaining, of a process that has been initiated from a Qt interface like this? self.process = QProcess() self.connect(self.process, SIGNAL("readyReadStdout()"), self.readOutput) self.connect(self.process, SIGNAL("readyReadStderr()"), self.readErrors) tarsourcepath="sudo tar xvpf "+ self.path1 self.process.setArguments(QStringList.split(" ",tarsourcepath)) self.textLabel3.setText(self.__tr("Extracting.....")) self.process.start() The readOutput slot just collects the data from stdout and transfers it to a text browser. I need to know whether there is any way to monitor the ongoing process and know the percentage complete, so that I can drive a progress bar from it. Thanks, experts
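
    tar itself doesn't report a percentage, but if the extraction can be done in-process, Python's tarfile module makes one easy to compute: read the member list once for the total, then report after each extracted member. A sketch that assumes the process already has permission to extract (i.e. the sudo in the command above isn't actually needed):

```python
import tarfile

def extract_with_progress(archive_path, dest_dir, report):
    """Calls report(percent) after each extracted member."""
    with tarfile.open(archive_path) as tf:
        members = tf.getmembers()            # one pass over the archive for the total
        total = len(members) or 1
        for done, member in enumerate(members, start=1):
            tf.extract(member, path=dest_dir)
            report(done * 100 // total)

# Usage: wire report() to the progress bar, e.g.
#   extract_with_progress("backup.tar", "/tmp/restore", lambda p: bar.setValue(p))
```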

    Read the article

  • python object AttributeError: type object 'Track' has no attribute 'title'

    - by ccwhite1
    I apologize if this is a noob question, but I can't seem to figure this one out. I have defined an object that represents a music track (NOTE: this originally had just ATTRIBUTE vs. self.ATTRIBUTE; I edited those values in to help remove confusion. They had no effect on the problem) class Track(object): def __init__(self, title, artist, album, source, dest): """ Model of the Track Object Contains the following attributes: 'Title', 'Artist', 'Album', 'Source', 'Dest' """ self.atrTitle = title self.atrArtist = artist self.atrAlbum = album self.atrSource = source self.atrDest = dest I use ObjectListView to create a list of tracks in a specific directory ....other code.... self.aTrack = [Track(sTitle,sArtist,sAlbum,sSource, sDestDir)] self.TrackOlv.AddObjects(self.aTrack) ....other code.... Now I want to iterate the list and print out a single value of each item list = self.TrackOlv.GetObjects() for item in list: print item.atrTitle This fails with the error AttributeError: type object 'Track' has no attribute 'atrTitle' What really confuses me is that if I highlight a single item in the ObjectListView display and use the following code, it correctly prints out the single value for the highlighted item list = self.TrackOlv.GetSelectedObject() print list.atrTitle
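
    The error message itself is the clue: "type object" means the thing in the list was the Track class, not a Track instance, so the attribute lookup fails. A two-line reproduction:

```python
class Track(object):
    def __init__(self, title):
        self.atrTitle = title

print(Track("some song").atrTitle)    # fine: the attribute lives on the instance

try:
    Track.atrTitle                    # looking it up on the class itself...
except AttributeError as err:
    print(err)                        # type object 'Track' has no attribute 'atrTitle'
```

    So the thing to check is what GetObjects() actually returns; if the list ever contains the Track class itself rather than instances, this exact error appears.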

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >