Search Results

Search found 67075 results on 2683 pages for 'data model'.

  • Trying to extend the contrib.auth.User model and add a "relationships" manager

    - by dotty
    I have the following model setup. from django.db import models from django.contrib.auth.models import User class SomeManager(models.Manager): def friends(self): # return friends bla bla bla class Relationship(models.Model): """(Relationship description)""" from_user = models.ForeignKey(User, related_name='from_user') to_user = models.ForeignKey(User, related_name='to_user') has_requested_friendship = models.BooleanField(default=True) is_friend = models.BooleanField(default=False) objects = SomeManager() relationships = models.ManyToManyField(User, through=Relationship, symmetrical=False) relationships.contribute_to_class(User, 'relationships') Here I take the User object and use contribute_to_class to add 'relationships' to the User object. The relationships show up, but if I call User.relationships.friends it should run the friends() method, but it's failing. Any ideas how I would do this? Thanks
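    A minimal sketch of one way this is often structured, assuming an older (pre-1.9 style) Django where ForeignKey takes no on_delete: keep the custom query logic on Relationship's manager and call it there, because the contributed ManyToManyField only gives User.relationships the standard related-object manager (all(), filter(), ...), never custom manager methods. The friends_of() helper name below is an illustrative assumption, not the asker's code.

        from django.db import models
        from django.contrib.auth.models import User

        class RelationshipManager(models.Manager):
            # Illustrative helper: relationships confirmed as friendships for a given user
            def friends_of(self, user):
                return self.filter(from_user=user, is_friend=True)

        class Relationship(models.Model):
            from_user = models.ForeignKey(User, related_name='from_user')
            to_user = models.ForeignKey(User, related_name='to_user')
            has_requested_friendship = models.BooleanField(default=True)
            is_friend = models.BooleanField(default=False)

            objects = RelationshipManager()

        # Contributing the M2M still works for traversal: user.relationships.all()
        relationships = models.ManyToManyField(User, through=Relationship, symmetrical=False)
        relationships.contribute_to_class(User, 'relationships')

        # Custom queries go through the manager on the through model instead:
        #   Relationship.objects.friends_of(some_user)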

    Read the article

  • CakePHP: model recursive associations and find

    - by Petecocoon
    Hello everybody! I'm having some trouble with a find() on a model in CakePHP. I have three models related in this way: Project(some_fields, item_id) ------belongsTo----- Item(some_fields, item_id) ------belongsTo----- User(some_fields, nickname) I need to do a find() and retrieve all fields from Project, a field from Item and the nickname field from User. This is my code: $this->set('projects', $this->Project->find('all', array('recursive' => 2))); but my output doesn't contain the User object. I've tried the Containable behaviour but the output is the same. What is broken? Many thanks, Peter

    Read the article

  • Custom OData operation / customize EF model to hide join table in many-to-many relationship

    - by AC
    I've got a data model that has two tables with a join table for a many-to-many relationship, and I'm creating an OData service to expose the data for CRUD ops in a Silverlight app. What I'd like to do is abstract the join table away from the service. I'm not sure if the best way to do this would be in the model (using EF in .NET 3.5 SP1) or if I should do it with a custom service operation. If I do it in the EF model (not sure how I'd do this), then the OOTB WCF Data Service stuff would make it easy to say [..]/Courses(1)/Modules ... otherwise I'd have to create a custom operation to do this. Is it possible to do this in the EF model and, if so, how does that work?

    Read the article

  • Define Rails Model Persistent Attributes in Model File

    - by Kevin Sylvestre
    I recently played with MongoDB in Rails using Mongoid. I like the ability to define attributes for models within the model file (as opposed to in migrations): class Person include Mongoid::Document field :name, :type => String field :birthday, :type => Date end For projects that cannot use a schema-less database, does a similar feature exist? Any gems or plugins that generate schemas from a similar syntax would be greatly appreciated. Thanks.

    Read the article

  • How best can I extract a logical model from a physical DB model

    - by Dean
    We have made substantial changes to our physical DB. Now, as it is the end of the project, I would like to abstract a logical model from this, to allow me to generate schemas for both Oracle and SQL Server. Can anyone guide me as to the best way to achieve this? I was hoping TOAD Data Modeller would help, but I can't seem to see any options to do what I require.

    Read the article

  • Can't call method in model table class using Doctrine with Zend Framework

    - by Jeremy Hicks
    I'm using Doctrine with Zend Framework. For my model, I'm using a base class, the regular class (which extends the base class), and a table class. In my table class, I've created a method which does a query for records with a specific value for one of the fields in my model. When I try and call this method from my controller, I get an error message saying, "Message: Unknown method Doctrine_Table::getCreditPurchases". Is there something else I need to do to call functions in my table class? Here is my code: class Model_CreditTable extends Doctrine_Table { /** * Returns an instance of this class. * * @return object Model_CreditTable */ public static function getInstance() { return Doctrine_Core::getTable('Model_Credit'); } public function getCreditPurchases($id) { $q = $this->createQuery('c') ->where('c.buyer_id = ?', $id); return $q->fetchArray(); } } // And then in my controller method I have... $this->view->credits = Doctrine_Core::getTable('Model_Credit')->getCreditPurchases($ns->id);

    Read the article

  • Rails Model has_many with multiple foreign_keys

    - by Kenzie
    Relatively new to rails and trying to model a very simple family "tree" with a single Person model that has a name, gender, father_id and mother_id (2 parents). Below is basically what I want to do, but obviously I can't repeat the :children in a has_many (the first gets overwritten). class Person < ActiveRecord::Base belongs_to :father, :class_name => 'Person' belongs_to :mother, :class_name => 'Person' has_many :children, :class_name => 'Person', :foreign_key => 'mother_id' has_many :children, :class_name => 'Person', :foreign_key => 'father_id' end Is there a simple way to use has_many with 2 foreign keys, or maybe change the foreign key based on the object's gender? Or is there another/better way altogether? Thanks!

    Read the article

  • Getting the value from an array in the model in rails

    - by slythic
    Hi all, I have a relatively simple problem. I have a model named Item which I've added a status field. The status field will only have two options (Lost or Found). So I created the following array in my Item model: STATUS = [ [1, "Lost"], [2, "Found"]] In my form view I added the following code which works great: <%= collection_select :item, :status, Item::STATUS, :first, :last, {:include_blank => 'Select status'} %> This stores the numeric id (1 or 2) of the status in the database. However, in my show view I can't figure out how to convert from the numeric id (again, 1 or 2) to the text equivalent of Lost or Found. Any ideas on how to get this to work? Is there a better way to go about this? Many thanks, Tony

    Read the article

  • RoR model field without validators, has*, delegates, etc

    - by jackr
    How can I declare a field, in the Rails model, when it doesn't have any "has_" relations, or validations, or delegations? I just need to ensure its existence and column width in the schema. Currently, I have no mention of the field in the "schema section" of the model file, but it's referenced in various methods that use it, and this seems to work. However, depending on my exact creation workflow, the underlying database table may be created as t.binary "field_name", :limit => 32 or t.binary "field_name", :limit => 255 This is not a restriction on the value (any binary value is valid, even NULL), only on the table column declaration. As it happens, 32 is enough -- it never receives any larger value, it's only ever written to like this: self.field_name = SecureRandom.random_bytes(32)

    Read the article

  • Rails: Creating subfolders in model?

    - by keruilin
    I'm going to have a ton of subclasses, so I want to organize them under a subfolder called stream. I added the following line to environment.rb so that all classes in the subfolder would be loaded: Rails::Initializer.run do |config| ... config.load_paths += Dir["#{RAILS_ROOT}/app/models/*"].find_all { |f| File.stat(f).directory? } ... end I thought this would solve the issue in which, by convention, the model class is namespaced into a corresponding module. However, when I try to call one of the classes, called Stream, in the stream folder, I get the following error: NoMethodError: undefined method `new' for Stream:Module from (irb):28 from /usr/local/bin/irb:12:in `<main>' Here's the model for the parent and one child: class Stream end class EventStream < Stream end Any idea what the issue is?

    Read the article

  • OpenERP model tracking external resource, hooking into parent's copy() or copy_data()

    - by CB.
    I have added a new model (call it crm_lead_external) that is linked via a new one2many on crm_lead. Thus, my module has two models defined: an updated crm_lead (with _name=crm_lead) and a new crm_lead_external. This external model tracks a file and as such has a 'filename' field. I also created a unique SQL index on this filename field. This is part of my module: def copy(self, cr, uid, id, default=None, context=None): if not default: default = {} default.update({ 'state': 'new', 'filename': '', }) ret = super(crm_lead_external, self).copy(cr, uid, id, default, context=context) #do file copy return ret The intent here is to allow an external entity to be duplicated, but to retarget the file path. Now, if I click duplicate on the Lead, I get an IntegrityError on my unique constraint. Is there a particular reason why copy() isn't being called? Should I add this logic to copy_data()? Must I really override copy() for the lead? Thanks in advance.

    Read the article

  • cakephp - form for belongsTo Model

    - by user1511579
    I created the following model to link 2 relational tables: class Ficha extends AppModel { //public $useTable = 'ficha_seg'; var $primaryKey = 'id_ficha'; var $name = 'Ficha'; var $belongsTo = array( 'Perigo' => array( 'className' => 'Perigo', 'foreignKey' => false, 'conditions' => 'Perigo.id_fichas = Ficha.id_ficha' ) ); } Now, I have a form that requires data from the class Ficha, and is then redirected to another ctp page where I will input the data for the table "Perigos". However, since I'm still a newbie in CakePHP, I'm having difficulty building that second form to insert the data into the table "Perigos". Here is the code I have built so far for the second form. FichasController.php (the method where it is supposed to save the data to the table "Perigos"): public function preencher_ficha(){ if ($this->request->is('ficha')) { $this->Ficha->create(); if ($this->Ficha->Perigo->save($this->request->data)) { $last_id=$this->Ficha->getLastInsertID(); $this->Session->setFlash('Your post has been updated '.$last_id.'.'); //$this->redirect(array('action' => 'preencher_ficha')); } else { $this->Session->setFlash('Unable to qualquer coisa your post.'); } } } The preencher_ficha.ctp file with the form: echo $this->Form->create('Ficha->Perigo', array('action' => 'index')); echo $this->Form->input('class_subst', array('label' => 'Classificação:')); echo $this->Form->input('simbolos_perigo', array('label' => 'Símbolos:')); echo $this->Form->input('frases_r', array('label' => 'Frases:')); echo $this->Form->end('Finalizar Ficha'); Here I guess the create part is wrong, but I think I have errors in the controller part too.

    Read the article

  • Ruby on Rails Increment Counter in Model

    - by febs
    I'm attempting to increment a counter in my User table from another model. class Count < ActiveRecord::Base belongs_to :user after_create :update_count def update_count user = User.find(self.user_id) user.increment(:count) end end So when a Count is created, the goal is to increment a counter column for that user. Currently it refuses to get the user after creation and I get a nil error. I'm using Devise for my users. Is this the right (best practice) place to do it? I had it working in the controllers, but wanted to clean it up. I'm very inexperienced with model callbacks.

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.
    What is an OLAP data mart?
    In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube, which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.
    Why build OLAP data marts?
    Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.
    Extension Attributes: The Challenge
    One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube; so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.
    What is involved in building an OLAP data mart?
    There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together depending on their reporting requirements. So, in broad terms, the steps required to fulfil a client order are as follows:
    Step 1: Prepare metadata. Create a set of database names unique to the client's order. Modify all package connection strings used by SSIS to point to the new databases and file locations.
    Step 2: Create relational databases. Create the staging and data mart relational databases using dynamic SQL and set the database recovery mode to SIMPLE, as we do not need the overhead of logging anything. Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases.
    Step 3: Load staging database. Use SSIS to load all data files into the staging database in a parallel operation. Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube.
    Step 4: Load data mart relational database. Load the data from staging into the data mart relational database, again in parallel where possible. Allocate surrogate keys and use SSIS to perform surrogate key lookup during the load of fact tables.
    Step 5: Load extension tables & attributes. Pivot the extension attributes from their native name/value pairs into proper relational tables. Add the extension attributes to the views used by the OLAP cube.
    Step 6: Deploy & process OLAP cube. Deploy the OLAP database directly to the server using a C# script task in SSIS. Modify the connection string used by the OLAP cube to point to the data mart relational database. Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions. Remove any standard attributes that are not required. Process the OLAP cube.
    Step 7: Backup and drop databases. Drop the staging database as it is no longer required. Backup the data mart relational and OLAP databases and ship these to the client's infrastructure. Drop the data mart relational and OLAP databases from the build server. Mark the order complete. Start processing the next order, ad infinitum.
    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
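    The name/value-pair pivot in Step 5 is the part most likely to trip people up, so here is a minimal sketch of the idea outside SSIS, using Python and pandas rather than the author's SQL Server tooling. The column names and the pandas approach are illustrative assumptions, not part of the original process.

        import pandas as pd

        # Extension attributes arrive as name/value pairs, so the file schema never changes:
        # one row per (entity, attribute_name, attribute_value).
        rows = pd.DataFrame({
            "entity_id": [1, 1, 2, 2],
            "attribute_name": ["Region", "Segment", "Region", "Segment"],
            "attribute_value": ["North", "Retail", "South", "Wholesale"],
        })

        # Pivot the pairs back into a proper wide table, one column per extension
        # attribute, ready to be joined to the dimension views feeding the OLAP cube.
        wide = rows.pivot(index="entity_id",
                          columns="attribute_name",
                          values="attribute_value").reset_index()

        # wide now has one row per entity and one column each for Region and Segment.
        print(wide)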

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today’s fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post. If we look at the solution architecture, as shown in the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even higher. For example, a large insurance company did a proof of concept comparing ODI vs a traditional ETL tool (one of the market leaders) on Exadata. The same process that was taking 5 hours and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you to gain the most out of your Exadata investment with a truly optimized solution. GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligence data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers:
    - Non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source
    - No mid-tier; set-based transformations use database power
    - Mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW
    - An integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse
    In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom’s fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • XNA model parts are rendered on top of others

    - by Federico Chiaravalli
    I am trying to import an .fbx model exported from Blender into XNA. Here is my drawing code: public void Draw() { Matrix[] modelTransforms = new Matrix[Model.Bones.Count]; Model.CopyAbsoluteBoneTransformsTo(modelTransforms); foreach (ModelMesh mesh in Model.Meshes) { foreach (BasicEffect be in mesh.Effects) { be.EnableDefaultLighting(); be.World = modelTransforms[mesh.ParentBone.Index] * GameCamera.World * Translation; be.View = GameCamera.View; be.Projection = GameCamera.Projection; } mesh.Draw(); } } The problem is that when I start the game, some model parts are rendered on top of others instead of behind them. I've tried downloading other models from the internet, but they have the same problem.

    Read the article

  • Data access pattern, combining push and pull?

    - by andlju
    I need some advice on what kind of pattern(s) I should use for pushing/pulling data into my application. I'm writing a rule-engine that needs to hold quite a large amount of data in-memory in order to be efficient enough. I have some rather conflicting requirements: It is not acceptable for the engine to always have to wait for a full pre-load of all data before it is functional. Only fetching and caching data on-demand will lead to the engine taking too long before it is running quickly enough. An external event can trigger the need for specific parts of the data to be reloaded. Basically, I think I need a combination of pushing and pulling data into the application. A simplified version of my current "pattern" looks like this (in pseudo-C# written in Notepad): // This interface is implemented by all classes that needs the data interface IDataSubscriber { void RegisterData(Entity data); } // This interface is implemented by the data access class interface IDataProvider { void EnsureLoaded(Key dataKey); void RegisterSubscriber(IDataSubscriber subscriber); } class MyClassThatNeedsData : IDataSubscriber { IDataProvider _provider; MyClassThatNeedsData(IDataProvider provider) { _provider = provider; _provider.RegisterSubscriber(this); } public void RegisterData(Entity data) { // Save data for later StoreDataInCache(data); } void UseData(Key key) { // Make sure that the data has been stored in cache _provider.EnsureLoaded(key); Entity data = GetDataFromCache(key); } } class MyDataProvider : IDataProvider { List<IDataSubscriber> _subscribers; // Make sure that the data for key has been loaded to all subscribers public void EnsureLoaded(Key key) { if (HasKeyBeenMarkedAsLoaded(key)) return; PublishDataToSubscribers(key); MarkKeyAsLoaded(key); } // Force all subscribers to get a new version of the data for key public void ForceReload(Key key) { PublishDataToSubscribers(key); MarkKeyAsLoaded(key); } void PublishDataToSubscribers(Key key) { Entity data = FetchDataFromStore(key); foreach(var subscriber in _subscribers) { subscriber.RegisterData(data); } } } // This class will be spun off on startup and should make sure that all data is // preloaded as quickly as possible class MyPreloadingThread { IDataProvider _provider; MyPreloadingThread(IDataProvider provider) { _provider = provider; } void RunInBackground() { IEnumerable<Key> allKeys = GetAllKeys(); foreach(var key in allKeys) { _provider.EnsureLoaded(key); } } } I have a feeling though that this is not necessarily the best way of doing this. Just the fact that explaining it seems to take two pages feels like an indication. Any ideas? Any patterns out there I should have a look at?
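    A minimal Python sketch of the same combined push/pull idea, for comparison with the pseudo-C# above. The class and method names mirror that pseudo-code but are otherwise illustrative assumptions, not a recommendation of a particular library.

        import threading

        class DataProvider:
            """Pulls entities on demand, pushes them to subscribers, and remembers what is loaded."""
            def __init__(self, fetch):
                self._fetch = fetch            # callable: key -> entity
                self._subscribers = []
                self._loaded = set()
                self._lock = threading.Lock()

            def register_subscriber(self, subscriber):
                self._subscribers.append(subscriber)

            def ensure_loaded(self, key):
                with self._lock:
                    if key in self._loaded:
                        return
                    self._publish(key)
                    self._loaded.add(key)

            def force_reload(self, key):
                # External event: push a fresh copy to every subscriber.
                with self._lock:
                    self._publish(key)
                    self._loaded.add(key)

            def _publish(self, key):
                entity = self._fetch(key)
                for subscriber in self._subscribers:
                    subscriber.register_data(key, entity)

        class RuleEngine:
            """Subscriber that caches pushed entities and pulls anything missing."""
            def __init__(self, provider):
                self._provider = provider
                self._cache = {}
                provider.register_subscriber(self)

            def register_data(self, key, entity):
                self._cache[key] = entity

            def use_data(self, key):
                self._provider.ensure_loaded(key)   # no-op if already pushed
                return self._cache[key]

        def preload_in_background(provider, all_keys):
            # Warm the cache without blocking the engine; on-demand pulls still work meanwhile.
            threading.Thread(target=lambda: [provider.ensure_loaded(k) for k in all_keys],
                             daemon=True).start()

    Usage would follow the same shape as the pseudo-C#: construct a DataProvider around whatever fetches an entity by key, register a RuleEngine, start the background preload, and call use_data() from the engine as rules fire.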

    Read the article

  • Import CSV data (iPhone SDK)

    - by Ni
    I am new to Cocoa. I have been working on this stuff for a few days. With the following code, I can read all the data in the string and successfully get the data for the plot. NSMutableArray *contentArray = [NSMutableArray array]; NSString *filePath = @"995,995,995,995,995,995,995,995,1000,997,995,994,992,993,992,989,988,987,990,993,989"; NSArray *myText = [filePath componentsSeparatedByString:@","]; NSInteger idx; for (idx = 0; idx < myText.count; idx++) { NSString *data =[myText objectAtIndex:idx]; NSLog(@"%@", data); id x = [NSNumber numberWithFloat:0+idx*0.002777778]; id y = [NSDecimalNumber decimalNumberWithString:data]; [contentArray addObject: [NSMutableDictionary dictionaryWithObjectsAndKeys:x, @"x", y, @"y", nil]]; } self.dataForPlot = contentArray; Then, I try to load the data from a CSV file. The data in the Data.csv file has the same values and the same format as 995,995,995,995,995,995,995,995,1000,997,995,994,992,993,992,989,988,987,990,993,989. When I run the code, it is supposed to give the same graph output. However, it seems that the data is not loaded from the CSV file successfully. I cannot figure out what's wrong with my code. NSMutableArray *contentArray = [NSMutableArray array]; NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Data" ofType:@"csv"]; NSString *Data = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:nil ]; if (Data) { NSArray *myText = [Data componentsSeparatedByString:@","]; NSInteger idx; for (idx = 0; idx < myText.count; idx++) { NSString *data =[myText objectAtIndex:idx]; NSLog(@"%@", data); id x = [NSNumber numberWithFloat:0+idx*0.002777778]; id y = [NSDecimalNumber decimalNumberWithString:data]; [contentArray addObject: [NSMutableDictionary dictionaryWithObjectsAndKeys:x, @"x", y, @"y",nil]]; } self.dataForPlot = contentArray; } The only difference is NSString *filePath = [[NSBundle mainBundle] pathForResource:@"Data" ofType:@"csv"]; NSString *Data = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:nil ]; if (data){ } Did I do anything wrong here? Thanks for your help!

    Read the article

  • Changing the saved contents of the model, editing model fields once saved

    - by imran-glt
    Hi, I have extended the user model and added extra fields to it, i.e. "latitude", "longitude" and "status", and then saved it. Up to here it works fine. But I want to allow the user to change his/her "latitude" and "longitude" whenever he/she needs to, like the change-account feature that Hotmail and Yahoo offer. In my case the user only wants to change the latitude and longitude. I tried it in this way but it didn't work. Is this the right way to do it, or is there any other way to change the saved contents? view.py def status_change(request): print "status_change function called" if request.method == "POST": rform = registerForm(data = request.POST) uform = UserForm(data = request.POST) if rform.is_valid(): user = uform.save() register = rform.save() register.user = user register.save() return render_to_response('home.html') else: rform = registerForm() return render_to_response('status_change.html',{'rform':rform})
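    A minimal sketch of one common way to edit an existing profile rather than create a new one, assuming registerForm is a ModelForm over the extended profile model. The Register model name and the user lookup below are illustrative assumptions taken from the question's code; binding the form to the existing instance is the key point.

        from django.shortcuts import render_to_response

        # Register and registerForm are the project's own profile model and ModelForm
        # (names assumed from the question's code), imported from its models/forms modules.

        def status_change(request):
            # Look up the profile row that already belongs to this user instead of saving a new one.
            register = Register.objects.get(user=request.user)
            if request.method == "POST":
                # instance=register makes save() update the existing row (latitude/longitude),
                # which requires registerForm to be a ModelForm over that model.
                rform = registerForm(data=request.POST, instance=register)
                if rform.is_valid():
                    rform.save()
                    return render_to_response('home.html')
            else:
                rform = registerForm(instance=register)
            return render_to_response('status_change.html', {'rform': rform})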

    Read the article

  • How to get the app a Django model is from?

    - by e-satis
    I have a model with a generic relation: TrackedItem --- genericrelation ---> any model I would like to be able to generically get, from the initial model, the tracked item. I should be able to do it on any model without modifying it. To do that I need to get the content type and the object id. Getting the object id is easy since I have the model instance, but getting the content type is not: ContentType.objects.filter requires the model (which is just content_object.__class__.__name__) and the app_label. I have no idea how to reliably get the app a model belongs to. For now I do app = content_object.__module__.split(".")[0], but it doesn't work with Django contrib apps.
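    A minimal sketch of two ways Django can answer this without parsing module paths. ContentType.objects.get_for_model() and the _meta options on a model instance are standard Django APIs; the TrackedItem lookup at the end assumes the generic relation uses the conventional content_type/object_id field names, which is an assumption about the asker's model.

        from django.contrib.contenttypes.models import ContentType

        def tracked_item_for(instance):
            # get_for_model resolves the ContentType straight from the instance's class,
            # so no app_label string handling is needed.
            ct = ContentType.objects.get_for_model(instance)
            # TrackedItem is the asker's model; field names here are assumed.
            return TrackedItem.objects.get(content_type=ct, object_id=instance.pk)

        def app_label_of(instance):
            # The app label is also available directly from the model's _meta options.
            return instance._meta.app_label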

    Read the article

  • POST data not being received

    - by Alexander
    I've got an iPhone app that is supposed to send POST data to my server to register the device in a MySQL database so we can send notifications etc. to it. It sends its unique identifier, device name, token, and a few other small things like passwords and usernames as a POST request to our server. The problem is that sometimes the server doesn't receive the data. And by this I mean it's not just receiving blank values for the POST inputs; it's not receiving ANY POST data at all. I am logging all POST inputs to my server into log files, and when the script that relies on the POST data from the device fails (detects no data), I notice that it's because NO POST data was sent. Is this a problem on the server, like refusing data or something, or does this have to be on the client's side? What could be causing this?
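    One way to narrow down where the data is lost is to log the raw request body on the server, not just the parsed POST fields. The question doesn't say what the server stack is, so the Flask endpoint below is purely an illustrative assumption; the same idea applies to whatever script handles the device registration.

        from flask import Flask, request

        app = Flask(__name__)

        @app.route("/register", methods=["POST"])
        def register_device():
            # Log headers and the raw body before any parsing: an empty raw body means the
            # request arrived without data (client or network side), while a non-empty body
            # with empty parsed fields points at a Content-Type / encoding mismatch server-side.
            raw = request.get_data(as_text=True)
            app.logger.info("Content-Type=%s Content-Length=%s raw=%r",
                            request.headers.get("Content-Type"),
                            request.headers.get("Content-Length"),
                            raw)
            app.logger.info("parsed form fields: %r", dict(request.form))
            return "ok"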

    Read the article
