Search Results

Search found 7693 results on 308 pages for 'jen fields'.


  • Django Admin: OneToOne Relation as an Inline?

    - by Jim Robert
    I am putting together the admin for a satchmo application. Satchmo uses OneToOne relations to extend the base Product model, and I'd like to edit it all on one page. Is it possible to have a OneToOne relation as an Inline? If not, what is the best way to add a few fields to a given page of my admin that will eventually be saved into the OneToOne relation? For example: class Product(models.Model): name = models.CharField(max_length=100) ... class MyProduct(models.Model): product = models.OneToOne(Product) ... I tried this for my admin but it does not work, and seems to expect a Foreign Key: class ProductInline(admin.StackedInline): model = Product fields = ('name',) class MyProductAdmin(admin.ModelAdmin): inlines = (AlbumProductInline,) admin.site.register(MyProduct, MyProductAdmin) Which throws this error: <class 'satchmo.product.models.Product'> has no ForeignKey to <class 'my_app.models.MyProduct'> Is the only way to do this a Custom Form? edit: Just tried the following code to add the fields directly... also does not work: class AlbumAdmin(admin.ModelAdmin): fields = ('product__name',)
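
    A minimal sketch of the approach usually suggested for this (assuming a current Django admin API; every name beyond the question's own Product/MyProduct is illustrative): the inline model has to be the one that holds the OneToOneField — the actual field class is OneToOneField, not OneToOne — so MyProduct is inlined on Product's admin page rather than the reverse.

    ```python
    # models.py (sketch) -- lives inside an app; OneToOneField is the real field class
    from django.db import models

    class Product(models.Model):
        name = models.CharField(max_length=100)

    class MyProduct(models.Model):
        product = models.OneToOneField(Product, on_delete=models.CASCADE)
        notes = models.CharField(max_length=200, blank=True)  # illustrative extra field

    # admin.py (sketch) -- inline the model that *holds* the OneToOneField
    from django.contrib import admin

    class MyProductInline(admin.StackedInline):
        model = MyProduct      # a OneToOneField back to Product makes this a single form
        can_delete = False

    class ProductAdmin(admin.ModelAdmin):
        inlines = [MyProductInline]

    admin.site.register(Product, ProductAdmin)
    ```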

    Read the article

  • How do you build a Windows Workflow Project with NAnt 0.90?

    - by LockeCJ
    I'm trying to build a Windows Workflow (WF) project using NAnt, but it doesn't seem to be able to build the ".xoml" and ".rules" files. Here is the code of the csc task that I'm using: <csc debug="${build.Debug}" warninglevel="${build.WarningLevel}" target="library" output="${path::combine(build.OutputDir,assembly.Name+'.dll')}" verbose="${build.Verbose}" doc="${path::combine(build.OutputDir,assembly.Name+'.xml')}"> <sources basedir="${assembly.BaseDir}"> <include name="**/*.cs" /> <include name="**/*.xoml" /> <include name="**/*.rules" /> </sources> <resources basedir="${assembly.BaseDir}"> <include name="**/*.xsd" /> <include name="**/*.resx" /> </resources> <references> ... </references> </csc> Here's the output: Compiling 21 files to 'c:\Output\MyWorkFlowProject.dll'. [csc] c:\Projects\MyWorkFlowProject\AProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods [csc] c:\Projects\MyWorkFlowProject\BProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods [csc] c:\Projects\MyWorkFlowProject\CProcessFlow.rules(1,1): error CS0116: A namespace does not directly contain members such as fields or methods [csc] c:\Projects\MyWorkFlowProject\CProcessFlow.xoml(1,1): error CS0116: A namespace does not directly contain members such as fields or methods

    Read the article

  • SQL 2000 (MSDE) Hangs When It Receives an Erroneous Query from a Classic ASP Web Application

    - by Jimbo
    I have a SQL interface page in my classic ASP web app that allows admin users to run queries against the app's database (MSDE 2000) - it simply consists of a textarea that the user submits and the app returns the resulting list of records as below Dim oRS Set oRS = Server.CreateObject("ADODB.Recordset") oRS.ActiveConnection = sConnectionString // run the query - this is for the admin only so doesnt check for sql safe commands etc. oRS.Open Request.Form("txtSQL") If Not oRS.EOF Then // list the field names from the recordset For i = 0 to oRS.Fields.Count - 1 Response.Write oRS.Fields(i).name & "&nbsp;" Next // show the data for each record in the recordset While Not oRS.EOF For i = 0 to oRS.Fields.Count - 1 Response.Write oRS.Fields(i).value & "&nbsp;" Next Response.Write "<br />" oRS.Movenext() Wend End If The problem with this is that if you send it an invalid query (with a spelling mistake, invalid join etc.) instead of throwing back an error immediately, it hangs IIS (you can see this by trying to browse the app from another computer, it fails) for a number of minutes and THEN returns the error. I have NO idea why! Can anyone help?

    Read the article

  • Rails Autocompletion Issue - Rails 1.2.3 to 2.3.5

    - by Grant Sayer
    I have an issue with rails Autocompletion from some code that i've inherited from an old Rails 1.2.3 project that I'm porting to Rails 2.3.5. The issue revolves around javascript execution within the auto_complete helper :after_update_element. The scenario is: A user is presented with a popup form with a number of fields. In the first field as they enter text the auto_complete AJAX call occurs, returning a result, plus a series of other HTML data wrapped in <divs> so that the after_update_element call can iterate over the other data and fill in the remaining fields. The issue lies with the extraction of the other fields which works on IE, fails on Firefox. Here is the code: <%= text_field_with_auto_complete :item, :product_code, {:value => ""}, {:size => 40, :class => "input-text", :tabindex => 6, :select => 'code', :with => "element.name + '=' + escape(element.value) + '&supplier_id=' + $('item_supplier_id').value", :after_update_element => "function (ele, value) { $('item_supplier_id').value = Utilities.extract_value(value, 'supplier_id'); $('item_supplied_size').value = Utilities.extract_value(value, 'size')}"}%> Now the function Utilities is designed to grab the fields from the string of values and looks like: // // Extract a particular set of data from the autocomplete actions // Utilities.extract_value = function (value, className) { var result; var elements = document.getElementsByClassName(className, value); if (elements && elements.length == 1) { result = elements[0].innerHTML.unescapeHTML(); } return result; }; In Firefox the value of result is undefined

    Read the article

  • How to convert many-to-one XML data to DataSet?

    - by TruMan1
    I have an XML document that has a collection of objects. Each object has a key/value pair of label and value. I am trying to convert this into a DataSet, but when I do ds.ReadXml(xmlFile), then it creates two columns: label and value. What I would like is to have a column for each "label" and the value to be part of the row. here is my sample of the XML: <responses> <response> <properties id="1" Form="Account Request" Date="Tuesday, March 16, 2010 5:04:26 PM" Confirmation="True" /> <fields> <field> <label>Name</label> <value>John</value> </field> <field> <label>Email</label> <value>[email protected]</value> </field> <field> <label>Website</label> <value>http://domain1.com</value> </field> <field> <label>Phone</label> <value>999-999-9999</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value /> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> <response> <properties id="2" Form="Account Request" Date="Tuesday, March 17, 2010 5:04:26 PM" Confirmation="True" /> <fields> <field> <label>Name</label> <value>John2</value> </field> <field> <label>Email</label> <value>[email protected]</value> </field> <field> <label>Website</label> <value>http://domain2.com</value> </field> <field> <label>Phone</label> <value>999-999-9999</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value /> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> <response> <properties id="3" Form="Account Request" Date="Tuesday, March 18, 2010 5:04:26 PM" Confirmation="True" /> <fields> <field> <label>Name</label> <value>John3</value> </field> <field> <label>Email</label> <value>[email protected]</value> </field> <field> <label>Website</label> <value>http://domain3.com</value> </field> <field> <label>Phone</label> <value>999-999-9999</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value /> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> <response> <properties id="4" Form="Account Request" Date="Tuesday, March 19, 2010 5:04:26 PM" Confirmation="True" /> <fields> <field> <label>Name</label> <value>John</value> </field> <field> <label>Email</label> <value>[email protected]</value> </field> <field> <label>Website</label> <value>http://domain4.com</value> </field> <field> <label>Phone</label> <value>999-999-9999</value> </field> <field> <label>Place of Birth</label> <value>Earth</value> </field> <field> <label>Misc</label> <value>Misc</value> </field> <field> <label>Comments</label> <value /> </field> <field> <label>Agree to Terms?</label> <value>True</value> </field> </fields> </response> </responses> How would I convert this to a DataSet so that I can load it into a gridview with the columns: Name, Email, Website, Phone, Place of Birth, Misc, Comments, and Agree to Terms? Then row 1 would be: John, [email protected], http://domain1.com, 999-999-9999, Earth, Misc, , True How can I do this with the XML provided?
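
    The reshaping being asked for — pivoting each <field>'s label/value pair into a named column per <response> — can be done with a small XML walk before the data is handed to a DataTable or grid. Sketched here in Python with ElementTree purely to show the traversal; the equivalent walk in C# (XmlDocument or LINQ to XML) feeding a DataTable follows the same shape.

    ```python
    # Illustrative sketch: turn each <response>'s <field><label>/<value> pairs into
    # one row whose keys are the labels.
    import xml.etree.ElementTree as ET

    def responses_to_rows(xml_text):
        root = ET.fromstring(xml_text)
        rows = []
        for response in root.findall("response"):
            row = {}
            for field in response.find("fields").findall("field"):
                row[field.findtext("label")] = field.findtext("value") or ""
            rows.append(row)
        return rows

    # rows = responses_to_rows(open("responses.xml").read())   # placeholder filename
    # rows[0]["Name"] -> "John", rows[0]["Website"] -> "http://domain1.com"
    ```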

    Read the article

  • Does my API design violate RESTful principles?

    - by peta
    Hello everybody, I'm currently trying to design a RESTful API for a social network, but I'm not sure if my current approach still accords with RESTful principles. I'd be glad if some brighter heads could give me some tips. Suppose the following URI represents the name field of a user account: people/{UserID}/profile/fields/name But there are almost a hundred possible fields. So I want the client to create its own field views or use predefined ones. Let's suppose that the following URI represents a predefined field view that includes the fields "name", "age", "gender": utils/views/field-views/myFieldView And because field views are a kind of higher-level logic, I don't want to mix support for field views into the "people/{UserID}/profile/fields" resource. Instead I want to do the following: utils/views/field-views/myFieldView/{UserID} Though Leonard Richardson & Sam Ruby state in their book "RESTful Web Services" that a RESTful design is somehow like an "extreme object oriented" approach, I think that my approach is object oriented and therefore accords with RESTful principles. Or am I wrong? If not: Are such "object oriented" approaches generally encouraged when used with care and in order to avoid query-based REST-RPC hybrids? Thanks for your feedback in advance, peta
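
    For concreteness, here are the two addressing styles under discussion written down as routes — the view-as-its-own-resource style from the question versus the commonly used alternative of selecting fields on the profile resource itself. This is purely an illustrative sketch (Flask is only a convenient way to write the routes; both helper functions are hypothetical stand-ins), not a verdict on which is "more RESTful".

    ```python
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    # Style from the question: a field view is its own resource,
    # parameterised by the user it is applied to.
    @app.route("/utils/views/field-views/<view_name>/<user_id>")
    def field_view(view_name, user_id):
        return jsonify(profile_fields(user_id, lookup_view(view_name)))

    # Common alternative: the profile stays the resource; the view (or an explicit
    # field list) is a query parameter that shapes the representation.
    @app.route("/people/<user_id>/profile")
    def profile(user_id):
        fields = request.args.get("fields", "").split(",")   # e.g. ?fields=name,age,gender
        return jsonify(profile_fields(user_id, fields))

    def lookup_view(name):                     # stand-in so the sketch runs
        return {"myFieldView": ["name", "age", "gender"]}.get(name, [])

    def profile_fields(user_id, fields):       # stand-in so the sketch runs
        return {"user": user_id, "fields": fields}
    ```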

    Read the article

  • Reading CSV files in numpy where delimiter is ","

    - by monch1962
    Hello all, I've got a CSV file with a format that looks like this: "FieldName1", "FieldName2", "FieldName3", "FieldName4" "04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373" "04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543" ... Note that there are double quote characters at the start and end of each line in the CSV file, and the "," string is used to delimit fields within each line. When I try to read this into numpy via: import numpy as np data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True) all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me as I then have to go back and convert every column to its correct type When I use delimiter='","' instead, everything works as I'd like, except for the 1st and last fields. As the start of line and end of line characters are a single double-quote character, this isn't seen as a valid delimiter for the 1st and last fields, so they get read in as e.g. "04/13/2010 14:45:07.008 and 6.552373" - note the leading and trailing double-quote characters respectively. Because of these redundant characters, numpy assumes the 1st and last fields are both String types; I don't want that to be the case Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
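
    One workaround that tends to come up for this (a sketch, not necessarily the only route): strip the quote characters before genfromtxt sees the lines, so the plain ',' delimiter and automatic dtype detection both behave normally. genfromtxt accepts any iterable of lines, so a small generator is enough; the filename below is a placeholder.

    ```python
    import numpy as np

    def unquoted_lines(path):
        # Feed genfromtxt lines with the surrounding double-quote characters removed.
        with open(path) as f:
            for line in f:
                yield line.replace('"', '')

    data = np.genfromtxt(unquoted_lines("data.csv"),   # placeholder filename
                         dtype=None, delimiter=',', names=True, autostrip=True)
    # The date column still comes back as a string, but the numeric columns are
    # detected as floats/ints without any post-processing.
    ```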

    Read the article

  • PostgreSQL: BYTEA vs OID+Large Object?

    - by mlaverd
    I started an application with Hibernate 3.2 and PostgreSQL 8.4. I have some byte[] fields that were mapped as @Basic (= PG bytea) and others that got mapped as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob. Now, those fields are max 4 Kb (but the average is 2-3 kb). The PostgreSQL documentation mentioned that LOs are good when the fields are big, but I didn't see what 'big' meant. I have upgraded to PostgreSQL 9.0 with Hibernate 3.6 and I had to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). This bug has brought forward a potential compatibility issue, and I eventually found out that Large Objects are a pain to deal with, compared to a normal field. So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt the performance. Are there good benchmarks about the performance of both of these? Has anybody made the switch and seen a difference?

    Read the article

  • sorl-thumbnail unit tests fail by 1 pixel (!)

    - by stevejalim
    Hi I'm using sorl-thumbnail in a Django 1.2 (currently 1.2 RC) project and getting a surprising failure of four of sorl's built-in unit tests. Essentially, the resized images are all 1px shorter than the unit tests expect them to be. See below for details I'm developing on OSX 10.5.8 (not Snow Leopard) with Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) and PIL 1.1.6. Any thoughts what might be up? Cheers Steve ====================================================================== FAIL: test_extension (sorl.thumbnail.tests.fields.FieldTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/fields.py", line 66, in test_extension self.verify_thumbnail((50, 37), thumb, expected_filename) File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/base.py", line 92, in verify_thumbnail self.assertEqual(image.size, expected_size) AssertionError: (50, 38) != (50, 37) ====================================================================== FAIL: test_thumbnail (sorl.thumbnail.tests.fields.ImageWithThumbnailsFieldTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/fields.py", line 111, in test_thumbnail self.verify_thumbnail((50, 37), thumb, expected_filename) File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/base.py", line 92, in verify_thumbnail self.assertEqual(image.size, expected_size) AssertionError: (50, 38) != (50, 37) ====================================================================== FAIL: testTag (sorl.thumbnail.tests.templatetags.ThumbnailTagTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/templatetags.py", line 118, in testTag self.verify_thumbnail((90, 67), expected_filename=expected_fn) File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/base.py", line 92, in verify_thumbnail self.assertEqual(image.size, expected_size) AssertionError: (90, 68) != (90, 67)
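
    A plausible explanation for an exactly-one-pixel difference (a guess, not a confirmed diagnosis): if the test image has a 4:3 aspect ratio, the scaled heights land exactly on .5, and whether the resize path truncates or rounds that value differs between PIL versions — which matches every failure above being off by one.

    ```python
    # Assuming a hypothetical 4:3 source image (e.g. 100x75): the expected sizes
    # (50, 37) and (90, 67) versus the actual (50, 38) and (90, 68) differ only in
    # how the half-pixel is resolved.
    w, h = 100, 75
    for target_w in (50, 90):
        scaled_h = h * target_w / float(w)                     # 37.5 and 67.5
        print(target_w, int(scaled_h), int(round(scaled_h)))   # truncation vs rounding
    ```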

    Read the article

  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello Everyone. This is my first time encountering a case like this and I don't quite know how to handle it. Situation: I have one table, tblSettingsDefinition, with fields: ID, GroupID, Name, typeID, DefaultValue. Then I have tblSettingtypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if a user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings, and if it has records for the given user, grabs them; if not, it retrieves the default settings. But the trick is that no matter whether the user has settings or not, I need the fields from the other two tables retrieved to know the setting's Type, Name etc. (which are stored in those other tables). I'm writing a query something like this: SELECT * FROM tblSettingDefinition SD LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID JOIN tblSettingTypes ST ON SD.TypeID=ST.ID WHERE US.UserID=@UserID OR ((SD.GroupID IS NULL) OR (SD.GroupID=(SELECT GroupID FROM tblUser WHERE ID=@UserID))) but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, still, all user settings are retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.

    Read the article

  • JSON Serialization of a Django inherited model

    - by Simon Morris
    Hello, I have the following Django models class ConfigurationItem(models.Model): path = models.CharField('Path', max_length=1024) name = models.CharField('Name', max_length=1024, blank=True) description = models.CharField('Description', max_length=1024, blank=True) active = models.BooleanField('Active', default=True) is_leaf = models.BooleanField('Is a Leaf item', default=True) class Location(ConfigurationItem): address = models.CharField(max_length=1024, blank=True) phoneNumber = models.CharField(max_length=255, blank=True) url = models.URLField(blank=True) read_acl = models.ManyToManyField(Group, default=None) write_acl = models.ManyToManyField(Group, default=None) alert_group= models.EmailField(blank=True) The full model file is here if it helps. You can see that Company is a child class of ConfigurationItem. I'm trying to use JSON serialization using either the django.core.serializers.serializer or the WadofStuff serializer. Both serializers give me the same problem... >>> from cmdb.models import * >>> from django.core import serializers >>> serializers.serialize('json', [ ConfigurationItem.objects.get(id=7)]) '[{"pk": 7, "model": "cmdb.configurationitem", "fields": {"is_leaf": true, "extension_attribute_10": "", "name": "", "date_modified": "2010-05-19 14:42:53", "extension_attribute_11": false, "extension_attribute_5": "", "extension_attribute_2": "", "extension_attribute_3": "", "extension_attribute_1": "", "extension_attribute_6": "", "extension_attribute_7": "", "extension_attribute_4": "", "date_created": "2010-05-19 14:42:53", "active": true, "path": "/Locations/London", "extension_attribute_8": "", "extension_attribute_9": "", "description": ""}}]' >>> serializers.serialize('json', [ Location.objects.get(id=7)]) '[{"pk": 7, "model": "cmdb.location", "fields": {"write_acl": [], "url": "", "phoneNumber": "", "address": "", "read_acl": [], "alert_group": ""}}]' >>> The problem is that serializing the Company model only gives me the fields directly associated with that model, not the fields from it's parent object. Is there a way of altering this behaviour or should I be looking at building a dictionary of objects and using simplejson to format the output? Thanks in advance ~sm
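
    One way to get the inherited fields into the output without hand-building templates (a sketch, not the only approach): build plain dicts with django.forms.models.model_to_dict — for a multi-table-inheritance child such as Location, the concrete model's field list includes the fields inherited from ConfigurationItem — and dump those with the json module. The app name below is taken from the serialized output ("cmdb.location"); the m2m and datetime handling is deliberately rough.

    ```python
    import json
    from django.forms.models import model_to_dict
    from cmdb.models import Location   # app label as shown in the serialized output

    def locations_to_json(queryset):
        # model_to_dict walks Location's own fields *and* the ones inherited from
        # ConfigurationItem (name, path, description, active, is_leaf, ...).
        rows = [model_to_dict(obj) for obj in queryset]
        # default=str papers over values json can't handle natively (datetimes, and
        # the Group objects in read_acl/write_acl, which may need real handling).
        return json.dumps(rows, default=str)

    # locations_to_json(Location.objects.all())
    ```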

    Read the article

  • ruby, rails, railscasts example I messed up

    - by Sam
    If you saw the Railscasts episode on nested forms, this is the helper method to create links dynamically. However, after I upgraded to Ruby 1.9.2 and Rails 3 this doesn't work and I have no idea why. def link_to_add_fields(name, f, association) new_object = f.object.class.reflect_on_association(association).klass.new fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder| render(association.to_s.singularize + "_fields", :f => builder) end link_to_function(name, h("add_fields(this, \"#{association}\", \"#{escape_javascript(fields)}\")")) end Here is the JavaScript: function add_fields(link, association, content) { var new_id = new Date().getTime(); var regexp = new RegExp("new_" + association, "g") $(link).up().insert({ before: content.replace(regexp, new_id) }); } When I view source, this is how the link is getting rendered: <p><a href="#" onclick="add_fields(this, &quot;dimensions&quot;, &quot;&quot;); return false;">Add Dimension</a></p> So &quot;&quot; is not the correct information to build a new template; something is going on with how the string for fields is getting set, i.e. fields = f.fields_for

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from: create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit null, BitField2 bit null, ... BitFieldN bit null ) to create table MyTable( ID bigint not null, Name varchar(100) not null, BitField1 bit not null, BitField2 bit not null, ... BitFieldN bit not null ) alter table MyTable add constraint DF_BitField1 default 0 for BitField1 alter table MyTable add constraint DF_BitField2 default 0 for BitField2 alter table MyTable add constraint DF_BitField3 default 0 for BitField3 So I've just gone in through the SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to update it, SQL Mgmt studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form: update MyTable set BitField1 = 0 where BitField1 is null update MyTable set BitField2 = 0 where BitField2 is null but as I said before, there are n fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?

    Read the article

  • Classic ASP - Email form with attached file - please help

    - by apg1985
    Hi guys, I've got a bit of a problem. I've got an email web form that sends the input to an email address, but what I now need is a file input field where the user can also send an image as an attachment. So contact name, logo (attachment). I've been told that in order to send the attachment it needs to be saved in a folder on my hosting before it can be sent. I've spoken to the hosting company and they don't have anything in place to make this easier, such as ASPUpload. In the form the fields are name="contactname" and name="logo". I have a folder in the root directory called logos (this ASP page also exists in the root directory). I hope someone can help; I've spent a long time looking for answers. Dim contactname, logo contactname = request.form("contactname") If request("contactname") <> "" THEN Set myMail=CreateObject("CDO.Message") myMail.Subject="Form" myMail.From="web@email" myMail.To="web@email" myMail.HTMLBody = "" & contactname & "" myMail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2 myMail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "relay.host" myMail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25 myMail.Configuration.Fields.Update myMail.Send set myMail=nothing

    Read the article

  • PHP Switch and Login

    - by Steve Rivera
    I'm fairly new to PHP and I am messing around with a login/registration system. I set up my sample website using a PHP-SWITCH script I found a while back: <?php switch($_GET['id']) { default: include('home.php'); /* LOGIN PAGES */ break; case "register_form": include ('includes/user_system/register_form.php'); } ?> On the registration page the form links to my "register.php", which checks the validity of the form, checks for any blank fields, and so on. "register.php" is supposed to refresh the page and add a reason for what the user did wrong when submitting the form. My "register_form.php" page holds the actual form. This field is hidden until the user makes a mistake: <?php if (isset($reg_error)) { ?> , please try again. My "register.php" checks the form for all the errors. Here's the bit of code that will refresh the page with the reason for the error: // Check if any of the fields are missing if (empty($_POST['username']) || empty($_POST['password']) || empty($_POST['confirmpass'])) { // Reshow the form with an error $reg_error = 'One or more fields missing'; include 'register_form.php'; Now after I submit the form without any fields filled out I get the error code, but it refreshes to the actual "register_form.php". The problem with this is that because of my PHP-SWITCH script (it helps me manage the site a lot more easily) I don't have any formatting on that page. The actual URL to my "register_form.php" would be: "index.php?id=register_form.php". Now I have tried several different things, such as changing it to: include 'index.php?id=register_form.php' And also changing it to: header(location:index.php?id=register_form.php') Unfortunately all this does is refresh the page without the reason for the error. I know this could easily be solved by just adding a JavaScript validator, but I'd like to know if it is possible to refresh the page with the error using either "include" or "header()" while having a PHP-SWITCH script on the website.

    Read the article

  • Google Maps inside custom PODS page

    - by Sharath
    I have a custom pods page called addstory.php which uses a public form to display some input fields. For one of the fields, I have an input helper that displays a Google Map, which I can use to pick out a location that is stored in a comma-separated variable. Here is the input helper code... <div class="form pick <?php echo $location; ?>"> <div id="map" style="width:500; height:500"> </div> <script type="text/javascript"> $(function() { map=new GMap(document.getElementById("map")); map.centerAndZoom(new GPoint(77.595062,13.043359),6); map.setUIToDefault(); GEvent.addListener(map,"click",function(overlay,latlng) { map.clearOverlays(); var marker = new GMarker(latlng); map.addOverlay(marker); //Extra stuff here.. } ); }); </script> </div> And this is the code inside my addstory.php: <?php get_header(); $rw = new Pod('rainwater'); $fields=array( 'name', 'email', 'location'=>array('input_helper'=>'gmap_location'), ); echo $rw->publicForm($fields); ?> I get the following error and the Google Map doesn't load: a is null error source line: [Break on this error] function Qd(a){for(var b;b=a.firstChild;){Rg(b);a.removeChild(b)}}

    Read the article

  • Incorporating Devise Authentication into an already existing user structure?

    - by Kevin
    I have a fully functional authentication system with a user table that has over fifty columns. It's simple but it does hash encryption with salt, uses email instead of usernames, and has two separate kinds of users with an admin as well. I'm looking to incorporate Devise authentication into my application to beef up the extra parts like email validation, forgetting passwords, remember me tokens, etc... I just wanted to see if anyone has any advice or problems they've encountered when incorporating Devise into an already existing user structure. The essential fields in my user model are: t.string :first_name, :null => false t.string :last_name, :null => false t.string :email, :null => false t.string :hashed_password t.string :salt t.boolean :is_userA, :default => false t.boolean :is_userB, :default => false t.boolean :is_admin, :default => false t.boolean :active, :default => true t.timestamps For reference sake, here's the Devise fields from the migration: t.database_authenticatable :null => false t.confirmable t.recoverable t.rememberable t.trackable That eventually turn into these actual fields in the schema: t.string "email", :default => "", :null => false t.string "encrypted_password", :limit => 128, :default => "", :null => false t.string "password_salt", :default => "", :null => false t.string "confirmation_token" t.datetime "confirmed_at" t.datetime "confirmation_sent_at" t.string "reset_password_token" t.string "remember_token" t.datetime "remember_created_at" t.integer "sign_in_count", :default => 0 t.datetime "current_sign_in_at" t.datetime "last_sign_in_at" t.string "current_sign_in_ip" t.string "last_sign_in_ip" t.datetime "created_at" t.datetime "updated_at" What do you guys recommend? Do I just remove email, hashed_password, and salt from my migration and put in the 5 Devise migration fields and everything will be OK or do I need to do something else?

    Read the article

  • pdfmark for docinfo metadata in pdf is not accepting accented characters in Keywords or Subject

    - by rpilkey
    I am inserting metadata into PostScript files with a program, to be distilled to PDF with Adobe Distiller. I am using this code that I grabbed from Thomas Merz's "Web Publishing with Acrobat-PDF": /pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse [ /Title (mot accenté) /Author (mot accenté) /Subject (mot accenté) /Keywords (mot accenté) /DOCINFO pdfmark When you look at the metadata, the accented characters turn into "?" in the Subject and Keywords fields, but not the Title and Author fields. The characters are the same: ASCII 233. I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up). The file encoding is Latin-1 with Unix EOL. I found a mention on the Adobe forums, but the answer didn't make sense to me: http://forums.adobe.com/message/1165593 I changed the encoding to UTF-8 and inserted the characters binarily (in Vim: <Ctrl-v>u00e9), no change. I tried inserting the BOM in a few places; it didn't work. This is with Acrobat Pro 9; I didn't notice this problem with Acrobat Pro 7. Does anybody know of a workaround to get the accented characters into ALL the metadata fields when modifying a PostScript file, or can you tell me if I'm doing it wrong? It seems weird that different fields would not accept the same bytes.
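
    The workaround usually given for this (hedged — it relies on the PDF/pdfmark convention that a text string may be UTF-16BE when it starts with the byte-order mark): instead of writing the é as a raw Latin-1 byte, emit the Subject/Keywords values as \376\377-prefixed, octal-escaped UTF-16BE. A small helper to generate that literal, sketched in Python:

    ```python
    def pdfmark_text(value):
        """Return a pdfmark string literal: BOM-prefixed, octal-escaped UTF-16BE."""
        data = b"\xfe\xff" + value.encode("utf-16-be")
        return "(" + "".join("\\%03o" % byte for byte in data) + ")"

    print(pdfmark_text(u"mot accent\u00e9"))
    # Paste the printed literal in place of (mot accenté) for /Subject and /Keywords.
    ```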

    Read the article

  • Entity Framework. Updating EntityCollection using disconnected objects via navigation property.

    - by yougotiger
    I have a question, much like this unanswered one. I'm trying to work with the Entity Framework, and having a tough time getting my foreign tables to update. I have something basically like this in the DB: Incident (table): -ID -other fields Responses (table): -FK:Incident.ID -other fields And entities that match: Incident (entity) -ID -Other fields -Responses (EntityCollection of Responses via navigation property) Each Incident can have 0 or more responses. In my webpage, I have a form to allow the user to enter all the details of an Incident, including a list of responses. I can add everything to the database when a new Incident is created; however, I'm having difficulty with editing the Incident. When the page loads for edit, I populate the form and then store the responses in the viewstate. When the user changes the list of responses (adds one, deletes one or edits one), I store this back into the viewstate. Then when the user clicks the save button, I'd like to save the changes to the Incident and the Responses back to the DB. I cannot figure out how to get the responses from the detached viewstate into the Incident object so that they can be updated together. Currently when the user clicks save, I'm getting the Incident to edit from the db, making changes to the Incident's fields and then saving it back to the DB. However I can't figure out how to have the detached list of responses from the viewstate attach to the Incident. I have tried the following without success: Clearing the Incident.Responses collection and adding the ones from the viewstate back in: Incident.Responses.Clear() for each objResponse in Viewstate("Responses") Incident.Responses.add(objResponse) next Creating an EntityCollection from my list and then assigning that to the Incident.Responses Incident.Responses = EntityCollectionFromViewstateList Iterating through the responses in Incident.Response and assigning the corresponding object from viewstate: for each ObjResponse in Incident.Responses objResponse = objCorrespondingModifedResonseFromViewState Next These all fail. I'd like to be able to merge the changes into the Incident object so that when the BLL calls SaveChanges, the changes to both the Incident and Responses will happen at the same time. Any suggestions? I keep finding lots of stuff about assigning foreign keys (singular), but I haven't found a great solution for doing a set of entities assigned to another entity in this manner.

    Read the article

  • Trying to insert a row using a stored procedure with a parameter bound to an expression.

    - by Arvind Singh
    Environment: asp.net 3.5 (C# and VB) , Ms-sql server 2005 express Tables Table:tableUser ID (primary key) username Table:userSchedule ID (primary key) thecreator (foreign key = tableUser.ID) other fields I have created a procedure that accepts a parameter username and gets the userid and inserts a row in Table:userSchedule Problem: Using stored procedure with datalist control to only fetch data from the database by passing the current username using statement below works fine protected void SqlDataSourceGetUserID_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { e.Command.Parameters["@CurrentUserName"].Value = Context.User.Identity.Name; } But while inserting using DetailsView it shows error Procedure or function OASNewSchedule has too many arguments specified. I did use protected void SqlDataSourceCreateNewSchedule_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name; } DetailsView properties: autogen fields: off, default mode: insert, it shows all the fields that may not be expected by the procedure like ID (primary key) not required in procedure and CreatedBy (user id ) field . So I tried removing the 2 fields from detailsview and shows error Cannot insert the value NULL into column 'CreatedBy', table 'D:\OAS\OAS\APP_DATA\ASPNETDB.MDF.dbo.OASTest'; column does not allow nulls. INSERT fails. The statement has been terminated. For some reason parameters value is not being set. Can anybody bother to understand this and help?

    Read the article

  • ExtJS 4 Chart Axis Display Issue in Chrome

    - by SerEnder
    I've run into an issue while using ExtJS 4.0.7. I'm trying to display a chart with two series on a Numeric/Category Chart. The chart displays correctly in Firefox, but while using Chrome (18.0.1025.142) the x axis labels are either all stacked upon each or else (when using rotate) rendered behind the chart in the specified angle. Any ideas would be appreciated. Screen shot in Firefox: Screen Shot in Chrome: And the code that's used to generate both: Ext.require(['Ext.chart.*']); Ext.onReady(function() { var iChartWidth = 800; // Defines chart width var iChartHeight = 550; // Defines chart height Ext.define('RulesCreatedModel',{ extend:'Ext.data.Model', fields:[ {name:'sHourName', type:'string'}, {name:'User_2', type:'number'}, {name:'User_1', type:'number'}, ] }); var RulesCreatedStore = Ext.create('Ext.data.Store',{ id:'RulesCreatedStore', model:'RulesCreateModel', fields: [ 'sHourName','dayNum','hour','User_2','User_1'], data:[{ 'sHourName':'3pm', 'User_1':82, 'User_2':56 },{ 'sHourName':'4pm', 'User_1':39, 'User_2':44 },{ 'sHourName':'5pm', 'User_1':80, 'User_2':14 },{ 'sHourName':'6pm', 'User_1':55, 'User_2':0, },{ 'sHourName':'7pm', 'User_1':36, 'User_2':0, },{ 'sHourName':'8pm', 'User_1':66, 'User_2':0 },{ 'sHourName':'9pm', 'User_1':39, 'User_2':0, },{ 'sHourName':'10pm', 'User_1':0, 'User_2':0 },{ 'sHourName':'11pm', 'User_1':0, 'User_2':0 },{ 'sHourName':'12am', 'User_1':0, 'User_2':0 },{ 'sHourName':'1am', 'User_1':0, 'User_2':0 },{ 'sHourName':'2am', 'User_1':0, 'User_2':0 },{ 'sHourName':'3am', 'User_1':0, 'User_2':0 },{ 'sHourName':'4am', 'User_1':0, 'User_2':0 },{ 'sHourName':'5am', 'User_1':0, 'User_2':0 },{ 'sHourName':'6am', 'User_1':0, 'User_2':0 },{ 'sHourName':'7am', 'User_1':0, 'User_2':1 },{ 'sHourName':'8am', 'User_1':0, 'User_2':99 },{ 'sHourName':'9am', 'User_1':0, 'User_2':28 },{ 'sHourName':'10am', 'User_1':0, 'User_2':28 },{ 'sHourName':'11am', 'User_1':0, 'User_2':153 },{ 'sHourName':'12pm', 'User_1':0, 'User_2':58 },{ 'sHourName':'1pm', 'User_1':0, 'User_2':42 },{ 'sHourName':'2pm', 'User_1':20, 'User_2':10 }] }); Ext.create('Ext.chart.Chart',{ id: 'RulesWrittenChart', renderTo: 'StatCharts_Display', width: iChartWidth, height: iChartHeight, animate: true, store: RulesCreatedStore, axes: [{ type: 'Numeric', position: 'left', fields: ['User_2','User_1'], title: 'Rules Written', grid: true },{ type: 'Category', position: 'bottom', fields: ['sHourName'], title: 'Hour', grid: false, label: { rotate: { degrees: 315 } } }], series: [{ type: 'line', axis: 'left', xField: 'sHourName', yField: 'User_2', highlight: { size: 3, radius: 3 } },{ type: 'line', axis: 'left', xField: 'sHourName', yField: 'User_1', highlight: { size: 3, radius: 3 } }] }); });

    Read the article

  • How to create more complex Lucene query strings?

    - by boris callens
    This question is a spin-off from this question. My inquiry is two-fold, but because both are related I think it is a good idea to put them together. How do I programmatically create queries? I know I could start creating strings and get that string parsed with the query parser. But from the bits and pieces of information I have gathered from other resources, there is a programmatic way to do this. What are the syntax rules for Lucene queries? --EDIT-- I'll give a requirement example for a query I would like to make: Say I have 5 fields: First Name Last Name Age Address Everything All fields are optional; the last field should search over all the other fields. I go over every field and see if it's IsNullOrEmpty(). If it's not, I would like to append a part to my query so it adds the relevant search part. First name and last name should be exact matches and have more weight than the other fields. Age is a string and should be an exact match. Address can vary in order. Everything can also vary in order. How should I go about this?
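
    On the syntax half of the question, the relevant pieces of Lucene's query syntax are: field:term restricts a clause to a field, double quotes make a phrase match, ^n boosts a clause's weight, and clauses are combined with AND/OR and parentheses. A sketch of assembling such a string from only the non-empty inputs (in Python purely for illustration; the field names are placeholders, and the programmatic route the question asks about is Lucene's BooleanQuery/TermQuery API, which avoids string escaping altogether):

    ```python
    def build_query(first=None, last=None, age=None, address=None, everything=None):
        # Values are assumed to be already escaped for Lucene's special characters.
        clauses = []
        if first:
            clauses.append('firstName:"%s"^2' % first)    # phrase match, boosted
        if last:
            clauses.append('lastName:"%s"^2' % last)
        if age:
            clauses.append('age:"%s"' % age)
        if address:
            clauses.append('address:(%s)' % address)      # terms may match in any order
        if everything:
            clauses.append('everything:(%s)' % everything)
        return ' OR '.join(clauses)

    print(build_query(first="John", address="10 Downing Street"))
    # firstName:"John"^2 OR address:(10 Downing Street)
    ```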

    Read the article

  • django: control json serialization

    - by abolotnov
    Is there a way to control json serialization in django? Simple code below will return serialized object in json: co = Collection.objects.all() c = serializers.serialize('json',co) The json will look similar to this: [ { "pk": 1, "model": "picviewer.collection", "fields": { "urlName": "architecture", "name": "\u0413\u043e\u0440\u043e\u0434 \u0438 \u0430\u0440\u0445\u0438\u0442\u0435\u043a\u0442\u0443\u0440\u0430", "sortOrder": 0 } }, { "pk": 2, "model": "picviewer.collection", "fields": { "urlName": "nature", "name": "\u041f\u0440\u0438\u0440\u043e\u0434\u0430", "sortOrder": 1 } }, { "pk": 3, "model": "picviewer.collection", "fields": { "urlName": "objects", "name": "\u041e\u0431\u044a\u0435\u043a\u0442\u044b \u0438 \u043d\u0430\u0442\u044e\u0440\u043c\u043e\u0440\u0442", "sortOrder": 2 } } ] You can see it's serializing it in a way that you are able to re-create the whole model, shall you want to do this at some point - fair enough, but not very handy for simple JS ajax in my case: I want bring the traffic to minimum and make the whole thing little clearer. What I did is I created a view that passes the object to a .json template and the template will do something like this to generate "nicer" json output: [ {% if collections %} {% for c in collections %} {"id": {{c.id}},"sortOrder": {{c.sortOrder}},"name": "{{c.name}}","urlName": "{{c.urlName}}"}{% if not forloop.last %},{% endif %} {% endfor %} {% endif %} ] This does work and the output is much (?) nicer: [ { "id": 1, "sortOrder": 0, "name": "????? ? ???????????", "urlName": "architecture" }, { "id": 2, "sortOrder": 1, "name": "???????", "urlName": "nature" }, { "id": 3, "sortOrder": 2, "name": "??????? ? ?????????", "urlName": "objects" } ] However, I'm bothered by the fast that my solution uses templates (an extra step in processing and possible performance impact) and it will take manual work to maintain shall I update the model, for example. I'm thinking json generating should be part of the model (correct me if I'm wrong) and done with either native python-json and django implementation but can't figure how to make it strip the bits that I don't want. One more thing - even when I restrict it to a set of fields to serialize, it will keep the id always outside the element container and instead present it as "pk" outside of it.
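
    A sketch of one way to get the flat shape the hand-written template produces without going through a template at all (field names are taken from the question's own model; the view name is illustrative): have a view build the list of dicts itself and dump it with the json module — values() already returns exactly the keys asked for, with id inside each object.

    ```python
    import json
    from django.http import HttpResponse
    from picviewer.models import Collection   # app label as shown in the serialized output

    def collections_json(request):
        data = list(Collection.objects.values('id', 'sortOrder', 'name', 'urlName'))
        return HttpResponse(json.dumps(data), content_type='application/json')
    ```

    If this belongs with the model rather than a view, the same values()/json.dumps pair can live in a manager method or a small to_json() helper on Collection.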

    Read the article

  • copy entire row (without knowing field names)

    - by Todd Webb
    Using SQL Server 2008, I would like to duplicate one row of a table, without knowing the field names. My key issue: as the table grows and mutates over time, I would like this copy-script to keep working, without me having to write out 30+ ever-changing fields, ugh. Also at issue, of course, is IDENTITY fields cannot be copied. My code below does work, but I wonder if there's a more appropriate method than my thrown-together text string SQL statement? So thank you in advance. Here's my (yes, working) code - I welcome suggestions on improving it. Todd alter procedure spEventCopy @EventID int as begin -- VARS... declare @SQL varchar(8000) -- LIST ALL FIELDS (*EXCLUDE* IDENTITY FIELDS). -- USE [BRACKETS] FOR ANY SILLY FIELD-NAMES WITH SPACES, OR RESERVED WORDS... select @SQL = coalesce(@SQL + ', ', '') + '[' + column_name + ']' from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'EventsTable' and COLUMNPROPERTY(OBJECT_ID('EventsTable'), COLUMN_NAME, 'IsIdentity') = 0 -- FINISH SQL COPY STATEMENT... set @SQL = 'insert into EventsTable ' + ' select ' + @SQL + ' from EventsTable ' + ' where EventID = ' + ltrim(str(@EventID)) -- COPY ROW... exec(@SQL) -- REMEMBER NEW ID... set @EventID = @@IDENTITY -- (do other stuff here) -- DONE... -- JUST FOR KICKS, RETURN THE SQL STATEMENT SO I CAN REVIEW IT IF I WISH... select EventID = @EventID, SQL = @SQL end

    Read the article

  • help me to choose between two software architectures

    - by alex
    // stupid title, but I could not think anything smarter I have a code (see below, sorry for long code but it's very-very simple): namespace Option1 { class AuxClass1 { string _field1; public string Field1 { get { return _field1; } set { _field1 = value; } } // another fields. maybe many fields maybe several properties public void Method1() { // some action } public void Method2() { // some action 2 } } class MainClass { AuxClass1 _auxClass; public AuxClass1 AuxClass { get { return _auxClass; } set { _auxClass = value; } } public MainClass() { _auxClass = new AuxClass1(); } } } namespace Option2 { class AuxClass1 { string _field1; public string Field1 { get { return _field1; } set { _field1 = value; } } // another fields. maybe many fields maybe several properties public void Method1() { // some action } public void Method2() { // some action 2 } } class MainClass { AuxClass1 _auxClass; public string Field1 { get { return _auxClass.Field1; } set { _auxClass.Field1 = value; } } public void Method1() { _auxClass.Method1(); } public void Method2() { _auxClass.Method2(); } public MainClass() { _auxClass = new AuxClass1(); } } } class Program { static void Main(string[] args) { // Option1 Option1.MainClass mainClass1 = new Option1.MainClass(); mainClass1.AuxClass.Field1 = "string1"; mainClass1.AuxClass.Method1(); mainClass1.AuxClass.Method2(); // Option2 Option2.MainClass mainClass2 = new Option2.MainClass(); mainClass2.Field1 = "string2"; mainClass2.Method1(); mainClass2.Method2(); Console.ReadKey(); } } What option (option1 or option2) do you prefer ? In which cases should I use option1 or option2 ? Is there any special name for option1 or option2 (composition, aggregation) ?

    Read the article
