Search Results

Search found 10106 results for 'fail fast'.


  • Why would a variable in Scala code mysteriously become null?

    - by Alex R
    I've isolated the problem down to this:

        Predef.println("the value of argv1 here is " + argv(1));
        var n: $ = undef;
        n = argv(1);
        Predef.println("the value of argv1 here is " + argv(1));
        Predef.println("the value of n here is " + n);
        Predef.println("the class of n here is " + n.getClass);

    Here's the definition of $:

        class $ {
          println("constructed a new $ of type: " + this.getClass);
          def value: $ = this;
          def toValue: Value = { new ConstStringValue(this.toString()) };
          def -(sym: Symbol): $ = { println("looked up: " + sym); this }
          def -(sym: $): $ = { println("looked up: " + sym); this }
          def update(sym: Symbol, any: Any) { println("update called: " + sym + "=" + any); }
          def apply(sym: Symbol) = { this }
          def apply(obj: $) = { this }
          def apply() = { this }
          def +(o:$) = this.toValue.div(o.toValue)
          def *(o:$) = this.toValue.mul(o.toValue)
          def >(o:$) = this.toValue.gt(o.toValue)
          def <(o:$) = this.toValue.lt(o.toValue)
          def ++() = { this }
          def -=(o:$) = { this }
        }

    When run, the code prints:

        the value of argv1 here is 10
        the value of argv1 here is 10
        the value of n here is null
        java.lang.NullPointerException
          at test_1_php$.include(_tmp.scala:149)
          at php.script.main(php.scala:57)
          at test_1_php.main(_tmp.scala)
          [...]

    Why would n mysteriously lose its value (or fail to take one on)?

  • Setting attributes of a class during construction from **kwargs

    - by Carson Myers
    Python noob here. Currently I'm working with SQLAlchemy, and I have this:

        from __init__ import Base
        from sqlalchemy.schema import Column, ForeignKey
        from sqlalchemy.types import Integer, String
        from sqlalchemy.orm import relationship

        class User(Base):
            __tablename__ = "users"
            id = Column(Integer, primary_key=True)
            username = Column(String, unique=True)
            email = Column(String)
            password = Column(String)
            salt = Column(String)
            openids = relationship("OpenID", backref="users")

        User.__table__.create(checkfirst=True)

        # snip definition of OpenID class

        def create(**kwargs):
            user = User()
            if "username" in kwargs.keys():
                user.username = kwargs['username']
            if "email" in kwargs.keys():
                user.email = kwargs['email']
            if "password" in kwargs.keys():
                user.password = kwargs['password']
            return user

    This is in /db/users.py, so it would be used like:

        from db import users

        new_user = users.create(username="Carson", password="1234")
        new_user.email = "[email protected]"
        users.add(new_user)  # this function obviously not defined yet

    but the code in create() is a little stupid, and I'm wondering if there's a better way to do it that doesn't require an if ladder, and that will fail if any keys are passed that aren't already attributes of the User object. Something like:

        for attribute in kwargs.keys():
            if attribute in User:
                user.__attribute__[attribute] = kwargs[attribute]
            else:
                raise Exception("blah")

    That way I could put this in its own function (unless one hopefully already exists?), so I wouldn't have to write the if ladder again and again, and so I could change the table structure without modifying this code. Any suggestions?
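    One way to avoid the if ladder (a sketch on my part, not from the original post) is a hasattr/setattr loop that rejects unknown keys. It is also worth noting that SQLAlchemy's declarative base ships a default constructor that does essentially this, so User(**kwargs) may already behave the way the poster wants:

        def create(**kwargs):
            user = User()
            for name, value in kwargs.items():
                # Only assign attributes the mapped class already defines;
                # anything else is a typo or a schema mismatch.
                if not hasattr(User, name):
                    raise TypeError("User has no attribute %r" % name)
                setattr(user, name, value)
            return user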

  • In .NET, What Is the Fastest Way to Initialize a Multi-Dimensional Array to a Non-Default Value

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping, and it takes approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few tens-of-thousands to several million times. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type.

        public int[,] CreateArray(Size size)
        {
            int[,] array = new int[size.Width, size.Height];
            for (int x = 0; x < size.Width; x++)
            {
                for (int y = 0; y < size.Height; y++)
                {
                    array[x, y] = int.MaxValue;
                }
            }
            return array;
        }
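    One technique worth benchmarking here (my suggestion, not from the post; sketched in Python purely to illustrate the idea -- in .NET the copies would be Array.Copy or Buffer.BlockCopy) is to seed one row element by element and then grow the filled region with bulk copies, doubling each pass:

        from array import array

        def create_filled(width, height, value):
            """Flat row-major width*height grid of ints, all set to value."""
            total = width * height
            grid = array('i', [0]) * total  # zero-initialised backing store
            # Seed the first row element by element...
            for x in range(width):
                grid[x] = value
            # ...then copy the filled region onto itself, doubling each pass,
            # so the per-element loop runs width times instead of total times.
            filled = width
            while filled < total:
                chunk = min(filled, total - filled)
                grid[filled:filled + chunk] = grid[:chunk]
                filled += chunk
            return grid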

  • LINQ to SQL performance on high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are increments and decrements of values. I used to do it like this:

        UPDATE table SET value=value+1 WHERE ID=@ID

    It worked with no problems obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved:

        Stats.RegisteredUsers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (which happens on my site all the time), then LINQ says "oops, this value is already 100,001 -- I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time: the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
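    The underlying distinction is read-modify-write versus an atomic in-place update. A minimal sketch of the two patterns (Python with sqlite3, purely for illustration -- the table and column names are invented):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE stats (id INTEGER PRIMARY KEY, registered_users INTEGER)")
        conn.execute("INSERT INTO stats VALUES (1, 100000)")

        # Read-modify-write: two concurrent requests can both read 100000 and
        # both write 100001, losing an increment (or, with optimistic
        # concurrency checks as in LINQ to SQL, raising a conflict).
        (value,) = conn.execute("SELECT registered_users FROM stats WHERE id = 1").fetchone()
        conn.execute("UPDATE stats SET registered_users = ? WHERE id = 1", (value + 1,))

        # Atomic increment: the database applies the delta itself, so
        # concurrent requests serialise correctly and never conflict.
        conn.execute("UPDATE stats SET registered_users = registered_users + 1 WHERE id = 1")
        conn.commit()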

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Due to the typical size of the data set, the processing time is not insignificant and warrants feedback for the user. NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset.

    Profile results on an original iPhone (2G):

    Automatic inferred lightweight migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6130 (+0.6130) models loaded
        PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
        PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
        PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
        PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

    Manual inferred migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6660 (+0.6660) models loaded
        PROFILE: 1.1471 (+0.4811) inferred mapping model generated
        PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
        PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
        PROFILE: 22.6952 (+21.1894) manual migration completed
        PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, the manual migration takes over 5 times longer than automatic! It's a big inconsistency, and the lightweight option that NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: provides gives absolutely no indication of progress while processing. Can anybody provide a supported way to get the migrationProgress values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as automatic?

  • Linux new/delete, malloc/free and large memory blocks

    - by brian_mk
    Hi folks, We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4G physical memory, and swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G bytes; it can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size, or if a request allocates a smaller size than the previous one. The memory appears to be freed OK -- otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory freed by the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds, i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system. BTW -- I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request. Cheers, Brian.
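    For experimenting with the mallopt idea before touching the C++ code, glibc's mallopt can be poked from a throwaway Python process via ctypes. A sketch, assuming glibc (the constants are from glibc's malloc.h -- verify them against your headers; in the real server you would call mallopt directly from C++):

        import ctypes

        libc = ctypes.CDLL("libc.so.6", use_errno=True)

        # Constants from glibc's malloc.h.
        M_TRIM_THRESHOLD = -1
        M_MMAP_THRESHOLD = -3

        # Serve large requests with mmap() so they are returned to the kernel
        # immediately on free(), instead of growing (and fragmenting) the heap.
        libc.mallopt(M_MMAP_THRESHOLD, 1024 * 1024)
        # Release free heap memory back to the OS more eagerly.
        libc.mallopt(M_TRIM_THRESHOLD, 128 * 1024)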

  • Using pam_python in a script running with mod_python

    - by markys
    Hi! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use pam_python to query the PAM service. I adapted the example bundled with the module and got this:

        # out is the output stream used to print debug
        def auth(username, password, out):
            def pam_conv(aut, query_list, user_data):
                out.write("Query list: " + str(query_list) + "\n")
                # List to store the responses to the different queries
                resp = []
                for item in query_list:
                    query, qtype = item
                    # If PAM asks for an input, give the password
                    if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF:
                        resp.append((str(password), 0))
                    elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO:
                        resp.append(('', 0))
                out.write("Our response: " + str(resp) + "\n")
                return resp

            # If username or password is undefined, fail
            if username is None or password is None:
                return False

            service = 'login'
            pam_ = PAM.pam()
            pam_.start(service)
            # Set the username
            pam_.set_item(PAM.PAM_USER, str(username))
            # Set the conversation callback
            pam_.set_item(PAM.PAM_CONV, pam_conv)
            try:
                pam_.authenticate()
                pam_.acct_mgmt()
            except PAM.error, resp:
                out.write("Error: " + str(resp) + "\n")
                return False
            except:
                return False
            # If we get here, the authentication worked
            return True

    My problem is that this function does not behave the same whether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple cases:

        my_username = "markys"
        my_good_password = "lalala"
        my_bad_password = "lololo"

        def handler(req):
            req.content_type = "text/plain"
            req.write("1- " + str(auth(my_username, my_good_password, req)) + "\n")
            req.write("2- " + str(auth(my_username, my_bad_password, req)) + "\n")
            return apache.OK

        if __name__ == "__main__":
            print "1- " + str(auth(my_username, my_good_password, sys.__stdout__))
            print "2- " + str(auth(my_username, my_bad_password, sys.__stdout__))

    The result from the script is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        1- True
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    but the result from mod_python is:

        Query list: [('Password: ', 1)]
        Our response: [('lalala', 0)]
        Error: ('Authentication failure', 7)
        1- False
        Query list: [('Password: ', 1)]
        Our response: [('lololo', 0)]
        Error: ('Authentication failure', 7)
        2- False

    I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong? Here is the original script, if that could help you. Thanks a lot!

  • MSTest unit test passes by itself, fails when other tests are run

    - by Sarah Vessels
    I'm having trouble with some MSTest unit tests that pass when I run them individually but fail when I run the entire unit test class. The tests test some code that SLaks helped me with earlier, and he warned me what I was doing wasn't thread-safe. However, now my code is more complicated and I don't know how to go about making it thread-safe. Here's what I have:

        public static class DLLConfig
        {
            private static string _domain;

            public static string Domain
            {
                get
                {
                    return _domain = AlwaysReadFromFile
                        ? readCredentialFromFile(DOMAIN_TAG)
                        : _domain ?? readCredentialFromFile(DOMAIN_TAG);
                }
            }
        }

    And my test is simple:

        string expected = "the value I know exists in the file";
        string actual = DLLConfig.Domain;
        Assert.AreEqual(expected, actual);

    When I run this test by itself, it passes. When I run it alongside all the other tests in the test class (which perform similar checks on different properties), actual is null and the test fails. I note this is not a problem with a property whose type is a custom Enum type; maybe I'm having this problem with the Domain property because it is a string? Or maybe it's a multi-threaded issue with how MSTest works?
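    The pass-alone/fail-together pattern usually points at static state surviving from one test to the next rather than at threading. A minimal self-contained illustration of the mechanism (Python's unittest standing in for MSTest; all names invented):

        import unittest

        class Config:
            # Class-level (static) state: cached on first read and shared by
            # every test in the process, like a static field under MSTest.
            _domain = None

            @classmethod
            def domain(cls):
                if cls._domain is None:
                    cls._domain = "example.com"  # stands in for reading the file
                return cls._domain

        class DomainTests(unittest.TestCase):
            def test_fresh_state_passes(self):
                Config._domain = None  # as if this test ran first
                self.assertEqual(Config.domain(), "example.com")

            def test_leftover_state_fails(self):
                Config._domain = ""  # as if an earlier test left this behind
                self.assertEqual(Config.domain(), "example.com")  # fails: returns ""

        if __name__ == "__main__":
            unittest.main()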

  • Custom UITableViewCell: CGGradient still shows when cell is selected?

    - by Burnsoft Ltd
    I'm using a custom tableview cell (like Tweetie's fast scrolling). I've added a gradient to the context, which looks really nice, but when I select the cell, the gradient is still visible. I'm not sure how to go about removing the gradient when the cell is selected? Any ideas? Cheers, Nik

        - (void)drawContentView:(CGRect)r
        {
            CGContextRef context = UIGraphicsGetCurrentContext();

            UIColor *backgroundColor = [UIColor whiteColor];
            UIColor *textColor = [UIColor blackColor];
            UIColor *dateColor = [UIColor colorWithRed:77.f/255.f green:103.f/255.f blue:155.f/255.f alpha:1];

            if (self.selected) {
                backgroundColor = [UIColor clearColor];
                textColor = [UIColor whiteColor];
            }

            [backgroundColor set];
            CGContextFillRect(context, r);

            // add gradient
            CGGradientRef myGradient;
            CGColorSpaceRef myColorspace;
            size_t num_locations = 2;
            CGFloat locations[2] = {0.0, 1.0};
            CGFloat components[8] = {0.9f, 0.9f, 0.9f, 0.7f,  // Bottom colour: red, green, blue, alpha.
                                     1.0f, 1.0f, 1.0f, 1.0};  // Top colour: red, green, blue, alpha.
            myColorspace = CGColorSpaceCreateDeviceRGB();
            myGradient = CGGradientCreateWithColorComponents(myColorspace, components, locations, num_locations);
            CGColorSpaceRelease(myColorspace);

            CGPoint startPoint, endPoint;
            startPoint.x = 0;
            startPoint.y = self.frame.size.height;
            endPoint.x = 0;
            endPoint.y = self.frame.size.height - 15;  // keep the gradient a static size, never mind how big the cell is

            CGContextDrawLinearGradient(context, myGradient, startPoint, endPoint, 0);
            CGGradientRelease(myGradient);
            // gradient end

            // rest of custom drawing goes here....
        }

    Should I be doing something in the "if cell selected" code?

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.B/s
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while local is 1.6.0.4. I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.

  • How to compile OCaml to native code

    - by Indra Ginanjar
    I'm really interested in learning OCaml; it's fast (they say it can be compiled to native code) and it's functional. So I tried to code something easy, like enabling the MySQL event scheduler:

        #load "unix.cma";;
        #directory "+mysql";;
        #load "mysql.cma";;

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in (Mysql.exec db sql);;

    It works fine in the OCaml interpreter, but when I try to compile it to native code (I'm using Ubuntu Karmic), neither of these commands works:

        ocamlopt -o mysqleventon mysqleventon.ml unix.cmxa mysql.cmxa
        ocamlopt -o mysqleventon mysqleventon.ml unix.cma mysql.cma

    I also tried:

        ocamlc -c mysqleventon.ml unix.cma mysql.cma

    All of them result in the same message:

        File "mysqleventon.ml", line 1, characters 0-1:
        Error: Syntax error

    Then I tried to remove the "#load" directives, so the code goes like this:

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in (Mysql.exec db sql);;

    Then ocamlopt gives the message:

        File "mysqleventon.ml", line 1, characters 9-28:
        Error: Unbound value Mysql.quick_connect

    I hope someone can tell me where I'm going wrong.

  • JavaScript XSL in Google Chrome

    - by Guy
    Hi, I'm using the following JavaScript code to display XML/XSL:

        function loadXMLDoc(fname)
        {
            var xmlDoc;
            // code for IE
            if (window.ActiveXObject)
            {
                xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
            }
            // code for Mozilla, Firefox, Opera, etc.
            else if (document.implementation && document.implementation.createDocument)
            {
                xmlDoc = document.implementation.createDocument("", "", null);
            }
            else
            {
                alert('Your browser cannot handle this script');
            }
            try
            {
                xmlDoc.async = false;
                xmlDoc.load(fname);
                return (xmlDoc);
            }
            catch (e)
            {
                try // Google Chrome
                {
                    var xmlhttp = new window.XMLHttpRequest();
                    xmlhttp.open("GET", fname, false);
                    xmlhttp.send(null);
                    xmlDoc = xmlhttp.responseXML.documentElement;
                    return (xmlDoc);
                }
                catch (e)
                {
                    error = e.message;
                }
            }
        }

        function displayResult()
        {
            xml = loadXMLDoc("report.xml");
            xsl = loadXMLDoc("report.xsl");
            // code for IE
            if (window.ActiveXObject)
            {
                ex = xml.transformNode(xsl);
                document.getElementById("example").innerHTML = ex;
            }
            // code for Mozilla, Firefox, Opera, etc.
            else if (document.implementation && document.implementation.createDocument)
            {
                xsltProcessor = new XSLTProcessor();
                xsltProcessor.importStylesheet(xsl);
                resultDocument = xsltProcessor.transformToFragment(xml, document);
                document.getElementById("example").appendChild(resultDocument);
            }
        }

    It works fine in IE and Firefox, but Chrome fails on the line:

        document.getElementById("example").appendChild(resultDocument);

    Thank you for your help.

  • Making radio buttons mutually exclusive in JavaScript?

    - by OVERTONE
    Sorry, I'm an absolute noob with JavaScript. I've made a form for a simple quiz but can't figure out how to make radios selectable only once -- I can select two or three buttons as my answer, and I want to change this.

        <form name="Beginners Quiz">
        <p>Film speed refers to:</p>
        <p><input type="radio" name="Answer 1" id="Answer1" value="a" onclick="recordAnswer(1,this.value)"/>How long it takes to develop film.<br/>
        <p><input type="radio" name="Answer 2" id="Answer2" value="b" onclick="recordAnswer(1,this.value)"/>How fast film moves through film-transport system.<br/>
        <p><input type="radio" name="Answer 3" id="Answer3" value="c" onclick="recordAnswer(1,this.value)"/>How sensitive the film is to light.<br/>
        <p><input type="radio" name="Answer 4" id="Answer4" value="d" onclick="recordAnswer(1,this.value)"/>None of these makes sense.<br/>

    I've been rooting around W3Schools tutorials to no avail. Can someone shed some light?

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement
        (
          id bigserial NOT NULL,
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying
        )
        WITH (
          OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
        ON INSERT TO climate.measurement
        WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1
        DO INSTEAD
          INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
          VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data into any format. I am looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extensions for stats. I was also thinking about using pg_bulkload: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!
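    Since COPY bypasses the rules, one workaround (a sketch, not from the original post) is to route each row to its child table in the loader itself and COPY straight into the partitions. Illustrated with psycopg2; the partition naming follows the i_measurement_MM_CCC rule above, the connection string is invented, and the id column is assumed to default from the parent's sequence:

        import csv
        import io

        import psycopg2

        conn = psycopg2.connect("dbname=climate")

        def copy_partitioned(rows):
            """rows: iterable of (station_id, taken, amount, category_id, flag)."""
            # Bucket rows by target child table, using the same
            # month/category scheme as the rules.
            batches = {}
            for station_id, taken, amount, category_id, flag in rows:
                month = int(taken.split('-')[1])
                table = "climate.measurement_%02d_%03d" % (month, category_id)
                batches.setdefault(table, []).append(
                    (station_id, taken, amount, category_id, flag))
            with conn.cursor() as cur:
                for table, batch in batches.items():
                    buf = io.StringIO()
                    csv.writer(buf).writerows(batch)
                    buf.seek(0)
                    cur.copy_expert(
                        "COPY %s (station_id, taken, amount, category_id, flag) "
                        "FROM STDIN WITH CSV" % table, buf)
            conn.commit()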

  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extreme, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer coupled with its total lack of support for recent standards makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website.

    The game will need to be pretty dynamic client-side, with animations and other eye-candy, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a bit of a stretch for JavaScript and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; we'll see what IE9 brings), so would it be better to just do the whole thing in Flash? I know this means requiring the plug-in, AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP; I would have to use XML or some other format to pass data to it (JSON is directly integrated in JS, and PHP can deal with it easily).

    Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash -- but then why not just provide it to all browsers), or to drop support entirely and require the use of other browsers, although this is plain stupid from a business perspective. So here I am, wondering what approach to take and thus asking for your advice. How should I build the client side? AJAX in all browsers, Flash in all browsers, or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE)?

  • How to generate random numbers from a lognormal distribution within a specific range in Matlab

    - by Harpreet
    My grain sizes are defined as D = [1.19, 1.00, 0.84, 0.71, 0.59, 0.50, 0.42]. The problem is described below in steps. Grain sizes should follow a lognormal distribution. The mean of the grain sizes is fixed at 0.84 and the standard deviation should be as low as possible but not zero. 90% of the grains (by weight %) fall in the size range of 1.19 to 0.59, and the remaining 10% fall in the size range of 0.50 to 0.42. Now I want to find the probabilities (weight percentages) of the grains falling in each grain size. It is allowable to split this grain size distribution into further small sizes, but it must always be in the range of 1.19 and 0.42, i.e. 'D' can be continuous but 0.42 < D < 1.19. I need it fast. I tried on my own but I am not able to get the correct result -- I am getting negative probabilities (weight percentages). Thanks to anyone who helps. I didn't incorporate point 3 as I came to know about that condition later. Here are the simple steps I tried:

        D = [1.19, 1.00, 0.84, 0.71, 0.59, 0.50, 0.42];
        s = 0.30;  % std dev of the lognormal distribution
        m = 0.84;  % mean of the lognormal distribution
        mu = log(m^2 / sqrt(s^2 + m^2));     % mean of the associated normal dist.
        sigma = sqrt(log((s^2 / m^2) + 1));  % std dev of the associated normal dist.
        [r, c] = size(D);
        for i = 1:c
            D(i) = mu + (sigma .* randn(1));
            w(i) = (log(D(i)) - mu) / sigma;  % the probability or the wt. percentage of the grain sizes
        end
        grain_size = exp(D);
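    For the sampling side of this, one standard approach is rejection sampling: draw from the unrestricted lognormal and keep only the draws inside the range. A sketch in Python/NumPy (the original is Matlab; the mu/sigma conversion mirrors the formulas above, and note that truncation shifts the effective mean away from m):

        import numpy as np

        def truncated_lognormal(n, m=0.84, s=0.30, lo=0.42, hi=1.19, rng=None):
            """Draw n lognormal(mean m, std s) samples, kept inside [lo, hi]."""
            rng = rng if rng is not None else np.random.default_rng()
            # Parameters of the associated normal, as in the question.
            mu = np.log(m**2 / np.sqrt(s**2 + m**2))
            sigma = np.sqrt(np.log(s**2 / m**2 + 1))
            out = np.empty(0)
            while out.size < n:
                draw = rng.lognormal(mu, sigma, size=2 * n)
                out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
            return out[:n]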

  • Django - how best to handle users and profiles?

    - by SpankMe
    Hey, I am writing a simple site that requires users and profiles to be handled. The initial thought is to use Django's built-in user handling, but the user model is too narrow and does not contain the fields that I need. The documentation mentions user profiles, but the user profiles section has been removed from the djangobook material covering Django 1.0 (ideally, the solution should work with Django 1.2), and the Internet is full of different approaches, which does not make the choice easier (user model inheritance, user profiles plus Django signals, and so on). I would like to know how to write this in a good, modern, fast and secure way. Should I try to extend Django's built-in user model, or should I create my own user model wide enough to keep all the information I need? Below you may find some specifications and expectations for the working solution:

    - users should be able to register and authenticate
    - every user should have a profile (or a model with all required fields)
    - users don't need Django's built-in admin panel, but they need to edit their profiles/models via a simple web form

    Please let me know how you solve these issues in your applications, and what the best current way to handle users with Django is. Any links to articles/blogs or code examples are highly appreciated!
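    For reference, the pattern most widely used in the Django 1.x era (a sketch with invented field names, not the only option) was a separate profile model linked one-to-one to the built-in User and created automatically via a post_save signal:

        from django.contrib.auth.models import User
        from django.db import models
        from django.db.models.signals import post_save

        class UserProfile(models.Model):
            # One profile row per built-in auth user.
            user = models.OneToOneField(User)  # modern Django also needs on_delete=...
            # Illustrative extra fields -- replace with whatever the site needs.
            display_name = models.CharField(max_length=100, blank=True)
            bio = models.TextField(blank=True)

        def create_profile(sender, instance, created, **kwargs):
            # Ensure every newly registered User gets an empty profile.
            if created:
                UserProfile.objects.create(user=instance)

        post_save.connect(create_profile, sender=User)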

  • How to enforce foreign keys using Xerial SQLite JDBC?

    - by Space_C0wb0y
    According to their release notes, the Xerial SQLite JDBC driver supports foreign keys since version 3.6.20.1. I have tried for some time now to get a foreign key constraint to be enforced, but to no avail. Here is what I came up with:

        public static void main(String[] args) throws ClassNotFoundException, SQLException {
            Class.forName("org.sqlite.JDBC");
            SQLiteConfig config = new SQLiteConfig();
            config.enforceForeignKeys(true);
            Connection connection = DriverManager.getConnection("jdbc:sqlite::memory:",
                    config.toProperties());
            connection.createStatement().executeUpdate(
                "CREATE TABLE artist(" +
                "artistid INTEGER PRIMARY KEY, " +
                "artistname TEXT);");
            connection.createStatement().executeUpdate(
                "CREATE TABLE track(" +
                "trackid INTEGER," +
                "trackname TEXT," +
                "trackartist INTEGER," +
                "FOREIGN KEY(trackartist) REFERENCES artist(artistid)" +
                ");");
            connection.createStatement().executeUpdate(
                "INSERT INTO track VALUES(14, 'Mr. Bojangles', 3)");
        }

    The table definitions are taken directly from the sample in the SQLite documentation. This is supposed to fail, but it doesn't. I also checked, and it really inserts the tuple (no ignore or anything like that). Does anyone have any experience with this, or know how to make it work?
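    Underneath any driver, foreign-key enforcement is a per-connection SQLite pragma (available since SQLite 3.6.19). The same scenario in Python's built-in sqlite3 module, purely to show what the JDBC configuration is supposed to switch on:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")  # off by default, per connection

        conn.execute("CREATE TABLE artist(artistid INTEGER PRIMARY KEY, artistname TEXT)")
        conn.execute("CREATE TABLE track(trackid INTEGER, trackname TEXT,"
                     " trackartist INTEGER,"
                     " FOREIGN KEY(trackartist) REFERENCES artist(artistid))")

        try:
            conn.execute("INSERT INTO track VALUES(14, 'Mr. Bojangles', 3)")
        except sqlite3.IntegrityError as e:
            print("rejected as expected:", e)  # FOREIGN KEY constraint failed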

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL 2008 merge replication scenario -- a large server database, including views and stored procs, being replicated to client machines.

    What I'm doing:

    - adding a new table to the database
    - marking the new table for replication (using SP_AddMergeArticle)
    - altering a view (which is already part of the replicated content) to include fields from this new table (which is joined to the tables in the existing view); a stored procedure is similarly updated

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated.

    Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client.

    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail. Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

  • Fastest SCM tool available for embedded software development

    - by wrapperm
    Hi All, In my company we presently use Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing activities (accessing files, branching and labelling), in addition to which there are various other limitations. We have recently decided to research some free and open source, distributed version control systems that can handle our large projects with speed and efficiency. The tool should be a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should have a multisite development facility. With these requirements in mind, we have come up with some of the tools presently available on the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool that meets the above requirements. Thanking you in advance, Rahamath.

  • Adding a minimum display time for Silverlight splash screen.

    - by David
    When hosting a Silverlight application on a webpage, it is possible to use the splashscreensource parameter to specify a simple Silverlight 1.0 (XAML + JavaScript) control to be displayed while the real xap file is downloaded, and which can receive notification of the download's progress through onSourceDownloadProgressChanged. If the xap file is in cache, the splash screen is not shown (and if the download only takes 1 second, the splash screen will only be shown for 1 second). I know this is not best practice in general, but I am looking for a way to specify a minimum display time for the splash screen -- even if the xap is cached or the download is fast, the splash screen would remain up for at least, let's say, 5 seconds (for example to show a required legal disclaimer, corporate identity mark or other bug). I want to do it in the splash screen exclusively (rather than in the main xap) so that it is clean and uninterrupted (for example a sound bug) and shown to the user as soon as they open the page, rather than after the download (which could take anywhere from 1 to 20+ seconds). I'd prefer not to accomplish this with preloading -- replacing the splash screen with a full Silverlight xap application (with its own loading screen), which then programmatically loads and displays the full xap after a minimum wait time.

  • Complete failure to compile when including CSS Friendly Adapters

    - by david
    Background -- I am trying to use the CSS Friendly adapters to override the default styling for the standard ASP.NET menu control used by an existing project. The existing project functions normally and compiles when requested without incident. After adding in the code for the CSS Friendly adapter, not only does it not compile, but it never even really starts. The problem in detail -- I am using the sample code from Scott on this page: http://weblogs.asp.net/scottgu/archive/2006/09/08/CSS-Control-Adapter-Toolkit-Update.aspx. The sample project compiles fine; it only fails within the existing project, and it fails without a line number or any other traceable info. It definitely appears to be related to the CSSMenuAdapter.browser file, which has been referenced by others online as the cause of similar errors. I have tried adding and re-adding it, using it as a DLL, using it as a code file in App_Code, etc. I am working with AspDotNetStorefront in this case, although the problem is not unique to them, as I have found other references in software packages online. Only thing is, no one ever says what solved the issue. I am using Windows 7, VS2008 Express and SQL Express 2008 R2. The full error message is:

        Error 10  Exception of type 'System.OutOfMemoryException' was thrown.

    Notice that there is no file, line, or column info. I really need some help here; I have been working on this a long time. (This really should have the tag cssfriendlyadapter, but I could not create it.)

  • Slow Databinding setup time in C# .NET 4.0

    - by Svisstack
    Hello, I have got a problem. I have a Windows Forms application with dynamically generated layout, and a performance problem with it. In this form I use data binding from .NET 4.0; the binding works fine once set up, but the binding setup time for ONE control blocks my application for approx. 0.7 seconds. I have a number of controls, and the total binding setup time is around 2 minutes. I have tried every solution I can think of, and I am out of ideas short of writing my own binding class. What is wrong with my code?

        case "Boolean":
        {
            Binding b = new Binding("Checked", __bindingsource, __ep.Name);
            CheckBox cb = new CheckBox();
            /*
             * HERE is the problem
             */
            cb.DataBindings.Add(b);
            /*
             * HERE is the end of the problem
             */
            __flp.Controls.Add(cb);
            __bindingcontrol.AddBinding(b);
            break;
        }

    Without the problem lines everything works fast, but without binding ;-( and I want binding turned on at normal speed.

    PS1. I have suspended layout during generation.
    PS2. I have the same problem binding TextBoxes and PictureBoxes; CheckBox is only an example. How do I fix this?

  • Using map/reduce for mapping the properties in a collection

    - by And
    Update: follow-up to "MongoDB Get names of all keys in collection". As pointed out by Kristina, one can use MongoDB's map/reduce to list the keys in a collection:

        db.things.insert( { type : ['dog', 'cat'] } );
        db.things.insert( { egg : ['cat'] } );
        db.things.insert( { type : [] } );
        db.things.insert( { hello : [] } );

        mr = db.runCommand({
          "mapreduce" : "things",
          "map" : function() {
            for (var key in this) { emit(key, null); }
          },
          "reduce" : function(key, stuff) { return null; }
        })

        db[mr.result].distinct("_id")
        // output: [ "_id", "egg", "hello", "type" ]

    As long as we only want the keys located at the first level of depth, this works fine. However, it will fail to retrieve keys located at deeper levels. If we add a new record:

        db.things.insert({foo: {bar: {baaar: true}}})

    and run the map-reduce + distinct snippet above again, we will get:

        [ "_id", "egg", "foo", "hello", "type" ]

    but we will not get the bar and baaar keys, which are nested deeper in the data structure. The question is: how do I retrieve all keys, no matter their level of depth? Ideally, I would like the script to walk down to all levels of depth, producing output such as:

        ["_id", "egg", "foo", "foo.bar", "foo.bar.baaar", "hello", "type"]

    Thank you in advance!
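    One way to reach the nested levels is a recursive walk over each document -- the same recursion can be written inside the JavaScript map function, but here it is sketched client-side in Python (documents would normally come from a pymongo cursor such as db.things.find()):

        def key_paths(doc, prefix=""):
            """Yield every dotted key path in a (possibly nested) document."""
            for key, value in doc.items():
                path = prefix + "." + key if prefix else key
                yield path
                if isinstance(value, dict):  # recurse into sub-documents
                    yield from key_paths(value, path)

        docs = [{"_id": 1, "foo": {"bar": {"baaar": True}}},
                {"_id": 2, "egg": ["cat"]}]
        paths = sorted({p for doc in docs for p in key_paths(doc)})
        print(paths)  # ['_id', 'egg', 'foo', 'foo.bar', 'foo.bar.baaar']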
