Search Results

Search found 10966 results on 439 pages for 'kevin db'.

Page 378/439

  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello Experts. Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not share the same structure, which means patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

    1. Speed of development with above-average performance
    2. Maintenance
    3. Future support and stability of the technology
    4. Performance

    Limitations: 1) As we must strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0. 2) We can use any code generation tool (it should be open source or very cheap), but it must generate only .NET code, so there is no per-copy licensing issue.

    Questions: I have read many articles about EF 4.0; at the outset it looks like it still lacks features compared with NHibernate, but it is considerably better than EF 1.0. So, do you feel we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or any other you think best? I also need to estimate how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck on some EF 4.0 feature or hit a serious performance issue; in the reverse case, if we choose ADO.NET, how long would it take to switch to EF 4.0? Lastly, as I was going through the articles I found the code-only approach (with POCO classes) seems best suited to our requirement, since switching from one technology to the other is really easy. Please share your thoughts and guide me on the above questions.

    Read the article

  • mysql complex key or + auto increment key (guid)

    - by darko
    Hi, I have a not-very-big DB. I am using auto-increment primary keys and in my case there is no problem with that; a GUID is not necessary. I have a table containing these fields, where fields 1, 2, 3 need to be unique: from_destination, to_destination, shipper, quantity. I also have a second table that, for the same fields 1, 2, 3, stores bought quantities per day (one-to-many): from_destination, to_destination, shipper, date, reserved_quantity.

    Case 1: Is it better to make fields 1, 2, 3 a composite primary key in the first table, with the same fields in the second table as a foreign key?

    First table:
      from_destination | to_destination | shipper   -- composite primary key
      quantity

    Second table:
      second_id                                     -- auto-increment primary key
      from_destination | to_destination | shipper   -- foreign key
      date
      reserved_quantity

    Case 2: Or just add an auto-increment field to the first table and make fields 1, 2, 3 unique? The second table then holds one integer foreign key pointing to the first table, plus its own auto-increment key.

    First table:
      first_id                                      -- auto-increment primary key
      from_destination | to_destination | shipper   -- unique
      quantity

    Second table:
      second_id                                     -- auto-increment primary key
      first_id                                      -- foreign key
      date
      reserved_quantity

    If the second option works, why do we need composite keys at all, when we can have one auto-increment (or GUID) field and simply make all the composite-key candidate fields unique? Regards
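
    A minimal MySQL sketch of the two layouts described above; the table names (route, reservation) and column types are assumptions for illustration:

    ```sql
    -- Case 1: composite primary key, referenced by a composite foreign key
    CREATE TABLE route (
      from_destination VARCHAR(64) NOT NULL,
      to_destination   VARCHAR(64) NOT NULL,
      shipper          VARCHAR(64) NOT NULL,
      quantity         INT NOT NULL,
      PRIMARY KEY (from_destination, to_destination, shipper)
    ) ENGINE=InnoDB;

    CREATE TABLE reservation (
      second_id         INT AUTO_INCREMENT PRIMARY KEY,
      from_destination  VARCHAR(64) NOT NULL,
      to_destination    VARCHAR(64) NOT NULL,
      shipper           VARCHAR(64) NOT NULL,
      date              DATE NOT NULL,
      reserved_quantity INT NOT NULL,
      FOREIGN KEY (from_destination, to_destination, shipper)
        REFERENCES route (from_destination, to_destination, shipper)
    ) ENGINE=InnoDB;

    -- Case 2: surrogate key plus a UNIQUE constraint; the child table
    -- carries a single-column foreign key instead of three columns
    CREATE TABLE route2 (
      first_id         INT AUTO_INCREMENT PRIMARY KEY,
      from_destination VARCHAR(64) NOT NULL,
      to_destination   VARCHAR(64) NOT NULL,
      shipper          VARCHAR(64) NOT NULL,
      quantity         INT NOT NULL,
      UNIQUE (from_destination, to_destination, shipper)
    ) ENGINE=InnoDB;

    CREATE TABLE reservation2 (
      second_id         INT AUTO_INCREMENT PRIMARY KEY,
      first_id          INT NOT NULL,
      date              DATE NOT NULL,
      reserved_quantity INT NOT NULL,
      FOREIGN KEY (first_id) REFERENCES route2 (first_id)
    ) ENGINE=InnoDB;
    ```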

    Read the article

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi All. I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access mysql from the command line and make changes to the database. My problem is that if I now run rake db:migrate I get:

    ```
    rake aborted!
    uninitialized constant MysqlCompat::MysqlRes
    (See full trace by running task with --trace)
    ```

    It appears that my mysql gem isn't working correctly, since I can access MySQL fine from Python using the Python driver (which I compiled too). I therefore tried to rebuild the gem using the following command from http://techliberty.blogspot.com/ (incidentally, I am using a recent Intel MacBook Pro):

    ```
    sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config
    ```

    This compiles, although I get "No definition" warnings while the documentation builds:

    ```
    Building native extensions. This could take a while...
    Successfully installed mysql-2.8.1
    1 gem installed
    Installing ri documentation for mysql-2.8.1...
    No definition for next_result
    No definition for field_name
    ...
    ```

    I'm a little stumped, as my mysql_config is located in the correct place (/usr/local/mysql/bin/mysql_config) and I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks. PS: I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable to my version.

    Read the article

  • Sort and limit queryset by comment count and date using queryset.extra() (django)

    - by thornomad
    I am trying to sort/narrow a queryset of objects based on the number of comments each object has, as well as by the time frame during which the comments were posted. I am using a queryset.extra() method (with django_comments, which uses generic foreign keys). I got the idea for using queryset.extra() (and the code) from here. This is a follow-up to my initial question yesterday (which shows I am making some progress).

    Current code: what I have so far works in that it will sort by the number of comments; however, I want to extend the functionality and also be able to pass a time frame argument (e.g. 7 days) and return an ordered list of the most commented posts in that time frame. Here is what my view looks like with the basic functionality intact:

    ```python
    import datetime

    from django.contrib.comments.models import Comment
    from django.contrib.contenttypes.models import ContentType
    from django.db.models import Count, Sum
    from django.views.generic.list_detail import object_list

    def custom_object_list(request, queryset, *args, **kwargs):
        '''Extending the list_detail.object_list to allow some sorting.

        Example: http://example.com/video?sort_by=comments&days=7
        would get a list of the videos sorted by most comments in
        the last seven days.
        '''
        try:
            # this is where I started working on the date business ...
            days = int(request.GET.get('days', None))
            period = datetime.datetime.utcnow() - datetime.timedelta(days=int(days))
        except (ValueError, TypeError):
            days = None
            period = None
        sort_by = request.GET.get('sort_by', None)
        ctype = ContentType.objects.get_for_model(queryset.model)
        if sort_by == 'comments':
            queryset = queryset.extra(select={
                'count': """
                    SELECT COUNT(*) AS comment_count
                    FROM django_comments
                    WHERE content_type_id=%s AND object_pk=%s.%s
                """ % (
                    ctype.pk,
                    queryset.model._meta.db_table,
                    queryset.model._meta.pk.name
                ),
            }, order_by=['-count']).order_by('-count', '-created')
        return object_list(request, queryset, *args, **kwargs)
    ```

    What I've tried: I am not well versed in SQL, but I did try to add another WHERE criterion by hand to see if I could make some progress:

    ```sql
    SELECT COUNT(*) AS comment_count
    FROM django_comments
    WHERE content_type_id=%s AND object_pk=%s.%s
    AND submit_date='2010-05-01 12:00:00'
    ```

    But that didn't do anything except mess with my sort order. Any ideas on how I can add this extra layer of functionality? Thanks for any help or insight.
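
    One way to fold the time frame into the subquery is to append a submit_date condition whose value is passed through extra()'s select_params rather than string formatting; a sketch under the assumption that submit_date is the django_comments column to filter on:

    ```python
    if sort_by == 'comments':
        select_sql = """
            SELECT COUNT(*) FROM django_comments
            WHERE content_type_id=%s AND object_pk=%s.%s
        """ % (ctype.pk, queryset.model._meta.db_table,
               queryset.model._meta.pk.name)
        select_params = ()
        if period:
            # this %s is left for the DB driver and filled from select_params
            select_sql += " AND submit_date >= %s"
            select_params = (period,)
        queryset = queryset.extra(
            select={'count': select_sql},
            select_params=select_params,
        ).order_by('-count', '-created')
    ```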

    Read the article

  • Reformat SQLGeography polygons to JSON

    - by James
    I am building a web service that serves geographic boundary data in JSON format. The geographic data is stored in an SQL Server 2008 R2 database using the geography type in a table, and I use the [ColumnName].ToString() method to return the polygon data as text. Example output:

    ```
    POLYGON ((-6.1646509904325884 56.435153006374627, ... -6.1606079906751 56.4338050060666))
    MULTIPOLYGON (((-6.1646509904325884 56.435153006374627 0 0, ... -6.1606079906751 56.4338050060666 0 0)))
    ```

    Geographic definitions take the form of either an array of lat/long pairs defining a polygon or, in the case of multiple definitions, an array of polygons (multipolygon). I have the following regex that converts the output to JSON objects contained in multi-dimensional arrays, depending on the output:

    ```csharp
    Regex latlngMatch = new Regex(@"(-?[0-9]{1}\.\d*)\s(\d{2}.\d*)(?:\s0\s0,?)?", RegexOptions.Compiled);

    private string ConvertPolysToJson(string polysIn)
    {
        return this.latlngMatch.Replace(
            polysIn.Remove(0, polysIn.IndexOf("(")) // remove POLYGON or MULTIPOLYGON
                   .Replace("(", "[")               // convert to JSON array syntax
                   .Replace(")", "]"),              // same as above
            "{lng:$1,lat:$2},");                    // reformat lat/lng pairs to JSON objects
    }
    ```

    This is actually working pretty well and converts the DB output to JSON on the fly in response to an operation call. However, I am no regex master, and the calls to String.Replace() also seem inefficient to me. Does anyone have any suggestions/comments about the performance of this?

    Read the article

  • Sending jQuery.ajax data simultaneous to a form submit

    - by dscher
    I have a bit of a conundrum. I have a form with numerous fields. There is one field for links where you enter a link and click an add button, and the link (using jQuery) gets added to a link_array. I want this array to be sent via the jQuery.ajax method when the form is submitted. If I send the link_array using $.ajax like this:

    ```javascript
    $.ajax({
        type: "POST",
        url: "add_stock",
        dataType: "json",
        data: { "links": link_array }
    });
    ```

    when the add link button is selected, the data goes to the correct place without a problem and gets put in the DB correctly. But if I bind the above function to the submit button using $('#stock_form').submit(...), then the rest of the form data is sent but not the link_array. I could obviously pass the link array back into a hidden field in the HTML, but then I'd have to unpack the array into comma-separated values and break the comma-separated string apart in PHP. It just seems 100X easier to unpack the JavaScript array in PHP without any other fuss. So, how can you send an array from JavaScript using $.ajax concurrently with the rest of the $_POST data from the HTML form? Please note that I'm using the Kohana 3.0 framework, but that really shouldn't make a difference; what I want to do is add this JS array to the $_POST array that is already going. Thanks!
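
    A sketch of one way to do this: cancel the native submit and send the serialized form fields together with the array in a single ajax call ($.param() turns the array into links[]=...&links[]=..., which PHP reassembles into $_POST['links']). The form id and URL are taken from the question:

    ```javascript
    $('#stock_form').submit(function (e) {
        e.preventDefault(); // stop the normal submit; the ajax call carries everything
        $.ajax({
            type: 'POST',
            url: 'add_stock',
            dataType: 'json',
            // serialize() collects the regular form fields,
            // $.param() appends the links array
            data: $(this).serialize() + '&' + $.param({ links: link_array })
        });
    });
    ```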

    Read the article

  • I don't know how to release an NSString

    - by Beomseok
    ```objc
    NSArray *path = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [path objectAtIndex:0];
    NSString *databasePath = [documentsDirectory stringByAppendingPathComponent:@"DB"];
    NSString *fileName = [newWordbookName stringByAppendingString:@".csv"];
    NSString *fullPath = [databasePath stringByAppendingPathComponent:fileName];

    [[NSFileManager defaultManager] createFileAtPath:fullPath contents:nil attributes:nil];

    [databasePath release];
    //[fileName release];  Error!
    //[fullPath release];  Error!

    //NSLog(@"#1 :databasePath: %d", [databasePath retainCount]);
    //NSLog(@"#1 :fileName: %d", [fileName retainCount]);
    //NSLog(@"#1 :fullPath: %d", [fullPath retainCount]);
    ```

    Hi guys, I'm using this code and want to release the NSString objects, so I declared fileName, fullPath, and databasePath as NSString. But while releasing databasePath appears to work, releasing fileName and fullPath crashes, and I don't know why that happens. I know that the NSArray is autoreleased, but is documentsDirectory autoreleased too? (newWordbookName is an NSString.) I have been looking through the documentation on iPhone memory management. Please advise.
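
    For context, a sketch of the standard Cocoa ownership rule at play here (this is the general convention, not something specific to this code): you only release objects you obtained via alloc/new/copy or explicitly retained; convenience methods like stringByAppendingPathComponent: return autoreleased objects.

    ```objc
    // All three strings come from convenience methods, so they are autoreleased
    // and must NOT be released manually -- releasing any of them over-releases:
    NSString *databasePath = [documentsDirectory stringByAppendingPathComponent:@"DB"];
    NSString *fileName = [newWordbookName stringByAppendingString:@".csv"];
    NSString *fullPath = [databasePath stringByAppendingPathComponent:fileName];

    // If ownership beyond the current autorelease pool is needed, take it explicitly:
    NSString *ownedPath = [fullPath retain];
    // ... later, when done with it:
    [ownedPath release];
    ```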

    Read the article

  • MySQL can't access root account or reset with mysqladmin

    - by glumptious
    So if I type mysql -u root I'm supposedly logged in, but upon trying to create or access a database I get this lovely error:

    ```
    ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'test1'
    ```

    I haven't the foggiest idea why, after logging in as root, it's trying to access DBs as ''@'localhost', and it's driving me a bit crazy right now. Possibly related: when I try to set the root password I get the error

    ```
    mysqladmin: Can't turn off logging; error: 'Access denied; you need (at least one of) the SUPER privilege(s) for this operation'
    ```

    I've tried removing mysql-server by running apt-get purge mysql-server and then reinstalling, with no luck. This is running Ubuntu Server 12.10 64-bit, and mysql is indeed running.

    --Edit-- I wonder if perhaps there is no root user. So I tried to start MySQL with --skip-grant-tables and then create the root user, but I'm given this:

    ```
    ERROR 1290 (HY000): The MySQL server is running with the --skip-grant-tables option so it cannot execute this statement
    ```

    Fun fun fun fun fun.
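
    For reference, the classic recovery recipe under --skip-grant-tables sidesteps that last error by editing the grant tables directly instead of using GRANT/CREATE USER (the column names below match MySQL 5.x; check them against your server version before running):

    ```sql
    -- with mysqld started using --skip-grant-tables:
    UPDATE mysql.user SET Password = PASSWORD('new-password') WHERE User = 'root';
    FLUSH PRIVILEGES;  -- reloads the grant tables, after which GRANT statements work again
    ```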

    Read the article

  • Please help me work out this error, regarding the use of a HTTPService to connect Flex4 & Ruby on Ra

    - by ben
    I have an HTTPService in Flash Builder 4, defined as follows:

    ```xml
    <s:HTTPService id="getUserDetails" url="http://localhost:3000/users/getDetails" method="GET"/>
    ```

    It gets called as follows:

    ```actionscript
    getUserDetails.send({'user[username]': calleeInput.text});
    ```

    Here is a screenshot of the network monitor, showing that the parameter is being sent correctly (it is 'kirsty'). Here is the Ruby on Rails method that it's connected to:

    ```ruby
    def getDetails
      @user = User.find_by_username(:username)
      render :xml => @user
    end
    ```

    When I run it, I get the following error output in the console:

    ```
    Processing UsersController#list (for 127.0.0.1 at 2010-04-30 17:48:03) [GET]
      User Load (1.1ms)  SELECT * FROM "users"
    Completed in 30ms (View: 16, DB: 1) | 200 OK [http://localhost/users/list]

    Processing UsersController#getDetails (for 127.0.0.1 at 2010-04-30 17:48:13) [GET]
      Parameters: {"user"=>{"username"=>"kirsty"}}
      User Load (0.3ms)  SELECT * FROM "users" WHERE ("users"."username" = '--- :username') LIMIT 1

    ActionView::MissingTemplate (Missing template users/getDetails.erb in view path app/views):
        app/controllers/users_controller.rb:36:in `getDetails'
        /usr/local/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
        /usr/local/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
        /usr/local/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
        /usr/local/lib/ruby/1.8/webrick/server.rb:162:in `start'
        /usr/local/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
        /usr/local/lib/ruby/1.8/webrick/server.rb:95:in `start'
        /usr/local/lib/ruby/1.8/webrick/server.rb:92:in `each'
        /usr/local/lib/ruby/1.8/webrick/server.rb:92:in `start'
        /usr/local/lib/ruby/1.8/webrick/server.rb:23:in `start'
        /usr/local/lib/ruby/1.8/webrick/server.rb:82:in `start'

    Rendering rescues/layout (internal_server_error)
    ```

    I'm not sure if the error is being caused by bad code in the getDetails Ruby on Rails method. I'm new to RoR, and I think I remember reading somewhere that every method should have a view. I'm just using this method to get info into the Flex 4 app; do I still need to make a view for it? Is that what's causing the error? Any help would be GREATLY appreciated, I've been stuck on this for a few days now! Thanks.
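
    One detail stands out in that log: the SQL shows the literal symbol (username = '--- :username'), so the finder is receiving :username instead of the request parameter, and the MissingTemplate error suggests the explicit render isn't taking effect (likely because @user comes back nil, so Rails falls back to the default template lookup). A hedged sketch of the corrected action:

    ```ruby
    def getDetails
      # params mirrors the logged hash: {"user" => {"username" => "kirsty"}}
      @user = User.find_by_username(params[:user][:username])
      # render :xml replies directly, so no users/getDetails.erb template is needed
      render :xml => @user
    end
    ```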

    Read the article

  • Why use INCLUDE in a SQL index

    - by StarLite
    I recently encountered an index in a database I maintain that was of the form:

    ```sql
    CREATE INDEX [IX_Foo] ON [Foo]
    (
        Id ASC
    )
    INCLUDE ( SubId )
    ```

    In this particular case, the performance problem I was encountering (a slow SELECT filtering on both Id and SubId) could be fixed by simply moving the SubId column into the index proper rather than leaving it as an included column. This got me thinking, however, that I don't understand the reasoning behind included columns at all when, generally, they could simply be part of the index itself. Even if I don't particularly care about the items being in the index itself, is there any downside to having a column in the index rather than merely included?

    After some research, I am aware that there are a number of restrictions on what can go into an indexed column (maximum width of the index, and some column types that can't be indexed, like 'image'). In those cases I can see that you would be forced to include the column in the index page data. The only other thing I can think of is that if there are updates on SubId, the row will not need to be relocated if the column is only included (though the value in the index would still need to be changed). Is there something else I'm missing?

    I'm considering going through the other indexes in the database and shifting included columns into the index proper where possible. Would this be a mistake? I'm primarily interested in MS SQL Server, but information on other DB engines is welcome too.
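
    For a concrete picture of the difference, a short sketch (standard SQL Server behavior): a key column participates in the B-tree sort order at every level and so supports seeks, while an included column is stored only in the leaf pages, where it can cover a SELECT but not drive a seek.

    ```sql
    -- SubId as a key column: the index can seek on (Id, SubId),
    -- and SubId counts toward the 900-byte / 16-column key limits
    CREATE INDEX IX_Foo_Key ON Foo (Id ASC, SubId ASC);

    -- SubId as an included column: leaf level only, exempt from the key-size
    -- limits; covers "SELECT SubId ... WHERE Id = @x" but cannot seek on SubId
    CREATE INDEX IX_Foo_Incl ON Foo (Id ASC) INCLUDE (SubId);
    ```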

    Read the article

  • how to handle multiple profiles per user?

    - by Scott Willman
    I'm doing something that doesn't feel very efficient. From my code below, you can probably see that I'm trying to allow for multiple profiles of different types attached to my custom user object (Person). One of those profiles will be considered a default and should have an accessor on the Person class. Can this be done better?

    ```python
    from django.db import models
    from django.contrib.auth.models import User, UserManager

    class Person(User):
        public_name = models.CharField(max_length=24, default="Mr. T")
        objects = UserManager()

        def save(self):
            self.set_password(self.password)
            super(Person, self).save()

        def _getDefaultProfile(self):
            def_teacher = self.teacher_set.filter(is_default=True)
            if def_teacher:
                return def_teacher[0]
            def_student = self.student_set.filter(is_default=True)
            if def_student:
                return def_student[0]
            def_parent = self.parent_set.filter(is_default=True)
            if def_parent:
                return def_parent[0]
            return False
        profile = property(_getDefaultProfile)

        def _getProfiles(self):
            # Inefficient use of QuerySet here. Tolerated because the
            # QuerySets should be very small.
            profiles = []
            if self.teacher_set.count():
                profiles.append(list(self.teacher_set.all()))
            if self.student_set.count():
                profiles.append(list(self.student_set.all()))
            if self.parent_set.count():
                profiles.append(list(self.parent_set.all()))
            return profiles
        profiles = property(_getProfiles)

    class BaseProfile(models.Model):
        person = models.ForeignKey(Person)
        is_default = models.BooleanField(default=False)

        class Meta:
            abstract = True

    class Teacher(BaseProfile):
        user_type = models.CharField(max_length=7, default="teacher")

    class Student(BaseProfile):
        user_type = models.CharField(max_length=7, default="student")

    class Parent(BaseProfile):
        user_type = models.CharField(max_length=7, default="parent")
    ```

    Read the article

  • Embedded non-relational (nosql) data store

    - by Igor Brejc
    I'm thinking about using/implementing some kind of embedded key-value (or document) store for my Windows desktop application. I want to be able to store various types of data (GPS tracks would be one example) and, of course, be able to query this data. The amount of data would be such that it couldn't all be loaded into memory at the same time.

    I'm thinking about using sqlite as a storage engine for a key-value store, something like y-serial, but written in .NET. I've also read about FriendFeed's usage of MySQL to store schema-less data, which is a good pointer on how to use an RDBMS for non-relational data. sqlite seems to be a good option because of its simplicity, portability, and library size.

    My question is whether there are any other options for an embedded non-relational store. It doesn't need to be distributable and it doesn't have to support transactions, but it does have to be accessible from .NET and it should have a small download size.

    UPDATE: I've found an article titled SQLite as a Key-Value Database which compares sqlite with Berkeley DB, which is an embedded key-value store library.
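
    To make the sqlite-as-key-value-store idea concrete, a minimal C# sketch using the System.Data.SQLite provider (the table layout here is an assumption for illustration, not y-serial's actual schema):

    ```csharp
    using System.Data.SQLite;

    class KeyValueStore
    {
        private readonly SQLiteConnection conn;

        public KeyValueStore(string file)
        {
            conn = new SQLiteConnection("Data Source=" + file);
            conn.Open();
            using (var cmd = new SQLiteCommand(
                "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value BLOB)", conn))
                cmd.ExecuteNonQuery();
        }

        public void Put(string key, byte[] value)
        {
            using (var cmd = new SQLiteCommand(
                "INSERT OR REPLACE INTO kv (key, value) VALUES (@k, @v)", conn))
            {
                cmd.Parameters.AddWithValue("@k", key);
                cmd.Parameters.AddWithValue("@v", value);
                cmd.ExecuteNonQuery();
            }
        }

        public byte[] Get(string key)
        {
            using (var cmd = new SQLiteCommand(
                "SELECT value FROM kv WHERE key = @k", conn))
            {
                cmd.Parameters.AddWithValue("@k", key);
                return cmd.ExecuteScalar() as byte[];
            }
        }
    }
    ```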

    Read the article

  • PHP Error handling in WordPress plugin

    - by AaronM
    I am a newbie to both PHP and WordPress (but do OK in C#), and am struggling to understand the error handling in a custom plugin I am attempting to write. The basics of the plugin are to query an existing MSSQL database (note it's not the standard MySQL db) and return the rows back to the screen. This was working well, but the hosting provider has taken my database offline, which led me to the error handling problem (which I thought was OK). The following code fails to connect to the database (as expected), but puts an error onto the screen and stops the page processing; it does not even output the 'or die' error text.

    QUESTION: How can I just output a simple "Cannot load data" message and continue on normally?

    ```php
    function generateData()
    {
        global $post;
        if ("$post->post_title" == "Home") {
            try {
                $myServer = "<servername>";
                $myUser   = "<username>";
                $myPass   = "<password>";
                $myDB     = "<dbName>";

                // connection to the database
                $dbhandle = mssql_connect($myServer, $myUser, $myPass)
                    or die("Couldn't open database $myDB");

                // ... query processing here ...
            } catch (Exception $e) {
                echo "Cannot load data<br />";
            }
        }
        return $content;
    }
    ```

    Error being generated (line 31 is the $dbhandle = mssql_connect... line):

    ```
    Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: <servername> in <file path> on line 31
    Fatal error: Maximum execution time of 30 seconds exceeded in <file path> on line 31
    ```
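
    Worth noting: mssql_connect() reports failure with a PHP warning and a false return value rather than by throwing, so the try/catch never fires; and here the call appears to hang until the 30-second execution limit kills the script, which is why the 'or die' text never prints. A sketch of handling it by checking the return value, with @ suppressing the warning output:

    ```php
    $dbhandle = @mssql_connect($myServer, $myUser, $myPass);
    if ($dbhandle === false) {
        echo "Cannot load data<br />";
        return $content;  // bail out early instead of letting the page die
    }
    // ... query processing here ...
    ```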

    Read the article

  • Cannot read value from SYS_CONTEXT

    - by AppleGrew
    I have a PL/SQL procedure which sets a variable in the user session, like the following:

    ```sql
    Dbms_Session.Set_Context(
        namespace => 'MY_CTX',
        attribute => 'FLAG_NAME',
        value     => 'some value');
    ```

    Just after this (in the same procedure), I try to read the value of this flag using:

    ```sql
    SYS_CONTEXT('MY_CTX', 'FLAG_NAME');
    ```

    The above returns nothing. How did the DB lose this value? The weirder part is that if I invoke this proc directly from Oracle SQL Developer, it works. It doesn't work when I invoke the proc from my web application through a callable statement.

    --EDIT-- Added an example of how we invoke the proc from our Java code:

    ```java
    String statement = "Begin package_name.proc_name( flag_val => :1); END;";
    OracleCallableStatement st = <some object by some framework>.createCallableStatement(statement);
    st.setString(1, "flag value");
    st.execute();
    st.close();
    ```
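
    For background, a context namespace is tied to one package, and its values live in the session that set them; with a pooled web connection, the statement that reads the value can easily run on a different session than the one that set it. A hedged sketch of the relevant DDL (schema and package names invented):

    ```sql
    -- The namespace is created once, naming the only package allowed to set it:
    CREATE OR REPLACE CONTEXT my_ctx USING my_schema.my_pkg;

    -- If set and read genuinely happen on different pooled sessions, a globally
    -- accessed context (paired with DBMS_SESSION.SET_IDENTIFIER per request)
    -- is one way to share values across sessions:
    CREATE OR REPLACE CONTEXT my_ctx USING my_schema.my_pkg ACCESSED GLOBALLY;
    ```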

    Read the article

  • J2ME cache issue

    - by kiennt
    I have to write a J2ME app to retrieve images from a server and display them on a mobile phone. I have seen and tested that Snaptu has a mechanism to cache images, even with 100 images (both normal size and zoom size). I wonder how they do that? I thought those guys used RMS to save the image stream as data. But when I checked the working folder of the emulator (I use Windows XP and Sun Wireless Toolkit 3.0; the emulator device I use to run my program is CLDC Device 1, so my working folder is C:\Document And Settings\Administrator\javame-sdk\3.0\work\6\appdb), I saw some .db files. When I deleted these files, I could still view the cached images in my emulator?!

    I also thought those guys might use heap memory to save the images. But that can't be right either, because when I set the device memory limit to 2MB (like some mobile phones) and loaded and viewed 100 images in zoom size, it didn't produce an OutOfMemory error. It's so weird. Can anyone help me? Thanks
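
    For reference, a minimal sketch of caching raw image bytes in RMS using the standard MIDP RecordStore API (the store name is arbitrary; a real cache would also need an eviction policy and a key-to-recordId map):

    ```java
    import javax.microedition.rms.RecordStore;
    import javax.microedition.rms.RecordStoreException;

    public class ImageCache {
        // Stores one image and returns its record id for later lookup.
        public static int put(byte[] imageBytes) throws RecordStoreException {
            RecordStore rs = RecordStore.openRecordStore("imgcache", true); // create if missing
            try {
                return rs.addRecord(imageBytes, 0, imageBytes.length);
            } finally {
                rs.closeRecordStore();
            }
        }

        // Reads the raw bytes back; Image.createImage(bytes, 0, bytes.length) rebuilds the image.
        public static byte[] get(int recordId) throws RecordStoreException {
            RecordStore rs = RecordStore.openRecordStore("imgcache", false);
            try {
                return rs.getRecord(recordId);
            } finally {
                rs.closeRecordStore();
            }
        }
    }
    ```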

    Read the article

  • PHP mySQL Error

    - by happyCoding25
    Hello, I'm new to PHP, so I decided to follow this tutorial for a simple login screen. I got the code set up, but when I try to log in I get this error:

    ```
    Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in (a long file path to the script) on line 27
    ```

    The code I got from the tutorial is:

    ```php
    <?php
    ob_start();
    $host="thehost"; // Host name
    $username="myusername"; // Mysql username
    $password="mypass"; // Mysql password
    $db_name="test"; // Database name
    $tbl_name="members"; // Table name

    // Connect to server and select database.
    mysql_connect("$host", "$username", "$password") or die("cannot connect");
    mysql_select_db("$db_name") or die("cannot select DB");

    // Define $myusername and $mypassword
    $myusername=$_POST['myusername'];
    $mypassword=$_POST['mypassword'];

    // To protect against MySQL injection (more detail about MySQL injection)
    $myusername = stripslashes($myusername);
    $mypassword = stripslashes($mypassword);
    $myusername = mysql_real_escape_string($myusername);
    $mypassword = mysql_real_escape_string($mypassword);

    $sql="SELECT * FROM $tbl_name WHERE username='$myusername' and password='$mypassword'";
    $result=mysql_query($sql);

    // mysql_num_rows is counting table rows
    $count=mysql_num_rows($result);

    // If result matched $myusername and $mypassword, table row must be 1 row
    if($count==1){
        // Register $myusername, $mypassword and redirect to file "login_success.php"
        session_register("myusername");
        session_register("mypassword");
        header("location:login_success.php");
    }
    else {
        echo "Wrong Username or Password";
    }
    ob_end_flush();
    ?>
    ```

    (Note: all of the MySQL database info is filled in on my version.) Also, the author gives a PHP 5 version and a normal PHP version; I have tried both and gotten the same error. If anyone knows why this is happening, help is really appreciated.
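
    That warning means mysql_query() returned false instead of a result resource, usually because the query itself failed (missing table, wrong database, etc.). A small sketch of surfacing the real error while debugging:

    ```php
    $result = mysql_query($sql);
    if ($result === false) {
        // mysql_error() reports why the query failed (e.g. unknown table 'members')
        die('Query failed: ' . mysql_error());
    }
    $count = mysql_num_rows($result);
    ```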

    Read the article

  • Add new SVN "repo" in poorly constructed repo/project setup

    - by Dave Masselink
    Unfortunately, the answer to this question isn't quite as simple as it sounds... but I hope it can still be relatively simple. Please read all the way through before telling me that the answer is: "svnadmin create... duh".

    I'm working for a company that set up their SVN server in an odd way (at least in terms of what I'm used to). We've all been there, right? Rather than giving each project a separate repository, they have a folder on the server called "/var/www/svn/repos/" which is the actual SVN repo (it has conf/, db/, README.txt, etc. in it). They then distinguish their projects by adding top-level folders to the ONE repository (e.g. Project1, Project2, etc.).

    I don't like this setup and might one day get around to converting it to what I'm used to, where each project is its own repository (with separate logs, DBs, etc.). But my question is this: what is the best way to add a new empty project to the current setup? Is there any way to add a new top-level folder/project to the repo through svnadmin? It can/should just be an empty folder that I'll start building a new project in.

    I know that I could do this by checking out the whole repository and then adding a new top-level folder into my local checkout, then re-committing. But I'd really prefer not to, because someone has created folders/projects that are just GBs of log data, and I don't want to wait through that download just to add a single empty folder.

    Let me know if there is any more info you'd need. I do have root/sudo access on the server in question. Thanks in advance for your help! Dave
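
    For the narrow goal of adding an empty top-level folder without a working copy, svn can commit a mkdir directly against the repository URL (the URL below is hypothetical):

    ```sh
    # Creates Project3/ as a new top-level folder in the shared repository;
    # no checkout, no download of the existing GBs of data
    svn mkdir -m "Add empty top-level folder for Project3" \
        http://svnserver/repos/Project3
    ```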

    Read the article

  • google maps api v2 - dynamic load (tens of thousands of) markers

    - by Adam
    Hello, how do I implement dynamic loading of markers with JavaScript + PHP + MySQL and Google Maps API v2? At the moment I have a map following this example: http://googlemapsapi.martinpearman.co.uk/infusions/google_maps_api/basic_page.php?map_id=8. But my marker_data_01.php (where all the markers are listed; see the example's code) is already 4MB and will only grow. So the question is: how do I load into marker_data_01.php (or some modification of it; it can be in the same file as the map, that doesn't matter, as I currently load it all from MySQL) only the markers that are currently visible?

    What I am looking for, as an example (I don't know what values are right; I write this only to show what I want to do): say the top-left corner of the visible map is at position 10, the top-right corner at 30, the bottom-left corner at 5, and the bottom-right corner at 15. Then I would load only the markers inside this 10-30-5-15 box, passing the bounds with, for example, GET. When I move the map to, say, the 17-12-48-20 box, the page makes the next GET request, runs the corresponding MySQL query, and downloads the next batch of markers visible at that moment.

    With this I could have a map with unlimited markers, and when there are a lot of markers, clustering can group them. People would then not need to "preload" the whole marker DB (which is 4MB now and will grow), but only download what they see at the moment. I know it is possible because a lot of sites have it, but I am not a master of programming languages; I know only a bit of PHP and MySQL (and HTML). :) // sorry for my English
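
    A sketch of the server side of that idea (table and column names invented): the JavaScript passes the current viewport, e.g. taken from the map's getBounds(), as GET parameters, and PHP returns only the markers inside that box:

    ```php
    <?php
    // marker_data_01.php?south=...&north=...&west=...&east=...
    $pdo = new PDO('mysql:host=localhost;dbname=mapdb', 'user', 'pass');

    // Note: a box crossing the 180th meridian (west > east) needs extra handling
    $stmt = $pdo->prepare(
        'SELECT id, lat, lng, title FROM markers
         WHERE lat BETWEEN :south AND :north
           AND lng BETWEEN :west AND :east');
    $stmt->execute(array(
        ':south' => (float) $_GET['south'],
        ':north' => (float) $_GET['north'],
        ':west'  => (float) $_GET['west'],
        ':east'  => (float) $_GET['east'],
    ));

    header('Content-Type: application/json');
    echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));
    ```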

    Read the article

  • How do I get LongVarchar out param from SPROC in ADO.NET 2.0 with SQLAnywhere 10?

    - by todthomson
    Hi All, I have a sproc 'up_selfassessform_view' which has the following parameters:

    ```
    in  ai_eqidentkey    SYSKEY
    in  ai_acidentkey    SYSKEY
    out as_eqcomments    TEXT_STRING
    out as_acexplanation TEXT_STRING
    ```

    These are domain types: SYSKEY is 'integer' and TEXT_STRING is 'long varchar'. I can call the sproc fine from iSQL using the following code (so I know the sproc is good):

    ```sql
    create variable @eqcomments TEXT_STRING;
    create variable @acexamples TEXT_STRING;
    call up_selfassessform_view (75000146, 3, @eqcomments, @acexamples);
    select @eqcomments, @acexamples;
    ```

    which returns the correct values from the DB. I have configured the out param in ADO.NET like so (which has worked up until now for 'integer', 'timestamp', 'varchar(255)', etc.):

    ```csharp
    SAParameter as_acexplanation = cmd.CreateParameter();
    as_acexplanation.Direction = ParameterDirection.Output;
    as_acexplanation.ParameterName = "as_acexplanation";
    as_acexplanation.SADbType = SADbType.LongVarchar;
    cmd.Parameters.Add(as_acexplanation);
    ```

    When I run the following code:

    ```csharp
    SADataReader reader = cmd.ExecuteReader();
    ```

    I receive the following error:

    ```
    Parameter[2]: the Size property has an invalid size of 0.
    ```

    Which (I suppose) makes sense... but the thing is, I don't know the size of the field; it's just "long varchar" and doesn't have a predetermined length, unlike varchar(XXX). Anyhow, just for fun, I added the following:

    ```csharp
    as_acexplanation.Size = 1000;
    ```

    and the above error goes away, but now when I call as_acexplanation.Value I get back a string of length 1000 which is just '\0' repeated 1000 times. So I'm really, really stuck. Any help on this one would be much appreciated. Cheers! ;) Tod T.

    Read the article

  • IntegrityError: foreign key violation upon delete

    - by Lukasz Korzybski
    I have Order and Shipment models. Shipment has a foreign key to Order:

    ```python
    class Order(...):
        ...

    class Shipment():
        order = m.ForeignKey('Order')
        ...
    ```

    Now in one of my views I want to delete an order object along with all related objects, so I invoke order.delete(). I have Django 1.0.4 and PostgreSQL 8.4, and I use the transaction middleware, so the whole request is enclosed in a single transaction. The problem is that upon order.delete() I get:

    ```
    ...
    File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 28, in _commit
        return self.connection.commit()
    IntegrityError: update or delete on table "main_order" violates foreign key constraint "main_shipment_order_id_fkey" on table "main_shipment"
    DETAIL: Key (id)=(45) is still referenced from table "main_shipment".
    ```

    I checked in connection.queries that the proper queries are executed in the proper order. First the shipment is deleted, after which Django executes the delete on the order row:

    ```python
    {'time': '0.000', 'sql': 'DELETE FROM "main_shipment" WHERE "id" IN (17)'},
    {'time': '0.000', 'sql': 'DELETE FROM "main_order" WHERE "id" IN (45)'}
    ```

    The foreign key has ON DELETE NO ACTION (the default) and is initially deferred. I don't know why I get the foreign key constraint violation. I also tried registering a pre_delete signal and manually deleting the shipment objects before the delete on the order is called, but it resulted in the same error.

    I can change the ON DELETE behaviour for this key in Postgres, but that would be just a hack, and I wonder if anyone has a better idea of what's going on here. There is also a small detail: my Order model inherits from a Cart model, so it actually doesn't have an id field but a cart_ptr_id, and after the DELETE on the order there is also a DELETE on the cart; but that seems unrelated(?) to the shipment-order problem, so I simplified it in the example.

    Read the article

  • Business entity: private instance VS single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order. The entity is used in multiple views, each view handling a different domain or use-case in the application: for example, one managing orders, another digging into one order and displaying additional data.

    If I used NHibernate (or any other ORM) with one session/DataContext per view (or per DB action), I'd end up getting two different instances for the same Order (let's say orderId = 1). Although functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same.

    Why would you go for a single instance per entity versus private instances per view or per use-case? Having single instances has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. Having a private instance in each view would give you the flexibility of undo functionality at the view level: in the example above, I'd allow the user to change order details and give them the flexibility not to save the change. Here, synchronisation between the views/use-cases happens at the data persistence level.

    What would your argument be?

    Read the article

  • Populate a form from SQL

    - by xrum
    Hi, I am trying to populate a web form from a SQL table. This is what I have right now, though I am not sure it's the best way to do things; please give me suggestions:

    ```vbnet
    Public Class userDetails
        Public address1 As String
        Public address2 As String
        Public city As String
        ...
    End Class

    Public Class clsPerson
        ' set SQL connection
        Dim objFormat As New clsFormat
        Dim objConn As New clsConn()
        Dim connStr As String = objConn.getConn()
        Dim myConnection As New Data.SqlClient.SqlConnection(connStr)

        Public Function GetPersonDetails() As userDetails
            ' connection and all other good stuff here
            Try
                ' Execute the command
                myConnection.Open()
                dr = myCommand.ExecuteReader()
                ' Make sure a record was returned
                If dr.Read() Then
                    ' Create and populate ApplicantDetails
                    userDetails.address1 = dr("address1")
                    userDetails.address2 = objFormat.CheckNull(dr("address2"))
                    userDetails.city = objFormat.CheckNull(dr("city"))
                    ...
                Else
                    Err.Raise(4938, "clsUser", "Error in GetUserDetails - User Not Found")
                End If
                dr.Close()
            Finally
                myConnection.Close()
            End Try
            Return userDetails
        End Function
    End Class
    ```

    I then use the GetPersonDetails() function in my back end to populate the form, like so:

    ```vbnet
    Dim userdetails As New userDetails
    userdetails = GetPersonDetails()
    txtAddress.Text = userdetails.address1
    ' etc. ...
    ```

    However, there are around 50 fields in the User DB, and it seems like a lot of retyping. Please help me find a better way to do this. Thank you!
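
    One way to avoid retyping 50 assignments is reflection: if the public fields of userDetails are named after the DB columns, a single loop can copy every column of the row. A hedged sketch (assumes field names match column names exactly):

    ```vbnet
    Imports System.Reflection

    ' Copies each column of the current reader row into the matching public field.
    Public Shared Sub FillFromReader(ByVal target As Object, ByVal dr As Data.SqlClient.SqlDataReader)
        Dim t As Type = target.GetType()
        For i As Integer = 0 To dr.FieldCount - 1
            Dim f As FieldInfo = t.GetField(dr.GetName(i)) ' e.g. "address1"
            If f IsNot Nothing AndAlso Not dr.IsDBNull(i) Then
                f.SetValue(target, dr(i))
            End If
        Next
    End Sub
    ```

    A similar loop on the page could then push the fields into controls located by name (e.g. FindControl("txt" & fieldName)), so adding a 51st column means adding one field rather than several lines of plumbing.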

    Read the article

  • How to find unique values in jagged array

    - by David Liddle
    I would like to know how I can count the number of unique values in a jagged array. My domain object contains a string property that has space-delimited values:

    ```csharp
    class MyObject
    {
        string MyProperty; // e.g. = "v1 v2 v3"
    }
    ```

    Given a list of MyObjects, how can I determine the number of unique values? The following LINQ code returns an array of jagged array values:

    ```csharp
    db.MyObjects.Where(t => !String.IsNullOrEmpty(t.MyProperty))
        .Select(t => t.Categories.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
        .ToArray()
    ```

    One solution would be to store a temporary single array of items, loop through each jagged array, and add values that do not already exist; a simple count would then return the unique number of values. However, I was wondering if there is a nicer solution. Below is a more readable example:

    ```
    array[0] = { "v1", "v2", "v3" }
    array[1] = { "v1" }
    array[2] = { "v4", "v2" }
    array[3] = { "v1", "v5" }
    ```

    From all values, the unique items are v1, v2, v3, v4, v5, so the total number of unique items is 5. Is there a solution, possibly using LINQ, that returns either only the unique values or the number of unique values?
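
    SelectMany flattens the jagged result so Distinct can de-duplicate in one pass, which avoids the temporary array entirely; a sketch along the lines of the query above (using the MyProperty field from the class definition):

    ```csharp
    var unique = db.MyObjects
        .Where(t => !String.IsNullOrEmpty(t.MyProperty))
        .AsEnumerable() // String.Split isn't translatable to SQL, so finish in memory
        .SelectMany(t => t.MyProperty.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
        .Distinct()
        .ToList();

    int uniqueCount = unique.Count; // 5 for the v1..v5 example
    ```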

    Read the article

  • ArrayAdapter and android:textFilterEnabled

    - by TiGer
    Hi there, I have created a layout which includes a ListView. The data shown within this ListView isn't too complicated; it's mostly an array which is passed in when the extended Activity is started. The rows themselves consist of an icon and a text, thus an ImageView and a TextView. To fill up the ListView I use an ArrayAdapter, simply because an array is passed containing all the text items that should be shown.

    Now I'd like to actually be able to filter those, so I found the android:textFilterEnabled parameter to add to the ListView XML declaration. A search field is now shown nicely, but when I enter some letters it won't filter; it simply empties the whole list. I found out that that's because the text filter has no idea what it should filter.

    So now my question is: I know I need to tell the text filter what it should filter, and I still have my array filled with the text that should get filtered, so how do I couple those two? I have seen examples extending a CursorAdapter, but again, I don't have a Cursor and I don't want to make calls to a DB. I want to re-use my array of data and obviously the ArrayAdapter itself, so that the data is represented decently on screen (i.e. with my ImageView and TextView layout). Thanks in advance for any help/pointers/tips/code.
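
    For what it's worth, ArrayAdapter's built-in filter matches against the toString() of each item, so one hedged sketch (class and resource names invented) is to wrap the icon and text in a row object whose toString() returns the filterable text:

    ```java
    public class Row {
        public final int iconResId;
        public final String text;

        public Row(int iconResId, String text) {
            this.iconResId = iconResId;
            this.text = text;
        }

        @Override
        public String toString() {
            return text; // what ArrayAdapter's default filter matches against
        }
    }

    // Usual setup; getView() can still be overridden to bind the ImageView:
    ArrayAdapter<Row> adapter =
            new ArrayAdapter<Row>(this, R.layout.row_layout, R.id.row_text, rowList);
    listView.setAdapter(adapter);
    listView.setTextFilterEnabled(true); // or android:textFilterEnabled="true" in XML
    ```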

    Read the article

  • If attacker has original data and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example, e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force?

    This question may sound strange, so let me provide a use-case:

    1. User signs up to a site with their e-mail address.
    2. Server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com).
    3. Attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address, and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL. This is a problem.

    Instead of sending the e-mail address as plain text in the URL, we'll send it encrypted by a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mails are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption?

    Alternative solution: I could instead send a random number or a one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database; the original approach above does not require storage in the database. I'm leaning towards the one-way-hash-stored-in-the-DB, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?
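
    On the alternative solution: an HMAC gives the one-way-hash behavior without storing anything per user, because the server can recompute the token on demand from a single secret key. A hedged Python sketch (the URL layout is invented):

    ```python
    import hashlib
    import hmac

    SECRET_KEY = b"server-side secret, never sent to clients"

    def confirmation_token(email):
        """One-way token for the confirmation URL; recomputable, so no DB storage."""
        return hmac.new(SECRET_KEY, email.encode("utf-8"), hashlib.sha256).hexdigest()

    # e.g. https://my.app.com/confirmEmailAddress/bill%40yahoo.com/<token>
    token = confirmation_token("bill@yahoo.com")

    def confirm(email, token_from_url):
        # constant-time comparison avoids leaking match length via timing
        return hmac.compare_digest(confirmation_token(email), token_from_url)
    ```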

    Read the article
