Search Results

Search found 5233 results on 210 pages for 'records'.

Page 46/210

  • Django Forms, Foreign Key and Initial return all associated values

    - by gramware
    I am working with Django forms. The issue I have is that foreign key fields, and fields populated via initial, pull in every value associated with the related record (instead of just the primary key I get the primary key, post subject, post body and all the other values attributed to that record). The form and the other associated queries still work well, but this behaviour is clogging my database. How do I get only the specific field I want instead of the whole record? For example, a form field for childParentId returns postID, postSubject and postBody instead of postID alone. Likewise form = ForumCommentForm(initial={'postSubject': forum.objects.get(postID=postID)}) returns all the values related to that postID. My model looks like this:

        class forum(models.Model):
            postID = models.AutoField(primary_key=True)
            postSubject = models.CharField(max_length=25)
            postBody = models.TextField()
            postPoster = models.ForeignKey(UserProfile)
            postDate = models.DateTimeField(auto_now_add=True)
            child = models.BooleanField()
            childParentId = models.ForeignKey('self', blank=True, null=True)
            deleted = models.BooleanField()

            def __unicode__(self):
                return u'%s %s %s %s %s' % (self.postSubject, self.postBody,
                                            self.postPoster, self.postDate, self.postID)
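
    A likely explanation is the __unicode__ method rather than the form machinery: Django calls unicode() on a related object wherever it needs a display label (ModelChoiceField choices, initial display), so the five-field string above shows up everywhere the object is rendered. A minimal sketch of keeping the form lean, assuming the model and form names from the question; the parent variable is introduced only for illustration:

        # Minimal sketch, assuming the forum model and ForumCommentForm above.
        # 1. Keep __unicode__ narrow so choice labels show only the subject:
        #        def __unicode__(self):
        #            return self.postSubject
        #
        # 2. Pass plain values, not whole model instances, as initial data:
        parent = forum.objects.only('postID', 'postSubject').get(postID=postID)
        form = ForumCommentForm(initial={
            'postSubject': parent.postSubject,   # just the subject text
            'childParentId': parent.pk,          # just the primary key
        })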

    Read the article

  • SSIS - SharePoint list data transfer issue

    - by Vicky
    Hi, we are trying to transfer data from an Oracle database (about 60,0000 records) to a SharePoint list using SSIS, but we get the following error when the record count reaches around 19,000: "The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020" together with "System.ServiceModel.ProtocolException: The remote server returned an unexpected response: (400) Bad Request." Earlier we thought it could be a SharePoint list limit, so we tried removing two of the columns and the transfer went fine. That leaves one column, of datatype DT_STR and length 400 in Oracle, which might be causing the issue; it is mapped to a SharePoint custom list field of multiline type. We also checked whether field length is the problem, but in the Oracle DB the maximum length of this column across all records is only 239, so length seems to be ruled out. Has anyone faced this kind of issue or knows its cause? Kindly let us know. Thanks and regards, Vicky

    Read the article

  • Saving selected rows in a jqGrid while paging

    - by Dan
    I have a jqGrid with which users will select records. A large number of records could be selected across multiple pages. The selected rows seem to get cleared out when the user pages through the data. Is it up to the developer to manually track the selected rows in an array? I'm fine doing this, but I'm not sure what the best way is. I'm not sure I want to be splicing an array whenever any number of records are selected, as that seems like it could really slow things down. My end goal is to have a jQueryUI dialog that, when closed, will store all the selected rows so I can post them to the server. Insight, questions, comments; all are appreciated! Note: I added the aspnetmvc tag only because this is for an MVC app.

    Read the article

  • SSRS 2005 - Usability analysis - Is SSRS a good option for this scenario?

    - by Sach
    How practical is it to consider SSRS 2005 or SSRS 2008 as a reporting solution for a report that has to show millions of records (anywhere from 3 to 10 million)? Is there any threshold on the size of a report in SSRS? For a huge report, how do I know whether SSRS will consume all the available memory and start paging operations to disk, or simply fail with a memory error? Even if I keep increasing the memory, how can I be sure a given amount will be sufficient for such huge reports on the report server? All of these questions are haunting me because I have a dedicated report server with a decent hardware and OS configuration (8 processors, 8 GB RAM, 64-bit OS and 64-bit SQL Server 2005). Still, my report with around 2 million records takes more than 6 minutes, and going from one page to another takes 3 minutes! My datasource is on a separate server, and when I execute only the stored proc there, it returns the results in less than 2 minutes.

    Read the article

  • How do I shrink my SQL Server Database?

    - by Rory Becker
    I have a database nearly 1.9 GB in size, and MSDE 2000 does not allow databases that exceed 2.0 GB, so I need to shrink this DB (and many like it at various client locations). I have found and deleted many hundreds of thousands of records that are considered unneeded; these records account for a large percentage of some of the main (largest) tables in the database, so it is reasonable to assume that much space should now be reclaimable. So now I need to shrink the DB to account for the missing records. I execute "DBCC SHRINKDATABASE('MyDB')"... no effect. I have tried the various shrink facilities provided in SSMS... still no effect. I have backed up the database and restored it... still no effect. Still 1.9 GB. Why? Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than osql or similar.
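
    One common next step when DBCC SHRINKDATABASE appears to do nothing is to shrink the data file directly with DBCC SHRINKFILE. A hedged sketch, not a diagnosis: the statements below can be pasted into an osql session just as well; the Python wrapper, connection string, instance name and the 'MyDB_Data' logical file name are all placeholders:

        # Hedged sketch: list the logical file names, then shrink the data file
        # to a target size in MB. All names here are assumptions.
        import pyodbc

        cn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=.\\MyInstance;DATABASE=MyDB;Trusted_Connection=yes",
            autocommit=True)      # shrink commands are best run outside a transaction
        cur = cn.cursor()

        for row in cur.execute("EXEC sp_helpfile").fetchall():
            print(row[0])         # logical file names, e.g. MyDB_Data / MyDB_Log

        cur.execute("DBCC SHRINKFILE ('MyDB_Data', 100)")   # target size: 100 MB
        cn.close()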

    Read the article

  • SQL Server 2000 incorrect message: duplicate key with unique index

    - by iar
    Database server: SQL Server 2000 - 8.00.760 - SP3 - Standard Edition. I have a table with 5 columns into which I want to add a large number of records (300,000). There is a unique index on the combination of two columns. If I add the records with this unique index in place, SQL Server gives this error: Msg 2601, Level 14, State 3, Line 1 - Cannot insert duplicate key row in object 'TESTTABLE' with unique index 'test'. The statement has been terminated. (0 row(s) affected). However, if I drop the unique index, add the records, and then recreate the unique index afterwards, I do not get any error.
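
    When this message appears unexpectedly, it can help to confirm whether the data really contains duplicate pairs for the two indexed columns (duplicates within the inserted batch count too, since the whole INSERT is checked). A hedged sketch of such a check; ColA, ColB and the connection details are placeholders, not the real schema:

        # Hedged sketch: list combinations of the two indexed columns that occur
        # more than once in TESTTABLE. Column names and DSN are assumptions.
        import pyodbc

        cn = pyodbc.connect("DSN=MyTestServer;DATABASE=MyDB;Trusted_Connection=yes")
        cur = cn.cursor()
        cur.execute("""
            SELECT ColA, ColB, COUNT(*) AS occurrences
            FROM   TESTTABLE
            GROUP  BY ColA, ColB
            HAVING COUNT(*) > 1
        """)
        for cola, colb, n in cur.fetchall():
            print(cola, colb, n)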

    Read the article

  • Passing SQL results to views hard-codes views to database column names

    - by Galen
    I just realized that I may not be following best practices with regard to the MVC pattern. My issue is that my views "know" information about my database. Here's my situation in pseudo-code. My controller invokes a method on my model and passes the result directly to the view:

        view.records = tableGateway.getRecords()
        view.display()

    In my view:

        each records as record
            print record.name
            print record.address
            ...

    So the view refers to record.name and record.address, names that are hard-coded to my database columns. Is this bad? What ways around it are there, other than iterating over everything in the controller and basically rewriting the records collection? That just seems silly. Thanks
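
    One common middle ground is a thin view-model layer: the controller (or a mapper it calls) converts gateway rows into objects whose attribute names belong to the view's vocabulary, so a renamed column touches only the mapping code. A small sketch, with names assumed from the pseudo-code above and the gateway assumed to return dict-like rows:

        # Hedged sketch of a view-model mapping layer.
        class RecordView(object):
            def __init__(self, name, address):
                self.name = name
                self.address = address

        def to_view_models(rows):
            # The only place that knows the database column names.
            return [RecordView(name=r["name"], address=r["address"]) for r in rows]

        # Controller:
        #   view.records = to_view_models(tableGateway.getRecords())
        #   view.display()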

    Read the article

  • Removing a new line character in fields (PHP)

    - by Aruna
    Hi, I am trying to upload an Excel (CSV) file and store its contents in a MySQL database, and I am having a problem saving the contents. My CSV file looks like this:

        "1","aruna","IEEE paper"
        "2","nisha","Journal magazine"

    So there are 2 records, and I am using this code:

        <?php
        $string = file_get_contents( $_FILES["file"]["tmp_name"] );
        //echo $string;
        foreach ( explode( "\n", $string ) as $userString ) {
            echo $userString;
        }
        ?>

    Since the CSV record contains a newline between "IEEE" and "paper", it is displayed as 3 records. How do I remove this newline in code, so that only the newline between records 1 and 2 is treated as a record separator? Please help me.
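
    The usual fix is not to strip the newline but to parse with a CSV-aware reader, which treats a newline inside a quoted field as part of the value rather than as a record separator (in PHP, fgetcsv behaves this way). A sketch of the same idea using Python's csv module; the file name is a placeholder:

        # Hedged sketch: csv.reader honours quotes, so a newline inside
        # "IEEE\npaper" stays inside one field instead of starting a new record.
        import csv

        with open("upload.csv", newline="") as f:     # file name is a placeholder
            for row in csv.reader(f):
                # row is a list such as ['1', 'aruna', 'IEEE\npaper']
                print(row)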

    Read the article

  • Slow insert speed in PostgreSQL memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). Each record has 25 columns. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate I moved the tablespace from disk to RAM, yet I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config:

        Indexes : no
        fsync   : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Columns per row : 25 (mostly integers)
        CPU : 4 cores, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query still takes around 0.2 ms on average when the database is not writing anything to disk (I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
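
    Once the tablespace is in RAM, per-statement overhead (parsing, round trips, per-row index maintenance) tends to dominate, so multi-row inserts or COPY usually help more than further storage tuning. A hedged sketch using psycopg2's COPY support; the table name, column names and connection details are placeholders:

        # Hedged sketch: COPY streams many rows in one statement, avoiding
        # per-INSERT overhead. All names here are assumptions.
        import io
        import psycopg2

        conn = psycopg2.connect("dbname=mydb user=postgres")
        cur = conn.cursor()

        buf = io.StringIO()
        for i in range(100000):
            # 25 tab-separated columns per row in the real case; 3 shown here
            buf.write("\t".join(str(i) for _ in range(3)) + "\n")
        buf.seek(0)

        cur.copy_expert("COPY mytable (c1, c2, c3) FROM STDIN", buf)
        conn.commit()

    In newer psycopg2 versions, psycopg2.extras.execute_values (or a single INSERT with a long VALUES list) is a middle ground when COPY does not fit the pipeline.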

    Read the article

  • Performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table like this:

        dbo.UserLogs
        -------------------------------------
        Id | UserId | Date | Name | P1 | Dirty
        -------------------------------------

    There can be several records per UserId (even millions). I have a clustered index on the Date column and query this table very frequently by time range. The column Dirty is non-nullable and can only take the values 0 or 1, so I have no index on Dirty. The table has several million records, and in one particular case my application needs to find all UserId values that have at least one record marked dirty. I tried this query:

        select distinct(UserId) from UserLogs where Dirty = 1

    With 10 million records in total this takes about 10 minutes to run, and I want it to run much faster than that (querying the table on the Date column takes less than a minute). Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.
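
    With no index that leads on Dirty (or covers Dirty and UserId), the query has little choice but to scan the whole table. A narrow nonclustered index on (Dirty, UserId) lets the DISTINCT list be answered from the index alone; whether the extra write cost is acceptable is a separate question. A hedged sketch, where the index name is made up and pyodbc is used only to keep it runnable:

        # Hedged sketch: create a covering index, then re-run the query.
        import pyodbc

        cn = pyodbc.connect("DSN=SybaseServer;UID=sa;PWD=secret")   # placeholders
        cur = cn.cursor()
        cur.execute("CREATE INDEX ix_userlogs_dirty_user ON dbo.UserLogs (Dirty, UserId)")
        cur.execute("SELECT DISTINCT UserId FROM dbo.UserLogs WHERE Dirty = 1")
        dirty_users = [row.UserId for row in cur.fetchall()]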

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error:

        DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99)
        Server , database
        Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?').

    This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control over this database; sanitizing the input on the way in is obviously the way to go, but that option is not available to me. What FreeTDS settings do I need to change to be able to query these records successfully?

    Read the article

  • batch update SharePoint data

    - by Faiz
    Hi, I need to import user profile information into a SharePoint list. The user profile information will also need to be updated in the list periodically, so I think I should use a timer job for the updates. Importing for the first time works without any issues, but can you suggest the best way to handle the updates? The update should take care of updating existing records, marking as orphaned any records that have been deleted from the user profile store, and adding new user profile records to the list. Thanks
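
    Whatever hosts the job (a SharePoint timer job here), the core of the update is a three-way diff keyed on a stable identifier such as the account name. A small sketch of that logic in Python; the helper functions are placeholders standing in for the actual SharePoint list calls:

        # Hedged sketch of the add / update / orphan logic, keyed on account name.
        def sync(profiles, list_items):
            """profiles and list_items are dicts keyed by account name."""
            for key, profile in profiles.items():
                if key not in list_items:
                    add_to_list(profile)              # new profile -> new list item
                elif list_items[key] != profile:
                    update_list_item(key, profile)    # changed profile -> update item
            for key in list_items:
                if key not in profiles:
                    mark_orphaned(key)                # profile gone -> flag, don't delete

        # add_to_list / update_list_item / mark_orphaned stand in for the SPList
        # operations made inside the timer job.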

    Read the article

  • Problem with efficient GridView paging without a datasource control

    - by Ronnie Overby
    I am trying to do efficient paging with a GridView without using a datasource control. By efficient, I mean I retrieve only the records that I intend to show. I am trying to use the PagerTemplate to build my pager functionality. In short, the problem is that if I bind only the records for the current page, the GridView doesn't render its pager template, so I don't get the paging controls. It's almost as if I MUST bind more records than I intend to show on a given page, which is not something I want to do.

    Read the article

  • One large file or multiple small files?

    - by Dan
    I have an application (currently written in Python while we iron out the specifics, but eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be added manually on a regular basis. My question is this: would it be faster to have a single file (500 KB to 1 MB) and have my application open it, loop through it, find the record and close it, OR would it be faster to keep the records in separate files named with some appropriate convention, so that the application could simply loop over filenames to find the data it needs? I know my question is quite general, so pointers to any good articles on the topic are appreciated as much as concrete suggestions. Thanks very much in advance for your time, Dan
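
    At this size (500 KB to 1 MB) the honest answer is usually to measure both layouts on the target machine; a sketch of doing that with Python's timeit, where the paths and the "key at the start of each line" record format are assumptions for illustration:

        # Hedged sketch: times a linear scan of one big file against opening one
        # of many small files. Paths and record format are made up.
        import glob
        import timeit

        def scan_single(path="records.txt", key="user_12345"):
            with open(path) as f:
                for line in f:
                    if line.startswith(key):
                        return line

        def scan_many(pattern="records/*.txt", key="user_12345"):
            for name in glob.glob(pattern):
                if key in name:                 # the naming convention does the lookup
                    with open(name) as f:
                        return f.read()

        print(timeit.timeit(scan_single, number=100))
        print(timeit.timeit(scan_many, number=100))

    As a rough rule, many small files pay a per-open cost on every lookup, while the single file pays a full sequential scan unless it is sorted or indexed; which cost wins depends on how many lookups are made per run.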

    Read the article

  • How to send $data and $data2 to the view at the same time?

    - by artmania
    Hi friends, I use CodeIgniter and my question is how to send both $data and $data_cat2 to the view at the same time:

        $data['records'] = $this->gallery_model->get_all(1);
        $data_cat2['records'] = $this->gallery_model->get_all(2);
        $this->load->view('gallery/index.php', $data);

    I tried adding another load->view call for $data_cat2, but it just repeated the whole page :/ Thanks and regards, any help appreciated!

    Update: sorry for the silly question, I just sorted it out. Both result sets can go into the same array:

        $data['records'] = $this->gallery_model->get_all(1);
        $data['records_cat2'] = $this->gallery_model->get_all(2);

    and then records_cat2 is used in the view file.

    Read the article

  • "SELECT TOP", "LEFT OUTER JOIN", "ORDER BY" gives extra rows

    - by Codesleuth
    I have the following Access query I'm running through OLE DB in .NET: SELECT TOP 25 tblClient.ClientCode, tblRegion.Region FROM (tblClient LEFT OUTER JOIN tblRegion ON tblClient.RegionCode = tblRegion.RegionCode) ORDER BY tblRegion.Region. There are 431 records in tblClient that have RegionCode set to NULL. For some reason, the query above returns all 431 of these records instead of the first 25. If I change the query to ORDER BY tblClient.Client (the name of the client), like so: SELECT TOP 25 tblClient.ClientCode, tblRegion.Region FROM (tblClient LEFT OUTER JOIN tblRegion ON tblClient.RegionCode = tblRegion.RegionCode) ORDER BY tblClient.Client, I get the expected result set of 25 records, showing a mixture of region names and NULL values. Why does the TOP clause not behave as expected when ordering by a field retrieved through a LEFT OUTER JOIN?
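
    Access (Jet) defines TOP to include ties: every row that ties with the 25th row on the ORDER BY value is returned, and with 431 clients whose Region is NULL they all tie. Adding a unique column as a tie-breaker makes TOP 25 deterministic. A hedged sketch, shown through pyodbc only to keep it runnable; the database path is a placeholder and the amended SQL applies unchanged from OLE DB in .NET:

        # Hedged sketch: break ties on a unique column so TOP 25 returns 25 rows.
        import pyodbc

        cn = pyodbc.connect(r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\data\clients.mdb")
        sql = """
            SELECT TOP 25 tblClient.ClientCode, tblRegion.Region
            FROM (tblClient LEFT OUTER JOIN tblRegion
                  ON tblClient.RegionCode = tblRegion.RegionCode)
            ORDER BY tblRegion.Region, tblClient.ClientCode
        """
        rows = cn.cursor().execute(sql).fetchall()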

    Read the article

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous stored procedure checks several conditions using the inserted data and updates other tables. It ran fine for the last month, with the SP executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints has far too many records. (Note that all the tables are truncated daily, as I do not need the records the day after.) Other database activity is normal (average 60% CPU utilization). Where do I need to look? I can re-create the database without any problem, but I don't think that is a good way to resolve this.
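
    Endpoints usually pile up when conversations are never ended on both sides, so the durable fix is for the activated procedure and the sender to call END CONVERSATION when they are done. The accumulated closed endpoints can then be purged with END CONVERSATION ... WITH CLEANUP, which discards any remaining messages, so it should only be used on conversations that are no longer needed. A hedged sketch of the purge; the connection string is a placeholder and the same batch can be run directly in a query window:

        # Hedged sketch: purge CLOSED endpoints one at a time.
        import pyodbc

        cn = pyodbc.connect("DRIVER={SQL Server};SERVER=.;DATABASE=MyDb;Trusted_Connection=yes",
                            autocommit=True)
        cn.cursor().execute("""
            DECLARE @handle uniqueidentifier;
            WHILE EXISTS (SELECT 1 FROM sys.conversation_endpoints WHERE state_desc = 'CLOSED')
            BEGIN
                SELECT TOP (1) @handle = conversation_handle
                FROM sys.conversation_endpoints
                WHERE state_desc = 'CLOSED';
                END CONVERSATION @handle WITH CLEANUP;
            END
        """)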

    Read the article

  • I need a Groovy criteria to get all the elements after I sort on a nullable inner object

    - by user1773876
    I have two domain classes, IpPatient and Ward, as shown below:

        class IpPatient {
            String ipRegNo
            Ward ward
            static constraints = {
                ward nullable: true
                ipRegNo nullable: false
            }
        }

        class Ward {
            String name;
            static constraints = {
                name nullable: true
            }
        }

    Now I would like to create a criteria query like this:

        def criteria = IpPatient.createCriteria()
        return criteria.list(max: max, offset: offset) {
            order("ward.name", "asc")
            createAlias('ward', 'ward', CriteriaSpecification.LEFT_JOIN)
        }

    At present the IpPatient table has 13 records, 8 of which have no ward because ward can be null. When I sort by ward name I get only the 5 records that have a ward. I need a criteria query that returns all the records even when I sort on the nullable inner object.

    Read the article

  • Question about frequency of updating Access

    - by I__
    I have a table in an Access database that is used on a regular basis, basically from 9 to 5. Someone else has a copy of this exact table; sometimes records are added, sometimes deleted, and sometimes data within the records is updated. I need to update the Access table from the offsite table every hour or so. What is the best algorithm for updating the data? There are about 5,000 records. Would the update severely lock up the table for a few seconds every hour? (I would also like to publicly apologize for my rude comment to David Fenton.)

    Read the article

  • ADO.NET slow updating large tables

    - by brett
    The problem: 100,000+ name and address records in an Access table (2003). I need to iterate through the table and update details with the output from a third-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to move to Access 2007 and its non-Jet accdb format to maintain compatibility with clients. I've tried using ADO.NET DataSets, but updating the records takes hours! We process 5-10 of these tables per day, so that cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.

    Read the article

  • Getting past Salesforce trigger governors

    - by Jake
    I'm trying to write an "after update" trigger that does a batch update on all child records of the record that has just been updated. This needs to be able to handle 15k+ child records at a time. Unfortunately, the limit appears to be 100, which is so far below my needs it's not even close to acceptable. I haven't tried splitting the records into batches of 100 each, since this will still put me at a cap of 10k updates per trigger execution. (Maybe I could just daisy-chain triggers together? ugh.) Does anyone know what series of hoops I can jump through to overcome yet another ridiculously short-sighted limitation by this awful development "platform"?

    Read the article

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation to do this, but our demo programs show that records appear to be processed in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?

    Read the article

  • SQL query for getting data in two fields from one column.

    - by AmiT
    I have a table [Tbl1] containing two fields: ID as int and TextValue as nvarchar(max). Suppose there are 7 records. I need a result set with two columns, Text1 and Text2. Text1 should hold the first 4 records and Text2 the remaining 3.

        [Tbl1]
        ID | TextValue
        1  | Apple
        2  | Mango
        3  | Orange
        4  | Pineapple
        5  | Banana
        6  | Grapes
        7  | Sapota

    Now, the result set should be:

        Text1     | Text2
        Apple     | Banana
        Mango     | Grapes
        Orange    | Sapota
        Pineapple |
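
    One way to produce that shape on SQL Server 2005 or later is to number the rows, cut the list at the halfway point, and left-join the second half onto the first. A hedged sketch; pyodbc is used only to keep the example self-contained and the connection string is a placeholder:

        # Hedged sketch: pairs row n with row n + ceil(total/2).
        import pyodbc

        cn = pyodbc.connect("DRIVER={SQL Server};SERVER=.;DATABASE=MyDb;Trusted_Connection=yes")
        sql = """
            WITH numbered AS (
                SELECT TextValue,
                       ROW_NUMBER() OVER (ORDER BY ID) AS rn,
                       COUNT(*) OVER () AS total
                FROM Tbl1
            )
            SELECT a.TextValue AS Text1, b.TextValue AS Text2
            FROM numbered AS a
            LEFT JOIN numbered AS b
                   ON b.rn = a.rn + (a.total + 1) / 2
            WHERE a.rn <= (a.total + 1) / 2
            ORDER BY a.rn;
        """
        for text1, text2 in cn.cursor().execute(sql):
            print(text1, text2)   # Apple Banana / Mango Grapes / Orange Sapota / Pineapple None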

    Read the article

  • Query joining in SQL Server 2005

    - by Domnic
    I have two queries:

        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT
        FROM GLAS_PDC_CHEQUES
        WHERE PC_COMP_CODE = '1'
          AND PC_DISCD IS NULL
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE
        ORDER BY PC_SL_ACNO

        SELECT COAD_PTY_FULL_NAME, PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO,
               PC_DEPT_NO, PC_DOC_TYPE, PC_CHEQUE_NO, PC_BANK_AC_NO
        FROM GLAS_PTY_ADDRESS, GLAS_SBLGR_MASTERS, GLAS_PDC_CHEQUES
        WHERE COAD_COMP_CODE = '1'
          AND SLMA_COMP_CODE = COAD_COMP_CODE
          AND SLMA_ADDR_ID = COAD_ADDR_ID
          AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
          AND PC_COMP_CODE = SLMA_COMP_CODE
          AND SLMA_ACNO = PC_SL_ACNO
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)

    If I execute the first query alone I get 5 records. If I combine the two queries like this:

        SELECT PC_COMP_CODE, PC_SL_LDGR_CODE, PC_SL_ACNO ACCOUNT,
               COUNT(PC_CHEQUE_NO) CHQS,
               SUM(CONVERT(FLOAT, PC_AMOUNT)) CHQ_AMT,
               COAD_PTY_FULL_NAME
        FROM GLAS_PDC_CHEQUES
        LEFT OUTER JOIN GLAS_SBLGR_MASTERS
               ON (SLMA_COMP_CODE = PC_COMP_CODE
                   AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
                   AND SLMA_ACNO = PC_SL_ACNO)
        LEFT OUTER JOIN GLAS_PTY_ADDRESS
               ON (SLMA_COMP_CODE = COAD_COMP_CODE
                   AND SLMA_ADDR_ID = COAD_ADDR_ID)
        WHERE PC_COMP_CODE = '1'
          AND PC_DISCD IS NULL
          AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE)
        GROUP BY PC_SL_LDGR_CODE, PC_SL_ACNO, PC_COMP_CODE, COAD_PTY_FULL_NAME
        ORDER BY PC_SL_ACNO

    then I get only 2 records. I need all 5 records to appear after the join. How can I do it?
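
    A likely culprit is the last WHERE condition: SLMA_LDGRCTL_YEAR comes from the left-joined GLAS_SBLGR_MASTERS table, so rows with no match (where that column is NULL) are filtered back out, which quietly turns the outer join into an inner join. Moving that predicate into the join's ON clause usually restores the unmatched rows; a hedged sketch of just the changed fragment, shown as a string for brevity:

        # Hedged sketch (fragment only): move the year filter from WHERE into the
        # LEFT OUTER JOIN so unmatched GLAS_SBLGR_MASTERS rows survive.
        joined_fragment = """
            LEFT OUTER JOIN GLAS_SBLGR_MASTERS
                   ON (SLMA_COMP_CODE = PC_COMP_CODE
                       AND SLMA_LDGRCTL_CODE = PC_SL_LDGR_CODE
                       AND SLMA_ACNO = PC_SL_ACNO
                       AND SLMA_LDGRCTL_YEAR = DBO.GLAS_VALIDATIONS_GET_OPEN_YEAR(PC_COMP_CODE))
            -- ...and drop the SLMA_LDGRCTL_YEAR condition from the WHERE clause.
        """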

    Read the article

  • Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

    - by Jian Lin
    In Ruby on Rails, say the Actor model object is Tom Hanks and its "has_many" fans association holds 20,000 Fan objects; actor.fans then gives an array with 20,000 elements. Presumably the elements are not pre-populated with values, otherwise fetching each Actor object from the DB would be extremely time consuming. So is it on a "need to know" basis? Does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, it cannot optimize performance with a sequential read, which can be faster on a hard disk because the records may sit in nearby sectors. For example, if the program touches 2 random elements, it is faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read all records sequentially and then process the random elements. How will RoR know whether I am touching only a few random elements or all of them?

    Read the article
