Search Results

Search found 5233 results on 210 pages for 'records'.


  • PostgreSQL: return select count(*) from old_ids;

    - by Alexander Farber
    Hello, please help me with one more PL/pgSQL question. I have a PHP script, run as a daily cronjob, deleting old records from one main table and a few further tables referencing its "id" column:

        create or replace function quincytrack_clean()
        returns integer as $BODY$
        begin
            create temp table old_ids (id varchar(20)) on commit drop;

            insert into old_ids
                select id from quincytrack
                where age(QDATETIME) > interval '30 days';

            delete from hide_id where id in (select id from old_ids);
            delete from related_mks where id in (select id from old_ids);
            delete from related_cl where id in (select id from old_ids);
            delete from related_comment where id in (select id from old_ids);
            delete from quincytrack where id in (select id from old_ids);

            return select count(*) from old_ids;
        end;
        $BODY$ language plpgsql;

    And here is how I call it from the PHP script:

        $sth = $pg->prepare('select quincytrack_clean()');
        $sth->execute();
        if ($row = $sth->fetch(PDO::FETCH_ASSOC))
            printf("removed %u old rows\n", $row['count']);

    Why do I get the following error?

        SQLSTATE[42601]: Syntax error: 7 ERROR: syntax error at or near "select" at character 9
        QUERY: SELECT select count(*) from old_ids
        CONTEXT: SQL statement in PL/PgSQL function "quincytrack_clean" near line 23

    Thank you! Alex
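    The cause, for what it's worth: in PL/pgSQL, RETURN takes an expression, not a bare SELECT statement, so the runtime builds "SELECT select count(*) from old_ids" - exactly the syntax error shown. Wrapping the query as a scalar subquery (or using SELECT ... INTO a declared integer variable) fixes it:

        return (select count(*) from old_ids);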


  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with asp.net visual studio 2008 / SQL 2000 (2005 in future) using c#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing db schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table's schema with the column names I will use.)

    I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I explored the FileUpload .NET control, as well as some 3rd-party upload controls such as SlickUpload, to accomplish the upload; the files uploaded should be < 500mb.

    The next part is reading my csv/excel file and parsing it for display to the user, so they can match it with our db schema. I saw CSVReader and others, but for excel it's more difficult, since I will need to support different versions.

    Essentially, the user performing this import will insert and/or update several tables from this import file. There are other more advanced requirements, like record matching and a preview of the import records, but I wish to understand how to do this first.

    Update: I ended up using CsvReader from the LumenWorks.Framework for uploading the csv files.
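    For reference, a minimal sketch of reading an uploaded file with the LumenWorks CsvReader mentioned in the update (the file path and the header-row assumption are illustrative):

        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        using (CsvReader csv = new CsvReader(new StreamReader("upload.csv"), true))
        {
            string[] headers = csv.GetFieldHeaders(); // to match against the schema lookup table
            while (csv.ReadNextRecord())
            {
                for (int i = 0; i < csv.FieldCount; i++)
                {
                    string value = csv[i];
                    // map headers[i] to the target column, then build the INSERT/UPDATE
                }
            }
        }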


  • Linq2Sql: query - subquery optimisation

    - by Budda
    I have the following query:

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             where sector.Type == typeValue
             select new InfrStadium(sector.TeamId)
            ).ToList();

    and the InfrStadium class constructor:

        private InfrStadium(int teamId)
        {
            IList<Sector> teamSectors =
                (from sector in DbContext.sectors
                 where sector.TeamId == teamId
                 select sector).ToList();
            ... work with data
        }

    The current implementation performs 1+n queries, where n is the number of records fetched the first time. I want to optimize that, and I would love to do it using the 'group' operator, in a way like this:

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             group sector by sector.TeamId into team_sectors
             select new InfrStadium(team_sectors.Key, team_sectors)
            ).ToList();

    with the appropriate constructor:

        private InfrStadium(int iTeamId, IEnumerable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            ... work with data
        }

    But an attempt to launch the query causes the following error:

        Expression of type 'System.Int32' cannot be used for constructor parameter of
        type 'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]'

    Question 1: Could you please explain what is wrong here? I don't understand why 'team_sectors' is treated as 'System.Int32'.

    I've tried to change the query a little (replace IEnumerable with IQueryable):

        IList<InfrStadium> stadiums =
            (from sector in DbContext.sectors
             group sector by sector.TeamId into team_sectors
             select new InfrStadium(team_sectors.Key, team_sectors.AsQueryable())
            ).ToList();

    with the appropriate constructor:

        private InfrStadium(int iTeamId, IQueryable<InfrStadiumSector> eSectors)
        {
            IList<Sector> teamSectors = eSectors.ToList();
            ... work with data
        }

    In this case I've received another, similar error:

        Expression of type 'System.Int32' cannot be used for parameter of type
        'System.Collections.Generic.IEnumerable`1[InfrStadiumSector]' of method
        'System.Linq.IQueryable`1[InfrStadiumSector] AsQueryable[InfrStadiumSector]'

    Question 2: Actually, the same question: I can't understand at all what is going on here...

    P.S. I have another idea to optimize the query (described here: Linq2Sql: query optimisation), but I would love to find a solution with one request to the DB.
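    The usual explanation (hedged): LINQ to SQL has to translate the constructor call into SQL, and it can only bind the group's key - an Int32 - not the group sequence itself, hence both errors. A common workaround is to hydrate the rows with a single query and group in memory, assuming the two-argument constructor is accessible and accepts the grouped sector sequence:

        var stadiums = DbContext.sectors
            .Where(s => s.Type == typeValue)
            .AsEnumerable()                     // one SQL query; LINQ to Objects from here on
            .GroupBy(s => s.TeamId)
            .Select(g => new InfrStadium(g.Key, g))
            .ToList();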


  • DB Design to store custom fields for a table

    - by Fazal
    Hi All, this question came up based on the responses I got for the question http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle

    As everyone suggested that storing numeric values in VARCHAR2 columns is not a good practice (which I totally agree with), I am wondering about a basic design choice our team has made, and whether there are better ways to design this.

    Problem statement: We have many tables where we want to give a certain number of custom fields. The number of required custom fields is known, but what kind of attribute is mapped to each column is up to the user.

    E.g. here is a hypothetical scenario: say you have a laptop which stores 50 attribute values for every laptop record. Each laptop's attributes are created by the admin who creates the laptop. One user created a laptop product, let's say lap1, with attributes String, String, numeric, numeric, String. A second user created laptop lap2 with attributes String, numeric, String, String, numeric.

    Currently the data in our design gets persisted as follows:

        Laptop Table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example simulates our requirement and our design. Now, if somebody is looking up records for lap2 doing a comparison on field2, we need to apply TO_NUMBER:

        select * from laptop
        where name = 'lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases, when the query plan decides to apply TO_NUMBER before the other filter.

    QUESTION: Is this a valid design? What are the other alternative ways to solve this problem? One of our team mates suggested creating tables on the fly for such cases - is that a good idea? How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense in the question. Sorry for such a long text.
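    One common alternative worth weighing (a hedged sketch - table and column names are made up for illustration) is to move the custom fields into an attribute table with one typed column per storage class, so numeric comparisons hit a NUMBER column and TO_NUMBER disappears entirely:

        CREATE TABLE laptop_attribute (
            laptop_id    NUMBER REFERENCES laptop(id),
            attr_name    VARCHAR2(30),
            value_text   VARCHAR2(4000),  -- filled when the attribute is a string
            value_number NUMBER           -- filled when the attribute is numeric
        );

        SELECT l.*
        FROM laptop l
        JOIN laptop_attribute a ON a.laptop_id = l.id
        WHERE l.name = 'lap2'
          AND a.attr_name = 'field2'
          AND a.value_number < 15;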


  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? For this particular case, on MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose.

    For example: remote odbc connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. Tcp connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on any other connection on those specific tables, so it can instead wait until all data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, by table, I'd like to get back via a query the timestamp when connection one began the write, total inserts left to be done, and total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

    Obviously, since connection two is a tcp connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it, if the only way is a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
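    The closest built-in feature (a partial answer only - it shows current statements and their state, not per-table progress or byte counts) is the process list, which any connection can query like a normal statement:

        SHOW FULL PROCESSLIST;  -- needs the PROCESS privilege to see other users' threads

    From PHP this runs like any other query, and the State/Info columns reveal whether another connection is mid-insert on the tables in question.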


  • Can't get DataGridView to refresh over Linq to SQL (WinForm)

    - by GringoFrenzy
    Very strange situation here: I'm using L2S to populate a DataGridView. Code follows:

        private void RefreshUserGrid()
        {
            var UserQuery = from userRecord in this.DataContext.tblUsers
                            orderby userRecord.DisplayName
                            select userRecord;

            UsersGridView.DataSource = UserQuery;

            //I have also tried
            //this.UserBindingSource.DataSource = UserQuery;
            //UsersGridView.DataSource = UserBindingSource;

            UsersGridView.Columns[0].Visible = false;
        }

    Whenever I use L2S to add/delete records from the database, the GridView refreshes perfectly well. However, if someone is editing the grid and makes a mistake, I want them to be able to hit a refresh button and have their mistakes erased by reloading from the datasource. For the life of me, I can't get it to work. The code I am currently using on my refresh button is this:

        private void button1_Click(object sender, EventArgs e)
        {
            this.DataContext.Refresh(RefreshMode.OverwriteCurrentValues);
            RefreshUserGrid();
        }

    But the damn GridView remains unaffected. All that happens is the selected row becomes unselected. I have tried .Refresh(), .Invalidate(), I've tried changing the DataSource to NULL and back again (all suggestions from similar posts here)... none of it works. The only time the grid refreshes is if I restart the app. I must be missing something fundamental, but I'm totally stumped, and so are my colleagues. Any ideas? Thanks!
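    A hedged thing to try: LINQ to SQL identity-caches every entity it has already materialized, so re-running the query hands the grid the same cached objects, edits included. Disposing and recreating the DataContext before rebinding forces a genuine reload (MyDataContext is a placeholder for the real context type):

        private void button1_Click(object sender, EventArgs e)
        {
            this.DataContext.Dispose();
            this.DataContext = new MyDataContext(); // fresh context, empty identity cache
            RefreshUserGrid();
        }

    Binding to UserQuery.ToList() instead of the raw query object can also make the rebind more predictable.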


  • SSRS How to access the current value within a list control?

    - by Dale Burrell
    In SQL Server Reporting Services I have a report with a list control which groups on currency. Within the list control I display the detail rows of all records, filtered to those with a value >= £500, i.e. the top earners. However, for each row I need to calculate the percentage of its amount over the total of the entire dataset. Because I am filtering, I can't use Sum(Fields!Amount.Value), as that only sums the data after filtering, so I am trying a conditional sum over the entire dataset, but am struggling with the correct condition, e.g.

        =100.00 * Fields!Amount.Value
            / Sum(IIf(Fields!Currency.Value = "£", Fields!Amount.Value, CDec(0)), "DataSet")

    Where the hardcoded currency symbol is, I need to access the current value of currency for the list control; but because my sum is scoped at dataset level, any field access is dataset level. Ideally I'd like something like the following; otherwise, any other ideas on how to solve this problem?

        =100.00 * Fields!Amount.Value
            / Sum(IIf(Fields!Currency.Value = myListControl.Value, Fields!Amount.Value, CDec(0)), "DataSet")

    In fact, thinking about it, it would work if I could just access the row-level data at that point, but how to do that when it's at dataset scope within the sum statement? Hope that makes sense, any help appreciated.
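    One commonly suggested workaround (hedged, and untested here): instead of filtering the list to >= £500, keep all rows, hide the non-qualifying detail rows with a visibility expression (=Fields!Amount.Value < 500), and scope the aggregate to the list's group. Filters remove rows from aggregates, but rows hidden through visibility still count, so a group-scoped sum covers the whole currency:

        =100.00 * Fields!Amount.Value / Sum(Fields!Amount.Value, "CurrencyGroup")

    Here "CurrencyGroup" is an assumed name for the list's grouping scope.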


  • Dynamic evaluation of a table column within an insert before trigger

    - by Tim Garver
    Hi All, I have 3 tables: main, types and linked. main has an id column and 32 type columns. types has id, type. linked has id, main_id, type_id.

    I want to create an insert trigger on the main table. It needs to compare its 32 type columns to the values in the types table; if a main table column has an 'X' for its value, it should insert the main_id and types_id into the linked table.

    I have done a lot of searching, and it looks like a prepared statement would be the way to go, but I wanted to ask the experts. The issue is I don't want to write 32 IF statements, and even if I did, I need to query the types table to get the ID for that type, which seems like a huge waste of resources. Ideally I want to do this inside of my trigger:

        BEGIN
            DECLARE @types results_set -- (not sure if this is a valid type);
            -- (I am sure my loop syntax is all wrong here)...
            SET @types = (select * from types)
            for i=0; i<types.records; i++
            {
                IF NEW.[i.type] = 'X' THEN
                    insert into linked (main_id, type_id) values (new.ID, i.id);
                END IF;
            }
        END;

    Anyway, this is what I was hoping to do. Maybe there is a way to dynamically set the field name inside of a results loop, but I can't find a good example of this. Thanks in advance. Tim
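    For what it's worth, a hedged sketch of the explicit route: MySQL does not allow prepared/dynamic SQL inside triggers, so the 32 checks have to be spelled out, but each one can look up the type id by name instead of hard-coding it. Note it should be an AFTER INSERT trigger - in a BEFORE INSERT trigger an auto-increment NEW.id is not assigned yet. (The column names type1 ... type32 are assumptions.)

        DELIMITER $$
        CREATE TRIGGER main_after_insert AFTER INSERT ON main
        FOR EACH ROW
        BEGIN
            IF NEW.type1 = 'X' THEN
                INSERT INTO linked (main_id, type_id)
                SELECT NEW.id, t.id FROM types t WHERE t.type = 'type1';
            END IF;
            -- ... repeated for type2 through type32
        END$$
        DELIMITER ;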


  • How to stop the submission of a form using jQuery?

    - by Param-Ganak
    Hello friends! I have a form with many check boxes in it. The user checks one or more checkboxes and clicks a Delete button (which is actually a submit button), and the respective records for the selected check boxes get deleted.

    The required operation steps are as follows: when the user selects one or more checkboxes and clicks the submit button, a jQuery handler runs which:

        1. Checks that at least one check box is selected; if not, it displays an alert message and does not submit the form.
        2. If one or more checkboxes are selected, asks the user for confirmation of the deletion using a confirm box.
        3. If the user presses YES on the confirm box, submits the form using the javascript submit statement.
        4. If the user presses NO, unchecks the selected checkboxes and does not submit the form.

    This is the jQuery code I had written:

        $(document).ready(function (){
            $("#id_delete").click(function(){
                var temp = $("#id_frm input:checkbox:checked");
                var len = temp.length;
                if (len == 0) {
                    alert("Please select the order.");
                } else {
                    if (confirm("Are you sure to delete the selected orders.")) {
                        $("#id_frm").submit();
                    } else {
                        temp.checked = false;
                    }
                }
            });
        });

    Problems: this code is not working properly. 1. It submits the form when no check box is selected. 2. It submits the form when the user presses the NO button on the confirmation dialog box. Please guide me, friends, with this problem.
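    The handler never cancels the button's native submit, so the form goes off regardless of which branch runs; and temp.checked = false sets a property on the jQuery object rather than unchecking the boxes. A hedged corrected version:

        $(document).ready(function () {
            $("#id_delete").click(function (e) {
                e.preventDefault(); // stop the native submit-button behavior
                var temp = $("#id_frm input:checkbox:checked");
                if (temp.length == 0) {
                    alert("Please select the order.");
                } else if (confirm("Are you sure to delete the selected orders.")) {
                    $("#id_frm")[0].submit(); // submit via the DOM element
                } else {
                    temp.removeAttr("checked"); // actually uncheck the boxes
                }
            });
        });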


  • How to paginate json data?

    - by Pandiya Chendur
    I used json data and iterated div elements to get my result:

        function Iteratejsondata(HfJsonValue) {
            //var jsonObj = eval('(' + HfJsonValue + ')');
            var jsonObj = JSON.parse(HfJsonValue);
            for (var i = jsonObj.Table.length - 1; i >= 0; i--) {
                var employee = jsonObj.Table[i];
                $('<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name
                    + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name
                    + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis
                    + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary
                    + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address
                    + '</span></div>').insertAfter('#ResultsDiv');
            }
            $(".resultsdiv:even").addClass("resultseven");
            $(".resultsdiv").hover(function() {
                $(this).addClass("resultshover");
            }, function() {
                $(this).removeClass("resultshover");
            });
        }

    I get 22 records in total. Now I want to paginate these divs, i.e. 5 divs per page. Any suggestion...
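    A hedged client-side paging sketch over the divs rendered above (the #pager container is an assumed extra element on the page):

        var pageSize = 5;

        function showPage(page) {
            // hide everything, then show only the current page's slice of divs
            $(".resultsdiv").hide()
                .slice(page * pageSize, (page + 1) * pageSize).show();
        }

        var pages = Math.ceil($(".resultsdiv").length / pageSize);
        for (var p = 0; p < pages; p++) {
            $("<a href='#'>" + (p + 1) + "</a>")
                .data("page", p)
                .click(function (e) {
                    e.preventDefault();
                    showPage($(this).data("page"));
                })
                .appendTo("#pager");
        }
        showPage(0);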


  • Formatting a table date field from the Model in Codeigniter

    - by Landitus
    Hi, I'm trying to re-format a date from a table in Codeigniter. The controller is for a blog. I was successful when the date conversion happens in the view, but I was hoping to convert the date in the model to keep things in order. Here's the date conversion as it happens in the view; this is inside the posts loop:

        <?php foreach($records as $row) : ?>
        <?php
            $fdate = "%d <abbr>%M</abbr> %Y";
            $dateConv = mdate($fdate, mysql_to_unix($row->date));
        ?>
        <div class="article section">
            <span class="date"><?php echo $dateConv; ?></span>
            ... keeps going ...

    This is the model:

        class Novedades_model extends Model {

            function getAll() {
                $this->db->order_by('date', 'desc');
                $query = $this->db->get('novedades');
                if ($query->num_rows() > 0) {
                    foreach ($query->result() as $row) {
                        $data[] = $row;
                    }
                }
                return $data;
            }
        }

    How can I convert the date in the model? Can I access the date key and refactor it?
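    A hedged way to move the conversion into the model's existing loop (this assumes the date helper is available there, e.g. loaded with $this->load->helper('date')):

        function getAll() {
            $this->db->order_by('date', 'desc');
            $query = $this->db->get('novedades');

            $data = array();
            foreach ($query->result() as $row) {
                // overwrite the date property before handing the row to the view
                $row->date = mdate('%d <abbr>%M</abbr> %Y', mysql_to_unix($row->date));
                $data[] = $row;
            }
            return $data;
        }

    The view then just echoes $row->date directly.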


  • Why might my PHP log file not entirely be text?

    - by Fletcher Moore
    I'm trying to debug a plugin-bloated Wordpress installation; so I've added a very simple homebrew logger that records all the callbacks, which are basically listed in a single, ultimately 250+ row multidimensional array in Wordpress (I can't use print_r() because I need to catch them right before they are called). My logger line is:

        $logger->log("\t" . $callback . "\n");

    The logger produces a dandy text file in normal situations, but at two points during this particular task it is adding something which causes my log file to no longer be encoded properly. Gedit (I'm on Ubuntu) won't open the file, claiming to not understand the encoding. In vim, the culprit corrupt callback (which I could not find in the debugger, looking at the array) is about in the middle and printed as ^@lambda_546, and at the end of file there's this cute guy ^M. The ^M and ^@ are blue in my vim, which has no color theme set for .txt files. I don't know what it means. I tried adding an is_string($callback) condition, but I get the same results. Any ideas?
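    A likely explanation: ^@ is vim's rendering of a NUL byte, and PHP's create_function() returns function names prefixed with "\0" (so "\0lambda_546" really is a string, which is why is_string() doesn't filter it); ^M is just a carriage return. A hedged one-line fix is to escape control bytes before writing:

        $logger->log("\t" . addcslashes($callback, "\0..\37") . "\n");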


  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. So I'm at 1M records, and searches begin to take time. I want to speed them up by archiving data that is more than 3 years old: saving it into a table called orders-archive, and then purging those rows from the orders table. If we need to research something, or a customer wants to pull older information, they still can; but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through older data all the time. These move & purge operations can then be cronned on a weekly basis. I already did some tests and I know that I will slash my search times by about 4 times. So far so good, right?

    However, I was thinking about how to implement the older archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and god knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
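    A hedged sketch of one extension-based approach (Rails 2.x era; the Model/ModelArchive naming scheme is an assumption): a module mixed into each archived model that falls back to the archive class only when the primary lookup misses:

        # falls through to the archive table when the main find raises
        module ArchiveFallback
          def find_with_archive(*args)
            find(*args)
          rescue ActiveRecord::RecordNotFound
            archive_class.find(*args)
          end

          def archive_class
            "#{name}Archive".constantize  # e.g. Order -> OrderArchive
          end
        end

        class Order < ActiveRecord::Base
          extend ArchiveFallback
        end

    Existing Order.find calls stay untouched; only the places that should search the archive switch to find_with_archive.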


  • Import & modify date data in MATLAB

    - by niko
    I have a .csv file with records written in the following form:

        2010-04-20 15:15:00,"8.9915176259e+00","8.8562623697e+00"
        2010-04-20 15:30:00,"8.5718021723e+00","8.6633827160e+00"
        2010-04-20 15:45:00,"8.4484844117e+00","8.4336586330e+00"
        2010-04-20 16:00:00,"1.1106980342e+01","8.4333062208e+00"
        2010-04-20 16:15:00,"9.0643470589e+00","8.6885660103e+00"
        2010-04-20 16:30:00,"8.2133517943e+00","8.2677822671e+00"
        2010-04-20 16:45:00,"8.2499419380e+00","8.1523501983e+00"
        2010-04-20 17:00:00,"8.2948492278e+00","8.2884797924e+00"

    From these data I would like to make clusters: I would like to add a column with a number indicating the hour, so in the case of the first row, a value of 15 has to be added in a new column.

    The first problem is that calling the function

        [numData, textData, rawData] = xlsread('testData.csv')

    creates an empty matrix numData and one-column textData and rawData structures. Is it possible to create any template which recognizes the yyyy, MM, dd, hh, mm, ss values from the data above?

    What I would basically like to do with these data is to categorize the values by hours, so from the example input row

        2010-04-20 15:15:00,"8.9915176259e+00","8.8562623697e+00"

    I would want this to be the output:

        15, 8.9915176259e+00, 8.8562623697e+00

    Update: in MATLAB the line above is recognized as a string: '2010-04-26 13:00:00,"1.0428104753e+00","2.3456394130e+00"'. So the string has to be parsed. Does anyone know how to parse a string and retrieve the timestamp, value1 (1.0428104753e+00) and value2 (2.3456394130e+00) from it as separate values?
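    One hedged way to parse these rows is textscan, whose format string can interleave literal characters with numeric fields, so the timestamp parts and both quoted values come out as numeric columns in one pass (this assumes the file contains only rows in the form shown above):

        fid = fopen('testData.csv');
        c = textscan(fid, '%f-%f-%f %f:%f:%f,"%f","%f"');
        fclose(fid);

        hour   = c{4};   % 15 for the first example row
        value1 = c{7};   % 8.9915176259e+00
        value2 = c{8};   % 8.8562623697e+00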


  • Is there an efficient way in LINQ to use a contains match if and only if there is no exact match?

    - by Peter
    I have an application where I am taking a large number of 'product names' input by a user and retrieving some information about each product. The problem is, the user may input a partial name or even a wrong name, so I want to return the closest matches for further selection. Essentially: if product name A exactly matches a record, return that; otherwise return any contains matches; otherwise return null. I have done this with three separate statements, and I was wondering if there was a more efficient way to do this. I am using LINQ to EF, but I materialize the products to a list first for performance reasons. productNames is a List of product names (input by the user); products is a List of product 'records'.

        var directMatches = (from s in productNames
                             join p in products on s.ToLower() equals p.name.ToLower() into result
                             from r in result.DefaultIfEmpty()
                             select new { Key = s, Product = r });

        var containsMatches = (from d in directMatches
                               from p in products
                               where d.Product == null && p.name.ToLower().Contains(d.Key)
                               select new { d.Key, Product = p });

        var matches = from d in directMatches
                      join c in containsMatches on d.Key equals c.Key into result
                      from r in result.DefaultIfEmpty()
                      select new { d.Key, Product = d.Product ?? (r != null ? r.Product : null) };
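    Since both lists are already in memory, a hedged simpler formulation is one pass per name with an in-memory fallback (names mirror the question; the case-insensitive comparison choice is an assumption):

        var matches = productNames.Select(name =>
        {
            var exact = products
                .Where(p => p.name.Equals(name, StringComparison.OrdinalIgnoreCase))
                .ToList();
            var found = exact.Count > 0
                ? exact
                : products.Where(p => p.name.ToLower().Contains(name.ToLower())).ToList();
            return new { Key = name, Products = found }; // empty list = no match at all
        }).ToList();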


  • SQL Grouping with multiple joins combining results incorrectly

    - by Matt
    Hi, I'm having trouble with my query combining records when it shouldn't. I have two tables, Authors and Publications, related by PublicationID in a many-to-many relationship: each author can have many publications, and each publication has many authors. I want my query to return every publication for a set of authors and include the IDs of all the authors that have contributed to each publication, grouped into one field. (I am working with MySQL.)

    Here are the tables:

        Table: authors               Table: publications
        AuthorID | PublicationID     PublicationID | PublicationName
        1        | 123               123           | A
        1        | 456               456           | B
        2        | 123               789           | C
        2        | 789
        3        | 123
        3        | 456

    I want my result set to be the following:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,2,3
        1        | 456           | B               | 1,3
        2        | 123           | A               | 1,2,3
        2        | 789           | C               | 2
        3        | 123           | A               | 1,2,3
        3        | 456           | B               | 1,3

    This is my query:

        Select Author1.AuthorID, Publications.PublicationID, Publications.PubName,
               GROUP_CONCAT(TRIM(Author2.AuthorID) ORDER BY Author2.AuthorID ASC) AS 'AuthorsAll'
        FROM Authors AS Author1
        LEFT JOIN Authors AS Author2 ON Author1.PublicationID = Author2.PublicationID
        INNER JOIN Publications ON Author1.PublicationID = Publications.PublicationID
        WHERE Author1.AuthorID = "1" OR Author1.AuthorID = "2" OR Author1.AuthorID = "3"
        GROUP BY Author2.PublicationID

    But it returns the following instead:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,1,1,2,2,2,3,3,3
        1        | 456           | B               | 1,1,3,3
        2        | 789           | C               | 2

    It does deliver the desired output when there is only one AuthorID in the where statement. I have not been able to figure it out; does anyone know where I'm going wrong?
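    The collapse comes from grouping only by Author2.PublicationID: the rows for all three Author1 values fall into the same group per publication, so every co-author gets concatenated once per outer author. A hedged fix is to group by the outer author as well, with DISTINCT guarding against any remaining duplicates:

        SELECT Author1.AuthorID, Publications.PublicationID, Publications.PubName,
               GROUP_CONCAT(DISTINCT Author2.AuthorID ORDER BY Author2.AuthorID ASC) AS AuthorsAll
        FROM Authors AS Author1
        LEFT JOIN Authors AS Author2 ON Author1.PublicationID = Author2.PublicationID
        INNER JOIN Publications ON Author1.PublicationID = Publications.PublicationID
        WHERE Author1.AuthorID IN (1, 2, 3)
        GROUP BY Author1.AuthorID, Publications.PublicationID;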


  • Refactoring an ASP.NET 2.0 app to be more "modern"

    - by Wayne M
    This is a hypothetical scenario. Let's say you've just been hired at a company with a small development team. The company uses an internal CRM/ERP-type system, written in .NET 2.0, to manage all of its day-to-day things (let's simplify and say customer accounts and records). The app was written a couple of years ago, when .NET 2.0 was just out, and uses the following architectural designs:

        - Webforms
        - A data layer that is a thin wrapper around SqlCommand that calls stored procedures
        - Rudimentary DTO-style business objects that are populated via the sprocs
        - A "business logic" layer that acts as a gateway between the webform and database (i.e. the code-behind calls that layer)

    Let's say that as more changes and requirements are added to the application, you start to feel that the old architecture is showing its age, and changes are increasingly difficult to make. How would you go about introducing refactoring steps to a) modernize the app (i.e. proper separation of concerns) and b) make sure that the app can readily adapt to change in the organization? IMO the changes would involve:

        - Introduce an ORM like Linq to Sql and get rid of the sprocs for CRUD
        - Assuming that you can't just throw out Webforms, introduce the M-V-P pattern to the forms
        - Make sure the gateway classes conform to SRP and the other SOLID principles
        - Change the logic that is re-used to be web service methods instead of having to reuse code

    What are your thoughts? Again, this is a totally hypothetical scenario that many of us have faced in the past, or may end up facing.
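    To make the M-V-P step concrete, a hedged C# skeleton (all names invented for illustration): the code-behind shrinks to a thin view behind an interface, and the presenter holds the logic so it can be unit-tested without Webforms:

        public interface ICustomerView
        {
            string CustomerName { set; }
        }

        public class CustomerPresenter
        {
            private readonly ICustomerView view;
            private readonly ICustomerRepository repository; // hypothetical abstraction over the data layer

            public CustomerPresenter(ICustomerView view, ICustomerRepository repository)
            {
                this.view = view;
                this.repository = repository;
            }

            public void Display(int customerId)
            {
                // the page's code-behind implements ICustomerView and delegates here
                view.CustomerName = repository.GetById(customerId).Name;
            }
        }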


  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple mysql select query. Currently (shown in the code below) I am simply creating a list of table rows, one per record - so nothing complicated. However, I want to divide it into more than one column, depending on the number of returned results. I've been wrapping my brain around how to count this in the php, and I'm not getting the results I need.

        <table>
        <?
        $query = mysql_query("SELECT * FROM `sometable`");
        while($rows = mysql_fetch_array($query)){
        ?>
            <tr>
                <td><?php echo $rows['someRecord']; ?></td>
            </tr>
        <? } ?>
        </table>

    Obviously there's one column generated. So if the records returned reach 10, I want to create a new column. In other words, if 12 results are returned, I have 2 columns. If I have 22 results, I'll have 3 columns, and so on.
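    A hedged sketch of one way to do it: buffer the rows first, then split them into chunks of 10 and emit one cell per chunk (mysql_* calls kept to match the original code):

        <?php
        $rows = array();
        $query = mysql_query("SELECT * FROM `sometable`");
        while ($row = mysql_fetch_array($query)) {
            $rows[] = $row['someRecord'];
        }

        $columns = array_chunk($rows, 10); // 12 rows -> 2 columns, 22 rows -> 3

        echo '<table><tr>';
        foreach ($columns as $column) {
            echo '<td><table>';
            foreach ($column as $record) {
                echo '<tr><td>' . htmlspecialchars($record) . '</td></tr>';
            }
            echo '</table></td>';
        }
        echo '</tr></table>';
        ?>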


  • SQL Structure of DB table with different types of columns

    - by Dmitry Dvornikov
    I have a problem with the optimization of the structure of the database. I'll try to explain it exactly. I am creating a project where we can add different values, but these values must have different column types in the database (e.g. int, double, varchar). What is the best way to store values of different types in the database? In the project I'm using Propel 1.6. The point is to be able to add a value of type 'int', 'varchar' or another column type, while keeping searches on the table efficient.

    In total, I have two ideas. The first is to create a table "value", which will have the columns "id", "value_int", "value_double", "value_varchar", etc., with the corresponding column types. Depending on the type of the value, records will be saved with the value in the appropriate column (the rest will be NULL). The second solution is to create separate tables such as "value_int", "value_varchar" etc. These would have the columns "id" and "value", where "value" has the relevant type (int, varchar, etc).

    I must admit that I do not believe in either of the above solutions; originally I was thinking about one table "value", where the column would be of type 'text' - but this solution would probably be even worse. I would like to know your opinion on this topic; maybe something else would be better. Thanks in advance.

    EDIT: For example, we have three tables:

        USER: [table of users]
            * id
            * name

        FIELD: [table of profile fields - where the column 'type' is the type of field, e.g. int or varchar]
            * id
            * type
            * name

        VALUE:
            * id
            * user_id  (FK user.id)
            * field_id (FK field.id)
            * value

    So we have an user in each row of the USER table, and the profile is stored in the VALUE table. But each profile field may have a different type (column 'type' in the FIELD table), and based on that I would want the value to go into the appropriate column of the appropriate type.
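    For what it's worth, a hedged sketch of the first idea in MySQL syntax (sizes and index choices are illustrative): one row per (user, field), one typed column populated per row, and a per-type index so typed searches stay efficient:

        CREATE TABLE value (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            user_id  INT NOT NULL,
            field_id INT NOT NULL,
            value_int     INT NULL,
            value_double  DOUBLE NULL,
            value_varchar VARCHAR(255) NULL,
            KEY idx_field_int     (field_id, value_int),
            KEY idx_field_double  (field_id, value_double),
            KEY idx_field_varchar (field_id, value_varchar)
        );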


  • How do I keep users from spoofing data through a form?

    - by Jonathan
    I have a site which has been running for some time now that uses a great deal of user input to build the site. Naturally there are dozens of forms on the site. When building the site, I often used hidden form fields to pass data back to the server so that I know which record to update. An example might be:

        <input type="hidden" name="id" value="132" />
        <input type="text" name="total_price" value="15.02" />

    When the form is submitted, these values get passed to the server, and I update the records based on the data passed (i.e. the price of record 132 would get changed to 15.02). I recently found out that you can change the attributes and values via something as simple as Firebug. So... I open Firebug, change the id value to "155" and the price value to "0.00", and then submit the form. Voila! I view product number 155 on the site and it now says that it's $0.00.

    This concerns me. How can I know which record to update without either a query string (easily modified) or a hidden input element passing the id to the server? And if there's no better way (I've seen literally thousands of websites that pass data this way), then how would I make it so that if a user changes these values, the data on the server side is not executed (or something similar to solve the issue)? I've thought about encrypting the id and then decrypting it on the other side, but that still doesn't protect me from someone changing it and just happening to get something that matches another id in the database. I've also thought about cookies, but I've heard that those can be manipulated as well. Any ideas? This seems like a HUGE security risk to me.
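    The usual remedy is to treat everything from the client as untrusted input: the hidden id may select a record, but the server must independently verify authorization, and security-sensitive values like a price should never be accepted from the form at all. A hedged PHP-flavored sketch ($db, the table, and the session key are illustrative):

        $id = (int) $_POST['id'];

        // re-check on the server that the logged-in user may edit this record
        $stmt = $db->prepare('SELECT owner_id FROM products WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch();
        if (!$row || $row['owner_id'] != $_SESSION['user_id']) {
            die('Not authorized'); // a tampered id fails the ownership check
        }

        // the price is recomputed server-side, never taken from the form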


  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache of frequently used files (images, data records etc., with a TTL of about one week and a size limit of 100MB). There are sometimes more than 15000 files in a directory. On application exit the cache writes a control file with the current cache size, along with other useful information. If the application crashes for some reason (sh.. happens sometimes), I have in such a case to calculate the size of all files on application start, to make sure I know the cache size. My app crashes at this point because of low memory, and I have no clue why. The memory leak detector does not show any leaks at all, and I do not see any either. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on iPhone? Maybe without enumerating the whole contents of the directory? The code is executed on the main thread.

        NSUInteger result = 0;
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain];

        int i = 0;
        while ([dirEnum nextObject]) {
            NSDictionary *attributes = [dirEnum fileAttributes];
            NSNumber *fileSize = [attributes objectForKey:NSFileSize];
            result += [fileSize unsignedIntValue];
            if (++i % 500 == 0) { // I tried lower values too
                [pool drain];
            }
        }

        [dirEnum release];
        dirEnum = nil;
        [pool release];
        pool = nil;

    Thanks, MacTouch
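    A hedged reading of the code: on iPhone OS (which has no garbage collector), -drain is equivalent to -release, so after the first drain the pool is gone - every subsequent autoreleased object (the enumerator's path strings, attribute dictionaries, NSNumbers) simply accumulates, and the final [pool release] over-releases. Recreating the pool after each drain is the usual pattern:

        if (++i % 500 == 0) { // I tried lower values too
            [pool drain];
            pool = [[NSAutoreleasePool alloc] init]; // drain released it; make a fresh one
        }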


  • Loading child entities with JPA on Google App Engine

    - by Phil H
    I am not able to get child entities to load once they are persisted on Google App Engine. I am certain that they are saving, because I can see them in the datastore. For example, I have the following two entities:

        public class Parent implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true")
            private String key;

            @OneToMany(cascade=CascadeType.ALL)
            private List<Child> children = new ArrayList<Child>();

            //getters and setters
        }

        public class Child implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true")
            private String key;

            private String name;

            @ManyToOne
            private Parent parent;

            //getters and setters
        }

    I can save the parent and a child just fine using the following:

        Parent parent = new Parent();
        Child child = new Child();
        child.setName("Child Object");
        parent.getChildren().add(child);
        em.persist(parent);

    However, when I try to load the parent and then try to access the children (I know GAE lazy loads), I do not get the child records:

        //parent already successfully loaded
        parent.getChildren().size(); // this returns 0

    I've looked at tutorial after tutorial and nothing has worked so far. I'm using version 1.3.3.1 of the SDK. I've seen the problem mentioned on various blogs and even the App Engine forums, but the answer is always JDO related. Am I doing something wrong, or has anyone else had this problem and solved it for JPA?
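    Two adjustments are commonly suggested for this symptom, sketched here without a GAE instance to verify against: mark the collection as the inverse side with mappedBy (so DataNucleus treats the pair as one owned relationship), and touch the children while the owning EntityManager is still open, since the lazy list cannot load after it closes:

        // in Parent, a hedged change to the mapping
        @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        private List<Child> children = new ArrayList<Child>();

        // when loading, read the children before closing the EntityManager
        Parent parent = em.find(Parent.class, key);
        int count = parent.getChildren().size(); // forces the fetch while em is open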


  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old that is exhibiting some odd behavior on our latest deployment. The application uses NHibernate for all inserts/updates/selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade). This deployment is on Windows 2008 Server x64, IIS 7.5.

    What I have seen so far is that the application runs, but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work - inserts into some small tables - but most never even make it to the DB. Using SQL Profiler, the inserts/updates never make it to the server, and with log4net turned up to DEBUG and show_sql true, the select statements appear, but the insert/update statements never make it into the log at all and never show up at the server.

    What's even more odd is that the application seems to be oblivious to this: the commandandclose runs without exception (open session in view with an httpmodule), the domain objects come back with uuids generated, etc., but never get persisted.

    Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?
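    A hedged first thing to check: NHibernate only writes on flush, and with open-session-in-view the flush typically happens when a transaction commits. If the module's commit silently stopped firing on the new server, selects would still work while every insert/update quietly stays in memory - which matches these symptoms. Wrapping the writes in an explicit transaction makes the flush deterministic ('session' here is the current ISession):

        using (ITransaction tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(entity); // 'entity' stands in for the real domain object
            tx.Commit();                  // forces the flush; if this throws, the writes were the problem
        }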


  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I just remove the call to dummy, performance is much better, although the collection still contains the same number of records:

        Calling dummy:    Not calling dummy:
        11                0
        81                0
        158               0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy
        );

        Create Type Body test_type As
            Member Procedure dummy As
            Begin
                Null; --# Do nothing
            End dummy;
        End;

        Declare
            v_test_type test_type := New test_type( New t_tab() );

            Procedure run_test As
                start_time NUMBER := dbms_utility.get_time;
            Begin
                For i In 1 .. 200 Loop
                    v_test_Type.tab.Extend;
                    v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
                    v_test_Type.dummy(); --# Removed this line in second test
                End Loop;
                dbms_output.put_line( dbms_utility.get_time - start_time );
            End run_test;
        Begin
            run_test;
            run_test;
            run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?
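    A hedged explanation that fits the numbers: for a member procedure, SELF is an implicit IN OUT parameter, passed copy-in/copy-out by default, so every call to dummy copies the whole object - including the ever-growing collection. Declaring SELF explicitly with NOCOPY asks Oracle to pass it by reference instead:

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy(SELF In Out NOCOPY test_type)
        );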


  • querying huge database table takes too much time in mysql

    - by Vijay
    Hi all, I am running sql queries on a mysql db table that has 110Mn+ unique records for a whole day.

    Problem: Whenever I run any query with a "where" clause, it takes at least 30-40 mins. Since I want to generate most of the data on the next day, I need access to the whole db table. Could you please guide me to optimize/restructure the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, Dual Core dual CPU 3GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
            `DT` date default NULL,
            `GATEWAY` varchar(15) default NULL,
            `USER` bigint(12) default NULL,
            `CACHE` varchar(12) default NULL,
            `TIMESTAMP` varchar(30) default NULL,
            `URL` varchar(60) default NULL,
            `VERSION` varchar(6) default NULL,
            `PROTOCOL` varchar(6) default NULL,
            `WEB_STATUS` int(5) default NULL,
            `BYTES_RETURNED` int(10) default NULL,
            `RTT` int(5) default NULL,
            `UA` varchar(100) default NULL,
            `REQ_SIZE` int(6) default NULL,
            `CONTENT_TYPE` varchar(50) default NULL,
            `CUST_TYPE` int(1) default NULL,
            `DEL_STATUS_DEVICE` int(1) default NULL,
            `IP` varchar(16) default NULL,
            `CP_FLAG` int(1) default NULL,
            `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
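    As defined, the table has no indexes at all, so every WHERE clause is a full scan over 110Mn+ rows. A hedged first step - the column choices below are illustrative, so index whatever the report queries actually filter on:

        ALTER TABLE RAW_LOG_20100504
            ADD INDEX idx_user (USER),
            ADD INDEX idx_gateway_dt (GATEWAY, DT);

    On MyISAM, raising key_buffer well above the configured 128M would also help keep those indexes in memory on a 4 GB machine.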

