Search Results

Search found 7650 results on 306 pages for 'soa record'.

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure, in pseudo-SQL, I hope to explain it a bit better:

        TABLE StructureName {
          Id GUID PK,
          Name varchar(50) NOT NULL
        }

        TABLE Structure {
          Id GUID PK,
          ParentId GUID (FK to Structure),
          NameId GUID (FK to StructureName) NOT NULL
        }

        TABLE Something {
          Id GUID PK,
          RootStructureId GUID (FK to Structure) NOT NULL
        }

    As one can see, Structure is a simple tree structure (ordering of children is not a concern for this problem). StructureName is a simplification of a translation system. Finally, Something is simply something that references the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases.

    There is a requirement to version any changes to the name and/or the tree layout of the Structure table, and previous versions should always be available. There seem to be a few possible ways to tackle this issue, like copying the entire structure, but most approaches cause one to lose referential integrity. For example, if one followed that approach, one would have to duplicate the Something record, given that the root structure would be a new record with a new Id.

    Other avenues are to look into how wikis handle this, or to go a lot further and look at how proper version control systems work. Currently I feel a bit clueless about how to proceed on this in a generic way. Any ideas will be greatly appreciated. Thanks, leppie
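
    A sketch of one common shape for this, offered here as an assumption rather than anything from the question: a type-2 style history table that keeps a stable StructureId, so rows like Something never have to be re-pointed when a new version is written.

        -- Hypothetical sketch: history rows alongside Structure.
        -- StructureId stays stable across versions; each name/layout
        -- change closes the current row and opens a new one.
        CREATE TABLE StructureHistory (
            RevisionId  UNIQUEIDENTIFIER PRIMARY KEY,
            StructureId UNIQUEIDENTIFIER NOT NULL, -- stable identity referenced elsewhere
            ParentId    UNIQUEIDENTIFIER NULL,     -- tree layout at this revision
            NameId      UNIQUEIDENTIFIER NOT NULL, -- name at this revision
            ValidFrom   DATETIME NOT NULL,
            ValidTo     DATETIME NULL              -- NULL = current version
        );

        -- Reading the tree "as of" a point in time:
        SELECT StructureId, ParentId, NameId
        FROM StructureHistory
        WHERE ValidFrom <= @AsOf
          AND (ValidTo IS NULL OR ValidTo > @AsOf);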

  • Stopping cookies being set from a domain (aka "cookieless domain") to increase site performance

    - by Django Reinhardt
    I was reading Google's documentation about improving site speed. One of their recommendations is serving static content (images, CSS, JS, etc.) from a "cookieless domain":

        Static content, such as images, JS and CSS files, don't need to be accompanied by cookies, as there is no user interaction with these resources. You can decrease request latency by serving static resources from a domain that doesn't serve cookies.

    Google then says that the best way to do this is to buy a new domain and point it at your current one:

        To reserve a cookieless domain for serving static content, register a new domain name and configure your DNS database with a CNAME record that points the new domain to your existing domain A record. Configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain. In your web pages, reference the domain name in the URLs for the static resources.

    This is pretty straightforward stuff, except for the part that says to "configure your web server to serve static resources from the new domain, and do not allow any cookies to be set anywhere on this domain". From what I've read, there's no setting in IIS that lets you say "serve static resources", so how do I prevent ASP.NET from setting cookies on this new domain? At present, even if I'm just requesting a .jpg from the new domain, it sets a cookie in my browser, even though our application's cookies are scoped to our old domain. For example, ASP.NET sets an ".ASPXANONYMOUS" cookie that (as far as I'm aware) we're not telling it to set. Apologies if this is a real newb question, I'm new at this! Thanks.

  • Passing Values to Controllers

    - by Dru
    I'm trying to allow users to "favorite" links (that is, create a new Favorite record with their user_id and the link_id). This is what I have so far. When I click favorite (as a user), the new record is assigned the user_id, but the link_id field is nil. How can I pass the link_id into my FavoritesController? The Favorite model belongs_to :user and :link, and my controller is:

        class FavoritesController < ApplicationController
          def create
            @user = User.find(session[:user_id])
            @favorite = @user.favorites.create :link_id => params[:id]
            redirect_to :back
          end
        end

    Note: I've also tried the version below, but when I click favorite there's an error: "Couldn't find Link without an ID."

    Update: with this view code

        <%= link_to "Favorite", :controller => :favorites, :action => :create, :link_id => link.id %>

    and this controller

        class FavoritesController < ApplicationController
          def create
            @user = User.find(session[:user_id])
            @favorite = @user.favorites.create :link_id => :params[:link_id]
            redirect_to :back
          end
        end

    it returns "can't convert Symbol into Integer":

        app/controllers/favorites_controller.rb:4:in []
        app/controllers/favorites_controller.rb:4:in create

    I've tried forcing it into an Integer several ways with .to_i.

  • rails not recognizing project

    - by tipu
    I can create a new project using Rails, and I can use commands like rails migration ..., and I (correctly) get an error because the sqlite gem is missing. But when I try rails migration ... with a project I checked out from GitHub, it doesn't recognize that it is a Rails project. I get:

        Usage: rails new APP_PATH [options]

        Options:
          -d, [--database=DATABASE]   # Preconfigure for selected database
                                      # (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
          -G, [--skip-git]            # Skip Git ignores and keeps
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /usr/bin/ruby1.8
              [--skip-gemfile]        # Don't create a Gemfile

    ...and it goes on. Any ideas?

    Edit: it's probably an important detail that earlier my Rails wasn't working at all; I had to cp /usr/bin/ruby to /usr/bin/local/ruby.

  • Spring 3 MVC validation BindingResult doesn't contain any errors

    - by Travelsized
    I'm attempting to get a Spring 3.0.2 Web MVC project running with the new annotated validation support. I have a Hibernate entity annotated like this:

        @Entity
        @Table(name = "client")
        public class Client implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @Basic(optional = false)
            @Column(name = "clientId", nullable = false)
            @NotEmpty
            private Integer clientId;

            @Basic(optional = false)
            @Column(name = "firstname", nullable = false, length = 45)
            @NotEmpty
            @Size(min=2, max=45)
            private String firstname;

            ... more fields, getters and setters
        }

    I've turned on MVC annotation support in my applicationContext.xml file:

        <mvc:annotation-driven />

    And I have a method in my controller that responds to a form submission:

        @RequestMapping(value="/addClient.htm")
        public String addClient(@ModelAttribute("client") @Valid Client client, BindingResult result) {
            if (result.hasErrors()) {
                return "addClient";
            }
            service.storeClient(client);
            return "clientResult";
        }

    When my app loads in the server, I can see in the server log file that it loads a validator:

        15 [http-8084-2] INFO org.hibernate.validator.util.Version - Hibernate Validator 4.0.2.GA

    The problem I'm having is that the validator doesn't seem to kick in. I turned on the debugger, and when I get into the controller method, the BindingResult contains 0 errors after I submit an empty form. (The BindingResult does show that it contains the Client object as a target.) It then proceeds to insert a record without an Id and throws an exception. If I fill out an Id but leave the name blank, it creates a record with the Id and empty fields. What steps am I missing to get the validation working?

  • Design pattern to integrate Rails with a Comet server

    - by empire29
    I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or by one of several event machines that constantly monitor various input streams and create records when they see data that meets certain criteria.

    It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the model's after_create hook, since not all record creations flow through the controller's create action.

    The final caveat is that I want to push a piece of formatted HTML out to the APE server (rather than a JSON representation of the data). The reasons I want to do this are: 1) I already have logic to produce the desired layout in existing partials; 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and builds all the HTML around it for presentation), which would quickly become a maintenance nightmare. The problem is that this would require "rendering" partials from within the model, which I'm having trouble doing anyhow, because models don't seem to have access to helpers when partials are rendered in this manner.

    Just wondering what the right way to organize all of this is. Thanks.

  • Loading Dimension Tables - Methodologies

    - by Nev_Rahd
    Hello,

    I've recently been working on a project where we need to populate dimension tables from EDW tables. The EDW tables are Type II, i.e. they maintain historical data. The source for a given dimension table may be multiple EDW tables, or a single table with multi-level pivoting on attributes. That is: there may be ten records, one per attribute, that need to be pivoted on domain_code to make a single row in the dimension. Among those ten records there may be some attributes with the same domain_code but different sub_domain_code values, which need a further pivot on the subdomain code.

    For example: domain codes 01, 02, 03 are a straight pivot on domain code, but I might also have domain code 10 with subdomain code/version values 2006, 2007, 2008, 2009. That means I need to split my source table into two sets: one keyed on domain code alone, the other on domain code + version. So far so good. A sketch of the pivot appears after this description.

    When it comes to loading the dimension table: per the design specs for the dimensions (originally written by a third party), what they want is that for every single change to an EDW attribute, the load should assemble all the related records for that natural key (the new one plus the current values of the other attributes), process them to create a new dimension record, and insert it. That means if a single extract contains 100 updated records (one per natural key), it has to assemble 100 + (100 * 9) records to insert into / update the dimension table. How good is this approach?

    The other way I tried is to simply look up the natural key's most recent record in the dimension table, take the values of the attributes that did not change, insert the new record, and expire the current one.

    Which would be the better approach: assembling records on the source side for each attribute change, or looking at the dimension table's most recent record and processing from there? If this doesn't make sense, I'd be happy to elaborate further. Thanks
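
    For illustration only, here is a minimal sketch of the pivot step described above, with all table and column names assumed: one dimension row per natural key, built from the attribute rows by conditional aggregation on domain_code (plus sub_domain_code/version where needed).

        -- Hypothetical pivot sketch (names assumed, not from the project).
        SELECT
            src.natural_key,
            MAX(CASE WHEN domain_code = '01' THEN attribute_value END) AS attr_01,
            MAX(CASE WHEN domain_code = '02' THEN attribute_value END) AS attr_02,
            MAX(CASE WHEN domain_code = '03' THEN attribute_value END) AS attr_03,
            MAX(CASE WHEN domain_code = '10' AND sub_domain_code = '2009'
                     THEN attribute_value END) AS attr_10_v2009
        FROM edw_source src
        GROUP BY src.natural_key;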

  • Hundreds of custom UserControls create thousands of USER Objects

    - by Andy Blackman
    I'm creating a dashboard application that shows hundreds of "items" on a FlowLayoutPanel. Each "item" is a UserControl made up of 12 or so labels. My app queries a database and then creates an "item" instance for each record, populating the labels and textboxes with data before adding it to the FlowLayoutPanel.

    After adding about 560 items to the panel, I noticed that the USER Objects count in my Task Manager had gone up to about 7300, which was much larger than any other app on my machine. I did a quick spot of mental arithmetic (OK, I might have used calc.exe) and figured that 560 * 13 (12 labels plus the UserControl itself) is 7280. So that suddenly gave away where all the objects were coming from.

    Knowing that there is a 10,000 USER object limit before Windows throws in the towel, I'm trying to figure out better ways of drawing these items onto the FlowLayoutPanel. My ideas so far are as follows:

    1) User-draw the "item", using graphics.DrawText and DrawImage in place of many of the labels. I'm hoping that this will mean 1 item = 1 USER object, not 13.

    2) Have one instance of the "item"; then, for each record, populate the instance and use the Control.DrawToBitmap() method to grab an image, and use that in the FlowLayoutPanel (or similar).

    So, does anyone have any other suggestions?

    P.S. It's a zoomable interface, so I have already ruled out paging, as there is a requirement to see all items at once. :( Thanks everyone.

  • Help me! I am not able to update form data to MySQL using PHP and jQuery

    - by Jimson Jose
    I tried and was unable to find the answer I'm looking for. My problem is that I am unable to update the values entered in the form. I have attached all the files. I'm using a MySQL database to fetch data.

    What happens is that I'm able to add and delete records from the form using jQuery and PHP scripts against the MySQL database, but I am not able to update data that was retrieved from the database. The file structure is as follows:

    - index.php is a file with jQuery functions. It displays the form for adding new data to MySQL (using save.php) and a list of all records, viewed without refreshing the page (calling load-list.php to view all records from index.php works fine, as does save.php to save data from the form).
    - Delete is a function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine).
    - Update is a function called from index.php to update data using update-form.php, which retrieves the specific record from the MySQL table (works fine).

    The problem lies in updating data from update-form.php via update.php (in which the UPDATE query for MySQL is written). I had tried many things, and at last figured out that the data is not being transferred from update-form.php to update.php: there is a small problem in the jQuery AJAX function where it is not transferring data to update.php. Something is missing in how update.php is called; execution never enters that page.

    I am new to programming. I collected this script from many forums and made this one, so I was limited in solving this problem. I've come to know that this is a good platform for me and many others, where we get help to create new things. Please guide me with your help to complete my efforts! I will be grateful to all, including this site, which gave me an opportunity to present myself.

    Please find the link below to download all the files (35 KB, zipped, virus-free assurance): download mysmallform files in ZIPped format, including the MySQL query.

    Thanks a lot in advance. May GOD bless YOU and THIS SITE.

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility that allows querying for data with hierarchical filtering. I have a few ideas about how I'm going to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient.

    As an example, imagine that a user is searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search a specific area (i.e. Glasgow) or a larger area (i.e. Scotland). The two approaches I am considering are:

    1. Keep a note of children in the database for each record (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results that have a relation to the selected area.

    The problems I see with these are:

    1. Holding this data in the DB means having to keep track of all changes to the structure.
    2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
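
    Since MSSQL 2005 is available, a third option (a sketch under assumed table and column names, not from the question) is a recursive common table expression, which resolves the hierarchy in one set-based query and avoids both the maintained children field and procedural recursion.

        -- Hypothetical sketch: expand the selected area into all of its
        -- descendants with a recursive CTE, then filter the jobs.
        WITH AreaTree AS (
            SELECT Id
            FROM Areas
            WHERE Id = @SelectedAreaId        -- e.g. Scotland
            UNION ALL
            SELECT a.Id
            FROM Areas a
            INNER JOIN AreaTree t ON a.ParentId = t.Id
        )
        SELECT j.*
        FROM Jobs j
        WHERE j.Category IN (SELECT Id FROM AreaTree);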

  • Optimize master-detail insert statements

    - by Dave Jarvis
    Quest

    After a day of running (against nearly 1 GB of data), a set of statements are tumbling down to 40 inserts per second. I am looking to increase that by an order of magnitude or two.

    SQL Code

    The code to insert the information comes in two parts: a master record and detail records.

    The master record:

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

    The detail records:

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, ' ', 1);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0.5, ' ', 2);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES ((SELECT ID FROM MONTH_REF M
                 WHERE M.DISTRICT_ID = '101' AND M.STATION_ID = '0066'
                   AND M.CATEGORY_ID = '010' AND M.YEAR = 1984 AND M.MONTH = 07),
                0, 'T', 3);

    Proposed Solution

        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := (SELECT LAST_INSERT_ID());

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0, ' ', 1);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0.5, ' ', 2);

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0, 'T', 3);

    Constraints

    The MONTH_REF table has an AUTO_INCREMENT primary key and is indexed on it. The DAILY table has no index and no primary key. A primary key can be added to the DAILY table, if it would help.

    Question

    Is there a more efficient way to execute the (billion or so) insert statements than the proposed solution? Thank you!
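
    One further squeeze on the proposed solution, sketched here as a suggestion rather than the poster's code: batch each master record's detail rows into a single multi-row INSERT, which removes two of the three detail-statement round trips.

        -- Hypothetical refinement: one round trip for all detail rows.
        INSERT INTO MONTH_REF (DISTRICT_ID, STATION_ID, CATEGORY_ID, YEAR, MONTH)
        VALUES ('101', '0066', '010', 1984, 07);

        SET @month_ref_id := LAST_INSERT_ID();

        INSERT INTO DAILY (MONTH_REF_ID, AMOUNT, DAILY_FLAG_ID, DAY)
        VALUES (@month_ref_id, 0,   ' ', 1),
               (@month_ref_id, 0.5, ' ', 2),
               (@month_ref_id, 0,   'T', 3);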

  • FileHelpers cannot map converted field into destination array

    - by jaffa
    I have the following record (reduced for brevity):

        [DelimitedRecord(",")]
        [IgnoreFirst]
        [IgnoreEmptyLines()]
        public class ImportRecord
        {
            [FieldQuoted]
            [FieldTrim(TrimMode.Both)]
            public string FirstName;

            [FieldQuoted]
            [FieldTrim(TrimMode.Both)]
            public string LastName;

            [FieldQuoted]
            [FieldTrim(TrimMode.Both)]
            [FieldOptional]
            [FieldConverter(typeof(TestPropertyConverter))]
            public int[] TestProperty;
        }

    Converter code:

        public class TestPropertyConverter : ConverterBase
        {
            public override object StringToField(string from)
            {
                var ret = from.Split('|').Select(x => Convert.ToInt32(x)).ToArray();
                return ret;
            }
        }

    So an example record could be:

        John, Smith, 1|2|3|4

    I would expect the values 1,2,3,4 to expand and fill the TestProperty array. However, I'm getting the following exception:

        At least one element in the source array could not be cast down to the destination array type.

    I've tried to debug into the code and it seems to blow up in the ExtractFieldValue() function inside FieldBase.cs, where it tries to return out of the function. The following line seems to be the culprit:

        res.ToArray(ArrayType);

    It seems to expect the 'res' variable to be the destination-type array, but it contains one element which is the array itself. Can anyone suggest if I'm doing this wrong, or a possible fix?

  • Compare values in serialized column in Doctrine with Query Builder

    - by ReynierPM
    I'm building a FormType for a Symfony2 project, but I need a query builder on the field, since I need to compare some values with the ones stored in the DB and show the results. This is what I have:

        ....
        ->add('servicio', 'entity', array(
            'mapped' => false,
            'class' => 'ComunBundle:TipoServicio',
            'property' => 'nombre',
            'required' => true,
            'label' => false,
            'expanded' => true,
            'multiple' => true,
            'query_builder' => function (EntityRepository $er) {
                return $er->createQueryBuilder('ts')
                    ->where('ts.tipo_usuario = (:tipo)')
                    ->setParameter('tipo', 1);
            }
        ))
        ....

    But tipo_usuario in the DB table is stored as serialized text, for example:

        record1: value1 | a:1:{i:0;s:1:"1";}
        record2: value2 | a:4:{i:0;s:1:"1";i:1;s:1:"2";i:2;s:1:"3";i:3;s:1:"4";}

    I'll have two different forms (I don't know how to pass the Request to a form). The first one should show only the first record, and the second one the first and second records. For example:

        First form will show:
            checkbox: value1
        Second form will show:
            checkbox: value1
            checkbox: value2

    How can I achieve this? Any help?

  • Is it Possible to Use Constraints on Hierarchical Data in a Self-Referential Table?

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data:

        +--------+-------------+
        | Field  | Type        |
        +--------+-------------+
        | id     | int(10)     |
        | parent | int(10)     |
        | name   | varchar(45) |
        +--------+-------------+

    The table is self-referential in that parent refers to id. So you might have the following data:

        +----+--------+---------------+
        | id | parent | name          |
        +----+--------+---------------+
        |  1 |      0 | fruit         |
        |  2 |      0 | vegetable     |
        |  3 |      1 | apple         |
        |  4 |      1 | orange        |
        |  5 |      3 | red delicious |
        |  6 |      3 | granny smith  |
        |  7 |      3 | gala          |
        +----+--------+---------------+

    Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to cascade on update and prevent deletion of a record if it has any "children." So I used the following:

        CREATE TABLE `test`.`fruit` (
          `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
          `parent` INT(10) UNSIGNED,
          `name` VARCHAR(45) NOT NULL,
          PRIMARY KEY (`id`),
          CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
            REFERENCES `fruit` (`id`)
            ON UPDATE CASCADE
            ON DELETE RESTRICT
        ) ENGINE = InnoDB;

    From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade:

        Cannot delete or update a parent row: a foreign key constraint fails
        (`test`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
        REFERENCES `fruit` (`id`) ON UPDATE CASCADE)

    What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.
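
    For what it's worth, the MySQL manual notes that when a cascaded foreign key action recurses into the same table, InnoDB acts as if RESTRICT were specified, which would explain the error above. A hedged workaround sketch, reusing the question's table, is to do the "cascade" by hand in one transaction:

        -- Hypothetical workaround: re-point the children manually.
        START TRANSACTION;
        SET foreign_key_checks = 0;
        UPDATE fruit SET id = 30 WHERE id = 3;         -- move the parent row
        UPDATE fruit SET parent = 30 WHERE parent = 3; -- re-point its children
        SET foreign_key_checks = 1;
        COMMIT;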

  • MySQL help: counting information on the latest records

    - by ee12csvt
    I need some advice. I have two tables: one holds unique serial numbers of items (item), and the other holds status changes and other information for those items (details).

    The tables are set up as follows:

        item
            itemID
            itemName
            itemDate

        details
            detID
            itemID
            modlvl
            status
            detDate

    All items have at least one record in the details table, but over time the status or the modification level changes (both of these are identified by numbers held in other appropriate tables), and a new record is created each time the status/modlvl changes.

    I want to display a table on my web page using PHP that identifies the different mod levels of the items and shows a count of each of the current statuses of the items.

    Edit: Hi Ronnis, this is an example of the data in the tables and what I want to achieve. The current mod levels range from 1 to 3. The status representations are:

        1  In Use
        2  In Store
        3  Being repaired
        4  In Transit
        5  For Disposal
        6  Disposed
        7  Lost

    item:

        itemID  origMod  created
        1000    1        2009-10-01 22:12:12
        1001    1        2009-10-01 22:12:12
        1002    1        2009-10-01 22:12:12
        1003    1        2009-10-01 22:12:12
        1004    1        2009-10-01 22:12:12
        1005    1        2009-10-01 22:12:12
        1006    1        2009-10-01 22:12:12
        1007    1        2009-10-01 22:12:12
        1008    1        2009-10-01 22:12:12
        1009    1        2009-10-01 22:12:12
        1010    1        2009-10-01 22:12:12

    details:

        detID  itemID  modlvl  detDate     status
        1      1000    1       2009-10-01  1
        2      1001    1       2009-10-01  1
        3      1002    1       2009-10-01  1
        4      1003    1       2009-10-01  1
        5      1004    1       2009-10-01  1
        6      1005    1       2009-10-01  1
        7      1006    1       2009-10-01  1
        8      1007    1       2009-10-01  1
        9      1008    1       2009-10-01  1
        10     1009    1       2009-10-01  1
        11     1010    1       2009-10-01  1
        12     1001    1       2010-02-01  2
        13     1001    1       2010-02-03  4
        14     1001    1       2010-03-01  3
        15     1000    1       2010-03-14  2
        16     1001    2       2010-04-01  4
        17     1006    1       2010-04-01  2
        18     1001    2       2010-04-03  2
        19     1006    1       2010-04-14  4
        20     1006    1       2010-05-01  5
        21     1002    1       2010-05-02  2
        22     1003    1       2010-05-10  2
        23     1010    1       2010-06-01  2
        24     1006    1       2010-06-18  6
        25     1010    1       2010-07-01  7
        26     1007    1       2010-07-02  2
        27     1007    1       2010-07-04  4
        28     1003    1       2010-07-10  2
        29     1007    1       2010-07-11  3
        30     1007    2       2010-07-12  4
        31     1007    2       2010-07-15  2
        32     1001    2       2010-08-31  1
        33     1001    2       2010-09-10  2
        34     1001    2       2010-10-01  4
        35     1008    1       2010-10-01  2
        36     1001    2       2010-10-05  3
        37     1008    1       2010-10-05  4
        38     1008    1       2010-10-10  3
        39     1001    3       2010-10-20  4
        40     1001    3       2010-10-25  2

    Using the tables above, I want to get this result:

        MoLvl  Use  Store  Repd  Transit  Displ  Dispd  Lost  Total
        1      3    3      1     0        0      1      1     9
        2      0    1      0     0        0      0      0     1
        3      0    1      0     0        0      0      0     1
        Total  3    5      1     0        0      1      1     11
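
    A sketch of one way to produce that result, assuming MySQL, the column names above, and that detID increases with time (so the highest detID per item is its current state): take each item's latest detail row, then pivot the statuses with conditional counts.

        -- Hypothetical sketch: latest detail row per item, pivoted by modlvl.
        SELECT d.modlvl,
               SUM(d.status = 1) AS in_use,
               SUM(d.status = 2) AS in_store,
               SUM(d.status = 3) AS being_repaired,
               SUM(d.status = 4) AS in_transit,
               SUM(d.status = 5) AS for_disposal,
               SUM(d.status = 6) AS disposed,
               SUM(d.status = 7) AS lost,
               COUNT(*)          AS total
        FROM details d
        JOIN (SELECT itemID, MAX(detID) AS maxDetID
              FROM details
              GROUP BY itemID) latest
          ON latest.maxDetID = d.detID
        GROUP BY d.modlvl;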

  • Redirect GridView selected-row data to two different pages from two different select buttons (ASP.NET)?

    - by user559800
    I have a GridView ItemTemplate field, namely Status, as mentioned above. I want that when a user clicks the Hold button of a particular row, the record from that row is transferred to another page. That means if I click the Hold button of the first row of the GridView, then seats=35 and booking closed=08:00:00 PM will be transferred to:

        Me.Response.Redirect("Select_seats.aspx?s_no=" & label22.Text.ToString & "&" & "journey=" & label6.Text & "&" & "seater=" & label4.Text & "&" & "sleeper=" & label2.Text & "&" & "service=" & lab5.Text.ToString)

    And if I click the Manage button of the same row, then the record of that row will be transferred to:

        Me.Response.Redirect("Select_nfo.aspx?s_no=" & label22.Text.ToString & "&" & "journey=" & label6.Text & "&" & "seater=" & label4.Text & "&" & "sleeper=" & label2.Text & "&" & "service=" & lab5.Text.ToString)

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list to my page, but this list should not contain just the videos that were uploaded in the previous week: it should consider all videos. I'm currently recording views in a single counter column, so I have no information about when a video was watched. So now I'm searching for the best way to record this data.

    The first solution is the most obvious (and the correct one, as far as I know): have a separate table into which you insert a new row every time you want to record a new view (storing the ID of the video and a timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries against it would be extremely slow (we get about 3 million views a month).

    The second solution isn't as flexible but is easier on the database. I would add 7 columns to the videos table, one for each day of the week: views_monday, views_tuesday, views_wednesday, ... and increment the value in the correct column based on the current day, resetting that day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns.

    What do you think: should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, and I'm using MySQL.
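
    For reference, a minimal sketch of the first approach, with the table and column names assumed; the 3-million-rows-a-month worry is usually handled by indexing the timestamp and periodically rolling old rows up into a summary table.

        -- Hypothetical sketch of the per-view log table.
        CREATE TABLE video_views (
            video_id  INT UNSIGNED NOT NULL,
            viewed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_viewed (viewed_at, video_id)
        ) ENGINE=InnoDB;

        -- Most watched videos over the last 7 days:
        SELECT video_id, COUNT(*) AS views
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views DESC
        LIMIT 10;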

  • Use MySQL trigger to update another table when duplicate key found

    - by Jason
    Been scratching my head on this one, hoping one of you kind people can direct me towards solving this problem.

    I have a MySQL table of customers. It contains a lot of data, but for the purpose of this question we only need to worry about 4 columns: ID, Firstname, Lastname, Postcode.

    The problem is that the table contains a lot of duplicated customers. A new table is being created where each customer is unique; for us, a unique customer is defined by Firstname, Lastname and Postcode. However (this is the important bit), we need to ensure each new "unique" customer record can also be matched to the original multiple entries for that customer in the original table.

    I believe the best way to do this is to have a third table with columns NewUniqueID and OldCustomerID, so we can search this table for NewUniqueID = 123 and it will return multiple OldCustomerID values where appropriate.

    I am hoping to make this work using a trigger and the ON DUPLICATE KEY syntax. What would happen is as follows:

    1. A query is run taking the old customer table and inserting it into the new unique table (a standard INSERT ... SELECT query).
    2. On duplicate key, continue adding records, but add one entry to the third table noting the NewUniqueID that duped along with the OldCustomerID of the record we were trying to insert.

    Hope this makes sense; my apologies if it isn't clear. I welcome and appreciate any thoughts on this one! Many thanks, Jason
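
    As an aside, a sketch of a trigger-free alternative (table and column names assumed): load the distinct customers first, then rebuild the entire mapping with a single join on the natural key.

        -- Hypothetical sketch, no trigger required.
        -- 1) One row per unique (Firstname, Lastname, Postcode):
        INSERT INTO customers_unique (Firstname, Lastname, Postcode)
        SELECT Firstname, Lastname, Postcode
        FROM customers_old
        GROUP BY Firstname, Lastname, Postcode;

        -- 2) Map every old row (duplicates included) to its new ID:
        INSERT INTO customer_map (NewUniqueID, OldCustomerID)
        SELECT n.ID, o.ID
        FROM customers_old o
        JOIN customers_unique n
          ON n.Firstname = o.Firstname
         AND n.Lastname  = o.Lastname
         AND n.Postcode  = o.Postcode;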

  • How do I set the LRECL in C#.NET?

    - by donde
    I have been trying to FTP a .dtl file from .NET to what I believe is an AS/400. The error being reported back to me is "One or more lines have been truncated", and the admin is saying the file is coming over with 256 lines that have variable-length columns. I found this explanation online:

        We have to establish defaults because no specifics about the file exist. The default RECFM is V and LRECL is 256. This means that SAS will scan the input record looking for the CR & LF to tell us that we are at the EOR. If the marker isn't found within the limit of the LRECL, SAS discards the data from the LRECL value to the end of the record and adds a message to the Log that "One or more lines have been truncated".

    So I need to set the LRECL. How do I do this in C# .NET?

        FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpfullpath);
        ftp.Credentials = new NetworkCredential(user, pwd);
        ftp.KeepAlive = false;
        ftp.UseBinary = false;
        ftp.Method = WebRequestMethods.Ftp.UploadFile;

        FileStream fs = File.OpenRead(inputfilepath + ftpfileName);
        byte[] buffer = new byte[fs.Length];
        fs.Read(buffer, 0, buffer.Length);
        fs.Close();

        Stream ftpstream = ftp.GetRequestStream();
        int i = 0;
        int intBlock = 1786;
        int intBuffLeft = buffer.Length;
        while (i < buffer.Length)
        {
            if (intBuffLeft >= 1786)
            {
                ftpstream.Write(buffer, i, intBlock);
            }
            else
            {
                ftpstream.Write(buffer, i, intBuffLeft);
            }
            i += intBlock;
            intBuffLeft -= 1786;
        }
        ftpstream.Close();

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK, and there is also an index on (Status, LeaseTimeout). What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously pushing the lease time out 5 minutes and setting a new lease ticket.

    The problem is that I'm getting deadlocks when I run this code in parallel from a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that with (updlock) should prevent this (it also happens with xlock, by the way, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible.

    So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range?

    Deadlock graph:
    Table schema (simplified):
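
    A common pattern for SQL Server table-backed queues, offered here as a hedged suggestion rather than a diagnosis: add READPAST so competing workers skip rows that are already locked instead of blocking behind them, and collapse the select-then-update window into a single UPDATE.

        -- Hypothetical variant of the dequeue. Note that TOP (1) in an
        -- UPDATE does not guarantee Id order; an ordered CTE can be used
        -- if strict FIFO is required.
        declare @Claimed table (Id int);

        update top (1) q
        set LeaseTimeout = DATEADD(mi, 5, CURRENT_TIMESTAMP),
            LeaseTicket  = newid()
        output inserted.Id into @Claimed
        from QueuedImportJobs q with (updlock, readpast, rowlock)
        where q.Status = @Status
          and (q.LeaseTimeout is null or CURRENT_TIMESTAMP > q.LeaseTimeout);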

  • How to add jQuery AJAX code that returns a row from a PHP file (DB) and then changes the value of an input text?

    - by Nisreen KH
    This PHP code fetches a record from the database:

        <?php
        include("dbconn.inc");
        $data = mysql_query("SELECT `subtitle`, `mian-title`, `page_num`, `note_path`, `flash_path` FROM `main-page` WHERE `page_num` =".$r." LIMIT 1");
        echo $data;
        if (mysql_num_rows($data) == 0) {
            echo "No rows found, nothing to print so am exiting";
            exit;
        }
        while ($row = mysql_fetch_assoc($data)) {
            echo $row['page_num'];
        }
        include("close.php");
        ?>

    And this is the HTML:

        <form method="post" action="data.php">
            <input name="Button1" type="submit" value="&lt;&lt;" />
            <input name="test" style="width: 31px; height: 23px" type="text" />  <!-- holds the value of page_num from the DB -->
            <input name="Button2" type="submit" value="&gt;&gt;" >
        </form>

    I want to add AJAX code to retrieve the record from the PHP script and then change the input values depending on the returned data, and also to prevent the page from refreshing every time.

  • How to effectively execute this cron job?

    - by Lost_in_code
    I have a table with 200 rows. I'm running a cron job every 10 minutes to perform some kind of insert/update operation on the table. The operation needs to be performed on only 5 rows at a time, every time the cron job runs. So in the first 10 minutes records 1-5 are updated, records 6-10 in the 20th minute, and so on. By the time the cron job runs for the 40th time, all the records in the table will have been updated exactly once. This is what is to be achieved, at least, and the next cron job should repeat the process again.

    The problem is that, every time a cron job runs, the insert/update operation should be performed on N rows (not just 5 rows). So, if N is 100, all records would be updated by just 2 cron jobs, and the next cron job would repeat the process again.

    Here's an example. This is the table I currently have (200 records). Every time a cron job executes, it needs to pick N records (which I set as a variable in PHP) and update the time_md5 field with the current time's MD5 value:

        +---------+----------------------------------+
        | id      | time_md5                         |
        +---------+----------------------------------+
        | 10      | 971324428e62dd6832a2778582559977 |
        | 72      | 1bd58291594543a8cc239d99843a846c |
        | 3       | 9300278bc5f114a290f6ed917ee93736 |
        | 40      | 915bf1c5a1f13404add6612ec452e644 |
        | 599     | 799671e31d5350ff405c8016a38c74eb |
        | 56      | 56302bb119f1d03db3c9093caf98c735 |
        | 798     | 47889aa559636b5512436776afd6ba56 |
        | 8       | 85fdc72d3b51f0b8b356eceac710df14 |
        | ..      | .......                          |
        | ..      | .......                          |
        | ..      | .......                          |
        | ..      | .......                          |
        | 340     | 9217eab5adcc47b365b2e00bbdcc011a |  <-- 200th record
        +---------+----------------------------------+

    So the first record (id 10) should not be updated more than once until all 200 records have been updated once; the process should start over once all the records have been updated. I have some idea of how this could be achieved, but I'm sure there are more efficient ways of doing it. Any suggestions?
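
    One way to keep track of the position between runs, sketched under assumed names (the real table isn't named in the question): store a cursor in a one-row state table and resume from it, wrapping around when the end of the table is reached.

        -- Hypothetical sketch of the batch cursor.
        CREATE TABLE cron_state (last_id INT NOT NULL DEFAULT 0);
        INSERT INTO cron_state (last_id) VALUES (0);

        -- Each run: take the next N rows past the saved position.
        SELECT id
        FROM records
        WHERE id > (SELECT last_id FROM cron_state)
        ORDER BY id
        LIMIT 5;  -- N, supplied by the PHP script

        -- After processing: save the highest id handled; if fewer than
        -- N rows came back, reset last_id to 0 so the next run wraps.
        UPDATE cron_state SET last_id = 340;  -- example value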

  • Entity Framework one-to-one relationship mapping flattened in code

    - by Josh Close
    I have a table structure like so:

        Address:
            AddressId int not null primary key identity
            ...more columns

        AddressContinental:
            AddressId int not null primary key identity, foreign key to PK of Address
            County
            State

        AddressInternational:
            AddressId int not null primary key identity, foreign key to PK of Address
            ProvinceRegion

    I don't have control over the schema; this is just the way it is. Now, what I want to do is have a single Address object:

        public class Address
        {
            public int AddressId { get; set; }
            public County County { get; set; }
            public State State { get; set; }
            public ProvinceRegion ProvinceRegion { get; set; }
        }

    I want to have EF pull it out of the database as a single entity. When saving, I want to save the single entity and have EF know to split it into the three tables. How would I map this in EF 4.1 Code First? I've been searching around and haven't found anything that matches my case yet.

    Update: An address record will have a record in Address and one in either AddressContinental or AddressInternational, but not both.

  • How do you bind SQL Data to a .NET DataGridView?

    - by Jordan S
    I am trying to bind a table in a SQL database to a DataGridView control. I would like to make it so that when the user enters a new line of data in the DataGridView, a record is automatically added to the database. Is there a way to do this using LINQ to SQL? I have tried the code below, but after I add a new entry I don't think the data gets added to the DB. Please help!

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers
                   select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;

    I tried adding DB.SubmitChanges() to the CellValueChanged event handler, and that partially works. If I click the bottom empty row, it automatically fills in the ID (identity) column of the table with a "0" instead of the next unused value. If I change that value manually to the next available one, it adds the new record fine, but if I leave it at 0 it does nothing. How can I fix this?

  • Do I lose the benefits of macro recording if I develop Excel apps in Visual Studio?

    - by DanM
    I've written lots of Excel macros in the past using the following development process:

    1. Record a macro.
    2. Open the VBA editor.
    3. Edit the macro.

    I'm now experimenting with a Visual Studio 2008 "Excel 2007 Add-In" project (C#), and I'm wondering if I will have to give up this development process.

    Questions:

    1. I know I can still record macros using Excel, but is there any way to access the resulting code in Visual Studio? Or do I just have to copy and paste, then C#-ize it?
    2. What happens with my "Personal Macro Workbook"? Can I use the macros I have stored in there within C#? Or is there some way to convert them to C#?
    3. If there is some support for opening and editing VBA macros in Visual Studio, can you provide a very brief summary of how it works or point me to a good reference?
    4. Do you have any other tips for transitioning from writing macros in VBA using Excel's built-in editor to writing them in C# with Visual Studio?
