Search Results

Search found 6728 results on 270 pages for 'srv record'.

Page 87/270 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • PHP 2D Array to MySQL Database

    - by Ashley Ward
    I have a PHP 2D array with many keys, each holding one value, and I need to put this into a MySQL database. The database table has 8 fields, e.g. Field1, Field2, Field3, etc. I am trying to ensure value1 => field1, value2 => field2, value3 => field3 and so on; when one record is full (i.e. after 8 values) a new record should be created for the next 8 values, and so on. I am aware that the 1st, 9th, 17th, 25th values etc. will need an insert statement and the intermediate values will need an update statement. What is the best way of going about this?
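
    One way to avoid the insert-then-update pattern entirely is to chunk the values into groups of 8 and write each group as a complete row. A minimal PDO sketch, where the table/column names (MyTable, Field1..Field8) and connection details are placeholders:

        <?php
        // $data is the original 2D array; take its values in order.
        $values = array_values($data);

        $pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $stmt = $pdo->prepare(
            'INSERT INTO MyTable (Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8)
             VALUES (?, ?, ?, ?, ?, ?, ?, ?)'
        );

        foreach (array_chunk($values, 8) as $row) {
            $row = array_pad($row, 8, null);  // pad the final partial chunk with NULLs
            $stmt->execute($row);
        }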

    Read the article

  • Resize Ext.form.ComboBox to fit its content

    - by ITRushn
    There are quite a few solutions on the Ext forums, but I wasn't able to get any of them to work. It seems I am missing something minor. I need to resize the combobox to fit its content when it's first created. I do not need to worry about resizing it when the content changes. Are there any working examples using ExtJS 3.2? Current code:

        var store = new Ext.data.ArrayStore({
            fields: ['view', 'value', 'defaultselect'],
            data: Ext.configdata.dataCP
        });

        comboCPU = new Ext.form.ComboBox({
            tpl: '<tpl for="."><div class="x-combo-list-item"><b>{view}</b><br /></div></tpl>',
            store: store,
            displayField: 'view',
            width: 600,
            typeAhead: true,
            forceSelection: true,
            mode: 'local',
            triggerAction: 'all',
            editable: false,
            emptyText: 'empty text',
            selectOnFocus: true,
            listeners: {
                select: AdjustPrice,
                change: AdjustPrice,
                beforeselect: function (combo, record, index) {
                    return ('false' == record.data.disabled);
                }
            },
            applyTo: 'confelement'
        });

    I've also tried removing width: 600 and replacing it with minListWidth: 600, but that didn't fix the issue either.

    Read the article

  • Need Help on entity framework

    - by Sarathi1904
    I have 3 tables (Roles, Actions and RoleActionLinks). The Roles table has a few columns (RoleID, RoleName, Desc). The Actions table has a few columns (ActionID, ActionName, Desc). RoleActionLinks stores the association between Roles and Actions and has only the RoleID and ActionID columns. When I created the data model (edmx), it shows only Role and Action as entities; I did not find the RoleActionLink table. Even though there is no direct relation between the Roles and Actions tables, the two are automatically related through the RoleActionLink table. When I create a new Action, an Action record is populated in the Actions table (this works fine). At the same time, I need to populate a record in RoleActionLinks, but I don't have an entity to populate. Please tell me how to accomplish this.
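
    Because RoleActionLinks holds nothing but the two foreign keys, the designer maps it as a many-to-many association and hides the join table behind the Role.Actions / Action.Roles navigation properties; the link row is created by adding to one of those collections. A rough sketch, where the context name (MyEntities), the entity/property names and roleId are assumptions based on the default EDMX mapping of these tables:

        using (var context = new MyEntities())
        {
            var newAction = new Action { ActionName = "EditReport", Desc = "Can edit reports" };

            // Associate the new Action with an existing Role; EF inserts the
            // hidden RoleActionLinks row when the association is saved.
            var role = context.Roles.First(r => r.RoleID == roleId);
            role.Actions.Add(newAction);

            context.SaveChanges();
        }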

    Read the article

  • How to check if a DateTime range is within another 3 month DateTime range

    - by Jamie
    Hi, I have a Start Date and End Date per record in a db. I need to check where each time period falls within a 2 year period broken into quarters, then display which quarters each record falls into:

        Quarter 1 includes Jun 09, Jul 09, Aug 09
        Quarter 2 includes Sep 09, Oct 09, Nov 09
        Quarter 3 includes Dec 09, Jan 10, Feb 10
        Quarter 4 includes Mar 10, Apr 10, May 10
        Quarter 5 includes Jun 10, Jul 10, ...

    E.g. 01/10/09 - 01/06/10 would fall into quarters 2, 3, 4 & 5. I am very new to .NET, so any examples would be much appreciated.
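
    Since the quarters are just consecutive 3-month blocks starting 1 June 2009, any date can be turned into a quarter number with month arithmetic, and a record's quarters are simply the range from the start date's quarter to the end date's quarter. A small C# sketch (class and method names are mine; it assumes the dates fall inside the reporting window):

        using System;
        using System.Collections.Generic;

        static class QuarterHelper
        {
            // Quarter 1 starts 1 June 2009 and each quarter is exactly 3 months long.
            public static int QuarterOf(DateTime date)
            {
                int monthsFromBase = (date.Year - 2009) * 12 + (date.Month - 6);
                return monthsFromBase / 3 + 1;
            }

            public static IEnumerable<int> QuartersCovered(DateTime start, DateTime end)
            {
                for (int q = QuarterOf(start); q <= QuarterOf(end); q++)
                    yield return q;
            }
        }

        // QuarterHelper.QuartersCovered(new DateTime(2009, 10, 1), new DateTime(2010, 6, 1))
        // yields 2, 3, 4, 5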

    Read the article

  • jqGrid local date manipulation; problem with row ids when deleting and adding new rows

    - by Sam
    I'm using jqGrid as a client-side grid input, allowing the user to input multiple records before POSTing all the data back at once. I'm having a problem where, if the user has added a few records (say 3), the ids for the records will be 1, 2, 3. If the user deletes record 2, you're left with 1 and 3 as the record ids. When the user now adds a new record, jqGrid assigns that record the id 3 again, since it just seems to count the total records and increment by one for the new record. This causes problems when selecting rows, as the row ids are now 1, 3 and 3. Does anyone know how to access the row ids of records? I could probably use the afterSubmit event and reassign the row ids increasing from 1 (so after I delete row id 2, this would set the other rows' ids to 1 and 2). Any other suggestions to solve this problem? Thanks
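
    One low-tech fix is to stop reusing ids altogether: keep a counter that only ever increases and pass it explicitly as the rowid when adding rows, so a deleted id is never handed out again. A sketch, with '#recordGrid' as a placeholder grid id:

        // Counter that never decreases, so new rows never collide with old ids.
        var nextRowId = 1;

        function addRecord(rowData) {
            jQuery('#recordGrid').jqGrid('addRowData', nextRowId, rowData);
            nextRowId += 1;
        }

        function deleteRecord(rowId) {
            jQuery('#recordGrid').jqGrid('delRowData', rowId);
            // nextRowId is deliberately left untouched.
        }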

    Read the article

  • Can't figure out how to list all the people that don't live in same City as...

    - by AspOnMyNet
    I'd like to list all the people (Person table) that don't live in the same city as any of the cities listed in the Location table. Thus, if the Location table holds a record with City='New York' and State='Moon', but the Person table holds a record with FirstName='Someone', City='New York' and State='Mars', then Someone is listed in the resulting set, since she lives in the New York located on Mars and not the New York located on Moon, so we're talking about different cities with the same name. I tried solving it with the following query, but the results are wrong:

        SELECT Person.FirstName, Person.LastName, Person.City, Person.State
        FROM Person
        INNER JOIN Location
            ON (Person.City <> Location.City AND Person.State = Location.State)
            OR (Person.City = Location.City AND Person.State <> Location.State)
            OR (Person.City <> Location.City AND Person.State <> Location.State)
        ORDER BY Person.LastName;

    Any ideas?
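
    The join produces a row for every Person/Location pair that differs, which is why the results are wrong. What you actually want is the people for whom no matching (City, State) pair exists in Location at all, which is an anti-join; a sketch using NOT EXISTS:

        SELECT p.FirstName, p.LastName, p.City, p.State
        FROM Person AS p
        WHERE NOT EXISTS (
            SELECT 1
            FROM Location AS l
            WHERE l.City  = p.City
              AND l.State = p.State
        )
        ORDER BY p.LastName;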

    Read the article

  • Converting SQL statement into Linq

    - by DMan
    I'm trying to convert the following to a LINQ to SQL statement in C#. Can anyone give me a hand? Basically my table keeps a record of the full history of changes, such that the row with the max created date for each seedlot is the most recent record and the correct one to show.

        SELECT reports.*
        FROM [dbo].[Reports] reports
        WHERE reports.createdDate IN (
            SELECT MAX(report_max_dates.createdDate)
            FROM [dbo].[Reports] report_max_dates
            GROUP BY report_max_dates.Lot
        )

    So far this is what I have:

        var result = (from report in db.Reports
                      where report.createdDate ==
                          (from report_max in db.Reports
                           group report_max by report_max.Lot into report_max_grouped
                           select report_max_grouped).Max()
                      select report);

    I can't figure out how to get the MAX dates for all reports and how to do an IN statement on report.createdDate. Thanks, Dman
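
    The IN (...) clause maps to Contains() in LINQ to SQL, so a rough equivalent of the SQL above (using the property names from your snippet) is:

        // Most recent createdDate per Lot, then keep the reports whose
        // createdDate is in that set; Contains() translates to a SQL IN clause.
        var latestDates = from r in db.Reports
                          group r by r.Lot into g
                          select g.Max(x => x.createdDate);

        var result = from report in db.Reports
                     where latestDates.Contains(report.createdDate)
                     select report;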

    Read the article

  • SQL count NULL cells

    - by Giuseppe
    Dear All, I have the following problem. I have a table in a db with many columns. I can run different kinds of select queries to show, for example, for each record that satisfies a condition: all cells from columns with names ending in _t0, all cells from columns with names ending in _t1, and so on. To get the column lists used to form the queries I use the information schema. Now, the problem: each query returns a record with a subset of the columns of the big table. This means that I can get a row of (all!) NULLs. How can I ask my query to reject such rows without having to type the column names in explicitly (i.e. by saying WHERE col_1 IS NOT NULL, col_2 IS NOT NULL, ...)? Is it possible? Thanks in advance! Sep
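
    Since the column list is already generated from the information schema, the same list can be reused to build a single COALESCE filter: COALESCE returns the first non-NULL argument, so it is NULL only when every listed column is NULL. A sketch of the idea with stand-in column names (if the column types don't mix well inside one COALESCE, generate a chain of OR col IS NOT NULL tests instead):

        SELECT col_a_t0, col_b_t0, col_c_t0
        FROM big_table
        WHERE some_condition = 1
          -- NULL only when every selected column is NULL, so all-NULL rows are rejected
          AND COALESCE(col_a_t0, col_b_t0, col_c_t0) IS NOT NULL;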

    Read the article

  • Rails 3.1 text_field form helper and jQuery datepicker date format

    - by DanyW
    I have a form field in my Rails view like this:

        <%= f.text_field :date, :class => "datepicker" %>

    A javascript function converts all input fields of that class to a jQuery UI datepicker.

        $(".datepicker").datepicker({
            dateFormat : "dd MM yy",
            buttonImageOnly : true,
            buttonImage : "<%= asset_path('iconDatePicker.gif') %>",
            showOn : "both",
            changeMonth : true,
            changeYear : true,
            yearRange : "c-20:c+5"
        })

    So far so good. I can edit the record and it persists the date correctly all the way to the DB. My problem is when I want to edit the record again: when the form is pre-populated from the record, the date displays in 'yyyy-mm-dd' format. The javascript only formats dates which are selected in the datepicker control. Is there any way I can format the date retrieved from the database? At which stage can I do this? Thanks, Dany. Some more details following the discussion below: my form is spread across two files, a view and a partial. Here's the view:

        <%= form_tag("/shows/update_individual", :method => "put") do %>
          <% for show in @shows %>
            <%= fields_for "shows[]", show do |f| %>
              <%= render "fields", :f => f %>
            <% end %>
          <% end %>
          <%= submit_tag "Submit" %>
        <% end %>

    And here's the _fields partial view:

        <p>
          <%= f.label :name %>
          <%= f.text_field :name %>
        </p>
        <p>
          <%= f.date %>
          <%= f.label :date %>
          <%= f.text_field :date, :class => "datepicker" %>
        </p>

    Controller code:

        def edit_individual
          @shows = Show.find(params[:show_ids])
        end
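
    One approach is to hand the text field an explicitly formatted value, so the pre-populated string already matches the datepicker's 'dd MM yy' display format. A sketch of the change to the partial; it assumes the attribute is a Date/DateTime and that the update action parses 'dd MM yy' back (e.g. with Date.strptime), since the field will no longer post ISO-formatted dates:

        <%# _fields partial: format the stored date to match the picker's display format %>
        <p>
          <%= f.label :date %>
          <%= f.text_field :date, :class => "datepicker",
                :value => (f.object.date.strftime("%d %B %Y") if f.object.date) %>
        </p>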

    Read the article

  • Write a sql for searching with multiple conditions

    - by Lu Lu
    Hello everyone, I have a table Student with 2 fields: Name nvarchar(256) and Age int. The user will use a WinForms application to input a Name and an Age for searching. If the inputted Name is empty, the sql should not filter on the Name field. If the inputted Age is 0, the sql should not filter on the Age field. If Name is NULL and the inputted Name is empty, the record is matched. If Name is NULL and the inputted Name is not empty, the record is not matched. The same rules apply to the Age field. My problem is how to write a sql query like this. P/S: I use SQL Server 2005. Please help me. Thanks.
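
    Each filter can be phrased as "either the parameter is the ignore value, or the column equals the parameter"; since an equality test against a NULL column is never true, NULL rows are kept only when the parameter says ignore, which matches the rules above. A sketch with @Name and @Age as the parameters passed from the WinForms application:

        SELECT Name, Age
        FROM Student
        WHERE (@Name = N'' OR Name = @Name)  -- empty input: ignore Name; a NULL Name never equals a non-empty @Name
          AND (@Age = 0 OR Age = @Age)       -- zero input: ignore Age; a NULL Age never equals a non-zero @Age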

    Read the article

  • Hard to append a table with many records into another without generating duplicates

    - by Bill Mudry
    I may seem to be a bit wordy at first, but the hope is that it will be easier for all of you to understand what I am doing in the first place. I have an uncommon but enjoyable activity of collecting as many species of wood from around the world as I can (over 2,900 so far). Ok, that is the real world. Meanwhile I have spent over 8 years compiling over 5.8 meg of text data on all the woods of the world. That got so large that learning some basic PHP and MySQL was most welcome, so I could build a new database-driven home for all this research. I am still slow at it but getting there.

    The original premise was to find evidence of as many species of wood in the world as I can. The more names identified, the more successful the project. I have named the project TAXA for ease of conversation (short for Taxonomy). You are most welcome to take a look at what I have so far at www.prowebcanada.com/taxa. It is 95% dynamically driven. So far I am reporting about 6,500 botanical wood names and, as said above, the more I can report, the more successful the project.

    I have a file of all the woods in the second largest wood collection in the world, the Tervuren wood collection in the Netherlands, with over 11,300 wood names even after cleaning out all duplicates. That is almost twice the number I am reporting now, so porting all the new wood names from Tervuren to the 'species' table where I keep the reported data would be a major and desirable advancement in the project. At one point I was able to add all the Tervuren records to the species table, but over 3,000 duplicates also formed. They were not in the Tervuren file in the first place but represent the same wood names common to both files. It is common sense that there would be woods common to both that, when merged, would create new duplicates.

    At one point, and with the help of others from another forum, I may very well have finally got the proper SQL statement. When I ran it, though, the system said (semi-amusingly at first) that it had gone away! After looking up on the Net what could have caused this, one likely reason is that the MySQL timeout lapses, probably because of the large size of the files I am running. I am running this on a rented account on GoDaddy, so I cannot go about adjusting any config file.

    For safety, I copied the tervuren.sql file as tervuren_target.sql and the species.sql file as species_master.sql to use as working files, just to make sure I protect the original files from destruction or damage. Later I can rename species_master back to just species.sql once I am happy that everything worked well.

    The species table has about 18 columns in it, but only 5 columns match the columns in the Tervuren file (name for name, and collation also). The rest of the columns are just along for the ride, so to speak. The common key is the 'species_name' column in both. I am not sure it is proper to call one a primary key and the other a foreign key, since there really is no relational connection between them. One is just more data for the other and can disappear afterwards, never to be referred to by the working code in the application.

    I have been very surprised and flabbergasted at how hard it can be to append records from one large table into another (with the same column names plus others) without generating NEW duplicates in the process. Watch out for thinking that a SELECT DISTINCT statement may do the job, because absolutely NO records in the species table must get destroyed in the process, and there is no way (well, that I know of) to tell the DISTINCT command this. Yes, the original 'species' table has duplicates in it even before all this but, trust me, they have to be removed the long hard way, manually record by record, or I will lose precious information. It is more important to just make sure no NEW duplicates form when bringing the new names from tervuren_target.species_name into species.species_name.

    I am hoping and thinking that a straight SQL solution should work, except for that nasty timeout. How do I get past that? Could it mean that I may have to turn to a PHP plus SQL method? Or would I have to break up the Tervuren file into a few smaller ones and run them independently (I hope not)? So far, what seems like it should be easy has proven to be unexpectedly tricky. I appreciate any help you can give, but start from the assumption that this may be harder to do right than it may seem on the surface.

    By the way, I am running a quad 64-bit system with Windows 7, so at least I have some fairly hefty power on the client end. I have a direct ethernet cable feeding a cable connection to the Internet. Once I get an algorithm and code working for this, I also have many other lists to process that could make the 'species' table grow even more. It could be equivalent to (ahem) lighting a rocket under my project (especially compared to doing this record by record manually)!

    This is my first time in this forum, so I do not know how I can receive any replies. Do I have to come back here periodically, or are replies emailed out also? It would be great if you CC'd copies to me at billmudry at rogers.com :-) Much thanks for your patience and help, Bill Mudry, Mississauga, Ontario, Canada (next to Toronto).
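
    The core of the job is an insert that skips any name already present in species, which avoids creating new duplicates without touching the existing rows; doing it in slices also works around the hosting timeout. A rough MySQL sketch, assuming species_name is the matching column in both tables and the remaining species columns can stay NULL or default:

        -- Insert only the Tervuren names that are not already in species.
        -- Re-run with the LIMIT batch until no rows are inserted, if the
        -- full statement hits the shared-host timeout.
        INSERT INTO species (species_name)
        SELECT DISTINCT t.species_name
        FROM tervuren_target AS t
        LEFT JOIN species AS s
               ON s.species_name = t.species_name
        WHERE s.species_name IS NULL
        LIMIT 1000;

    An index on species_name in both tables keeps the anti-join fast; alternatively, a UNIQUE index on species.species_name plus INSERT IGNORE gives the same skip-duplicates behaviour.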

    Read the article

  • automatic id generation which is a primary key

    - by abhi
    In VB.NET, while entering a new entry I want to assign the record a unique id. In the numeric case I do it this way:

        Dim ItemID As Integer
        KAYAReqConn.Open()
        SQLCmd = New SqlCommand("SELECT ISNULL(MAX(ItemID),0) AS ItemID from MstItem", ReqConn)
        Dim dr As SqlDataReader
        dr = SQLCmd.ExecuteReader
        If dr.HasRows Then
            dr.Read()
            ItemID = dr("ItemID") + 1
        End If
        dr.Close()

    In this case I'm using ItemID as the unique id and the format is 1, 2, 3, ...; I find the max and assign it to the new record. But how do I assign the next id if the previous ids are of the form a00001, a00002, a00003, a00004 and so on? How do I produce a unique id in that case?
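
    One way is to strip the prefix, take the numeric maximum, add one and pad back to five digits entirely inside the SQL statement, so the VB.NET side stays the same apart from reading a string. A sketch, assuming the prefix is always the single letter 'a' and the numeric part is always five digits:

        SELECT 'a' + RIGHT('00000' +
                   CAST(ISNULL(MAX(CAST(SUBSTRING(ItemID, 2, 5) AS INT)), 0) + 1 AS VARCHAR(5)), 5)
               AS NextItemID
        FROM MstItem

    As with the numeric version, a MAX-plus-one scheme can hand two concurrent users the same id; an IDENTITY column (formatted for display) or a dedicated key table is safer if that matters.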

    Read the article

  • Comparing tables with filtering in different databases and visualizing diffs/pushing changes

    - by MicMit
    I am building a GUI in C# for a content manager and am looking for tools, code snippets or open libraries (code ideally in C#) which allow me the following:

    1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, which might be side-by-side or just a table of records added with some fields and a table of records deleted with some fields. Considering that this task is very common, I guess certain components should exist? Components that either return collections from the given parameters for further visualizing, or simply produce e.g. an html report.

    2. I need to push the changes for the filter I mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably certain SQL code generators - components - for this, either in addition to the diff tools or standalone.
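
    Whatever component ends up rendering the result, the raw additions and deletions for one filtered slice can be computed with two EXCEPT queries (SQL Server 2005+), assuming both databases are reachable from one connection; the table and column names below are placeholders:

        -- Rows in test that are missing from production (additions to push)
        SELECT listname, item_id, item_value FROM X.dbo.A WHERE listname = 'XYZ'
        EXCEPT
        SELECT listname, item_id, item_value FROM Y.dbo.A WHERE listname = 'XYZ';

        -- Rows in production that are no longer in test (deletions to push)
        SELECT listname, item_id, item_value FROM Y.dbo.A WHERE listname = 'XYZ'
        EXCEPT
        SELECT listname, item_id, item_value FROM X.dbo.A WHERE listname = 'XYZ';

    Under this scheme an update shows up as a deletion/addition pair for the same key, which is usually enough to drive either a side-by-side view or a generated update script.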

    Read the article

  • Replay attacks for HTTPS requests

    - by MatthewMartin
    Let's say a security tester uses a proxy, say Fiddler, and records an HTTPS request using the administrator's credentials; on replay of the entire request (including session and auth cookies) the security tester is able to successfully (re)record transactions. The claim is that this is a sign of a CSRF vulnerability. What would a malicious user have to do to intercept the HTTPS request and replay it? Is this a task for script kiddies, well funded military hacking teams, or time-traveling alien technology? Is it really so easy to record the SSL sessions of users and replay them before the tickets expire? No code in the application currently does anything interesting on HTTP GET, so AFAIK tricking the admin into clicking a link or loading an image with a malicious URL isn't an issue.

    Read the article

  • Optimizing ROW_NUMBER() in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows:

        WITH TempTable AS (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering
            FROM dbo.DataTable
        )
        SELECT [Current].*, Previous.Date_Time AS PreviousDateTime
        FROM TempTable AS [Current]
        INNER JOIN TempTable AS Previous
            ON [Current].Machine_ID = Previous.Machine_ID
            AND Previous.Ordering = [Current].Ordering + 1

    The problem is, it goes really slowly (several minutes on a table with about 10k entries). I tried creating separate indices on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query to go faster?
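
    Two things usually help here: a composite index that matches the PARTITION BY / ORDER BY, and avoiding the CTE self-join altogether with OUTER APPLY, which fetches the previous reading per row with a single ordered seek. A sketch (the index name is arbitrary):

        CREATE NONCLUSTERED INDEX IX_DataTable_Machine_Date
            ON dbo.DataTable (Machine_ID, Date_Time);

        SELECT d.*, prev.Date_Time AS PreviousDateTime
        FROM dbo.DataTable AS d
        OUTER APPLY (
            SELECT TOP 1 p.Date_Time
            FROM dbo.DataTable AS p
            WHERE p.Machine_ID = d.Machine_ID
              AND p.Date_Time < d.Date_Time
            ORDER BY p.Date_Time DESC
        ) AS prev;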

    Read the article

  • Is it safe to convert a mysqlpp::sql_blob to a std::string?

    - by Runcible
    I'm grabbing some binary data out of my MySQL database. It comes out as a mysqlpp::sql_blob type. It just so happens that this BLOB is a serialized Google Protobuf. I need to de-serialize it so that I can access it normally. This gives a compile error, since ParseFromString() is not intended for mysqlpp::sql_blob types:

        protobuf.ParseFromString( record.data );

    However, if I force the cast, it compiles OK:

        protobuf.ParseFromString( (std::string) record.data );

    Is this safe? I'm particularly worried because of this snippet from the mysqlpp documentation: "Because C++ strings handle binary data just fine, you might think you can use std::string instead of sql_blob, but the current design of String converts to std::string via a C string. As a result, the BLOB data is truncated at the first embedded null character during population of the SSQLS. There's no way to fix that without completely redesigning either String or the SSQLS mechanism." Thanks for your assistance!
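
    The warning quoted above is about the conversion going through a C string, which stops at the first embedded null byte, so the forced cast risks silently truncating the BLOB. A safer sketch is to hand protobuf the raw pointer and an explicit length via ParseFromArray(); this assumes sql_blob (mysqlpp::String) exposes data() and length(), which is how its documentation describes working with binary data:

        // Parse from the raw bytes with an explicit length, so embedded
        // null bytes in the BLOB cannot truncate the data.
        if (!protobuf.ParseFromArray(record.data.data(),
                                     static_cast<int>(record.data.length()))) {
            // handle the parse failure (corrupt or incompatible BLOB)
        }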

    Read the article

  • Optimising Database Calls

    - by Dwaine Bailey
    I have a database that is filled with information for films, which is (in turn) read into the database from an XML file on a webserver. What happens is the following:

        1. Gather/parse the XML and store the film info as objects
        2. Begin statement
        3. For every film object we found:
           - Check to see if a record for the film exists in the database
           - If there is no film record, write the data for the film
        4. Commit statement

    Currently I just test for the existence of a film using (the very basic):

        SELECT film_title FROM film WHERE film_id = ?

    If that returns a row, then the film exists; if not, then I need to add it. The only problem is that there are many, many hundreds of records in the database (lots of films!) and, because it has to check for the existence of a film before it can write it, the whole process ends up taking quite a while (about 27 seconds for 210 films). Is there a more efficient method of doing this, or any suggestions in general? The programming language is Objective-C and the database is sqlite3. Thanks, Dwaine
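
    If film_id is declared PRIMARY KEY or UNIQUE, sqlite can do the existence check for you: INSERT OR IGNORE skips rows whose key already exists, so the per-film SELECT disappears and the whole import is one prepared statement executed inside a single transaction. A sketch of the statements involved (columns other than film_id/film_title are placeholders):

        BEGIN TRANSACTION;

        -- Prepared once (e.g. with sqlite3_prepare_v2) and executed once per
        -- film object; existing film_ids are skipped rather than rewritten.
        INSERT OR IGNORE INTO film (film_id, film_title, release_year)
        VALUES (?, ?, ?);

        COMMIT;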

    Read the article

  • How do you make a Custom Data Generator for SQL XML DataType.

    - by Keith Sirmons
    Howdy, I am using Visual Studio 2010 and am playing around with the Database Projects. I am creating a DataGenerationPlan to insert data into a simple table, in which one of the column datatypes is XML. Out of the box, the generation plan uses the Regular Expression generator and generates something like this : HGcSv9wa7yM44T9x5oFT4pmBkEmv62lJ7OyAmCnL6yqXC2X.......... I am looking at creating a custom data Generator for this data type and have followed this site for the basics: http://msdn.microsoft.com/en-us/library/aa833244.aspx This example works if I am creating a string datatype and using it for a nvarchar datatype. What do I need to change to hook this Generator to the XML Datatype? Below are my code files. The string property works for nvarchar. The XElement property does not work for the xml datatype, and the RecordXMLDataGenerator is not listed as an option in the Generator column for the generation plan. CustomDataGenerators: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.Data.Schema.Tools.DataGenerator; using Microsoft.Data.Schema.Extensibility; using Microsoft.Data.Schema; using Microsoft.Data.Schema.Sql; using System.Xml.Linq; namespace CustomDataGenerators { [DatabaseSchemaProviderCompatibility(typeof(SqlDatabaseSchemaProvider))] public class RecordXMLDataGenerator : Generator { private XElement _RecordData; [Output(Description = "Generates string of XML Data for the Record.", Name = "RecordDataString")] public string RecordDataString { get { return _RecordData.ToString(SaveOptions.None); } } [Output(Description = "Generates XML Data for the Record.", Name = "RecordData")] public XElement RecordData { get { return _RecordData; } } protected override void OnGenerateNextValues() { base.OnGenerateNextValues(); XElement element = new XElement("Root", new XElement("Children1", 1), new XElement("Children6", 6) ); _RecordData = element; } } } XML Extensions File: <?xml version="1.0" encoding="utf-8" ?> <extensions assembly="" version="1" xmlns="urn:Microsoft.Data.Schema.Extensions" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:Microsoft.Data.Schema.Extensions Microsoft.Data.Schema.Extensions.xsd"> <extension type="CustomDataGenerators.RecordXMLDataGenerator" assembly="CustomDataGenerators, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxx" enabled="true"/> </extensions> Table.sql: CREATE TABLE [dbo].[Record] ( id int IDENTITY (1,1) NOT NULL, recordData xml NULL, userId int NULL, test nvarchar(max) NULL, rowver rowversion NULL, CONSTRAINT pk_RecordID PRIMARY KEY (id) )

    Read the article

  • Determining the magnitude of a certain frequency on the iPhone

    - by eagle
    I'm wondering what's the easiest/best way to determine the magnitude of a given frequency in a sound. It's my understanding that an FFT function will return the magnitudes of all frequencies in a signal. I'm wondering if there is any shortcut I could use if I'm only concerned about one specific frequency. I'll be using the iPhone mic to record the audio. My guess is that I'll be using Audio Queue Services for recording, since I don't need to record the audio to a file. I'm using SDK 4.0, so I can use any of the functions defined in the Accelerate framework (e.g. FFT functions) if needed. Update: I updated the question to be more clear as per Conrad's suggestion.
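
    For a single frequency of interest, the usual shortcut is the Goertzel algorithm, which computes the magnitude of one frequency bin in a single O(N) pass instead of a full FFT. A minimal C sketch (the function and parameter names are mine; samples would be the mono float buffer delivered by the Audio Queue callback):

        #include <math.h>

        /* Magnitude of one target frequency (Hz) in a block of samples. */
        float goertzel_magnitude(const float *samples, int numSamples,
                                 float targetFreq, float sampleRate)
        {
            float omega  = 2.0f * M_PI * targetFreq / sampleRate;
            float coeff  = 2.0f * cosf(omega);
            float sPrev  = 0.0f;
            float sPrev2 = 0.0f;

            for (int i = 0; i < numSamples; i++) {
                float s = samples[i] + coeff * sPrev - sPrev2;
                sPrev2 = sPrev;
                sPrev  = s;
            }

            /* Squared magnitude of the single DFT bin, then the square root. */
            float power = sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
            return sqrtf(power);
        }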

    Read the article

  • Finding gaps (missing records) in database records using SQL

    - by Tony_Henrich
    I have a table with a record for every consecutive hour. Each hour has some value. I want a T-SQL query to retrieve the missing records (missing hours, the gaps). So for the DDL below, I should get a record for the missing hour 04/01/2010 02:00 AM (assuming the date range is between the first and last record). Using SQL Server 2005; I'd prefer a set-based query. DDL:

        CREATE TABLE [Readings](
            [StartDate] [datetime] NOT NULL,
            [SomeValue] [int] NOT NULL
        )

        INSERT INTO [Readings]([StartDate], [SomeValue])
        SELECT '20100401 00:00:00.000', 2 UNION ALL
        SELECT '20100401 01:00:00.000', 3 UNION ALL
        SELECT '20100401 03:00:00.000', 45
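
    A set-based approach on SQL Server 2005 is to generate every hour between the first and last reading with a recursive CTE and keep the hours that have no matching row; a sketch:

        WITH Bounds AS (
            SELECT MIN(StartDate) AS MinDate, MAX(StartDate) AS MaxDate
            FROM Readings
        ),
        AllHours AS (
            SELECT MinDate AS HourStart, MaxDate FROM Bounds
            UNION ALL
            SELECT DATEADD(hour, 1, HourStart), MaxDate
            FROM AllHours
            WHERE HourStart < MaxDate
        )
        SELECT a.HourStart AS MissingHour
        FROM AllHours AS a
        WHERE NOT EXISTS (SELECT 1 FROM Readings AS r WHERE r.StartDate = a.HourStart)
        OPTION (MAXRECURSION 0);  -- lift the default 100-level recursion cap for long ranges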

    Read the article

  • About Network Address Translation (NAT)?

    - by Rudi
    Just curious about a particular scenario with NAT. Let's suppose we have 4 computers sharing a global IP address under the NAT. I understand that the NAT box keeps an internal record to know which computer to forward requests to. But let's say on computer #2 I'm trying to download a file, and on computers #1, #3, and #4 I'm just browsing the web normally. When the browser initiates a TCP connection to get that file, how does it know which computer to give it to? I mean, each of the four computers is using port 80 to browse the web, right? How does the NAT's record distinguish which "port 80" belongs to which computer?

    Read the article

  • Sorting the data returned by a database

    - by Rishabh Ohri
    Hi all, in our project we have a requirement that when a set of records is returned by the database, the records should be sorted with respect to the TITLE field of the record. The records have to be sorted alphabetically, but if the title of a record has a number in it then it should come after the records whose titles consist only of letters. Details: we are using SQL Server and C#. The data from the database comes into an entity class which forwards the data to other layers. So, what would be a possible and effective solution for this requirement?
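
    In T-SQL the split can live in the ORDER BY itself: sort first on a flag that says whether the title contains a digit, then alphabetically. A sketch against a placeholder table name:

        SELECT Title
        FROM dbo.Records
        ORDER BY
            CASE WHEN Title LIKE '%[0-9]%' THEN 1 ELSE 0 END,  -- titles containing digits sort last
            Title;

    If the sort has to happen after the data reaches the entity layer instead, the same idea in C# is OrderBy(r => r.Title.Any(char.IsDigit)).ThenBy(r => r.Title).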

    Read the article

  • CakePHP: How do I order results based on a 2-level deep association model?

    - by KcYxA
    I'm hoping I won't need to resort to custom queries. A related question would be: how do I retrieve data so that if an associated model is empty, no record is retrieved at all, as opposed to getting an empty array for the associated model? As an oversimplified example, say I have the following models: City -- Street -- House. How do I sort City results by House numbers? How do I retrieve City results that have at least one House in them? I don't want a record with a city name and details and an empty House array, as it messes up pagination results. CakePHP retrieves the House records belonging to the Street in a separate query, so putting something like 'House.number DESC' into the 'order' field of the search query returns a 'field does not exist' error. Any ideas?

    Read the article
