Search Results

Search found 27248 results on 1090 pages for 'table adapter'.


  • Rendering partial for table row with form_tag is getting crazy!

    - by xopht
    I have a 23-column by 6-row table and change a row with the link_to_remote function. Each tr tag has its own id attribute. The Change link calls the change action, and the change action re-renders the row using render with a partial.

    _change.html.erb:

        <td id="row_1">1</td>
        . . omitted . .
        <td id="row_23">23</td>

    The link_to_remote call:

        <%= link_to_remote 'Change', :update => 'row_1', :url => change_path %>

    The change action:

        def change
          logger.debug render :partial => 'change'
        end

    If I code it like the above, everything works okay: all the changed columns end up in one row. But if I wrap the partial code with form_for like below...

        <% form_for 'change' do %>
          <td id="row_1">1</td>
          . . omitted . .
          <td id="row_23">23</td>
        <% end %>

    ...then each row ends up with only one column, and that column is the first column. I've looked in the log file, but the output was normal HTML tags. What's wrong?
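    A hedged guess at the cause, not confirmed by the question itself: a form element is not valid HTML as a direct child of a tr, so browsers relocate it while parsing and the cells get scattered. One workaround is to keep the form inside a single cell; the structure below is an illustrative sketch, not the asker's actual markup:

        <td id="row_1">
          <% form_for 'change' do %>
            <!-- fields for this cell only; the form never wraps sibling td elements -->
          <% end %>
        </td>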


  • Query Parameter Value Is Null When Enum Item 0 is Cast to Int32

    - by Timothy
    When I use the first item of a zero-based enum, cast to Int32, as a query parameter, the parameter value is null. I've worked around it by simply setting the first item to a value of 1, but I was wondering what's really going on here? This one has me scratching my head. Why is the parameter value regarded as null instead of 0?

        enum LogEventType : int
        {
            SignIn,
            SignInFailure,
            SignOut,
            ...
        }

        private static DataTable QueryEventLogSession(DateTime start, DateTime stop)
        {
            DataTable entries = new DataTable();
            using (FbConnection conn = new FbConnection(DSN))
            {
                using (FbDataAdapter adapter = new FbDataAdapter(
                    "SELECT event_type, event_timestamp, event_details FROM event_log " +
                    "WHERE event_timestamp BETWEEN @start AND @stop " +
                    "AND event_type IN (@signIn, @signInFailure, @signOut) " +
                    "ORDER BY event_timestamp ASC", conn))
                {
                    adapter.SelectCommand.Parameters.AddRange(new Object[] {
                        new FbParameter("@start", start),
                        new FbParameter("@stop", stop),
                        new FbParameter("@signIn", (Int32)LogEventType.SignIn),
                        new FbParameter("@signInFailure", (Int32)LogEventType.SignInFailure),
                        new FbParameter("@signOut", (Int32)LogEventType.SignOut)});
                    Trace.WriteLine(adapter.SelectCommand.CommandText);
                    foreach (FbParameter p in adapter.SelectCommand.Parameters)
                    {
                        Trace.WriteLine(p.Value.ToString());
                    }
                    adapter.Fill(entries);
                }
            }
            return entries;
        }
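    A hedged explanation of what may be going on, assuming FbParameter, like the well-known SqlParameter case, has both (string, object) and (string, FbDbType) constructors: (Int32)LogEventType.SignIn is a constant expression with value 0, a constant 0 converts implicitly to any enum type, and overload resolution then prefers the FbDbType overload, so the parameter gets a type but never a value. A sketch:

        // Suspected: picks FbParameter(string, FbDbType) because the constant is 0,
        // leaving Value == null.
        var bad = new FbParameter("@signIn", (Int32)LogEventType.SignIn);

        // Forcing the (string, object) overload sets Value = 0 as intended.
        var good = new FbParameter("@signIn", (object)(Int32)LogEventType.SignIn);

        // Or sidestep constructor overloads entirely:
        var p = new FbParameter();
        p.ParameterName = "@signIn";
        p.Value = (Int32)LogEventType.SignIn;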


  • C++ constant reference lifetime

    - by aaa
    Hello, I have code that looks like this:

        class T {};

        class container {
            const T &first, &second;
            container(const T &first, const T &second);
        };

        class adapter : T {};

        container(adapter(), adapter());

    I thought the lifetime of the constant reference would be the lifetime of the container. However, it appears otherwise: the adapter object is destroyed after the container is created, leaving a dangling reference. What is the correct lifetime? How do I correctly bind a temporary object to a class member reference? Thanks
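    For context, the usual rule: binding a temporary to a reference member through a constructor parameter does not extend the temporary's lifetime - the temporary lives only to the end of the full expression that created it, so the member reference dangles. A minimal sketch of the standard fix, storing by value instead (it also makes the inheritance public so adapter converts to T):

        class T {};

        class container {
            T first;   // copies own their lifetime
            T second;
        public:
            container(const T &f, const T &s) : first(f), second(s) {}
        };

        class adapter : public T {};

        int main() {
            container c = container(adapter(), adapter()); // safe: members are copies
            return 0;
        }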


  • Modeling many-to-one with constraints?

    - by Greg Beech
    I'm attempting to create a database model for movie classifications, where each movie could have a single classification from each of multiple rating systems (e.g. BBFC, MPAA). This is the current design, with all implied PKs and FKs:

        TABLE Movie (
            MovieId INT
        )

        TABLE ClassificationSystem (
            ClassificationSystemId TINYINT
        )

        TABLE Classification (
            ClassificationId INT,
            ClassificationSystemId TINYINT
        )

        TABLE MovieClassification (
            MovieId INT,
            ClassificationId INT,
            Advice NVARCHAR(250) -- description of why the classification was given
        )

    The problem is with the MovieClassification table, whose constraints would allow multiple classifications from the same system, whereas it should ideally permit only zero or one classifications from a given system. Is there any reasonable way to restructure this so that a movie having exactly zero or one classifications from any given system is enforced by database constraints, given the following requirements?

    - Do not duplicate information that could be looked up (i.e. duplicating ClassificationSystemId in the MovieClassification table is not a good solution, because it could get out of sync with the value in the Classification table).
    - Remain extensible to multiple classification systems (i.e. a new classification system does not require any changes to the table structure).

    Note also the Advice column - each mapping of a movie to a classification needs to have a textual description of why that classification was given to that movie. Any design would need to support this.
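    A hedged sketch of one standard approach (T-SQL-ish syntax; the constraint names are illustrative): give Classification a unique key spanning both columns, have MovieClassification carry ClassificationSystemId but reference that composite key, and add a uniqueness constraint on (MovieId, ClassificationSystemId). The column is repeated, but the composite foreign key guarantees it can never disagree with the Classification table, which addresses the out-of-sync concern:

        ALTER TABLE Classification
            ADD CONSTRAINT UQ_Classification_System
            UNIQUE (ClassificationId, ClassificationSystemId);

        CREATE TABLE MovieClassification (
            MovieId INT NOT NULL REFERENCES Movie (MovieId),
            ClassificationId INT NOT NULL,
            ClassificationSystemId TINYINT NOT NULL,
            Advice NVARCHAR(250),
            CONSTRAINT FK_MC_Classification
                FOREIGN KEY (ClassificationId, ClassificationSystemId)
                REFERENCES Classification (ClassificationId, ClassificationSystemId),
            CONSTRAINT UQ_MC_OnePerSystem
                UNIQUE (MovieId, ClassificationSystemId)
        );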


  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query does the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID,
               (<complex subquery>) Date
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID,
               dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query was done as a two-stage process.) Now, I would have thought it would be far more efficient to merge this down into a single query using a subquery:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID,
               dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID,
                   (<complex subquery>) Date
            FROM task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This should give the optimiser better information to work out what is going on and avoid any temporary tables, so it should be faster. But it turns out to be a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should be faster than temporary tables. Can anyone explain what could be going on?


  • Adding a footer to a table in another view?

    - by cannyboy
    The app I'm making has a settings view which I want to show on the first run of the app. Normally (after the first run) this view will be pushed onto the screen and there will be a "Back" button in the nav bar. However, on first launch I don't want there to be a back button. Instead, I want to add a 'Done' button at the footer of the table... and then the view can be popped. Here's my code (in my initial view's viewDidLoad). I have removed the back button, but I don't know how to add the footer and button.

        NSUserDefaults *prefs = [NSUserDefaults standardUserDefaults];
        if ([prefs stringForKey:@"firstRun"] == nil)
        {
            SettingsViewController *settingsView = [[SettingsViewController alloc]
                initWithNibName:@"SettingsView" bundle:nil];
            settingsView.hidesBottomBarWhenPushed = YES;
            settingsView.navigationItem.hidesBackButton = TRUE;
            // add footer button here?
            [[self navigationController] pushViewController:settingsView animated:YES];
            [settingsView release];
            [prefs setObject:@"OK" forKey:@"firstRun"];
        }
        else
        {
            //something
        }
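    A hedged sketch of one way to add the footer button, assuming SettingsViewController is a UITableViewController (so it has a tableView property); the selector name is illustrative:

        // In SettingsViewController's viewDidLoad, when shown on first run:
        UIButton *done = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        done.frame = CGRectMake(0, 0, 320, 44);
        [done setTitle:@"Done" forState:UIControlStateNormal];
        [done addTarget:self
                 action:@selector(doneTapped)
       forControlEvents:UIControlEventTouchUpInside];
        self.tableView.tableFooterView = done;

        // Elsewhere in SettingsViewController:
        - (void)doneTapped {
            [self.navigationController popViewControllerAnimated:YES];
        }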



  • Zend_Paginator / Doctrine 2

    - by Kevin
    I'm using Doctrine 2 with my Zend Framework application, and a typical query could yield a million (or more) search results. I want to use Zend_Paginator with this result set. However, I don't want to return all the results as an array and use the Array adapter, as this would be inefficient; instead, I would like to supply the paginator with the total number of rows and an array of results based on limit/offset amounts. Is this doable using the Array adapter, or would I need to create my own pagination adapter?
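    A hedged sketch of a custom adapter, assuming the Zend_Paginator_Adapter_Interface contract (a Countable count() plus getItems($offset, $itemCountPerPage)); the Doctrine query-building details and class name are illustrative:

        class My_Paginator_DoctrineAdapter implements Zend_Paginator_Adapter_Interface
        {
            private $qb;        // Doctrine\ORM\QueryBuilder for the search
            private $rowCount;  // total rows, from a separate COUNT query

            public function __construct($qb, $rowCount)
            {
                $this->qb = $qb;
                $this->rowCount = $rowCount;
            }

            public function count()
            {
                return $this->rowCount;
            }

            public function getItems($offset, $itemCountPerPage)
            {
                // Only the current page is hydrated, never the full result set.
                return $this->qb->getQuery()
                                ->setFirstResult($offset)
                                ->setMaxResults($itemCountPerPage)
                                ->getResult();
            }
        }

        $paginator = new Zend_Paginator(new My_Paginator_DoctrineAdapter($qb, $total));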


  • Populating a Spinner drop-down list from the database

    - by user555910
    I have a spinner in my application. The spinner has a drop-down list. I want to take the values for the drop-down list from the database. How can I do this? Here is my code for the spinner with the drop-down list:

        ArrayAdapter adapter = new ArrayAdapter(this,
                android.R.layout.simple_spinner_item, selectdefault);
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        spinner.setAdapter(adapter);
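    A hedged sketch of the usual pattern, assuming the values live in a local SQLite database; the table and column names (and dbHelper, an SQLiteOpenHelper) are illustrative:

        // Load the options from SQLite, then hand them to the ArrayAdapter.
        List<String> options = new ArrayList<String>();
        SQLiteDatabase db = dbHelper.getReadableDatabase();
        Cursor c = db.rawQuery("SELECT name FROM spinner_options", null);
        while (c.moveToNext()) {
            options.add(c.getString(0));
        }
        c.close();

        ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
                android.R.layout.simple_spinner_item, options);
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        spinner.setAdapter(adapter);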


  • Set bounds for markers generated by jQuery table loop?

    - by abemonkey
    I have some jQuery code that goes through a table of location results and puts corresponding pins on a map. I am having trouble figuring out how to set the bounds so that when it goes through the loop and generates the markers on the map, it zooms and pans to fit the markers in the view. I've tried implementing code from some similar questions on this site but nothing seems to be working. Please let me know what code I should be using and where the heck I should put it in my script:

        $(function() {
            var latlng = new google.maps.LatLng(44, 44);
            var settings = {
                zoom: 15,
                center: latlng,
                disableDefaultUI: false,
                mapTypeId: google.maps.MapTypeId.ROADMAP
            };
            var map = new google.maps.Map(document.getElementById("map_canvas"), settings);
            $('tr').each(function(i) {
                var the_marker = new google.maps.Marker({
                    title: $(this).find('.views-field-title').text(),
                    map: map,
                    clickable: true,
                    position: new google.maps.LatLng(
                        parseFloat($(this).find('.views-field-latitude').text()),
                        parseFloat($(this).find('.views-field-longitude').text())
                    )
                });
                var infowindow = new google.maps.InfoWindow({
                    content: $(this).find('.views-field-title').text() + $(this).find('.adr').text()
                });
                new google.maps.event.addListener(the_marker, 'click', function() {
                    infowindow.open(map, the_marker);
                });
            });
        });
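    A hedged sketch of the standard bounds pattern: create a google.maps.LatLngBounds before the loop, extend it with each marker's position inside the loop, and call fitBounds once after the loop. Woven into the structure above, it would look roughly like this:

        var bounds = new google.maps.LatLngBounds();

        $('tr').each(function(i) {
            var position = new google.maps.LatLng(
                parseFloat($(this).find('.views-field-latitude').text()),
                parseFloat($(this).find('.views-field-longitude').text()));
            var the_marker = new google.maps.Marker({ map: map, position: position });
            bounds.extend(position); // grow the bounds to include this pin
        });

        map.fitBounds(bounds); // zooms and pans so every marker is visible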


  • Hibernate CreateSQL Query Problem

    - by Shaded
    Hello all. I'm trying to use Hibernate's built-in createSQLQuery function, but it seems that it doesn't like the following query:

        List results = hibernateSession.createSQLQuery(
            "SELECT number, location FROM table WHERE other_number IN " +
            "(SELECT f.number FROM table2 AS f JOIN table3 AS g ON f.number = g.number " +
            "WHERE g.other_number = " + var + ") ORDER BY number")
            .addEntity(Table.class).list();

    I have a feeling it's the nested select statement, but I'm not sure. The inner select is used elsewhere in the code and returns results fine. This is my mapping for the first table:

        <hibernate-mapping>
            <class name="org.efs.openreports.objects.Table" table="table">
                <id name="id" column="other_number" type="java.lang.Integer">
                    <generator class="native"/>
                </id>
                <property name="number" column="number" not-null="true" unique="true"/>
                <property name="location" column="location" not-null="true" unique="true"/>
            </class>
        </hibernate-mapping>

    And the .java:

        public class Table implements Serializable
        {
            private Integer id; // panel_facility
            private Integer number;
            private String location;

            public Table() { }

            public void setId(Integer id) { this.id = id; }
            public Integer getId() { return id; }

            public void setNumber(Integer number) { this.number = number; }
            public Integer number() { return number; }

            public String location() { return location; }
            public void setLocation(String location) { this.location = location; }
        }

    Any suggestions? Edit: added the mapping.
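    Two hedged guesses rather than a confirmed diagnosis: addEntity(Table.class) expects the select list to contain every mapped column (here the mapped id column other_number is missing), and concatenating var into the SQL invites both syntax errors and SQL injection. A sketch of the safer shape, selecting all mapped columns and binding the value as a named parameter:

        List results = hibernateSession.createSQLQuery(
            "SELECT other_number, number, location FROM table " +
            "WHERE other_number IN (" +
            "  SELECT f.number FROM table2 AS f " +
            "  JOIN table3 AS g ON f.number = g.number " +
            "  WHERE g.other_number = :var) " +
            "ORDER BY number")
            .addEntity(Table.class)
            .setParameter("var", var)
            .list();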


  • Fetching only the first record in the table (via the view) without changing the controller?

    - by bgadoci
    I am trying to fetch only the first record in my table for display. I am creating a site where a user can upload multiple images attached to a post, but I only want to display the first image for each post. For further clarification, posts belong_to projects. So when you are on the project's show page you see multiple posts, and in this view I only want to display the first image for each post. Is there a way to do this in the view without affecting the controller (as later I want to allow users to browse all photos through the addition of a lightbox)? Here is my /views/posts/_post.html.erb code:

        <% div_for post do %>
          <% post.photos.each do | photo | %>
            <%= image_tag(photo.data.url(:large), :alt => '') %>
            <%= photo.description %>
          <% end unless post.photos.first.new_record? rescue nil %>
          <%= link_to h(post.link_title), post.link %>
          <%= h(post.description) %>
          <%= link_to 'Manage this post', edit_post_path(post) %>
        <% end %>

    UPDATE: I am using a Photos model to attach multiple photos to each post, with Paperclip.
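    A hedged sketch of the view-only change: take post.photos.first instead of iterating, and guard against posts with no photos (the names follow the question's models):

        <% div_for post do %>
          <% if (photo = post.photos.first) && !photo.new_record? %>
            <%= image_tag(photo.data.url(:large), :alt => '') %>
            <%= photo.description %>
          <% end %>
          <%= link_to h(post.link_title), post.link %>
          <%= h(post.description) %>
          <%= link_to 'Manage this post', edit_post_path(post) %>
        <% end %>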


  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

        UPDATE Indexer.Pages SET LastError = NULL WHERE LastError IS NOT NULL;

    Right now, this query takes about 93 minutes to complete. I'd like to find ways to make it a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 GB of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I don't actually have the hard disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

        CREATE TABLE indexer.pages
        (
            id uuid NOT NULL,
            url character varying(1024) NOT NULL,
            firstcrawled timestamp with time zone NOT NULL,
            lastcrawled timestamp with time zone NOT NULL,
            recipeid uuid,
            html text NOT NULL,
            lasterror character varying(1024),
            missingings smallint,
            CONSTRAINT pages_pkey PRIMARY KEY (id),
            CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
        );

    I also have two indexes:

        CREATE INDEX idx_indexer_pages_missingings
            ON indexer.pages USING btree (missingings)
            WHERE missingings > 0;

    and

        CREATE INDEX idx_indexer_pages_null
            ON indexer.pages USING btree (recipeid)
            WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
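    A hedged mitigation sketch, assuming part of the pain is one huge transaction rewriting ~490,000 wide rows at once: update in primary-key batches so each transaction stays small and VACUUM can reclaim dead tuples between rounds. The batch size of 10000 is arbitrary:

        -- Repeat until it reports 0 rows updated:
        UPDATE Indexer.Pages
        SET LastError = NULL
        WHERE id IN (
            SELECT id FROM Indexer.Pages
            WHERE LastError IS NOT NULL
            LIMIT 10000
        );
        VACUUM Indexer.Pages;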


  • After creating a MySQL user with all privileges, the user cannot create databases in phpMyAdmin and only sees the information_schema database

    - by GHarping
    This is a recurring problem for some reason... Using MySQL 5.5, I am simply trying to create a user that can connect to the database remotely, have access to all databases, and create databases. I created a user using:

        CREATE USER 'dev'@'%' IDENTIFIED BY 'abcdefg';

    then granted all permissions using:

        GRANT ALL ON *.* TO 'dev'@'192.168.%' IDENTIFIED BY 'abcdefg' WITH GRANT OPTION;

    The result is that the user cannot create databases and can only see the information_schema database for some reason. The phpMyAdmin Databases page shows:

        Create database: No Privileges
        Database
        information_schema
        Total: 1

    Does anyone know why this might be happening?
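    A hedged observation: the CREATE USER made the account 'dev'@'%', but the GRANT targeted 'dev'@'192.168.%', which MySQL treats as a separate account (the GRANT's IDENTIFIED BY clause created it on the spot). A connection that matches 'dev'@'%' — for instance one made through a host that doesn't match 192.168.% — gets the privilege-less account. A sketch of the check and fix:

        -- See which account the session actually matched, and what it holds:
        SELECT CURRENT_USER();
        SHOW GRANTS FOR 'dev'@'%';

        -- Grant to the same account that was created:
        GRANT ALL PRIVILEGES ON *.* TO 'dev'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;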


  • Weird Rails database errors

    - by Jason Swett
    I've had some trouble getting my Rails app to connect to PostgreSQL, so I decided to just say screw it and use SQLite for now. (I'm using the tutorial here: http://guides.rubyonrails.org/getting_started.html) I started a BRAND NEW, fresh Rails app from this tutorial. When I visit my app in the browser after deleting public/index.html, I get this the first time:

        Please install the pg adapter: `gem install activerecord-pg-adapter`
        (no such file to load -- active_record/connection_adapters/pg_adapter)

    That's odd to me because I'm not mentioning PostgreSQL anywhere. Here's my database.yml:

        # SQLite version 3.x
        #   gem install sqlite3-ruby (not necessary on OS X Leopard)
        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: db/production.sqlite3
          pool: 5
          timeout: 5000

    To make things more confusing, I only get that "pg adapter" error on the first load. For every subsequent page request, I get this error:

        ActiveRecord::ConnectionNotEstablished

    So even though I removed all mention of PostgreSQL, I'm still getting errors. What could be going on?
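    A hedged place to look next, since database.yml itself is clean: something else may still be selecting a pg adapter, such as the Gemfile, an environment-specific config file, or a stale environment variable. A quick triage sketch, assuming a Unix-ish shell:

        grep -rn "pg\|postgres" Gemfile config/
        echo "$DATABASE_URL" "$RAILS_ENV"
        bundle install    # confirm the sqlite3 gem is actually installed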


  • Function to extract data into INSERT INTO statements for a table

    - by user269484
    Hi. I'm using this function to extract data as INSERT INTO statements, but I'm unable to extract the LONG datatype. Can anyone help me?

        create or replace Function ExtractData(v_table_name varchar2) return varchar2 As
            b_found boolean:=false;
            v_tempa varchar2(8000);
            v_tempb varchar2(8000);
            v_tempc varchar2(255);
        begin
            for tab_rec in (select table_name from user_tables
                            where table_name=upper(v_table_name))
            loop
                b_found:=true;
                v_tempa:='select ''insert into '||tab_rec.table_name||' (';
                for col_rec in (select * from user_tab_columns
                                where table_name=tab_rec.table_name
                                order by column_id)
                loop
                    if col_rec.column_id=1 then
                        v_tempa:=v_tempa||'''||chr(10)||''';
                    else
                        v_tempa:=v_tempa||',''||chr(10)||''';
                        v_tempb:=v_tempb||',''||chr(10)||''';
                    end if;
                    v_tempa:=v_tempa||col_rec.column_name;
                    if instr(col_rec.data_type,'CHAR') > 0 then
                        v_tempc:='''''''''||'||col_rec.column_name||'||''''''''';
                    elsif instr(col_rec.data_type,'DATE') > 0 then
                        v_tempc:='''to_date(''''''||to_char('||col_rec.column_name||',''mm/dd/yyyy hh24:mi'')||'''''',''''mm/dd/yyyy hh24:mi'''')''';
                    else
                        v_tempc:=col_rec.column_name;
                    end if;
                    v_tempb:=v_tempb||'''||decode('||col_rec.column_name||',Null,''Null'','||v_tempc||')||''';
                end loop;
                v_tempa:=v_tempa||') values ('||v_tempb||');'' from '||tab_rec.table_name||';';
            end loop;
            if Not b_found then
                v_tempa:='-- Table '||v_table_name||' not found';
            else
                v_tempa:=v_tempa||chr(10)||'select ''-- commit;'' from dual;';
            end if;
            return v_tempa;
        end;
        /
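    A hedged note on the LONG limitation: LONG columns cannot be concatenated or converted inside plain SQL, so a generated select like the above cannot render them. In PL/SQL, however, a LONG value can be fetched into a VARCHAR2 variable (up to 32,760 bytes), so one workaround is to fetch such columns procedurally; the table and column names below are illustrative:

        declare
            v_long_piece varchar2(32760);
        begin
            -- Works in PL/SQL even though the column type is LONG,
            -- as long as the value fits in the variable.
            select long_col into v_long_piece
            from my_table
            where id = 1;
            dbms_output.put_line(substr(v_long_piece, 1, 200));
        end;
        /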


  • What is the preferred method for searching table data using a stored procedure?

    - by Mourya
    I have a customer table with Cust_Id, Name and City, and search is based upon any or all of the three. Which one should I go for?

    Dynamic SQL:

        declare @str varchar(1000)
        set @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'

        if (@Cust_Id != '')
            set @str = @str + ' and Cust_Id = ''' + @Cust_Id + ''''
        if (@Name != '')
            set @str = @str + ' and Name like ''' + @Name + '%'''
        if (@City != '')
            set @str = @str + ' and City like ''' + @City + '%'''

        exec (@str)

    Simple query:

        select [Sno],[Cust_Id],[Name],[City],[Country],[State]
        from Customer
        where (@Cust_Id = '' or Cust_Id = @Cust_Id)
          and (@Name = '' or Name like @Name + '%')
          and (@City = '' or City like @City + '%')

    Which one should I prefer (1 or 2), and what are the advantages? After going through everyone's suggestions, here is what I finally got:

        DECLARE @str NVARCHAR(1000)
        DECLARE @ParametersDefinition NVARCHAR(500)
        SET @ParametersDefinition = N'@InnerCust_Id varchar(10), @InnerName varchar(30), @InnerCity varchar(30)'
        SET @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'

        IF(@Cust_Id != '')
            SET @str = @str + ' and Cust_Id = @InnerCust_Id'
        IF(@Name != '')
            SET @str = @str + ' and Name like @InnerName'
        IF(@City != '')
            SET @str = @str + ' and City like @InnerCity'

        -- ADD the % symbol for search based upon the LIKE keyword
        SELECT @Name = @Name + '%', @City = @City + '%'

        EXEC sp_executesql @str,
                           @ParametersDefinition,
                           @InnerCust_Id = @Cust_Id,
                           @InnerName = @Name,
                           @InnerCity = @City;

    References:
    http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/changing-exec-to-sp_executesql-doesn-t-p
    http://msdn.microsoft.com/en-us/library/ms175170.aspx


  • mysql join with conditional

    - by Conor H
    Hi there, I am currently working on a MySQL query that contains a table:

        TBL: lesson_fee
            - fee_type_id (PRI)
            - lesson_type_id (PRI)
            - lesson_fee_amount

    This table contains the fees for a particular lesson type, and there are different fee names (fee_type), which means there can be many entries in this table for one lesson type. In my query I am joining this table onto the rest of the query via the lesson_type table, using:

        lesson_fee INNER JOIN (other joins here)
            ON lesson_fee.lesson_type_id = lesson_type.lesson_type_id

    The problem is that this currently returns duplicate data in the result: one row for every duplicate entry in the lesson_fee table. I am also joining the fee_type table using the fee_type_id. Is there a way of telling MySQL "join the lesson_fee rows that have this lesson_type_id and fee_type_id = client.fee_type_id"?

    UPDATE - the query:

        SELECT lesson_booking.lesson_booking_id, lesson_fee.lesson_fee_amount
        FROM fee_type
        INNER JOIN (lesson_fee
          INNER JOIN (color_code
            INNER JOIN (employee
              INNER JOIN (horse_owned
                INNER JOIN (lesson_type
                  INNER JOIN (timetable
                    INNER JOIN (lesson_booking
                      INNER JOIN CLIENT
                        ON client.client_id = lesson_booking.client_id)
                      ON lesson_booking.timetable_id = timetable.timetable_id)
                    ON lesson_type.lesson_type_id = timetable.lesson_type_id)
                  ON horse_owned.horse_owned_id = lesson_booking.horse_owned_id)
                ON employee.employee_id = timetable.employee_id)
              ON employee.color_code_id = color_code.color_code_id)
            ON lesson_fee.lesson_type_id = lesson_type.lesson_type_id)
          ON lesson_fee.fee_type_id = client.fee_type_id
        WHERE booking_date = '2010-04-06'
        ORDER BY lesson_booking_id ASC

    How do I keep the format (indentation) of my query?
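    A hedged sketch of a compound join condition, which lets one ON clause carry both restrictions at the point where lesson_fee is joined, so the duplicate fee rows are filtered out by the join itself rather than multiplying the result:

        ... INNER JOIN lesson_fee
                ON lesson_fee.lesson_type_id = lesson_type.lesson_type_id
               AND lesson_fee.fee_type_id = client.fee_type_id ...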


  • How to efficiently use LOCK_ESCALATION mssql 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table:

    - It has a large number of rows (1 to 2 million).
    - All the indexes used on this table have only "use row lock" ticked in their options.
    - Rows are frequently updated by multiple transactions, but each row is unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour).
    - The table does not use partitions.

    Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, DISABLE will minimize escalating locks to TABLE level, which, combined with the row lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection fetching 5000 rows through individual update/select statements? Or a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks
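    A hedged sketch of the relevant commands (the table name is illustrative); note that LOCK_ESCALATION only controls escalation to table level, so row and page locking still behave as the index options dictate:

        -- Check the current setting:
        SELECT name, lock_escalation_desc
        FROM sys.tables
        WHERE name = 'MyUserTable';

        -- Disable escalation to table level:
        ALTER TABLE dbo.MyUserTable SET (LOCK_ESCALATION = DISABLE);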


  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified, and assuming a MySQL db) scenarios:

    Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes, and use the columns to compute an average on the product display page:

        products(productID, productName, voteCount, voteSum)

    Pros: I will only need to access one table, and thus execute one query, to display product data and ratings.
    Cons: Write operations will be executed on a table whose original purpose is only to furnish product data.

    Scenario 2: Create an additional table to store ratings:

        products(productID, productName)
        ratings(productID, voteCount, voteSum)

    Pros: Ratings are isolated in a separate table, leaving the products table to furnish data on available products.
    Cons: I will have to execute two separate queries on each product page (one for data and another for ratings).

    In terms of performance, which of the two approaches is best: allow users to execute an occasional write query on a table that will handle hundreds of read requests, or execute two queries on every product page but isolate the write query in a separate table? I'm a novice to database development, and often find myself struggling with simple questions such as these. Many thanks
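    A hedged sketch for scenario 2, showing that the display page can still use a single query via a join, and that a vote can be recorded atomically (schema names follow the question; 42 and the vote of 5 are placeholders):

        -- Record a vote without read-modify-write races:
        UPDATE ratings
        SET voteCount = voteCount + 1,
            voteSum = voteSum + 5
        WHERE productID = 42;

        -- Display page: one query, not two, via a join:
        SELECT p.productID, p.productName,
               r.voteSum / NULLIF(r.voteCount, 0) AS avgRating
        FROM products p
        LEFT JOIN ratings r ON r.productID = p.productID
        WHERE p.productID = 42;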


  • How do I show all group headers in Access 2007 reports?

    - by Newbie
    This is a question about Reports in Access 2007. I'm unsure whether the solution will involve any programming, but hopefully someone will be able to help me.

    I have a report which lists all records from a particular table (call it A) and groups them by their associated record in a related table (call it B). I use the 'group headers' to add the information from table B into the report.

    The problem occurs when I filter the records from table A that are shown in the report. If I filter out all table-A records that relate to a particular record (call it X) in table B, the report no longer shows the record-X group header.

    As a possible workaround, I have tried to ensure that I have one empty record in table A for each of the records in table B. That way I can specify NOT to filter out these empty records. However, the outcome is ugly one-record-high blank spaces at the start of each group in the report. Does anyone know of an alternative solution?


  • Retrieving values from a table in HTML using jQuery?

    - by Mo
    Hi, I was just wondering: what's the best way to retrieve the following labels and values from this HTML using jQuery, storing them in an array or hash map of some sort, where I have e.g. "DataSet": "prod" or ["DataSet", "prod"]?

        <table id="metric_summary">
          <tbody>
            <tr class="editable_metrics">
              <td><label>DataSet:</label></td>
              <td><input name="DataSet" value="prod"></td>
            </tr>
            <tr class="editable_metrics">
              <td><label>HostGroup:</label></td>
              <td><input name="HostGroup" value="MONITOR-PORTAL-IAD"></td>
            </tr>
            <tr class="editable_metrics">
              <td><label>Host:</label></td>
              <td><input name="Host" value="ALL"></td>
            </tr>
            <tr class="editable_metrics">
              <td><label>Class:</label></td>
              <td><input name="Class" value="CPU"></td>
            </tr>
            <tr class="editable_metrics">
              <td><label>Object:</label></td>
              <td><input name="Object" value="cpu"></td>
            </tr>
            <tr class="editable_metrics">
              <td><label>Metric:</label></td>
              <td><input name="Metric" value="CapacityCPUUtilization"></td>
            </tr>
          </tbody>
        </table>

    Thanks
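    A hedged sketch of one way, building a plain object keyed by input name (the summary variable name is illustrative):

        var summary = {};
        $('#metric_summary input').each(function() {
            summary[this.name] = $(this).val();
        });
        // summary.DataSet === "prod", summary.Host === "ALL", etc.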


  • Find multiple patterns with a single preg_match_all in PHP

    - by Mark
    Using PHP and preg_match_all, I'm trying to get all the HTML content between the following tags (including the tags themselves):

        <p>paragraph text</p>
        don't take this
        <ul><li>item 1</li><li>item 2</li></ul>
        don't take this
        <table><tr><td>table content</td></tr></table>

    I can get one of them just fine:

        preg_match_all("(<p>(.*)</p>)siU", $content, $matches, PREG_SET_ORDER);

    Is there a way to get all the <p></p>, <ul></ul> and <table></table> content with a single preg_match_all? I need them to come out in the order they were found, so I can echo the content and it will make sense. So if I did a preg_match_all on the above content and then iterated through the $matches array, it would echo:

        <p>paragraph text</p>
        <ul><li>item 1</li><li>item 2</li></ul>
        <table><tr><td>table content</td></tr></table>
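    A hedged sketch using alternation with a backreference, so one pattern matches all three tag pairs and PREG_SET_ORDER keeps them in document order (the usual regex-on-HTML caveats apply; e.g. nested tables would confuse the lazy match):

        <?php
        preg_match_all('#<(p|ul|table)\b[^>]*>.*?</\1>#si', $content, $matches, PREG_SET_ORDER);
        foreach ($matches as $m) {
            echo $m[0], "\n"; // the full matched block, tags included
        }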


  • How to take a current snapshot of a MySQL table and store it in a CSV file?

    - by Rachel
    I have large database table, approximately 5GB, now I wan to getCurrentSnapshot of Database using "Select * from MyTableName", am using PDO in PHP to interact with Database. So preparing a query and then executing it // Execute the prepared query $result->execute(); $resultCollection = $result->fetchAll(PDO::FETCH_ASSOC); is not an efficient way as lots of memory is being user for storing into the associative array data which is approximately, 5GB. My final goal is to collect data returned by Select query into an CSV file and put CSV file at an FTP Location from where Client can get it. Other Option I thought was to do: SELECT * INTO OUTFILE "c:/mydata.csv" FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY "\n" FROM my_table; But I am not sure if this would work as I have cron that initiates the complete process and we do not have an csv file, so basically for this approach, PHP Scripts will have to create an CSV file. Do a Select query on the database. Store the select query result into the CSV file. What would be the best or efficient way to do this kind of task ? Any Suggestions !!!


  • Dynamic Programming Approach - Knapsack Puzzle

    - by idalsin
    I'm trying to solve the knapsack problem with the dynamic programming (DP) approach, with Python 3.x. My TA pointed us towards this code for a head start. I've tried to implement it, as below:

        def take_input(infile):
            f_open = open(infile, 'r')
            lines = []
            for line in f_open:
                lines.append(line.strip())
            f_open.close()
            return lines

        def create_list(jewel_lines):
            # turns the jewels into a list of lists
            jewels_list = []
            for x in jewel_lines:
                weight = x.split()[0]
                value = x.split()[1]
                jewels_list.append((int(value), int(weight)))
            jewels_list = sorted(jewels_list, key = lambda x : (-x[0], x[1]))
            return jewels_list

        def dynamic_grab(items, max_weight):
            table = [[0 for weight in range(max_weight+1)] for j in range(len(items)+1)]
            for j in range(1, len(items)+1):
                val = items[j-1][0]
                wt = items[j-1][1]
                for weight in range(1, max_weight+1):
                    if wt > weight:
                        table[j][weight] = table[j-1][weight]
                    else:
                        table[j][weight] = max(table[j-1][weight],
                                               table[j-1][weight-wt] + val)
            result = []
            weight = max_weight
            for j in range(len(items), 0, -1):
                was_added = table[j][weight] != table[j-1][weight]
                if was_added:
                    val = items[j-1][0]
                    wt = items[j-1][1]
                    result.append(items[j-1])
                    weight -= wt
            return result

        def totalvalue(comb):
            # total of a combo of items
            totwt = totval = 0
            for val, wt in comb:
                totwt += wt
                totval += val
            return (totval, -totwt) if totwt <= max_weight else (0, 0)

        # required setup of variables
        infile = "JT_test1.txt"
        given_input = take_input(infile)
        max_weight = int(given_input[0])
        given_input.pop(0)
        jewels_list = create_list(given_input)

        # test lines
        print(jewels_list)
        print(greedy_grab(jewels_list, max_weight))
        bagged = dynamic_grab(jewels_list, max_weight)
        print(totalvalue(bagged))

    The sample case is below. It is in the format line[0] = bag_max, and each following line is (weight, value):

        575
        125 3000
        50 100
        500 6000
        25 30

    I'm confused by the logic of this code, in that it returns me a tuple and I'm not sure what the output tuple represents. I've been looking at this for a while and just don't understand what the code is pointing me at. Any help would be appreciated.
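    A hedged reading of the code as posted: create_list stores each jewel as a (value, weight) pair, dynamic_grab returns the list of chosen (value, weight) pairs, and totalvalue sums them, returning (total value, negative total weight) - or (0, 0) when the weight limit is exceeded. A small check under those assumptions:

        # For the sample input (bag_max 575), the DP should pick the jewels with
        # weights 500, 50 and 25, worth 6000 + 100 + 30:
        bagged = dynamic_grab(jewels_list, 575)
        totval, neg_totwt = totalvalue(bagged)
        print(totval)       # expected: 6130
        print(-neg_totwt)   # total weight actually used: 575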

