I'm using SMS 2008 and I'm looking for where the registered servers are stored on my local machine. I have searched the registry with no luck.
AHIA,
Larry...
I have a form with many fields...
The action is set to a PHP page which queries MySQL...
Should I sanitize every single variable with mysql_real_escape_string?
Or can I skip sanitizing drop-down lists and radio buttons, for instance?
Also, besides mysql_real_escape_string, what else should I do to prevent attacks?
Thanks
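For reference, a minimal sketch of the parameterized alternative using MySQL's server-side prepared statements; the users table and name column are only hypothetical here. Binding values this way avoids manual escaping for every field:
-- Hypothetical example: bind the form value instead of escaping it by hand
PREPARE stmt FROM 'SELECT * FROM users WHERE name = ?';
SET @name = 'value taken from the form';
EXECUTE stmt USING @name;
DEALLOCATE PREPARE stmt;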
What is the proper way of doing the following:
getting a DATE as user input
running a query
generating a report that uses the query
This is the solution I was thinking of:
have a form that takes the user input
run the query
open the report
What is the correct way of doing this?
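A minimal sketch of the query step, assuming a hypothetical orders table and that the form supplies the chosen date as a bound parameter (placeholder syntax varies by database and driver):
-- Hypothetical: filter the report's query by the date entered on the form
SELECT *
FROM orders
WHERE order_date = ?;  -- bound from the form's date field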
One day, WordPress suddenly jumped from post id 9110 to post id 890000000.
Days later, I'd like to move new posts back so they continue from id 9111.
I'm sure the id will never reach 890000000, no problem here, but id is an autoincrement field and "ALTER TABLE wp8_posts AUTO_INCREMENT = 9111" is not working.
Can I force id to continue from 9111?
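One possible reason, and a hedged sketch of a workaround: MySQL will not let AUTO_INCREMENT be set below the largest id already in the table, so the stray high-numbered post has to be renumbered (or removed) first. The column name and values below are only illustrative:
UPDATE wp8_posts SET ID = 9111 WHERE ID = 890000000;  -- renumber the stray post
ALTER TABLE wp8_posts AUTO_INCREMENT = 9112;          -- now the reset can take effect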
result = sqlstring.executeQuery("select distinct table_name, owner from all_tables")
while result.next():  # advance the JDBC result set before reading each row
    rs.append(str(i) + ' , ' + result.getString("table_name") + ' , ' + result.getString("owner"))
If I want to display the output of the query 'select * from all_tables' or 'select count(*) from all_tables',
how can I get the output to display? Please suggest, thanks.
I am trying to write a statement for counting employee attendance: returning each employee's id, name and the number of days worked over the last 3 months, by counting the duplicate ids in NewTimeAttendance for months 1, 2 and 3.
I tried to count:
Select COUNT(employeeid) from NewTimeAttendance where employeeid=1 and (month=1 or month =2 or month = 3)
This works, but just for one employee...
The second try:
SELECT COUNT(NewEmployee.EmployeeID)
FROM NewEmployee INNER JOIN NewTimeAttendance
ON NewEmployee.EmployeeID = NewTimeAttendance.EmployeeID
and (month=1 or month =2 or month = 3)
This works, but it counts all employees together, and I want it to return each EmployeeId, EmployeeName and number of days as a new record.
Last try (before you see the code: I know it is wrong, but I am trying):
for i in 0..27 loop
SELECT COUNT(NewEmployee.EmployeeID),NewEmployee.EmployeeId,EmployeeName
FROM NewEmployee INNER JOIN NewTimeAttendance
ON NewEmployee.EmployeeID(i) = NewTimeAttendance.EmployeeID
and (month=1 or month =2 or month = 3)
end loop
I really need help... thanks in advance.
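For what it's worth, a hedged sketch of the grouped form, using the table and column names from the question, that returns one row per employee with the day count for months 1 to 3:
SELECT e.EmployeeID,
       e.EmployeeName,
       COUNT(a.EmployeeID) AS days_worked
FROM NewEmployee e
INNER JOIN NewTimeAttendance a
        ON a.EmployeeID = e.EmployeeID
WHERE a.month IN (1, 2, 3)
GROUP BY e.EmployeeID, e.EmployeeName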
I have a database that already has a users table.
COLUMNS:
userID - int
loginName - string
First - string
Last - string
I just installed the ASP.NET membership tables. Right now all of my tables are joined to my users table, foreign keyed on the "userID" field.
How do I integrate the asp.net_users table into my schema? Here are the ideas I thought of:
Add a membership_id field to my users table and, on new inserts, include that new field in my users table. This seems like the cleanest way, as I don't need to break any existing relationships.
Break all existing relationships and move all of the fields in my users table into the asp.net_users table. This seems like a pain, but ultimately will lead to the simplest, most normalized solution.
Any thoughts?
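A hedged sketch of the first idea, assuming the standard membership schema (dbo.aspnet_Users with a uniqueidentifier UserId column); the constraint and column names are assumptions, not taken from the question:
-- Add a nullable link column and a foreign key to the membership table
ALTER TABLE users ADD membership_id uniqueidentifier NULL;
ALTER TABLE users
    ADD CONSTRAINT FK_users_aspnet_Users
    FOREIGN KEY (membership_id) REFERENCES dbo.aspnet_Users (UserId);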
I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) )
Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()...
Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items, that all need to be committed together. Around 5 billion comparisons there.
So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?
Hello,
Specifications: MySQL 4.1+
I've got a situation that requires a certain result set from a MySQL query. Let's look at the current query first, and then I'll ask my question:
SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
FROM thread AS thread
LEFT JOIN post AS post ON(thread.threadid = post.threadid)
LEFT JOIN forum AS forum ON(thread.forumid = forum.forumid)
WHERE post.postid != thread.firstpostid
AND thread.open = 1
AND thread.visible = 1
AND thread.replycount >= 1
AND post.visible = 1
AND (forum.options & 1)
AND (forum.options & 2)
AND (forum.options & 4)
AND forum.forumid IN(1,2,3)
GROUP BY post.threadid
ORDER BY tdateline DESC, pdateline ASC
As you can see, mainly I need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts, and I need only one result per thread, I've used the GROUP BY clause for that purpose.
This query will return only one post's dateline with its related unique thread.
My questions are:
How do I limit the returned threads per forum? Suppose I need only 5 threads, as a maximum, to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved?
Are there any recommendations for optimizing this query (of course, after solving the first point)?
Notes:
I prefer not to use sub-queries, but if that's the only solution available I'll accept it. Running two separate queries is not preferred. I'm sure there's a smart solution for this situation.
Advice appreciated in advance :)
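A hedged sketch of one way to cap threads per forum in MySQL 4.1+ (no window functions): a user-variable counter over rows ordered by forum, wrapped in a derived table. It does rely on a sub-query in FROM, and the simplified column list below is only illustrative:
SELECT forumid, threadid, tdateline
FROM (
    SELECT t.forumid, t.threadid, t.dateline AS tdateline,
           @rn := IF(@forum = t.forumid, @rn + 1, 1) AS thread_rank,
           @forum := t.forumid AS forum_marker
    FROM thread AS t, (SELECT @rn := 0, @forum := NULL) AS init
    WHERE t.forumid IN (1,2,3)
    ORDER BY t.forumid, t.dateline DESC
) AS ranked
WHERE thread_rank <= 5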
When you run something similar to:
UPDATE table SET datetime = NOW();
on a table with 1 000 000 000 records and the query takes 10 seconds to run, will all the rows have the exact same time (minutes and seconds) or will they have different times? In other words, will the time be when the query started or when each row is updated?
I'm running MySQL, but I'm thinking this applies to all dbs.
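A small illustration of the difference in MySQL: NOW() is fixed at the time the statement begins executing, while SYSDATE() is evaluated each time it is called (SLEEP() just simulates a slow statement):
SELECT NOW(), SLEEP(2), NOW();          -- both NOW() values are identical
SELECT SYSDATE(), SLEEP(2), SYSDATE();  -- the second SYSDATE() is about 2 seconds later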
I have two tables, videos and videos_ratings. The videos table has an int videoid field (and many others, but those fields are not important I think) and many records. The videos_ratings table has 3 int fields: videoid, rating and rated_by, and has many records (multiple records for some videoid values from the videos table), but not for all records from the videos table.
Currently I have the following mysql query:
SELECT `videos`.*, avg(`videos_ratings`.`vote`)
FROM `videos`, `videos_ratings`
WHERE `videos_ratings`.`videoid` = `videos`.`videoid`
GROUP BY `videos_ratings`.`videoid`
ORDER BY RAND() LIMIT 0, 12
It selects all the records from table videos that have a rating in table video_ratings and calculates the average correctly. But what I need is to select all records from the videos table, no matter if there is a rating for that record or not. And if there aren't any records in the videos_ratings table for that particular videos record, the average function should show 0.
Hope someone could understand what I want... :)
Thanks!
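A hedged sketch of the LEFT JOIN form, using the column names described above (note the posted query calls the rating column vote; adjust to whichever name is real): every video appears, and unrated ones get 0 instead of NULL:
SELECT v.*, COALESCE(AVG(r.rating), 0) AS avg_rating
FROM videos v
LEFT JOIN videos_ratings r ON r.videoid = v.videoid
GROUP BY v.videoid
ORDER BY RAND()
LIMIT 0, 12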
Hello,
I want to loop the update statement, but it only loops once.
Here is the code I am using:
do {
mysql_select_db($database_ll, $ll);
$query_query= "update table set ex='$71[1]' where field='val'";
$query = mysql_query($query_query, $ll) or die(mysql_error());
$row_domain_all = mysql_fetch_assoc($query);
} while ($row_query = mysql_fetch_assoc($query));
Thanks
Jean
Here are my tables
respondents:
field sample value
respondentid : 1
age : 2
gender : male
survey_questions:
id : 1
question : Q1
answer : sample answer
answers:
respondentid : 1
question : Q1
answer : 1 --id of survey question
I want to display all respondents who answered a certain survey, display all answers, total all the answers, and group them according to age bracket.
I tried using this query:
SELECT
res.Age,
res.Gender,
answer.id,
answer.respondentid,
SUM(CASE WHEN res.Gender='Male' THEN 1 else 0 END) AS males,
SUM(CASE WHEN res.Gender='Female' THEN 1 else 0 END) AS females,
CASE
WHEN res.Age < 1 THEN 'age1'
WHEN res.Age BETWEEN 1 AND 4 THEN 'age2'
WHEN res.Age BETWEEN 4 AND 9 THEN 'age3'
WHEN res.Age BETWEEN 10 AND 14 THEN 'age4'
WHEN res.Age BETWEEN 15 AND 19 THEN 'age5'
WHEN res.Age BETWEEN 20 AND 29 THEN 'age6'
WHEN res.Age BETWEEN 30 AND 39 THEN 'age7'
WHEN res.Age BETWEEN 40 AND 49 THEN 'age8'
ELSE 'age9'
END AS ageband
FROM Respondents AS res
INNER JOIN Answers as answer ON answer.respondentid=res.respondentid
INNER JOIN Questions as question ON answer.Answer=question.id
WHERE answer.Question='Q1' GROUP BY ageband ORDER BY res.Age ASC
I was able to get the data, but the listing of all answers is not present. Do I have to add a SELECT subquery to my current SELECT statement to show the answers?
I want to produce something like this:
e.g. # of respondents is 3; ages: 2, 3 and 6
Question: what are your favorite subjects?
Ages 1-4:
subject 1: 1
subject 2: 2
subject 3: 2
total respondents for ages 1-4 : 2
Ages 5-10:
subject 1: 1
subject 2: 1
subject 3: 0
total respondents for ages 5-10 : 1
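A hedged sketch of one way to get the per-answer breakdown: group by both the age band and the answer, so each subject gets its own count within each bracket (table and column names follow the query above; the brackets are shortened for the example):
SELECT CASE
         WHEN res.Age BETWEEN 1 AND 4  THEN 'Ages 1-4'
         WHEN res.Age BETWEEN 5 AND 10 THEN 'Ages 5-10'
         ELSE 'Other'
       END               AS ageband,
       question.question AS subject,
       COUNT(*)          AS answers
FROM Respondents AS res
INNER JOIN Answers   AS answer   ON answer.respondentid = res.respondentid
INNER JOIN Questions AS question ON answer.Answer = question.id
WHERE answer.Question = 'Q1'
GROUP BY ageband, subject
ORDER BY ageband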
I think the best way to explain this is to tell you what I have.
I have two tables, A and B; both have columns Field1 and Field2. However, Field2 is not populated in table B.
I want to populate Field2 of table B with Field2 of table A where Field1 of table A matches Field1 of table B.
Something like: update tableB set Field2 = tableA.Field2 where tableA.Field1 = tableB.Field1.
The reason this may seem so odd and obscure is that I'm trying to do an initial data load from an old database to a new one.
Please let me know if you need clarification.
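As a sketch, this is the MySQL-style multi-table UPDATE; the exact syntax differs between databases (SQL Server, for example, uses UPDATE ... FROM):
UPDATE tableB b
JOIN tableA a ON a.Field1 = b.Field1
SET b.Field2 = a.Field2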
Hi,
I want to execute the following query using Subsonic:
SELECT MAX([restore_date]) FROM [msdb].[dbo].[restorehistory]
While the aggregate part is easy for me, the problem is with the name of the table. How can I force Subsonic to select from a different database than the default one?
So I'm trying to take a search string (which could be any number of words) and turn each value into a list to use in the following IN statement. In addition, I need a count of all these values to use with my HAVING COUNT filter.
$search_array = explode(" ",$this->search_string);
$tag_count = count($search_array);
$db = Connect::connect();
$query = "select p.id
from photographs p
left join photograph_tags c
on p.id = c.photograph_id
and c.value IN ($search_array)
group by p.id
having count(c.value) >= $tag_count";
This currently returns no results, any ideas?
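For what it's worth, interpolating a PHP array directly into a string yields the literal word "Array", so the IN clause never matches anything. What the final SQL needs to contain is a quoted, comma-separated list built from the search terms, along the lines of (example values only):
SELECT p.id
FROM photographs p
LEFT JOIN photograph_tags c
       ON p.id = c.photograph_id
      AND c.value IN ('sunset', 'beach')
GROUP BY p.id
HAVING COUNT(c.value) >= 2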
I need to get the most recent record for each device from an upgrade request log table. A device is unique based on a combination of its hardware ID and its MAC address. I have been attempting to do this with GROUP BY but I am not convinced this is safe since it looks like it may be simply returning the "top record" (whatever SQLite or MySQL thinks that is).
I had hoped that this "top record" could be hinted at by way of ORDER BY, but that does not seem to be having any impact, as both of the following queries return the same records for each device, just in opposite order:
SELECT extHwId,
mac,
created
FROM upgradeRequest
GROUP BY extHwId, mac
ORDER BY created DESC
SELECT extHwId,
mac,
created
FROM upgradeRequest
GROUP BY extHwId, mac
ORDER BY created ASC
Is there another way to accomplish this? I've seen several somewhat related posts that have all involved sub-selects. If possible, I would like to do this without subselects, as I would like to learn how.
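A hedged sketch of a subselect-free alternative: a self-join that keeps a row only when no newer row exists for the same extHwId/mac pair:
SELECT u.extHwId, u.mac, u.created
FROM upgradeRequest u
LEFT JOIN upgradeRequest newer
       ON newer.extHwId = u.extHwId
      AND newer.mac     = u.mac
      AND newer.created > u.created
WHERE newer.created IS NULL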
How do I apply an update after an insert or update in PostgreSQL? I have a table which has a field lastupdate; I want that field to be set whenever the row is updated or inserted.
I tried this trigger, but it is not working! HELP!!
CREATE OR REPLACE FUNCTION fn_update_profile()
RETURNS TRIGGER AS $update_profile$
BEGIN
    IF (TG_OP = 'INSERT' OR TG_OP = 'UPDATE') THEN
        UPDATE profile SET lastupdate = now() WHERE oid = OLD.oid;
        RETURN NULL;
    ELSEIF (TG_OP = 'DELETE') THEN
        RETURN NULL;
    END IF;
    RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$update_profile$ LANGUAGE plpgsql;
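For comparison, the usual pattern is a BEFORE trigger that assigns to NEW directly, so no extra UPDATE (and no risk of recursion) is needed; a hedged sketch:
CREATE OR REPLACE FUNCTION fn_set_lastupdate() RETURNS trigger AS $$
BEGIN
    NEW.lastupdate := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_profile_lastupdate
BEFORE INSERT OR UPDATE ON profile
FOR EACH ROW EXECUTE PROCEDURE fn_set_lastupdate();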
I have a column whose value is a json array. For example:
[{"att1": "1", "att2": "2"}, {"att1": "3", "att2": "4"}, {"att1": "5", "att2": "6"}]
What I would like is to provide a view where each element of the JSON array is transformed into a row and the attributes of each JSON object into columns. Keep in mind that the JSON array doesn't have a fixed size.
Any ideas on how I can achieve this?
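A hedged sketch assuming PostgreSQL with a jsonb column (here called data, on a hypothetical table t with an id key); other databases expose different JSON functions:
CREATE VIEW t_expanded AS
SELECT t.id,
       elem ->> 'att1' AS att1,
       elem ->> 'att2' AS att2
FROM t
CROSS JOIN LATERAL jsonb_array_elements(t.data) AS elem;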
My table looks like this with duplicates in col1
col1, col2, col3, col4
1, 1, 0, a
1, 2, 1, a
1, 3, 1, a
2, 4, 1, b
3, 5, 0, c
I want to select distinct col1 with max(col3) and min(col2);
so the result set will be:
col1, col2, col3, col4
1, 2, 1, a
2, 4, 1, b
3, 5, 0, c
I have a solution, but I am looking for better ideas.
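A hedged sketch of one reading that matches the sample output: per col1, keep only the rows carrying the maximum col3, then take the minimum col2 among those (mytable is a placeholder name):
SELECT t.col1, MIN(t.col2) AS col2, t.col3, MIN(t.col4) AS col4
FROM mytable t
JOIN (SELECT col1, MAX(col3) AS max_col3
      FROM mytable
      GROUP BY col1) m
  ON m.col1 = t.col1
 AND m.max_col3 = t.col3
GROUP BY t.col1, t.col3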
This is my current query; it's not getting the required result. I want it to display all of the "resources" even if they don't have a connection.
SELECT *
FROM (`user_permissions`)
JOIN `user_groups` ON `user_groups`.`id` = `user_permissions`.`role`
JOIN `user_resources` ON `user_resources`.`id` = `user_permissions`.`resource`
WHERE `role` = '4'
When I try a LEFT JOIN or RIGHT JOIN, it still returns the same result. The result I get is:
id | role | resource | name
5 | 4 | 2 | Changelog
I want
id | role | resource | name
5 | 4 | 2 | Changelog
null | null | null | Resource2
null | null | null | Resource3
Is this possible?
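A hedged sketch of why the LEFT JOIN made no difference: with user_permissions as the driving table and the role filter in the WHERE clause, unmatched resources are dropped. Driving the query from user_resources and moving the filter into the join condition keeps them:
SELECT p.id, p.role, p.resource, r.name
FROM user_resources r
LEFT JOIN user_permissions p
       ON p.resource = r.id
      AND p.role = '4'
LEFT JOIN user_groups g
       ON g.id = p.role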
Given this result-set:
mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
-> JOIN slip s ON s.cust_id = c.cust_id
-> JOIN line l ON l.slip_id = s.slip_id
-> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
-> GROUP BY c.cust_name
-> HAVING SUM(l.line_subtotal) > 49999
-> ORDER BY c.cust_name;
+----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
| 1 | SIMPLE | v | ref | PRIMARY,idx_vend_name | idx_vend_name | 12 | const | 1 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | l | ref | idx_vend_id | idx_vend_id | 4 | csv_import.v.vend_id | 446 | |
| 1 | SIMPLE | s | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY | 4 | csv_import.l.slip_id | 1 | |
| 1 | SIMPLE | c | eq_ref | PRIMARY,cIndex | PRIMARY | 4 | csv_import.s.cust_id | 1 | |
+----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
4 rows in set (0.04 sec)
I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
I have a good, working, valid, non-corrupted database in MSSQL that I want to revert to a point in time.
How is that done?
The standard RESTORE command requires a full backup as a starting point, and then log backups thereafter.
I can't understand why this must be done from a backup. If my DB is good and the logs are OK, why can't I just revert with a STOPAT from the live logs in the DB?
One DBA suggested that whenever I want to restore, I should THEN make a log backup and then RESTORE with STOPAT. I believe it would work, but it sounds a little backwards.
Any better ideas?
Thank you very much.
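For reference, a hedged sketch of the tail-log approach the DBA described (database name, paths and timestamp are placeholders); it still needs an existing full backup to restore over, which is exactly the constraint being questioned:
BACKUP LOG MyDb TO DISK = 'C:\backups\MyDb_tail.trn' WITH NORECOVERY;  -- tail-log backup
RESTORE DATABASE MyDb FROM DISK = 'C:\backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'C:\backups\MyDb_tail.trn'
    WITH STOPAT = '2010-01-15 14:30:00', RECOVERY;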
Today my problem is this: I have 2 columns and I want to check that the sum of those columns isn't higher than a value (485, for example), and if so, run a query... I thought to do
SELECT * FROM table WHERE ColumnA+ColumnB<485
But it isn't working... I've already tried
SELECT Sum(ColumnA)+Sum(ColumnB) AS Total FROM table
but it gives me 1 column with the sum of all rows; I instead want a row for every sum. So how can I do it? I hope you understood; if not, just ask and I'll try to explain it better! And thanks in advance to whoever wants to help me!
EDIT: I found it out: the problem was that the columns were SMALLINT and the result for 1 or more rows was more than 32k, so it wasn't working! Thanks all!!
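Given that finding, a hedged sketch of the overflow-safe version: widen the operands before adding so the per-row sum can exceed the SMALLINT range (table and column names as in the question):
SELECT * FROM table
WHERE CAST(ColumnA AS INT) + CAST(ColumnB AS INT) < 485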