Search Results

Search found 10101 results on 405 pages for 'temporary tables'.

  • Using PHP variables inside SQL statements?

    - by Homer
    For some reason I can't pass a variable inside a MySQL statement. I have a function that can be used for multiple tables, so instead of repeating the code I want to change which table is selected from, like so:

        function show_all_records($table_name) {
            mysql_query("SELECT * FROM $table_name");
            // etc, etc...
        }

    To call the function I use show_all_records("some_table") or show_all_records("some_other_table"), depending on which table I want to select from at the moment. But it's not working. Is this because variables can't be passed into MySQL statements?

  • In which order do I have to simplify this Boolean expression?

    - by user3662105
    I have to simplify this Boolean expression, but I find it quite difficult since I don't know which order to start with. The expression is:

        (x iff y) or (y iff z) if x

    I find it a little complicated to simplify since I don't know the order. Do I have to read it as:

        ((x iff y) or (y iff z)) if x

    or as:

        (x iff y) or ((y iff z) if x)

    If you can give me a pointer on this part of Boolean algebra, and the steps for simplifying it, I will really appreciate it. I should say that I have already tried a lot: I solved it by hand, tried Wolfram Alpha and other tools, and compared the results using truth tables, but I get different results and don't know which is right. Thanks in advance for your help.
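
    A worked reading, assuming the usual precedence convention in which "iff" and "if" bind more loosely than "or" (so the first parse is the intended one), and recalling that "B if A" means A implies B:

        x \to \bigl((x \leftrightarrow y) \lor (y \leftrightarrow z)\bigr)
            \equiv \lnot x \lor (x \leftrightarrow y) \lor (y \leftrightarrow z)
            \equiv \lnot x \lor y \lor (y \leftrightarrow z)    % when x is false the formula holds, so x may be taken true inside x <-> y
            \equiv \lnot x \lor y \lor \lnot z                  % when y is true the formula holds; with y false, y <-> z reduces to \lnot z

    So under that convention the whole expression reduces to \lnot x \lor y \lor \lnot z, equivalently (x \land z) \to y, which a truth table confirms.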

  • A better way to build this MySQL statement with subselects

    - by Corey Maass
    I have five tables in my database: members, items, comments, votes and countries. I want to get 10 items, with the count of comments and votes for each item, plus the member who submitted each item and the country they are from. After posting here and elsewhere, I started using subselects to get the counts, but this query is taking 10 seconds or more!

        SELECT `items_2`.*,
               (SELECT COUNT(*) FROM `comments`
                WHERE (comments.Script = items_2.Id) AND (comments.Active = 1)) AS `Comments`,
               (SELECT COUNT(votes.Member) FROM `votes`
                WHERE (votes.Script = items_2.Id) AND (votes.Active = 1)) AS `votes`,
               `countrys`.`Name` AS `Country`
        FROM `items` AS `items_2`
        INNER JOIN `members` ON items_2.Member = members.Id AND members.Active = 1
        INNER JOIN `members` AS `members_2` ON items_2.Member = members.Id
        LEFT JOIN `countrys` ON countrys.Id = members.Country
        GROUP BY `items_2`.`Id`
        ORDER BY `Created` DESC
        LIMIT 10

    My question is whether this is the right way to do this, whether there's a better way to write this statement, or whether there's a whole different approach that would be better. Should I run the subselects separately and aggregate the information?
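
    One commonly suggested rewrite, offered as a sketch under the same schema (the derived-table aliases are illustrative), aggregates each count once in a derived table instead of running a correlated subquery per row:

        SELECT i.*,
               IFNULL(c.cnt, 0) AS `Comments`,
               IFNULL(v.cnt, 0) AS `Votes`,
               countrys.Name AS `Country`
        FROM items AS i
        INNER JOIN members AS m ON m.Id = i.Member AND m.Active = 1
        LEFT JOIN countrys ON countrys.Id = m.Country
        LEFT JOIN (SELECT Script, COUNT(*) AS cnt
                   FROM comments WHERE Active = 1
                   GROUP BY Script) AS c ON c.Script = i.Id
        LEFT JOIN (SELECT Script, COUNT(*) AS cnt
                   FROM votes WHERE Active = 1
                   GROUP BY Script) AS v ON v.Script = i.Id
        ORDER BY i.Created DESC
        LIMIT 10;

    This also drops the GROUP BY on the outer query, since nothing in it aggregates any more.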

  • Domain-driven design with Zend

    - by mik
    This question is a continuation of my previous question here http://stackoverflow.com/questions/2122850/zend-models-architecture (big thanks to Bill Karwin). I've done some reading, including this article http://weierophinney.net/matthew/archives/202-Model-Infrastructure.html and this question http://stackoverflow.com/questions/373054/how-to-properly-create-domain-using-zend-framework. Now I understand what domain-driven design is, but the examples are still very simple and poor: they are based on one table and one model. My question is: is domain model design used in real-world PHP projects? I've been looking for good documentation about this, but I haven't found anything that explains well enough how to manage several tables and map them to domain objects. As far as I know, Java has the Hibernate library with these features, but what should I use in PHP (Zend Framework)?

  • How to add Transactions with a DataSet created using the Add Connection Wizard?

    - by RoguePlanetoid
    I have a DataSet that I have added to my project, and I can insert and add records using the Add Query function in Visual Studio 2010. However, I want to add transactions to this; I have found a few examples but cannot seem to find one that works with this setup. I know I need to use the SqlClient.SqlTransaction class somehow. I used the Add New Data Source wizard and added the tables/views/functions I need; I just need an example that follows this process, such as how to get the connection my DataSet has used. Assuming all options have been set in the wizard, and that I am only using the pre-defined adapters and options the wizard asks for, how do I add the transaction logic? For example, I have a DataSet called ProductDataSet with the XSD created for it. I have added my Stock table as a data source and added an AddStock method with the wizard; for a new item this also calls an AddItem method. If either of these fails, I want to roll back both AddItem and AddStock.

  • SQL Server indexed view

    - by Jose
    OK, I'm confused about SQL Server indexed views (using 2008). I've got an indexed view called AssignmentDetail. When I look at the execution plan for

        SELECT * FROM AssignmentDetail

    it shows the execution plan of all the underlying indexes of all the other tables that the indexed view is supposed to abstract away. I would think that the execution plan would simply be a clustered index scan of PK_AssignmentDetail (the name of the clustered index for my view), but it isn't. There seems to be no performance gain with this indexed view. What am I supposed to do? Should I also create a non-clustered index with all of the columns so that it doesn't have to hit all the other indexes? Any insight would be greatly appreciated.
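
    One likely explanation, offered as an assumption to verify rather than a diagnosis: outside Enterprise and Developer editions, SQL Server expands an indexed view back to its base tables unless the query references the view with the NOEXPAND hint. A minimal sketch:

        SELECT *
        FROM AssignmentDetail WITH (NOEXPAND);

    With the hint in place, the plan should become a scan of PK_AssignmentDetail rather than of the underlying tables' indexes.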

  • Preview result of update/insert query without committing changes to database in MySQL?

    - by Camsoft
    I am writing a script to import CSV files into existing tables within my database. I decided to do the insert/update operations myself using PHP and INSERT/UPDATE statements rather than MySQL's LOAD DATA INFILE command; I have good reasons for this. What I would like to do is emulate the insert/update operations and display the results to the user, then give them the option of confirming that this is OK before committing the changes to the database. I'm using the InnoDB engine, so transactions are supported. Not sure if this helps, but I was thinking along the lines of: insert/update, query the data, display it to the user, then either commit or roll back the transaction? Any advice would be appreciated.
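
    A minimal sketch of that flow in plain SQL (the table and column names are made up; the PHP side would issue these statements over one connection, keeping the transaction open until the user answers):

        START TRANSACTION;

        INSERT INTO import_target (code, label) VALUES ('A1', 'from csv');
        UPDATE import_target SET label = 'updated from csv' WHERE code = 'B2';

        -- Inside the open transaction this SELECT sees the pending changes:
        SELECT * FROM import_target WHERE code IN ('A1', 'B2');

        -- Then, depending on the user's confirmation:
        ROLLBACK;   -- or COMMIT;

    One caveat on the design: holding an InnoDB transaction open while waiting for user input keeps row locks alive, so staging the rows into a separate preview table is a safer variant when the confirmation step may take a while.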

  • How to check if a statistic is auto-created in a SQL Server 2000 DB using T-SQL?

    - by The Shaper
    Hi all. A while back I had to come up with a way to clean up all indexes and user-created statistics from some tables in a SQL Server 2005 database. After a few attempts it worked, but now I have to get it working on SQL Server 2000 databases as well. For SQL Server 2005, I used

        SELECT Name FROM sys.stats
        WHERE object_id = OBJECT_ID(@tableName) AND auto_created = 0

    to fetch statistics that were user-created. However, SQL Server 2000 doesn't have a sys.stats view. I managed to fetch the indexes and statistics in a distinguishable way from the sysindexes table, but I just couldn't figure out what the equivalent of sys.stats.auto_created is in SQL 2000. Any pointers? BTW: T-SQL please.
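
    A sketch of the SQL Server 2000 equivalent, using INDEXPROPERTY against sysindexes (assuming @tableName as in the 2005 query): rows for which 'IsStatistics' is 1 are statistics rather than indexes, and 'IsAutoStatistics' marks the auto-created ones, so user-created statistics would be:

        SELECT name
        FROM sysindexes
        WHERE id = OBJECT_ID(@tableName)
          AND INDEXPROPERTY(id, name, 'IsStatistics') = 1
          AND INDEXPROPERTY(id, name, 'IsAutoStatistics') = 0

    As a cross-check, auto-created statistics normally have names starting with _WA_Sys_.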

  • How to run a loop of queries in Access?

    - by tksy
    Hi, I have a database with a table which is full of conditions and error messages for checking another database. I want to run a loop such that each of these conditions is checked against all the tables in the second database, generating a report which lists the errors. Is this possible in MS Access? For example, the querycrit table:

        id | query                   | error
        ---+-------------------------+-------------------
        1  | speed<25 and speed>56   | speed above limit
        2  | dist<56 or dist>78      | dist within limit

    I have more than 400 queries like this on different variables. The table against which I am running the queries is the records table:

        id | speed | dist | accce | decele | aaa | bbb | ccc
        ---+-------+------+-------+--------+-----+-----+----
        1  | 33    | 34   | 44    | 33     | 33  | 33  | 33
        2  | 45    | 44   | 55    | 55     | 55  | 22  | 23

    regards ttk

  • Weighted Average with LINQ

    - by jsmith
    My goal is to get a weighted average from one table, based on another table's primary key. Example data:

        Table1
        Key  | WEIGHTED_AVERAGE
        0200 | 0

        Table2
        ForeignKey | LENGTH | PCR
        0200       | 105    | 52
        0200       | 105    | 60
        0200       | 105    | 54
        0200       | 105    | -1
        0200       | 47     | 55

    I need to get a weighted average based on the length of a segment, and I need to ignore values of -1. I know how to do this in SQL, but my goal is to do it in LINQ. It looks something like this in SQL:

        SELECT SUM(t2.PCR * t2.LENGTH) / SUM(t2.LENGTH) AS WEIGHTED_AVERAGE
        FROM Table1 t1, Table2 t2
        WHERE t2.PCR <> -1
          AND t2.ForeignKey = t1.Key;

    I am still pretty new to LINQ and having a hard time figuring out how I would translate this. The resulting weighted average should come out to roughly 55.3. Thank you.
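
    As a sanity check, the expected result follows from the sample rows once the -1 row is excluded:

        \frac{105 \cdot 52 + 105 \cdot 60 + 105 \cdot 54 + 47 \cdot 55}{105 + 105 + 105 + 47}
            = \frac{20015}{362} \approx 55.29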

  • Sphinx non-fulltext, integer only search

    - by James
    Hello guys, I've got a few tables that literally only hold integers, no "words", and for some reason Sphinx is unable to hold this data in its index; it just returns "0 bytes" errors for these indexes. Is it possible to do this? If so, how? Below is an excerpt from my sphinx.conf for one source that fails:

        source track
        {
            type            = mysql
            sql_host        = host
            sql_user        = user
            sql_pass        = pass
            sql_db          = db
            sql_port        = port
            sql_query       = SELECT id, user, time FROM track;
            sql_attr_uint   = user
            sql_attr_uint   = time
            sql_query_info  = SELECT * FROM track WHERE id=$id
        }

        index track
        {
            source          = track
            path            = /var/lib/sphinx/track
            docinfo         = extern
            charset_type    = utf-8
            min_prefix_len  = 1
            enable_star     = 1
        }
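
    One possible cause, offered as an assumption rather than a confirmed diagnosis: Sphinx builds full-text indexes, and this source declares every non-id column as an attribute, leaving no text field at all to index. A common workaround is to synthesize a dummy text field in the source query and leave it undeclared as an attribute, e.g.:

        SELECT id, CAST(id AS CHAR) AS id_text, user, time FROM track;

    The column name id_text is illustrative; with at least one field present, the indexer has something to write besides attributes.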

  • Will SQL Server Partitioning increase performance without changing filegroups

    - by Tom
    Scenario: I have a 10 million row table. I partition it into 10 partitions, which results in 1 million rows per partition, but I do not do anything else (like move the partitions to different filegroups or spindles). Will I see a performance increase? Is this in effect like creating 10 smaller tables? If I have queries that perform key lookups or scans, will performance increase as if they were operating against a much smaller table? I'm trying to understand how partitioning differs from just having a well-indexed table, and where it can be used to improve performance. Would a better scenario be to move the old data (using partition switching) out of the primary table into a read-only archive table? Is having a table with a 1 million row partition and a 9 million row partition analogous (performance-wise) to moving the 9 million rows to another table and leaving only 1 million rows in the original table?
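
    For concreteness, a sketch of the single-filegroup setup being described (T-SQL; the names and boundary values are illustrative):

        CREATE PARTITION FUNCTION pf_BigTable (INT)
            AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000, 4000000,
                                       5000000, 6000000, 7000000, 8000000, 9000000);

        CREATE PARTITION SCHEME ps_BigTable
            AS PARTITION pf_BigTable ALL TO ([PRIMARY]);

        CREATE CLUSTERED INDEX cix_BigTable ON dbo.BigTable (Id) ON ps_BigTable (Id);

    All ten partitions land on the same filegroup here, so any gain has to come from partition elimination and partition-level operations (such as switching), not from spreading I/O.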

    Read the article

  • SQL Server 2005 remote connection problem, cannot solve it, help please, thank you

    - by user287745
    Note: if this question does not fit this site, please do not just close it but redirect it to the appropriate sister site, thank you. The steps taken and the error are below; please help, I am stuck!

        1. Installed SQL Server 2005 Express on both computers.
        2. Installed SQL Server Management Studio Express on both computers.
        3. Ran each Management Studio and connected to the SQLEXPRESS instance using Windows authentication (one computer's connection, for example, is "A-63A9D4D7E7834\SQLEXPRESS").
        4. Created a database named "test1", created a few tables with data, saved and exited.
        5. Did everything this article says: "How to configure SQL Server 2005 to allow remote connections", http://support.microsoft.com/kb/914277/en-us, but I have also just disabled the firewalls completely.

    Then, trying to connect from computer A-63A9D4D7E7834: started SQL Server Management Studio Express with server name "ALL-E425BE6C41D\SQLEXPRESS" and authentication "Windows Authentication", clicked Connect, and I GET THE FOLLOWING ERROR:

        Cannot connect to ALL-E425BE6C41D\SQLEXPRESS.
        ADDITIONAL INFORMATION:
        Login failed for user 'ALL-E425BE6C41D\Guest'. (Microsoft SQL Server, Error: 18456)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18456&LinkId=20476
        BUTTONS: OK HELP

  • Query to return internal details about stored function in SQL Server database

    - by Anthony
    I have been given access to a SQL Server database that is currently used by a 3rd party app. As such, I don't have any documentation on how that application stores or retrieves its data. I can figure a few things out based on the names of various tables and the parameters that the user-defined functions take and return, but I'm still getting errors at every turn. It would be really helpful if I could see what the stored functions are doing with the parameters given to produce the output. Right now all I've been able to figure out is how to query for the input parameters and the output columns. Is there any built-in INFORMATION_SCHEMA view that will expose what the function is doing between input and output?
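
    The function body itself is exposed through the catalog. A sketch, assuming a function named dbo.MyFunction:

        -- Standard, but truncated to the first 4000 characters:
        SELECT ROUTINE_DEFINITION
        FROM INFORMATION_SCHEMA.ROUTINES
        WHERE ROUTINE_NAME = 'MyFunction';

        -- Complete definition (SQL Server 2005 and later):
        SELECT definition
        FROM sys.sql_modules
        WHERE object_id = OBJECT_ID('dbo.MyFunction');

    Either of these returns the CREATE FUNCTION text, provided the function was not created WITH ENCRYPTION.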

  • Database Modeling - Either/Or in Many-to-Many

    - by EkoostikMartin
    I have an either/or type of situation in a many-to-many relationship I'm trying to model. So I have these tables:

        Message
        ----
        *MessageID
        MessageText

        Employee
        ----
        *EmployeeID
        EmployeeName

        Team
        ----
        *TeamID
        TeamName

        MessageTarget
        ----
        MessageID
        EmployeeID (nullable)
        TeamID (nullable)

    So, a Message can have either a list of Employees or a list of Teams as a MessageTarget. Is the MessageTarget table I have above the best way to implement this relationship? What constraints can I place on MessageTarget effectively? How should I create a primary key on the MessageTarget table?
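
    One way to enforce the either/or rule in the schema itself, sketched in SQL Server syntax, with a surrogate key standing in for the awkward primary key over nullable columns:

        CREATE TABLE MessageTarget (
            MessageTargetID INT IDENTITY PRIMARY KEY,   -- surrogate key
            MessageID  INT NOT NULL REFERENCES Message (MessageID),
            EmployeeID INT NULL REFERENCES Employee (EmployeeID),
            TeamID     INT NULL REFERENCES Team (TeamID),
            -- exactly one of EmployeeID / TeamID must be set:
            CHECK ((EmployeeID IS NOT NULL AND TeamID IS NULL)
                OR (EmployeeID IS NULL AND TeamID IS NOT NULL))
        );

    A UNIQUE constraint over (MessageID, EmployeeID, TeamID) could be added on top to prevent duplicate targets.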

  • Using memtables in SQL: when is it reasonable, and is it safe?

    - by Spiros
    I was just reading an update from a friend's project which mentions using memtables to store data temporarily and then flush it to a table on disk. Up to now, I have never faced a situation where I would use a memtable, or where I would think its use would be beneficial, so I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temp data? There is always the limitation of available physical memory.
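
    In MySQL terms, the pattern being described maps to the MEMORY storage engine. A sketch under assumed table names:

        CREATE TABLE track_buffer (
            id      INT NOT NULL PRIMARY KEY,
            payload VARCHAR(255)
        ) ENGINE = MEMORY;

        -- Periodically flush to a durable InnoDB table, then clear the buffer:
        INSERT INTO track_archive SELECT * FROM track_buffer;
        TRUNCATE TABLE track_buffer;

    The safety trade-off is exactly the one the question raises: contents vanish on a server restart or crash, and the table is capped by max_heap_table_size, so it only suits data that can be regenerated or whose loss is acceptable.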

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance but provides better performance (since the compiler can inline things, etc.), while virtual functions are very good for code maintenance but incur a performance penalty of two pointer indirections. Say I have a base class with two subclasses (X and Y) and one virtual function, so there will be two virtual tables. The object has a pointer, based on which it will choose a virtual table. So for the compiler, it is much like:

        switch( object's function ptr )
        {
            case 0x....: X->call(); break;
            case 0x....: Y->call();
        };

    So why should a virtual function cost more, if it can be implemented this way, since the compiler can do the same inlining and other optimizations here? Or explain to me why it was decided not to implement virtual function execution this way. Thanks, Gokul.

  • Entity Framework doesn't like 0..1 to * relationships.

    - by Orion Adrian
    I have a database where I have two tables. The first table has a single column that is an identity and primary key. The second table contains two columns: one is an nvarchar primary key and the other is a nullable foreign key to the first table. On the default import of the database I get the following error:

        Condition cannot be specified for Column member 'ForeignKeyId' because it is marked with a 'Computed' or 'Identity' StoreGeneratedPattern.

    where ForeignKeyId is the foreign key reference in the second table. Is this just something the entity model doesn't do, or am I missing something?

  • Finding the underlying cause of Windows 7 account corruption.

    - by Carl Jokl
    I have been having trouble with my sister's computer, which I built. It is running Windows 7 Ultimate x64. The problem is that the user accounts keep becoming corrupted. The problem first manifests itself with Windows saying the profile failed to load properly and logging her into a temporary profile. Eventually the account will not allow login at all, with an error message along the lines of the authentication service failing the login.

    I have found information about this problem and how to fix it: something corrupts the account profile, and backing up and recreating the accounts fixes the problem. I have been able to fix things and get logins working again, but over a period of usually about a week it happens again. Bit by bit the accounts corrupt, and then it is back to square one.

    I am frustrated because I don't know the underlying cause of the problem, i.e. what is corrupting the accounts in the first place; at the moment I am just treating the symptoms. I was hoping someone with more experience of this problem might be able to help me find the root cause. Some articles suggest that Norton Internet Security, which is installed, is a big culprit of this problem; I could try uninstalling Norton and see if it helps.

    The one thing that is different about this computer from any other I have built is that it has a solid state drive. Actually, it has both a hard drive and a solid state drive. The documents and settings, i.e. the Users directory, are stored on the hard drive. This was done following an article I found on the Internet about moving the user account data onto a separate drive in Windows 7. Moving the user accounts is more of a pain under Windows 7, and this solution involved creating a low-level file system link from the boot drive (solid state) to the folder on the hard drive. The idea is that the computer behaves just as if it is accessing the Users folder from the boot drive, but the data is actually stored on the hard drive. This may have nothing to do with the cause of the problem, but because the problem is user account corruption it is a possibility I have not been able to rule out. Any help would be appreciated, as I would be glad to see the back of this problem.

  • Core Data => Adding a related object is always nil

    - by mongeta
    Hello, I have two related tables: DataEntered and Model.

        DataEntered
        -currentModel

        Model

    One DataEntered can have only ONE Model, but a Model can belong to many DataEntered objects. The relationship goes from DataEntered to Model (no to-many relationship) with no inverse relationship. Xcode generates these setters for the DataEntered model:

        @property (nonatomic, retain) NSSet *current_model;
        - (void)addCurrent_modelObject:(CarModel *)value;
        - (void)addCurrent_model:(NSSet *)value;

    I have a table, and when I select a model I want to store it in DataEntered:

        Model *model = [fetchedResultsController objectAtIndexPath:indexPath];
        NSLog(@"Model %@", model.name); // ==> gives me the correct model name
        [dataEntered addCurrent_modelObject:model]; // ==> always nil
        [dataEntered setCurrent_model:[fetchedResultsController objectAtIndexPath:indexPath]]; // the same, always nil

    What am I doing wrong????? thanks, r.

  • Managing Foreign Keys

    - by jwzk
    So I have a database with a few tables. The first table contains the user ID, first name and last name. The second table contains the user ID, interest ID, and interest rating. Another table holds all of the interest IDs. For every interest ID (including newly added ones), I need to make sure that each user has an entry for that interest ID (even if it's blank, or has defaults). Will foreign keys help with this scenario, or will I need to use PHP to update each and every record when I add a new key?
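
    Foreign keys only validate that a referenced row exists; they never create rows. So adding a new interest still needs a backfill, though it can be a single statement rather than per-record PHP updates. A sketch with assumed table and column names:

        -- Give every user a default entry for a newly added interest (id 42 here):
        INSERT INTO user_interests (user_id, interest_id, rating)
        SELECT u.user_id, 42, 0
        FROM users u
        WHERE NOT EXISTS (SELECT 1
                          FROM user_interests ui
                          WHERE ui.user_id = u.user_id
                            AND ui.interest_id = 42);

    The alternative design is to not store the blank rows at all and treat a missing row as the default when reading.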

  • Synchronising local and remote DB

    - by nico
    Hi everyone, I have a general question about DB synchronisation. I'm developing a website locally (PHP + MySQL) and I would like to be able to synchronise at least the structure (and maybe the contents) of the two databases when one of the two is changed (normally I would change the local copy). Right now what I'm doing is using mysqldump to dump the modified tables and then importing them into the remote DB, or doing it by hand if the changes are minimal. However, I find this tedious and error-prone. For the PHP I'm currently using Quanta+, which has the handy feature of finding files that have changed and uploading just those. Is there something similar for MySQL? Otherwise, how do you keep your DBs synchronised? Thanks, nico. PS: I'm sorry if this was already asked; I saw other questions that deal with similar topics, but couldn't really find an answer.

  • Kohana 3 ORM limitations

    - by yoda
    Hi, what are the limitations of Kohana 3 ORM regarding table relationships? I'm trying to modify the built-in Auth module to accept groups of users in addition, having now the following tables:

        groups
        groups_users
        roles
        roles_groups
        users
        user_tokens

    By default, this module is set up to work without groups, linking users and roles with a third table named roles_users, but I need to add groups to it. As you can see from the table names, I'm linking groups to users and roles to groups, but I'm failing at building the ORM code for it. So that's pretty much the question: is the ORM limited to two levels of relationships, or can it handle three in this case? Cheers!

  • Importing/Exporting Relationships in MS Access

    - by lamcro
    I have a couple of .mdb files with exactly the same table structure. I have to change the primary key of the main table from an autonumber to a number in all of them, which means I have to:

        1. Drop all the relationships the main table has
        2. Change the main table
        3. Create the relationships again... for all the tables

    Is there any way to export the relationships from one file and import them into all the rest? I am sure this can be done with some macro/VB code. Does anyone have an example I could use? Thanks.

  • Creating a proper CMS: thoughts

    - by dallasclark
    I'm just about to expand the functionality of our own CMS, but I was thinking of restructuring the database to make it simpler to add/edit data types and values. Currently, the CMS is quite flat: it requires a field in the database for every type of stored value. The first option that comes to mind is simply a table which keeps the data types (e.g. Address 1, Suburb, Email Address) and another table which holds values for each of those data types. Just as WordPress keeps values in the 'options' table, serialize would be used to store an array of values. The second option is how Drupal works: the CMS creates tables for every data type. Unlike the WordPress approach this can be a bit of an overkill, but it is really useful for SQL queries when ordering and grouping by a particular value. What are everyone's thoughts?
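
    The first option is essentially an entity-attribute-value layout. A minimal sketch in MySQL (all names illustrative); storing one row per value, instead of one serialized array per record, keeps the values usable in WHERE, ORDER BY and GROUP BY:

        CREATE TABLE field_types (
            field_id INT AUTO_INCREMENT PRIMARY KEY,
            name     VARCHAR(64) NOT NULL    -- e.g. 'Address 1', 'Suburb', 'Email Address'
        );

        CREATE TABLE field_values (
            entity_id INT NOT NULL,          -- the record the value belongs to
            field_id  INT NOT NULL,
            value     TEXT,
            PRIMARY KEY (entity_id, field_id),
            FOREIGN KEY (field_id) REFERENCES field_types (field_id)
        );

    The usual caveat with this layout is that every value is stored as text, so type checking and per-field indexing are weaker than in the Drupal-style one-table-per-type approach.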
