Search Results

Search found 19966 results on 799 pages for 'datetime query'.


  • Recommended approach for error handling with PHP and MYSQL

    - by iama
    I am trying to capture database (MySQL) errors in my PHP web application. Currently I see that there are functions like mysqli_error() and mysqli_errno() for capturing the last error that occurred, but this still requires me to check for an error after every call with repeated if/else statements, as the code below shows. Is there a better approach, or should I write my own code to raise exceptions and catch them in one single place? What is the recommended approach? Also, does PDO raise exceptions? Thanks.

        function db_userexists($name, $pwd, &$dbErr) {
            $bUserExists = false;
            $uid = 0;
            $dbErr = '';
            $db = new mysqli(SERVER, USER, PASSWORD, DB);
            if (!mysqli_connect_errno()) {
                $query = "select uid from user where uname = ? and pwd = ?";
                $stmt = $db->prepare($query);
                if ($stmt) {
                    if ($stmt->bind_param("ss", $name, $pwd)) {
                        if ($stmt->bind_result($uid)) {
                            if ($stmt->execute()) {
                                if ($stmt->fetch()) {
                                    if ($uid)
                                        $bUserExists = true;
                                }
                            }
                        }
                    }
                    if (!$bUserExists)
                        $dbErr = $db->error;   // error is a property, not a method
                    $stmt->close();
                }
                if (!$bUserExists)
                    $dbErr = $db->error;
                $db->close();
            } else {
                $dbErr = mysqli_connect_error();
            }
            return $bUserExists;
        }
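    A minimal sketch of the exception-based alternative the question asks about, assuming the same SERVER/USER/PASSWORD/DB constants and user table. PDO can be switched into PDO::ERRMODE_EXCEPTION so every database error throws a PDOException, which removes the nested if/else checks and lets all errors be handled in one catch block (mysqli offers something similar via mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT)).

        // Hedged sketch, not the thread's accepted answer: PDO with exceptions enabled.
        function db_userexists($name, $pwd)
        {
            $db = new PDO('mysql:host=' . SERVER . ';dbname=' . DB, USER, PASSWORD);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

            $stmt = $db->prepare('SELECT uid FROM user WHERE uname = ? AND pwd = ?');
            $stmt->execute(array($name, $pwd));

            return $stmt->fetchColumn() !== false;   // false when no row matched
        }

        try {
            $exists = db_userexists('alice', 'secret');
        } catch (PDOException $e) {
            error_log($e->getMessage());             // single place for DB errors
        }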

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.
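    Not taken from the thread: a hedged sketch of the handful of [mysqld] settings that usually dominate on a MyISAM-only, dedicated 16 GB master like the one described. The numbers are illustrative starting points to validate against SHOW GLOBAL STATUS before and after each change, not recommendations.

        # Hypothetical starting values for a MyISAM-only 16 GB box; verify each change.
        [mysqld]
        key_buffer_size     = 4G     # main MyISAM index cache; only meaningful on a 64-bit mysqld,
                                     # and the file's own comment caps it near 30% of RAM
        query_cache_size    = 64M    # very large caches (168M above) can stall on invalidation under writes
        table_cache         = 4096   # keep ahead of Opened_tables growth
        tmp_table_size      = 256M   # raise together with max_heap_table_size
        max_heap_table_size = 256M
        thread_cache_size   = 64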

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
    Hello All I have a denormalized table containing employee information. The fields are employee id, name and department name. The primary key is a composite one consisting of all three fields. An employee can belong to multiple departments. I want to read/write the objects in the table using the Eclipselink Dynamic Persistence API (which is infact a wrapper on top of JPA descriptors etc.). Example Data: 1 e1 dep1 2 e1 dep2 3 e2 dep1 4 e2 dep3 5 e3 dep1 5 e3 dep2 5 e3 dep3 A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However I want to club all entities based on the emp id and return all the departments he belongs to as a list. I can merge the entities after retrieving them but if I can use some Eclipselink feature out of the box then it would be better. One way to do the read is the following: I create two dynamic types corresponding to employee: Having id,name as the primary key Having id, department as the primary key, I create a OneToManyMapping from the first type to the second one. Then when I query the first type it does return the departments to which employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by Eclipselink or JPA? I cannot get the same dynamic type configuration working for the write scenario. This is because when I write the changes using the writeObject method of UnitOfWork, it generates insert queries which enter the following entries in the table id name department 102 emp_102 102 st 102 dep_102 102 dep_102 102 dep_102 instead of: id name department 102 emp_102 st 102 emp_102 dep_102 102 emp_102 dep_102 102 emp_102 dep_102 Is there any way I can get write to work with this schema using eclipselink? I want to avoid doing the heavy lifting of merging the rows for such a denormalized schema or generating each row before doing a write. Is there no clean way of doing this using Eclipselink or JPA? Thanks in Advance.
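    For readers who do not need the dynamic API, a static-JPA analogue of the two-type mapping the asker describes might look like the sketch below. Entity, table, and column names are invented; the point is only to illustrate the one-row-per-employee plus one-row-per-(employee, department) shape that makes both reads and writes behave, not an EclipseLink-specific answer.

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;

        // Hedged sketch: a normalized pair of entities behind the denormalized data.
        @Entity
        public class Employee {
            @Id
            private long id;
            private String name;

            // Loading an Employee groups its department rows into one list,
            // which is the read behaviour the question asks for.
            @OneToMany(mappedBy = "employee", cascade = CascadeType.ALL)
            private List<DepartmentAssignment> departments = new ArrayList<DepartmentAssignment>();
        }

        @Entity
        public class DepartmentAssignment {
            @Id @GeneratedValue
            private long rowId;                 // surrogate key for the (employee, department) row

            @ManyToOne(optional = false)
            @JoinColumn(name = "employee_id")
            private Employee employee;

            private String department;
        }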

    Read the article

  • JDBC going to the wrong address

    - by DCSoft
    When I try to connect to my MySQL database with JDBC in Java, it doesn't seem to go to my web server. Here is the code:

        String dbtime;
        String dbUrl = "jdbc:mysql://184.172.176.18:3306/dcsoft_dcsoft_balloon";
        String dbUser = "myuser";
        String dcPass = "mypass";
        String dbClass = "com.mysql.jdbc.Driver";
        String query = "Select * FROM users";
        try {
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection(dbUrl, dbUser, dcPass);
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(query);
            while (rs.next()) {
                dbtime = rs.getString(1);
                System.out.println(dbtime);
            } //end while
            con.close();
        } //end try
        catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        catch (SQLException e) {
            e.printStackTrace();
        }

    This code is supposed to go to my web server, but it gives this error: java.sql.SQLException: Access denied for user 'dcsoft_dcsoft_java'@'jamesposse.force9.co.uk' (using password: YES). jamesposse.force9.co.uk is not the address I'm trying to connect to; I'm trying to connect to 184.172.176.18:3306. Thanks.
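    Worth noting as editorial context: in a MySQL "Access denied for user 'x'@'host'" message, the host is the client machine as the server sees it, not the server being contacted, so the driver did reach 184.172.176.18 and was refused credentials there. A hedged sketch of the kind of grant the server would need (password and privilege level are placeholders to adapt):

        -- Run on the MySQL server at 184.172.176.18 with an administrative account.
        -- Lets the user connect from the client host shown in the error message.
        CREATE USER 'dcsoft_dcsoft_java'@'jamesposse.force9.co.uk' IDENTIFIED BY 'mypass';
        GRANT SELECT ON dcsoft_dcsoft_balloon.* TO 'dcsoft_dcsoft_java'@'jamesposse.force9.co.uk';
        FLUSH PRIVILEGES;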

    Read the article

  • foreign key and index issue

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. I have a table in which one column refers to a column in another table (in the same database) as a foreign key; the related SQL statement is below. In more detail, column [AnotherID] in table [Foo] refers to table [Goo]'s column [GID] as a foreign key, and [GID] is the primary key and clustered index on [Goo]. My question is: if I do not create an index on the [AnotherID] column of [Foo] explicitly, will an index be created automatically for it, given that the referenced column [GID] on [Goo] already has a clustered primary key index?

        CREATE TABLE [dbo].[Foo](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [AnotherID] [int] NULL,
            [InsertTime] [datetime] NULL DEFAULT (getdate()),
            CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED ([ID] ASC)
                WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

        ALTER TABLE [dbo].[Foo] WITH CHECK
            ADD CONSTRAINT [FK_Foo] FOREIGN KEY([AnotherID]) REFERENCES [dbo].[Goo] ([GID])

        ALTER TABLE [dbo].[Foo] CHECK CONSTRAINT [FK_Foo]

    thanks in advance, George
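    An editorial note: SQL Server does not create an index on the referencing column of a foreign key automatically; only the referenced side ([GID], via its primary key) is indexed. If joins or lookups on [AnotherID] matter, the index has to be added explicitly, along the lines of this sketch:

        -- SQL Server will not create this for you when the foreign key is added.
        CREATE NONCLUSTERED INDEX IX_Foo_AnotherID
            ON [dbo].[Foo] ([AnotherID]);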

    Read the article

  • How do I return the IDENTITY for an inserted record from a stored procedure?

    - by user54197
    I am adding data to my database, but would like to retrieve the UnitID that is Auto generated. using (SqlConnection connect = new SqlConnection(connections)) { SqlCommand command = new SqlCommand("ContactInfo_Add", connect); command.Parameters.Add(new SqlParameter("name", name)); command.Parameters.Add(new SqlParameter("address", address)); command.Parameters.Add(new SqlParameter("Product", name)); command.Parameters.Add(new SqlParameter("Quantity", address)); command.Parameters.Add(new SqlParameter("DueDate", city)); connect.Open(); command.ExecuteNonQuery(); } ... ALTER PROCEDURE [dbo].[Contact_Add] @name varchar(40), @address varchar(60), @Product varchar(40), @Quantity varchar(5), @DueDate datetime AS BEGIN SET NOCOUNT ON; INSERT INTO DBO.PERSON (Name, Address) VALUES (@name, @address) INSERT INTO DBO.PRODUCT_DATA (PersonID, Product, Quantity, DueDate) VALUES (@Product, @Quantity, @DueDate) END
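    A hedged sketch of one common way to hand the generated id back: capture SCOPE_IDENTITY() right after the PERSON insert, reuse it for the child row, and return it to the caller. Parameter names follow the question; the PRODUCT_DATA column list is aligned with its VALUES list here, which the posted version was missing.

        -- Sketch only; adjust names to the real schema.
        ALTER PROCEDURE [dbo].[Contact_Add]
            @name varchar(40),
            @address varchar(60),
            @Product varchar(40),
            @Quantity varchar(5),
            @DueDate datetime
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT INTO dbo.PERSON (Name, Address) VALUES (@name, @address);

            DECLARE @PersonID int;
            SET @PersonID = SCOPE_IDENTITY();        -- identity generated by the insert above

            INSERT INTO dbo.PRODUCT_DATA (PersonID, Product, Quantity, DueDate)
            VALUES (@PersonID, @Product, @Quantity, @DueDate);

            SELECT @PersonID;                        -- or declare an OUTPUT parameter instead
        END

    On the C# side, setting command.CommandType = CommandType.StoredProcedure and reading the value with Convert.ToInt32(command.ExecuteScalar()) instead of ExecuteNonQuery() would pick the id up.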

    Read the article

  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates. I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?
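    Not from the thread, just a hedged sketch of the usual workaround: a plain [datetime] is a value type and cannot hold null, so either keep the column value as [DBNull]::Value in the DataTable, or test for DBNull and store the result in a nullable DateTime. The column name and row variable are hypothetical.

        # Hedged sketch: translate Oracle NULL dates into $null via Nullable[datetime].
        $raw = $row["LAST_LOGIN"]                     # hypothetical date column from the DataTable
        [Nullable[datetime]] $lastLogin = $null
        if ($raw -isnot [System.DBNull]) {
            $lastLogin = [datetime]$raw
        }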

    Read the article

  • Mongodb update: how to check if an update succeeds or fails?

    - by zmg
    I think the title pretty much says it all. I'm working with MongoDB in PHP using the pecl driver. My updates are working great, but I'd like to build some error checking into my function(s). I've tried using lastError() in a pretty simple function:

        function system_db_update_object($query, $values, $database, $collection) {
            $connection = new Mongo();
            $collection = $connection->$database->$collection;
            $connection->$database->resetError(); //Added for debugging
            $collection->update($query, array('$set' => $values));
            //$errorArray = $connection->$database->lastError();
            var_dump($connection->$database->lastError());exit; // Var dump and /Exit/
        }

    But pretty much regardless of what I try to update (whether it exists or not), I get these same basic results:

        array(4) {
          ["err"]=> NULL
          ["updatedExisting"]=> bool(true)
          ["n"]=> float(1)
          ["ok"]=> float(1)
        }

    Any help or direction would be greatly appreciated.
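    A hedged sketch of the usual approach with the legacy pecl Mongo driver: perform an acknowledged ("safe") update, which makes update() itself return the same array getLastError would, so the outcome can be checked from the return value in one place. The option name matches the 1.x driver era this question targets; verify against the installed driver version.

        // Sketch only: returns true when at least one document matched the query.
        function system_db_update_object($query, $values, $database, $collection)
        {
            $connection = new Mongo();
            $coll = $connection->$database->$collection;

            $result = $coll->update($query, array('$set' => $values), array('safe' => true));

            // 'err' is non-null on failure; 'n' is the number of matched documents.
            if (!is_array($result) || $result['err'] !== null) {
                return false;
            }
            return $result['n'] > 0;
        }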

    Read the article

  • How do I differentiate between different descendents with the same name?

    - by zotty
    I've got some XML I'm trying to import with C#, which looks something like this:

        <run>
            <name = "bob"/>
            <date = "1958"/>
        </run>
        <run>
            <name = "alice"/>
            <date = "1969"/>
        </run>

    I load my XML using XElement xDoc = XElement.Load(filename); What I want to do is have a class for "run", under which I can store names and dates:

        public class RunDetails {
            public RunDetails(XElement xDoc, XNamespace xmlns) {
                var query = from c in xDoc.Descendants(xmlns + "run").Descendants(xmlns + "name")
                            select c;
                int i = 0;
                foreach (XElement a in query) {
                    this.name = new NameStr(a, xmlns); // a class for names
                    Name.Add(this.name);               // Name is a List<NameStr>
                    i++;
                }
                // Here, i=2, but what I want is a new instance of the RunDetails class for each <run>
            }
        }

    How can I set up my code to create a new instance of the RunDetails class for every <run>, and to only select the <name> and <date> inside a given <run>?
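    A hedged sketch of one way to get one RunDetails per <run>: enumerate the <run> elements and pass each one into the constructor, so the name and date lookups are scoped to that single run. It assumes using System.Linq and System.Xml.Linq; the Date property and the shape of NameStr are assumptions based on the question.

        // Sketch only: one RunDetails instance per <run> element.
        public RunDetails(XElement run, XNamespace xmlns)
        {
            Name = run.Elements(xmlns + "name")
                      .Select(n => new NameStr(n, xmlns))
                      .ToList();                          // names inside this run only
            Date = (string)run.Element(xmlns + "date");   // hypothetical property
        }

        // Caller:
        List<RunDetails> runs = xDoc.Descendants(xmlns + "run")
                                    .Select(r => new RunDetails(r, xmlns))
                                    .ToList();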

    Read the article

  • How can I exclude LEFT JOINed tables from TOP in SQL Server?

    - by Kalessin
    Let's say I have two tables of books and two tables of their corresponding editions. I have a query as follows: SELECT TOP 10 * FROM (SELECT hbID, hbTitle, hbPublisherID, hbPublishDate, hbedID, hbedDate FROM hardback LEFT JOIN hardbackEdition on hbID = hbedID UNION SELECT pbID, pbTitle, pbPublisher, pbPublishDate, pbedID, pbedDate FROM paperback Left JOIN paperbackEdition on pbID = pbedID ) books WHERE hbPublisherID = 7 ORDER BY hbPublishDate DESC If there are 5 editions of the first two hardback and/or paperback books, this query only returns two books. However, I want the TOP 10 to apply only to the number of actual book records returned. Is there a way I can select 10 actual books, and still get all of their associated edition records? In case it's relevant, I do not have database permissions to CREATE and DROP temporary tables. Thanks for reading! Update To clarify: The paperback table has an associated table of paperback editions. The hardback table has an associated table of hardback editions. The hardback and paperback tables are not related to each other except to the user who will (hopefully!) see them displayed together.
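    A hedged sketch of the usual pattern: apply TOP 10 to the books alone in a derived table, then LEFT JOIN the editions onto those ten rows, so the row-multiplying join no longer eats into the TOP count. Column names follow the hardback half of the question; the paperback half would be handled the same way inside its own derived table before the UNION.

        SELECT b.hbID, b.hbTitle, b.hbPublisherID, b.hbPublishDate,
               e.hbedID, e.hbedDate
        FROM (
            SELECT TOP 10 hbID, hbTitle, hbPublisherID, hbPublishDate
            FROM hardback
            WHERE hbPublisherID = 7
            ORDER BY hbPublishDate DESC
        ) AS b
        LEFT JOIN hardbackEdition AS e
               ON e.hbedID = b.hbID
        ORDER BY b.hbPublishDate DESC;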

    Read the article

  • Null Value Statement

    - by Sam
    Hi all, I have created a table called table1 with 4 columns named Name, ID, Description and Date, defined as Name varchar(50) null, ID int null, Description varchar(50) null, Date datetime null. I have inserted a record into table1 with only ID and Description values, so table1 now looks like this:

        Name    ID    Description    Date
        Null    1     First          Null

    Someone asked me to modify the table so that the Name and Date columns hold actual NULL values instead of the text 'Null'. I don't know what the difference between those is. Can anyone explain the difference between these select statements?

        SELECT * FROM TABLE1 WHERE NAME IS NULL
        SELECT * FROM TABLE1 WHERE NAME = 'NULL'
        SELECT * FROM TABLE1 WHERE NAME = ' '
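    A short editorial illustration of the distinction being asked about: IS NULL matches a truly missing value, = 'NULL' matches the four-character string "NULL", and = ' ' matches a space or empty string; none of them match each other. Converting the stored text into a real NULL would look like the hedged sketch below (adjust the literal to however the text was actually stored).

        -- Turn the literal text 'Null' into an actual SQL NULL.
        UPDATE TABLE1
        SET Name = NULL
        WHERE Name = 'Null';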

    Read the article

  • Help needed to write a LINQ with GROUP bY(C#3.0)

    - by Newbie
    I have a DataTable whose structure is as under:

        Week    Dates         Key_Factors    Factor_Values
        ----    -----------   -----------    -------------
        1       29/12/2000    Factor_1       19.20
        1       29/12/2000    Factor_2       20.67
        1       29/12/2000    Factor_3       10
        2       21/12/2007    Factor_1       20.54
        2       21/12/2007    Factor_4       21.70

    I have an object model like:

        WeekNumber (int)
        Dates (DateTime)
        FactorDictionary (Dictionary<string, double>)

    I am trying to populate the data from the DataTable into my object model, whose needed output is as under:

        WeekNumber : 1
        Dates      : 29/12/2000
        FactorDictionary:
            Key_Factors: Factor_1    Factor_Values: 19.20
            Key_Factors: Factor_2    Factor_Values: 20.67
            Key_Factors: Factor_3    Factor_Values: 10

        WeekNumber : 2
        Dates      : 21/12/2007
        FactorDictionary:
            Key_Factors: Factor_1    Factor_Values: 20.54
            Key_Factors: Factor_4    Factor_Values: 21.70

    i.e. the result is grouped by weeks. Can I achieve the same by using LINQ? I am using C# (3.0) with framework (3.5). Thanks
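    A hedged LINQ sketch of the grouping, assuming System.Data.DataSetExtensions is referenced (for AsEnumerable and Field<T>) and that the target type is called WeekModel with the three properties above; both the type name and the column types are assumptions. Note that ToDictionary will throw if a factor name repeats within the same week.

        // Sketch only: one WeekModel per distinct Week value.
        var weeks = table.AsEnumerable()
            .GroupBy(r => r.Field<int>("Week"))
            .Select(g => new WeekModel
            {
                WeekNumber = g.Key,
                Dates = g.First().Field<DateTime>("Dates"),
                FactorDictionary = g.ToDictionary(
                    r => r.Field<string>("Key_Factors"),
                    r => r.Field<double>("Factor_Values"))
            })
            .ToList();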

    Read the article

  • How do I efficiently locate key-value pairs in a multi-dimensional PHP array?

    - by Kyle Noland
    I have an array in PHP as a result of the following query to a Wordpress database: SELECT * FROM wp_postmeta WHERE post_id = :id I am returned a multidimensional array that looks like this: Array ( [0] => Array ( [meta_id] => 380 [post_id] => 72 [meta_key] => _edit_last [meta_value] => 1 ) ... etc. What is the best way to find a particular key-value pair in this array? For instance, how would I located the row where [meta_key] = event_name so that I can extract that same row's [meta_value] value into a PHP variable? I realize I could turn this into many individual MySQL queries. Does anyone have an opinion of the efficiency of doing 10 SQL queries in a row rather than searching the array 10 times? I would think since the array is in memory, that will be the fastest method to find the values I need. Alternatively, is there a better way to query the database from the beginning so that my result set is formatted in a way that is easier to search?
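    A hedged sketch of the in-memory option, which avoids both repeated SQL round trips and repeated array scans: index the postmeta rows by meta_key once, then every later lookup is a single array access. $rows stands for the result set already fetched by the query in the question.

        // Sketch only: build a meta_key => meta_value map once.
        $meta = array();
        foreach ($rows as $row) {
            $meta[$row['meta_key']] = $row['meta_value'];
        }

        $eventName = isset($meta['event_name']) ? $meta['event_name'] : null;

    Ten lookups against this map will comfortably beat ten separate queries, since the data is already in memory; alternatively, the original query could select only meta_key and meta_value to slim the result set.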

    Read the article

  • Extract Bullets and Tables information in Word doc from c#

    - by Siva
    Hi All, I need to create an word document based on the template in c#. I have tags for only the paragraphs. Is there any way to replace the bullets and tables that are already available in the template based on the user input. I was able to replace the paragraph with input text using the Replace command in the Word InterOp. Need help to do the rest of the items. Replace the bullets based on the user input Fill the tables with the input values Code for replacing the Paragraph based on the tag: FindAndReplace(wordApplication, "/date/", DateTime.Now.Date.ToString("MMM dd, yyyy")); FindAndReplace(){ wordApplication.Selection.Find.Execute(ref findText, ref matchCase, ref matchWholeWord, ref matchWildCards, ref matchSoundsLike, ref matchAllWordsForms, ref forward, ref wrap, ref format, ref replaceWithText, ref replace, ref matchKashida, ref matchDiacritics, ref matchAlefHamsa, ref matchControl); } Thanks in Advance. ASAP
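    Not confirmed by the thread, just a hedged sketch of the Word interop calls that usually handle the other two items: writing into an existing table by cell position and turning a block of inserted lines into bullets. The table index, cell positions, and the variable names are assumptions about the template; wordDocument is the already-open Document, and the alias "using Word = Microsoft.Office.Interop.Word;" is assumed.

        // Sketch only (Microsoft.Office.Interop.Word); interop collections are 1-based.
        Word.Table table = wordDocument.Tables[1];
        table.Cell(2, 1).Range.Text = productName;           // hypothetical user inputs
        table.Cell(2, 2).Range.Text = quantity;

        // Append the user's items as new paragraphs and format them as bullets.
        Word.Paragraph p = wordDocument.Content.Paragraphs.Add();
        p.Range.Text = string.Join("\r", bulletItems);        // '\r' starts a new paragraph in Word
        p.Range.ListFormat.ApplyBulletDefault();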

    Read the article

  • MySQL updating a field to result of a function

    - by jdborg
    mysql> CREATE FUNCTION test () -> RETURNS CHAR(16) -> NOT DETERMINISTIC -> BEGIN -> RETURN 'IWantThisText'; -> END$$ Query OK, 0 rows affected (0.00 sec) mysql> SELECT test(); +------------------+ | test() | +------------------+ | IWantThisText | +------------------+ 1 row in set (0.00 sec) mysql> UPDATE `table` -> SET field = test() -> WHERE id = 1 Query OK, 1 row affected, 1 warning (0.01 sec) Rows matched: 1 Changed: 1 Warnings: 1 mysql> SHOW WARNINGS; +---------+------+----------------------------------------------------------------+ | Level | Code | Message | +---------+------+----------------------------------------------------------------+ | Warning | 1265 | Data truncated for column 'test' at row 1 | +---------+------+----------------------------------------------------------------+ 1 row in set (0.00 sec) mysql> SELECT field FROM table WHERE id = 1; +------------------+ | field | +------------------+ | NULL | +------------------+ 1 row in set (0.00 sec) What I am doing wrong? I just want field to be set to the returned value of test() Forgot to mention field is VARCHR(255)

    Read the article

  • How to get time difference in milliseconds

    - by jason45
    Hi, I can't wrap my brain around this one so I hope someone can help. I have a song track that has the song length in milliseconds. I also have the date the song played in DATETIME format. What I am trying to do is find out how many milliseconds is left in the song play time. Example $tracktime = 219238; $dateplayed = '2011-01-17 11:01:44'; $starttime = strtotime($dateplayed); I am using the following to determine time left but it does not seem correct. $curtime = time(); $timeleft = $starttime+round($tracktime/1000)-$curtime; Any help would be greatly appreciated.
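    A hedged sketch that keeps every term in the same unit (milliseconds). The original expression converts the track length to seconds and mixes it with time(), so it yields seconds remaining rather than milliseconds; note also that strtotime() only resolves to whole seconds, so the start time carries up to a second of imprecision.

        // Sketch only: everything in milliseconds.
        $tracktime  = 219238;                              // track length, ms
        $dateplayed = '2011-01-17 11:01:44';

        $startMs = strtotime($dateplayed) * 1000;          // start of play, ms since epoch
        $nowMs   = (int) round(microtime(true) * 1000);    // "now", ms since epoch

        $msLeft  = max(0, $startMs + $tracktime - $nowMs); // milliseconds still to play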

    Read the article

  • RIA Service/oData ... "Requests that attempt to access a single element using key values from a result set are not supported"

    - by user327911
    I've recently started working up a sample project to play with an oData feed coming from a RIA service. I am able to view the feed and the metadata via any web browser, however, if I try to perform certain query operations on the feed I receive "unsupported" exceptions. Sample oData feed: ProductSet http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet/ 2010-04-28T14:02:10Z http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128') 2010-04-28T14:02:10Z b0a2b170-c6df-441f-ae2a-74dd19901128 Product 0 Type 1 Active Sample web.config entry: Sample service: [EnableClientAccess()] public class ProductService : DomainService { [Query(IsDefault = true)] public IQueryable GetProducts() { IList products = new List(); for (int i = 0; i < 90; i++) { Product product = new Product { Id = Guid.NewGuid(), Name = "Product " + i.ToString(), ProductType = i < 30 ? "Type 1" : ((i > 30 && i < 60) ? "Type 2" : "Type 3"), Status = i % 2 == 0 ? "Active" : "NotActive" }; products.Add(product); } return products.AsQueryable(); } } If I provide the url "http://localhost:50880/Services/Rebirth-Web-Services-ProductService.svc/OData/ProductSet(guid'b0a2b170-c6df-441f-ae2a-74dd19901128')" to my web browser I receive the following xml: Requests that attempt to access a single element using key values from a result set are not supported. I'm new to RIA and oData. Could this be something as simple as my web browsers not supporting this type of querying on the result set or something else? Thanks ahead! Corey

    Read the article

  • LINQ 2 SQL Insert Error(with Guids)

    - by Refracted Paladin
    I have the below LINQ method that I use to create the empty EmploymentPLan. After that I simply UPDATE. For some reason this works perfectly for myself but for my users they are getting the following error -- The target table 'dbo.tblEmploymentPrevocServices' of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause. This application is a WinForm app that connects to a local SQL 2005 Express database that is a part of a Merge Replication topology. This is an INTERNAL App only installed through ClickOnce. public static Guid InsertEmptyEmploymentPlan(int planID, string user) { using (var context = MatrixDataContext.Create()) { var empPlan = new tblEmploymentQuestionnaire { PlanID = planID, InsertDate = DateTime.Now, InsertUser = user, tblEmploymentJobDevelopmetService = new tblEmploymentJobDevelopmetService(), tblEmploymentPrevocService = new tblEmploymentPrevocService() }; context.tblEmploymentQuestionnaires.InsertOnSubmit(empPlan); context.SubmitChanges(); return empPlan.EmploymentQuestionnaireID; } }

    Read the article

  • What's the best way to access a MS Access database using PHP?

    - by Jack Roscoe
    Hi, I need to access some data from an MS Access database and retrieve some data from it using PHP. I've looked around the web, and found the following line which seems to correctly connect to the database: $conn->Open("DRIVER={Microsoft Access Driver (*.mdb)}; DBQ=C:\wamp\www\data\MYDB.mdb"); However, I have tried to retrieve some data in the following way: $query = "SELECT pageid FROM pages_table"; $result = mysqli_query($conn, $query); $amount_of_pages = 0; if(mysqli_num_rows($result) <= 0) echo "No results found."; else while($row = mysqli_fetch_array($result, MYSQL_ASSOC)) $amount_of_pages++; And was presented with the following errors: Warning: mysqli_query() expects parameter 1 to be mysqli, object given in C:\wamp\www\data\index.php on line 19 Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, null given in C:\wamp\www\data\index.php on line 23 No results found. I don't really understand the connection to the Access database, is there something I should be doing differently? Thanks in advance for any help.
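    A hedged sketch of why the errors appear and one way around them: the mysqli_* functions only speak to MySQL servers, so they cannot consume a connection opened for an Access .mdb file. PHP's ODBC extension (or PDO with the odbc driver) can run the same query against the Access driver directly; the path and table name are taken from the question.

        // Sketch only: query the Access .mdb through ODBC instead of mysqli.
        $conn = odbc_connect(
            "DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\\wamp\\www\\data\\MYDB.mdb",
            '', ''
        );

        $result = odbc_exec($conn, "SELECT pageid FROM pages_table");

        $amount_of_pages = 0;
        while (odbc_fetch_row($result)) {
            $amount_of_pages++;
        }

        echo $amount_of_pages === 0 ? "No results found." : "$amount_of_pages pages found.";
        odbc_close($conn);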

    Read the article

  • nextgen gallery order issue

    - by mro
    Hi, wonder if anyone can help. I think what I'm after won't be solved by any exsiting code in nextgen plugin (in wordpress) due to the custom way I'm using it hence I come to stackoverflow for some opnions. Bascially - I am only really using the admin of nextgen to work with the gallerys etc. The actual meat of the functionality I'm querying the nextgen DB's direct from my code, I would have loved to use the inbuilt gallerys in nextgen, but my spec specifics were so custom I couldn't. My issue is, I need to pull the images from the DB's in the order it is in the admin (ie if a user pulls the sort order around in the drag and drop area). I have noticed however this doesn't affect the image id order in the DB, and wouldn't expect it to - that would be some complex shifting around just to reorder everytime surely. So obviously when I query the DB the order it's looking at is when it was created, by image id, with my filtering on top. I'm wondering though if there is a way I can query that sort order that's determined in the admin somehow, then at least I could sort the array somehow in the code ? does next gen store it's user custom sort order somewhere ? Hope this makes sense :) any thoughts appreciated. Thanks
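    Not something settled in the thread, and worth verifying against the actual install: NextGEN Gallery of that era normally keeps the drag-and-drop position in a sortorder column on its pictures table (wp_ngg_pictures by default), which is why reordering never changes the image ids. If that column is present, a direct query can respect the admin ordering; treat the table and column names below as assumptions to check.

        -- Hypothetical: confirm the sortorder column exists before relying on it.
        SELECT pid, filename, sortorder
        FROM wp_ngg_pictures
        WHERE galleryid = 3          -- the gallery being displayed
        ORDER BY sortorder ASC, pid ASC;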

    Read the article

  • Linq To Sql Entity Updated from Trigger

    - by James Helms
    I have a Table called Address. I have a Trigger for insert on that table that does some spacial calculations on the address that determines what neighborhood boundaries it is in. address = new Address { Street = this.Street, City = this.City, State = this.State, ZipCode = this.ZipCode, latitude = this.Latitude, longitude = this.Longitude, YearBuilt = this.YearBuilt, LotSize = this.LotSize, FinishedSize = this.FinishedSize, Bedrooms = this.Bedrooms, Bathrooms = this.Bathrooms, UseCode = this.UseCode, HOA = this.HOA, UpdateDate = DateTime.Now }; db.AddToAddresses(address); db.SaveChanges(); In the database i can clearly see that the Trigger ran and updated the neighborhoodID in the address table for the row. I tried to just reload that record to get the assigned id like this: address = (from a in db.Addresses where a.AddressID == address.AddressID select a).First(); In the debugger i can clearly see that the address.AddressID is correct, entity doesn't update in memory. Is there any work around for this?
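    A hedged sketch of the usual explanation and workaround: the follow-up query hands back the entity instance the context is already tracking (its identity map wins over the fresh database row), so columns written by the trigger never appear in memory. Asking the context to re-read the row after SaveChanges picks them up. The code follows the ObjectContext-style API used in the post; the property name is hypothetical.

        db.AddToAddresses(address);
        db.SaveChanges();

        // Re-read the row so the trigger-assigned value is visible in memory.
        db.Refresh(System.Data.Objects.RefreshMode.StoreWins, address);
        var neighborhoodId = address.NeighborhoodID;   // hypothetical property

        // The LINQ to SQL DataContext equivalent would be:
        // db.Refresh(System.Data.Linq.RefreshMode.OverwriteCurrentValues, address);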

    Read the article

  • where is the best palce to count the lazy load property using JPA

    - by Ke
    Let's say we have a "Question" and "Answer" entity, @Entity public class Question extends IdEntity { @Lob private String content; @Transient private int answerTotal; @OneToMany(fetch = FetchType.LAZY) private List<Answer> answers = new ArrayList<Answer>(); ...... I need to tell how many answers for the question every time Question is queryed. So I need to do count: String count = "select count(o) from Answer o WHERE o.question=:q"; My question is, where is the best place to do the count? (Because I did a lot of query about Question entity, by date, by tag, by category, by asker, etc. It is obviously not a good solution to add count operation in each query. My first attempt is to implement a @PostLoad listener, so every time Question entity is loaded, I do count. However, EntityManager cannot be injected in listener. So this way does not work. Any hint?
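    A hedged alternative to counting inside a @PostLoad listener: let the same JPQL query that loads the questions also return each answer count, then copy it into the @Transient field. SIZE() is standard JPQL; the category filter and the setter name are assumptions standing in for whichever query variant (by date, tag, asker, ...) is being run.

        // Sketch only: load questions together with their answer counts.
        List<Object[]> rows = em.createQuery(
                "SELECT q, SIZE(q.answers) FROM Question q WHERE q.category = :category",
                Object[].class)
            .setParameter("category", category)
            .getResultList();

        for (Object[] row : rows) {
            Question q = (Question) row[0];
            q.setAnswerTotal(((Number) row[1]).intValue());   // fill the @Transient field
        }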

    Read the article

  • trying to backup mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx But I'm just a beginner so I don't know what the config.php and opendb.php suppose to mean. Do I have to create those 2 files in order for this code to work? If yes, then how do I create it, it isn't included in the site how to create it. <?php include 'config.php'; include 'opendb.php'; $tableName = 'mypet'; $backupFile = 'backup/mypet.sql'; $query = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName"; $result = mysql_query($query); include 'closedb.php'; ?> can I just include these lines on the top code so that I will not be putting the include 'opendb.php' anymore: $con = mysql_connect("localhost","root",""); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("Hospital", $con);
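    A hedged editorial note: in that tutorial, config.php presumably just defines the connection settings and opendb.php opens the connection, so inlining the connect block from the question should serve the same purpose (that reading of the tutorial is an assumption). A self-contained sketch follows, with the caveat that SELECT ... INTO OUTFILE is written by the MySQL server process on the server's own filesystem, so the MySQL user needs the FILE privilege and the target path must be writable by the server and must not already exist.

        // Sketch only: inlined connection plus the backup query, with error checks.
        $con = mysql_connect("localhost", "root", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Hospital", $con);

        $tableName  = 'mypet';
        $backupFile = 'backup/mypet.sql';

        $result = mysql_query("SELECT * INTO OUTFILE '$backupFile' FROM $tableName", $con);
        if (!$result) {
            die('Backup failed: ' . mysql_error($con));
        }

        mysql_close($con);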

    Read the article

  • mySQL select and group by values

    - by Foo
    I'd like to count and group rows by specific values. This seems fairly simple, but I can't seem to do it. I have a table set up similar to this:

        Table: Ratings
        id    pID    uID    rating
        1     1      2      7
        2     1      7      7
        3     1      5      4
        4     1      1      1

    id is the primary key, pID and uID are foreign keys. rating contains values between 1 and 10, and only between 1 and 10. I want to run some statistics and count the number of ratings with a certain value. In the example above, two users have left a rating of 7. So I wrote the following query:

        SELECT COUNT(*) AS `count`, `rating`
        FROM `ratings`
        WHERE pID = '1'
        GROUP BY `rating`
        ORDER BY `rating`

    which yields this result:

        count    rating
        1        1
        1        4
        2        7

    I'd like the MySQL query to include every value between 1 and 10 as well, for example:

        Desired result
        count    rating
        1        1
        0        2
        0        3
        1        4
        0        5
        0        6
        2        7
        0        8
        0        9
        0        10

    Unfortunately, I'm relatively new to SQL and I've been reading through everything I could get my hands on for the past hour, but I can't get it to work. I've been leaning along the lines of some type of JOIN. If anyone can point me in the right direction, it'd be appreciated. Thanks.
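    A hedged sketch of the JOIN being hinted at: generate the fixed list 1-10 inline and LEFT JOIN the real ratings onto it, so values nobody has used still come back with a count of 0. Table and column names follow the question.

        SELECT all_ratings.rating,
               COUNT(r.id) AS `count`
        FROM (
            SELECT 1 AS rating UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL
            SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL
            SELECT 8 UNION ALL SELECT 9 UNION ALL SELECT 10
        ) AS all_ratings
        LEFT JOIN ratings AS r
               ON r.rating = all_ratings.rating
              AND r.pID = 1
        GROUP BY all_ratings.rating
        ORDER BY all_ratings.rating;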

    Read the article

  • How do I check for Existence of a Record in GAE

    - by VDev
    I am trying to create a simple view in Django & GAE, which will check if the user has a profile entity and prints a different message for each case. I have the program below, but somehow GAE always seem to return a object. My program is below import datetime from django.http import HttpResponse, HttpResponseRedirect from google.appengine.api import users from google.appengine.ext import db from models import Profile import logging #from accounts.views import profile # Create your views here. def login_view(request): user = users.get_current_user() profile = db.GqlQuery("SELECT * FROM Profile WHERE account = :1", users.get_current_user()) logging.info(profile) logging.info(user) if profile: return HttpResponse("Congratulations Your profile is already created.") else: return HttpResponse("Sorry Your profile is NOT created.") My model object is Profile defined as follows: class Profile(db.Model): first_name = db.StringProperty() last_name = db.StringProperty() gender = db.StringProperty(choices=set(["Male", "Female"])) account = db.UserProperty(required = True) friends = db.ListProperty(item_type=users.User) last_login = db.DateTimeProperty(required=True) Thanks for the help.
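    A hedged note on the likely cause: a GqlQuery object is always truthy, so "if profile:" never takes the else branch regardless of whether any entity matches. Fetching a single entity (or counting) gives something that can actually be tested. A sketch using the model's own gql helper:

        # Sketch only: .get() returns the first matching entity, or None.
        def login_view(request):
            user = users.get_current_user()
            profile = Profile.gql("WHERE account = :1", user).get()

            if profile is not None:
                return HttpResponse("Congratulations Your profile is already created.")
            return HttpResponse("Sorry Your profile is NOT created.")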

    Read the article
