Search Results

Search found 33029 results on 1322 pages for 'database queries'.

Page 141/1322

  • SIMD Extensions for the Database Storage Engine

    - by jchang
    For the last 15 years, Intel and AMD have been progressively adding special-purpose extensions to their processor architectures. These extensions mostly provide vector operations built around the Single Instruction, Multiple Data (SIMD) concept. The motivation was that achieving significant performance improvements over each successive generation of the general-purpose elements had become extraordinarily difficult. SIMD performance, on the other hand, could still be improved significantly with special-purpose registers...(read more)

    Read the article

  • How to structure (normalize?) a database of physical parameters?

    - by Arrieta
    Hello: I have a collection of physical parameters associated with different items. For example: Item, p1, p2, p3 a, 1, 2, 3 b, 4, 5, 6 [...] where px stands for parameter x. I could go ahead and store the database exactly as presented; the schema would be CREATE TABLE t1 (item TEXT PRIMARY KEY, p1 FLOAT, p2 FLOAT, p3 FLOAT); I could retrieve the parameter p1 for all the items with the statement: SELECT p1 FROM t1; A second alternative is to have a schema like: CREATE TABLE t1 (id INT PRIMARY KEY, item TEXT, par TEXT, val FLOAT) This seems much simpler if you have many parameters (as I do). However, the parameter retrieval seems very awkward: SELECT val FROM t1 WHERE par == 'p1' What do you advise? Should I go for the "pivoted" (first) version or the id, par, val (second) version? Many thanks.
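    A minimal sketch of the two alternatives, using SQLite through Python purely for illustration (the table and column names follow the question; the sample values are made up):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Alternative 1: one column per parameter (the "pivoted" schema).
    conn.execute("CREATE TABLE t1_wide (item TEXT PRIMARY KEY, p1 FLOAT, p2 FLOAT, p3 FLOAT)")
    conn.execute("INSERT INTO t1_wide VALUES ('a', 1, 2, 3), ('b', 4, 5, 6)")
    print(conn.execute("SELECT p1 FROM t1_wide").fetchall())        # [(1.0,), (4.0,)]

    # Alternative 2: one row per (item, parameter) pair (entity-attribute-value style).
    conn.execute("CREATE TABLE t1_eav (id INTEGER PRIMARY KEY, item TEXT, par TEXT, val FLOAT)")
    conn.executemany("INSERT INTO t1_eav (item, par, val) VALUES (?, ?, ?)",
                     [('a', 'p1', 1), ('a', 'p2', 2), ('a', 'p3', 3),
                      ('b', 'p1', 4), ('b', 'p2', 5), ('b', 'p3', 6)])
    print(conn.execute("SELECT val FROM t1_eav WHERE par = 'p1'").fetchall())  # [(1.0,), (4.0,)]
    ```

    The second form avoids a schema change every time a new parameter appears, at the cost of queries that always filter on par; note that standard SQL uses = rather than == for the comparison.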

    Read the article

  • PHP efficiency question: Database call vs. File Write vs. Calling a C++ executable

    - by JP19
    Hi, What I wish to achieve is - log all information about each and every visit to every page of my website (like IP address, browser, referring page, etc). Now this is easy to do. What I am interested in is doing this in a way so as to cause minimum overhead (runtime) in the PHP scripts. What is the best approach for this efficiency-wise: 1) Log all information to a database table 2) Write to a file (from PHP directly) 3) Call a C++ executable that will write this info to a file in parallel [so the script can continue execution without waiting for the file write to occur... is this even possible?] I may be trying to optimize unnecessarily/prematurely, but still - any thoughts/ideas on this would be appreciated. (I think efficiency of file write/logging can really be a concern if I have, say, 100 visits per minute...) Thanks & Regards, JP
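    As a rough point of reference, here is a minimal sketch of option 2, written in Python rather than PHP and with an illustrative file name: appending one short line per request to a log file is a single buffered write with no database round trip.

    ```python
    import time

    LOG_PATH = "visits.log"  # illustrative path

    def log_visit(ip, user_agent, referrer, page):
        """Append one tab-separated line per page view."""
        line = "\t".join([time.strftime("%Y-%m-%d %H:%M:%S"), ip, user_agent, referrer, page])
        # One short append per request; the OS buffers the write, so the cost is
        # typically far below a database round trip at ~100 visits per minute.
        with open(LOG_PATH, "a") as f:
            f.write(line + "\n")

    log_visit("203.0.113.7", "Mozilla/5.0", "http://example.com/", "/index")
    ```

    The file can then be bulk-loaded into a database later for analysis, which keeps the per-request overhead down without needing a separate C++ process.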

    Read the article

  • Database table design vs. ease of use.

    - by Gastoni
    I have a table with 3 fields: color, fruit, date. I can pick 1 fruit and 1 color, but I can do this only once each day. Examples: red, apple, monday red, mango, monday blue, apple, monday blue, mango, monday red, apple, tuesday The two ways in which I could build the table are: 1.- Have color, fruit and date be a composite primary key (PK). This makes it easy to insert data into the table because all the validation needed is done by the database. PK color PK fruit PK date 2.- Have an id column set as PK and then all the other fields. Many say that's the way it should be, because composite PKs are evil. For example, CakePHP does not support them. PK id color fruit date Both have advantages. Which would be the 'better' approach?
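    One common middle ground, sketched here with SQLite through Python purely for illustration (column names follow the question): keep a surrogate id as the primary key for frameworks that want one, and let a UNIQUE constraint enforce the once-per-day rule.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE picks (
            id    INTEGER PRIMARY KEY,       -- surrogate key for frameworks such as CakePHP
            color TEXT NOT NULL,
            fruit TEXT NOT NULL,
            date  TEXT NOT NULL,
            UNIQUE (color, fruit, date)      -- the actual business rule, enforced by the database
        )
    """)
    conn.execute("INSERT INTO picks (color, fruit, date) VALUES ('red', 'apple', 'monday')")
    try:
        conn.execute("INSERT INTO picks (color, fruit, date) VALUES ('red', 'apple', 'monday')")
    except sqlite3.IntegrityError as exc:
        print("rejected duplicate pick:", exc)
    ```

    This keeps the insert-time validation of option 1 while still giving option 2's single-column key to tools that do not support composite primary keys.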

    Read the article

  • Connect to a MySQL database and count the number of rows.

    - by Hugo
    Hi there! I need to connect to a MySQL database and then show the number of rows. This is what I've got so far: <?php include "connect.php"; db_connect(); $result = mysql_query("SELECT * FROM hacker"); $num_rows = mysql_num_rows($result); echo $num_rows; ?> When I use that code I end up with this error: Warning: mysql_num_rows(): supplied argument is not a valid MySQL result resource in C:\Documents and Settings\username\Desktop\xammp\htdocs\news2\results.php on line 10 Thanks in advance :D
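    That warning usually means mysql_query() returned false instead of a result set, so the query or connection itself failed and its error message should be checked before counting rows. If only the count is needed, it is also cheaper to let the database count; a minimal sketch of that approach in Python with PyMySQL (the connection parameters and database name are placeholders):

    ```python
    import pymysql

    # Placeholder credentials; substitute your own connection details.
    conn = pymysql.connect(host="localhost", user="user", password="secret", database="news")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM hacker")  # count in SQL instead of fetching every row
            (num_rows,) = cur.fetchone()
            print(num_rows)
    finally:
        conn.close()
    ```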

    Read the article

  • Storing a hierarchical template in a database

    - by pduersteler
    If this title is ambiguous, feel free to change it, I don't know how to put this in a one-liner. Example: Let's assume you have an HTML template which contains some custom tags, like <text_field />. We now create a page based on a template containing more of those custom tags. When a user wants to edit the page, he sees a text field. He can input things and save it. This looks fairly easy to set up. You could have something like a template_positions table which stores the content of those fields. Case: I now have a bit of a mental block about keeping things as simple as possible. Assume you have the same tag given in the example above, and additionally, <layout> and <repeat> tags. Here's an example of how they should be used: <repeat> <layout name="image-left"> <image /> <text_field /> </layout> <layout name="image-right"> <text_field /> <image /> </layout> </repeat> We now have a block which can be repeated, obviously. This means: when creating/editing a page containing such a template block, I can choose between a layout image-left and image-right which then gets inserted as a content element (where content for <image /> and <text_field /> gets stored). And because this is inside a <repeat>, content elements from the given layouts can be inserted multiple times. How do you store this? Simply said, this could be stored with the same setup I've written in the example above, I just need to add a parent_id or something similar to maintain a hierarchy. But I think I am missing something. At least the relation between an inserted content element and the origin/insertion point is missing. And what happens when I update the template file? Do I have to give every custom tag that acts as an editable part of a template an identifier that matches an identifier in the template to substitute them correctly? Or can you think of a clean solution that might be better?
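    One possible shape for the storage, sketched with SQLite through Python (the table and column names are illustrative, not from the post): every editable tag in the template carries a stable identifier, and each stored content element records both that identifier and its parent element, so a repeated layout block becomes a group of rows sharing a parent.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- One row per content element created from a template tag.
        CREATE TABLE content_elements (
            id           INTEGER PRIMARY KEY,
            page_id      INTEGER NOT NULL,     -- the page this content belongs to
            parent_id    INTEGER REFERENCES content_elements(id),  -- NULL for top-level elements
            template_tag TEXT NOT NULL,        -- identifier matching the tag in the template file
            sort_order   INTEGER NOT NULL DEFAULT 0,
            value        TEXT                  -- text or image reference; NULL for container elements
        );
    """)

    # A <repeat> block on page 1 holding one "image-left" layout instance.
    cur = conn.execute("INSERT INTO content_elements (page_id, parent_id, template_tag) "
                       "VALUES (1, NULL, 'repeat_main')")
    repeat_id = cur.lastrowid
    cur = conn.execute("INSERT INTO content_elements (page_id, parent_id, template_tag) "
                       "VALUES (1, ?, 'image-left')", (repeat_id,))
    layout_id = cur.lastrowid
    conn.executemany("INSERT INTO content_elements (page_id, parent_id, template_tag, sort_order, value) "
                     "VALUES (1, ?, ?, ?, ?)",
                     [(layout_id, 'image', 0, '/uploads/photo.jpg'),
                      (layout_id, 'text_field', 1, 'Some text')])
    ```

    When the template file changes, stored content survives as long as the tag identifiers still match; elements whose identifiers no longer appear in the template can be flagged as orphaned instead of deleted.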

    Read the article

  • Identifying Incompatibility Issues When Migrating SQL Server Database to Windows Azure

    In this article, Marcin Policht looks at migrating existing SQL Server databases to Windows Azure, starting with identifying obstacles associated with such migrations.

    Read the article

  • Interpret a rule applying multiple XPath queries on multiple XML documents

    - by Damien
    Hi, I need to build a component which would take a few XML documents as input and check the following kind of rules: XML1:/bookstore/book[price>35.00] != null and (XML2:/city/name = 'Montreal' or XML3://customer[@language] contains 'en') Basically my component should be able to: substitute the XML tokens with the corresponding XML document (before the colon), apply the XPath query to this XML document, check the XPath output against the expected result ("=", "!=", "contains"), follow the basic syntax ("and", "or" and parentheses), and tell if the rule is true or false. Do you know any library which could help me? Maybe JavaCC? Thanks
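    A rough sketch of just the evaluation step, in Python with lxml rather than a Java parser generator (the documents here are small placeholders and the rule grammar itself is left out): map each token to its document, run the XPath query, then compare the result.

    ```python
    from lxml import etree

    # Placeholder documents keyed by the tokens used in the rules.
    docs = {
        "XML1": etree.fromstring("<bookstore><book><price>39.95</price></book></bookstore>"),
        "XML2": etree.fromstring("<city><name>Montreal</name></city>"),
    }

    def evaluate(token, xpath):
        """Run an XPath query against the document named by the token."""
        return docs[token].xpath(xpath)

    # XML1:/bookstore/book[price>35.00] != null
    clause1 = len(evaluate("XML1", "/bookstore/book[price>35.00]")) > 0
    # XML2:/city/name = 'Montreal'
    names = evaluate("XML2", "/city/name")
    clause2 = bool(names) and names[0].text == "Montreal"

    print(clause1 and clause2)  # True for these sample documents
    ```

    For the surrounding grammar (and, or, parentheses, contains), a parser generator such as JavaCC or ANTLR, or a small hand-written recursive-descent parser, would sit on top of this evaluation step.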

    Read the article

  • Recovering SQL Server 2008 Database From Error 2008

    MS SQL Server 2008 is the latest version of SQL Server. It has been designed with the SQL Server Always On technologies that minimize downtime and maintain appropriate levels of application availa... [Author: Mark Willium - Computers and Internet - May 13, 2010]

    Read the article

  • Best database setup for one-click games

    - by ewizard
    I am building a one-click game website/mobile app, and I am debating between using MySQL and MongoDB for the backend. The way I have been exploring it is with a NodeJS/Express/Angular/Passport/MongoDB stack - I have also implemented Socket.io. I have gotten to the point where I am sending data from the Flash game to the server (NodeJS). The only data that needs to be sent is basic user information, the player's score at the end of each game, and some x,y positions for each player's game (for anti-cheating). It seems like MySQL would work fine, but as I am already using MongoDB - are there any major drawbacks to continuing to work with MongoDB on this project?

    Read the article

  • Data historian queries

    - by Scott Dennis
    Hi, I have a table that contains data for electric motors. The format is: DATE(DateTime) | TagName(VarChar(50)) | Val(Float) | 2009-11-03 17:44:13.000 | Motor_1 | 123.45 2009-11-04 17:44:13.000 | Motor_1 | 124.45 2009-11-05 17:44:13.000 | Motor_1 | 125.45 2009-11-03 17:44:13.000 | Motor_2 | 223.45 2009-11-04 17:44:13.000 | Motor_2 | 224.45 Data for each motor is inserted daily, so there would be 31 Motor_1s and 31 Motor_2s etc. We do this so we can trend it on our control system displays. I am using views to extract last month's max val and last month's min val, and the same for this month's data. Then I join the two and calculate the difference to get the actual run hours for that month. The "Val" is a non-resettable accumulation from a PLC (controller). This is my query for last month's Max Value: SELECT TagName, Val AS Hours FROM dbo.All_Data_From_Last_Mon AS cur WHERE (NOT EXISTS (SELECT TagName, Val FROM dbo.All_Data_From_Last_Mon AS high WHERE (TagName = cur.TagName) AND (Val > cur.Val))) This is my query for last month's Min Value: SELECT TagName, Val AS Hours FROM dbo.All_Data_From_Last_Mon AS cur WHERE (NOT EXISTS (SELECT TagName, Val FROM dbo.All_Data_From_Last_Mon AS high WHERE (TagName = cur.TagName) AND (Val < cur.Val))) This is the query that calculates the difference and runs a bit slow: SELECT dbo.Motors_Last_Mon_Max.TagName, STR(dbo.Motors_Last_Mon_Max.Hours - dbo.Motors_Last_Mon_Min.Hours, 12, 2) AS Hours FROM dbo.Motors_Last_Mon_Min RIGHT OUTER JOIN dbo.Motors_Last_Mon_Max ON dbo.Motors_Last_Mon_Min.TagName = dbo.Motors_Last_Mon_Max.TagName I know there is a better way. Ultimately I just need last month's total and this month's total. Any help would be appreciated. Thanks in advance
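    Since each motor's hours for the month are just the difference between its largest and smallest reading, the two NOT EXISTS views and the outer join can usually be collapsed into a single aggregate query. A runnable sketch using SQLite through Python with the sample rows from the question (the same MAX(Val) - MIN(Val) ... GROUP BY TagName pattern should work directly in SQL Server against the existing view):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE All_Data_From_Last_Mon (TagName TEXT, Val REAL)")
    conn.executemany("INSERT INTO All_Data_From_Last_Mon VALUES (?, ?)",
                     [("Motor_1", 123.45), ("Motor_1", 124.45), ("Motor_1", 125.45),
                      ("Motor_2", 223.45), ("Motor_2", 224.45)])

    # One pass over the view: max minus min per motor is last month's run hours.
    for tag, hours in conn.execute(
            "SELECT TagName, MAX(Val) - MIN(Val) AS Hours "
            "FROM All_Data_From_Last_Mon GROUP BY TagName"):
        print(tag, round(hours, 2))  # Motor_1 2.0, Motor_2 1.0
    ```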

    Read the article

  • Redirecting all page queries to the homepage in Rails

    - by Dean Putney
    I've got a simple Rails application running as a splash page for a website that's going through a transition to a new server. Since this is an established website, I'm seeing user requests hitting pages that don't exist in the Rails application. How can I redirect all unknown requests to the homepage instead of throwing a routing error?

    Read the article

  • Into pole position with Oracle databases!

    - by Alliances & Channels Redaktion
    Imagine you had the choice between a pretty but ancient compact car and a stylish touring car at the cutting edge of technology. Both have their appeal, no question, but on the race track, where performance is all that counts, nostalgia is out of place. It is no different with databases. So anyone who values performance, security, and the optimal use of hardware and IT resources should opt for database tuning. This video sums up the key advantages of Oracle databases briefly and crisply, which also makes it ideal for use with customers. Oracle Database Tuning from Worm Marketing Consulting GmbH on Vimeo.

    Read the article

  • Data architecture for event log metrics?

    - by elliot42
    My service has a large ongoing number of user events, and we would like to do things like "count occurrence of event type T since date D." We are trying to make two basic decisions: What to store? Storing every event vs. only storing aggregates (Event log style) log every event and count them later, vs. (Time-series style) store a single aggregated "count of event E for date D" for every day Where to store the data In a relational database (particularly MySQL) In a non-relational (NoSQL) database In flat log files (collected centrally over the network via syslog-ng) What is standard practice / where can I read more about comparing the different types of systems? Additional details: The total event stream is large, potentially hundreds of thousands of entries per day But our current need is only to count certain types of events within it We don't necessarily need real-time access to the raw data or aggregation results IMHO, "log all events to files, crawl them at a later time to filter and aggregate the stream" is a pretty standard UNIX Way, but my Rails-y compatriots seem to think that nothing is real unless it's in MySQL.
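    A minimal sketch of the two storage shapes side by side, using SQLite through Python for illustration (table names are made up): keep the raw event log, and derive the per-day, per-type aggregate from it with a single GROUP BY.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE events (occurred_at TEXT, event_type TEXT);          -- event-log style
        CREATE TABLE daily_counts (day TEXT, event_type TEXT, n INTEGER,  -- time-series style
                                   PRIMARY KEY (day, event_type));
    """)
    conn.executemany("INSERT INTO events VALUES (?, ?)",
                     [("2010-05-01 10:00:00", "signup"),
                      ("2010-05-01 11:30:00", "signup"),
                      ("2010-05-02 09:15:00", "login")])

    # Roll the raw log up into the aggregate table (this could run nightly).
    conn.execute("""
        INSERT INTO daily_counts
        SELECT date(occurred_at), event_type, COUNT(*)
        FROM events GROUP BY date(occurred_at), event_type
    """)

    # "Count occurrences of event type T since date D" against the aggregate.
    print(conn.execute("SELECT SUM(n) FROM daily_counts "
                       "WHERE event_type = 'signup' AND day >= '2010-05-01'").fetchone()[0])  # 2
    ```

    Keeping the raw log preserves the option of computing new aggregates later, while the small daily_counts table is what the reporting queries actually hit; at hundreds of thousands of entries per day, either MySQL or flat files can hold the raw stream.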

    Read the article

  • Programming language for simple program?

    - by jamherst
    I am wondering which programming languages people would see fit for a program idea that I had. I am looking to create a fairly simple program whose main functions are adding to, managing, and searching through a database of people, all through a polished GUI. It will be for use in the business world, so I think Windows would be the priority, but Mac and Linux support wouldn't be bad. Also, eventually I would like to add the ability for an instance of the program on one computer to interact with other instances on the same network, mainly through the sharing of a database. Most of my experience is in Java, but I don't particularly like the appearance of Java GUIs, so I'm looking for an alternative. I noticed that a lot of people have suggested C++ or C# in similar posts, so what are some of the advantages/disadvantages of one or both if that is your suggestion. Thanks for any help in advance.

    Read the article

  • What are the best settings of the H2 database for high concurrency?

    - by dexter
    There are a lot of settings that can be used in the H2 database: AUTO_SERVER, MVCC, LOCK_MODE, FILE_LOCK and MULTI_THREADED. I wonder what combination works best for a high-concurrency setup, e.g. one thread doing INSERTs while another connection does some UPDATEs and SELECTs? I tried MVCC=TRUE;LOCK_MODE=3;FILE_LOCK=NO but whenever I do some UPDATEs in one connection, the other connection does not see them even though I commit. By the way, the connections are from different processes, e.g. separate programs.

    Read the article

  • Cost of a web server that hosts and delivers text only

    - by slandau
    We are developing an application that needs a web server to interact with the two (or more) entities involved. They will not ever see anything on the web, but the server is required for the transfer of data between them. It's sort of a holding point. Now, the only thing the server is going to be holding is textual data. The two entities are going to be doing the work with the data. I was wondering what the cost of this type of server would be. Since it would be JUST a database with no front end, would it make sense to employ a service through Amazon or Google that just holds data for me to access, instead of buying a server and making my own database? The amount of data can grow very large; however, it's only text, and for the most part all data over a day old will be deleted every day. Thanks!

    Read the article

  • How to copy database files from the network access server to a client PC in C#.NET?

    - by zoya
    I'm using code to copy the files from the database of a server PC, so I'm accessing that server PC through its IP address, but it is giving me an error and not copying the files into the folder on my PC (the client PC). This is the code I'm using... can you tell me where I'm wrong? The file path is given in my ListView in the WinForms app. public string RecordingFileCopy(string recordpath,string ipadd) { string strFinalPath; strFinalPath = String.Format("\\{0}'{1}'",ipadd,recordpath); return strFinalPath; } on button click event.... { try { foreach (ListViewItem item in listView1.Items) { string sourceFile = item.SubItems[5].Text; RecordingFileCopy(sourceFile,"10.0.4.123"); File.Copy(sourceFile, Path.Combine(@"E:\name\MyDir", Path.GetFileName(sourceFile))); } } catch { MessageBox.Show("Files are not copied to folder", _strMsg, MessageBoxButtons.OK, MessageBoxIcon.Error); } }
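    Two things stand out in the snippet as quoted: the value returned by RecordingFileCopy is never used, and the format string does not produce a UNC path of the form \\server\share\path, so File.Copy still reads from the original sourceFile path. A minimal sketch of the intended pattern, written in Python rather than C# and with hypothetical share and file names, since the real share is not given in the post:

    ```python
    import os
    import shutil

    def unc_path(ip, share, relative_path):
        """Build \\\\ip\\share\\relative_path for a file on the remote machine."""
        return r"\\{}\{}\{}".format(ip, share, relative_path.lstrip("\\"))

    # "recordings" is a hypothetical share name exposed by the server at 10.0.4.123.
    source = unc_path("10.0.4.123", "recordings", r"calls\rec001.wav")
    destination = os.path.join(r"E:\name\MyDir", os.path.basename(source))
    shutil.copy(source, destination)  # assumes the share is reachable from the client (Windows paths)
    ```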

    Read the article

  • In MySQL, is it better to have one big table or many smaller tables?

    - by user307922
    Hi All, I am making a database of my clients' customers to send email promotions to. The database will include all of my clients (about 12), and each of them has an average of 2,100 customers. I was wondering if it would be better to have a table in the db for each one of my clients that contains a list of their customers, or if I should just make one big table... The customers will be queried daily. I know it is a broad question but any advice would be appreciated. Cheers, Chuck
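    A sketch of the single-table shape, using SQLite through Python for illustration (table and column names are made up): one customers table keyed by a client_id column, with an index so the daily per-client queries stay fast at roughly 12 x 2,100 rows.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE clients (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE customers (
            id        INTEGER PRIMARY KEY,
            client_id INTEGER NOT NULL REFERENCES clients(id),
            email     TEXT NOT NULL
        );
        -- The daily mail-outs filter by client, so index the foreign key.
        CREATE INDEX idx_customers_client ON customers(client_id);
    """)

    # All of one client's customers for today's mailing.
    rows = conn.execute("SELECT email FROM customers WHERE client_id = ?", (3,)).fetchall()
    ```

    A table per client multiplies every schema change and query by the number of clients, while a single table with a client_id column keeps the SQL uniform, and around 25,000 rows is small enough that the index comfortably covers the daily queries.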

    Read the article
