Search Results

Search found 11543 results on 462 pages for 'partition wise join'.

Page 352/462

  • Why is my mysql database timestamp changing by itself?

    - by Scarface
    Hey guys, quick question: I have an entry that I put in my database, and as I echo the value, the value in the database stays the same while the data echoed keeps increasing, which is messing up my function. If anyone knows what's going on, I would appreciate any suggestions.

        <?php
        include("../includes/connection.php");
        $query = "SELECT * FROM points
                  LEFT JOIN users ON points.user_id = users.id
                  WHERE points.topic_id = '82' AND users.username = 'gman'";
        $check = mysql_query($query);
        while ($row = mysql_fetch_assoc($check)) {
            $points_id = $row['points_id'];
            echo $timestamp = $row['timestamp'];
        }
        ?>
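
    A likely cause (an editor's note, not part of the original post): with SELECT * over a LEFT JOIN, both points and users may contain a column named timestamp, and mysql_fetch_assoc silently keeps only the last one, so $row['timestamp'] can come from the wrong table. Selecting explicit, aliased columns removes the ambiguity - a sketch, assuming hypothetical column names:

        -- Alias each table's timestamp so the PHP code cannot read the wrong one.
        SELECT points.points_id,
               points.timestamp AS points_timestamp,  -- the value meant to be echoed
               users.timestamp  AS users_timestamp    -- e.g. a last-activity column
        FROM points
        LEFT JOIN users ON points.user_id = users.id
        WHERE points.topic_id = '82'
          AND users.username = 'gman';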


  • How to avoid the same calculations on column values over and over again in a select?

    - by Peter
    I sometimes write SELECTs of the form:

        SELECT a.col1 + b.col2 * c.col4 AS calc_col1,
               a.col1 + b.col2 * c.col4 + xxx AS calc_col1_PLUS_MORE
        FROM ....
        INNER JOIN ... ON a.col1 + b.col2 * c.col4 < d.some_threshold
        WHERE a.col1 + b.col2 * c.col4 > 0

    When the calculations get rather involved and are used up to 3-5 times within the same SELECT, I would really like to factor them out into a function or similar in order to 1) hopefully improve performance / make use of caching, and 2) avoid forgetting to update one of the 4 copies when I later realize I need to change the calculation. I usually have these SELECTs within SPs. Any ideas?
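
    One common rewrite (an editor's sketch, not from the original question; table and column names are placeholders, and SQL Server is assumed since the poster mentions SPs): compute the expression once in a CROSS APPLY and refer to the alias everywhere else, which fixes the maintenance problem even if the optimizer was already folding the duplicates:

        -- Compute the expression once, then reuse the alias everywhere.
        SELECT x.calc AS calc_col1,
               x.calc + 42 AS calc_col1_PLUS_MORE  -- 42 stands in for "xxx"
        FROM a
        INNER JOIN b ON b.id = a.id
        INNER JOIN c ON c.id = a.id
        CROSS APPLY (SELECT a.col1 + b.col2 * c.col4 AS calc) AS x
        INNER JOIN d ON x.calc < d.some_threshold
        WHERE x.calc > 0;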


  • SQL Server Table locks in long query - Solution: NoLock?

    - by Kovu
    A report in my application runs a query that needs between 5 and 15 seconds (depending on the count of rows that will be returned). The query has 8 joins to nearly all of the main tables of my application (customers, sales, units, etc.). A little tool shows me that during this time all 8 of those tables are locked with a shared table lock. That means no update operation can be done in this time. A friend's suggested solution is to give every join in the query that is not mandatory for 100% correct data (a dirty read is acceptable) a NoLock hint, so that only 1 of the 8 tables is locked completely. Is that a good solution? For a report in which 99% of the data comes from one table, unlock the lower-priority tables?
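
    For reference (an editor's sketch; table names are placeholders), the hint is applied per table reference; enabling READ_COMMITTED_SNAPSHOT on the database is a commonly suggested alternative that stops readers blocking writers without dirty reads:

        -- Per-table hint: these joins tolerate dirty reads.
        SELECT s.SaleID, c.Name, u.UnitName
        FROM sales AS s
        INNER JOIN customers AS c WITH (NOLOCK) ON c.CustomerID = s.CustomerID
        INNER JOIN units     AS u WITH (NOLOCK) ON u.UnitID = s.UnitID;

        -- Alternative: row versioning at the database level.
        ALTER DATABASE MyApp SET READ_COMMITTED_SNAPSHOT ON;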


  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure whether I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index?

    The above-mentioned MSDN article states: "The pages in the data chain and the rows in them are ordered on the value of the clustered index key." And Using Clustered Indexes, for example, states: "For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted."

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" into the front of the leaf level of the B-tree?

    This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)
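
    As an aside (an editor's sketch; the index and table names are illustrative): scenario B is what SQL Server calls a page split, and it is why a fill factor is often specified when the clustering key does not increase monotonically - free space is left in each leaf page so out-of-order inserts land in an existing page (scenario A) instead of splitting it:

        -- Leave 20% free space in each leaf page to absorb "low key" inserts.
        CREATE CLUSTERED INDEX IX_Orders_CustomerKey
        ON Orders (CustomerKey)
        WITH (FILLFACTOR = 80);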


  • Having trouble using 'AND' in CONTAINSTABLE SQL SERVER FULL TEXT SEARCH

    - by Joshua
    I've been using FULL-TEXT search for a while but I cannot seem to get the most relevant results sometimes. Say I have a field with something like "An Overview of Pain Medicine 5/12/2006" and a user types "An Overview 5/12/2006", so we create a search like:

        '"An" AND "Overview" AND "5/12/2006"'  -- 0 results (bad)
        '"Overview" AND "5/12/2006"'           -- 1 result (good)

    The CONTAINSTABLE portion of my query:

        FROM ce_Activity A
        INNER JOIN CONTAINSTABLE(View_Activities, (Searchable), @Search) AS KeyTbl
            ON A.ActivityID = KeyTbl.[KEY]

    "Searchable" is a field that contains the activity title and start date (converted to a string) in one field, so it's all search-friendly. Why would this happen?
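
    A likely explanation (an editor's note, not from the original post): "an" is on SQL Server's noise-word (stopword) list, and by default a noise word inside an AND-ed full-text condition matches nothing, which fails the whole query. On SQL Server 2005, one documented workaround is the transform noise words option, which makes noise words match trivially instead - a sketch:

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        -- Treat noise words in a full-text query as matches, not failures.
        EXEC sp_configure 'transform noise words', 1;
        RECONFIGURE;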


  • Wildcard subdomain .htaccess and Codeigniter

    - by Gautam
    Hi All, I am trying to create the proper .htaccess that would allow me to map URLs as such:

        1. http://domain.com/              --> http://domain.com/home
        2. http://domain.com/whatever      --> http://domain.com/home/whatever
        3. http://user.domain.com/         --> http://domain.com/user
        4. http://user.domain.com/whatever --> http://domain.com/user/whatever/

    Here, someone would type in the URLs on the left; internally, however, it would redirect as if it were the URL on the right. Also, the subdomain would be dynamic (that is, http://user.domain.com isn't an actual subdomain but would be a .htaccess rewrite). Also, /home is my default controller, so no subdomain would internally force it to the /home controller, and any paths following it (as shown in example #2 above) would be the (catch-all) function within that controller. Likewise, if a subdomain is passed, it would get passed as a (catch-all) controller along with any (catch-all) functions for it (as shown in example #4 above). Hopefully I'm not asking much here, but I can't seem to figure out the proper .htaccess or routing rules (in CodeIgniter) for this. httpd.conf and hosts are set up just fine.

    EDIT #1: Here's my .htaccess, which is coming close but is messing up at some point:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{HTTP_HOST} ^([a-z0-9-]+).domain [NC]
        RewriteRule (.*) index.php/%1/$1 [QSA]
        RewriteCond $1 !^(index\.php|images|robots\.txt)
        RewriteRule ^(.*)$ /index.php/$1 [L,QSA]

    With the above, when I visit http://test.domain/abc/123, this is what I notice in the $_SERVER var (I've removed some of the fields):

        Array
        (
            [REDIRECT_STATUS] => 200
            [SERVER_NAME] => test.domain
            [REDIRECT_URL] => /abc/123
            [QUERY_STRING] =>
            [REQUEST_URI] => /abc/123
            [SCRIPT_NAME] => /index.php
            [PATH_INFO] => /test/abc/123
            [PATH_TRANSLATED] => redirect:\index.php\test\test\abc\123\abc\123
            [PHP_SELF] => /index.php/test/abc/123
        )

    You can see that PATH_TRANSLATED is not properly formed, and I think that may be screwing things up?


  • Downloading a Directory Tree with FTPLIB

    - by Anthony Lemmer
    I'd like to download a directory and all of its contents to the local HD. Here's the code I have thus far (crashes if there's a sub-directory, else grabs all the files):

        import ftplib
        import configparser
        import os

        def runBackups():
            # Load INI
            filename = 'connections.ini'
            config = configparser.SafeConfigParser()
            config.read(filename)
            connections = config.sections()
            i = 0
            while i < len(connections):
                # Load settings
                uri = config.get(connections[i], "uri")
                username = config.get(connections[i], "username")
                password = config.get(connections[i], "password")
                backupPath = config.get(connections[i], "backuppath")
                archiveTo = config.get(connections[i], "archiveto")
                # Start back-ups
                ftp = ftplib.FTP(uri)
                ftp.login(username, password)
                ftp.set_debuglevel(2)
                ftp.cwd(backupPath)
                files = ftp.nlst()
                for filename in files:
                    ftp.retrbinary('RETR %s' % filename,
                                   open(os.path.join(archiveTo, filename), 'wb').write)
                ftp.quit()
                i += 1
            print()
            print("Back-ups complete.")
            print()


  • How do I set up mod rewrite to do this?

    - by Ali
    Hi guys, here's the scene: I'm building a web application which basically creates accounts for all users. Currently the file structure is set up like this:

        root/index.php
        root/someotherfile.php
        root/images/image-sub-folder/image.jpg
        root/js/somejs.js

    When users create an account they choose a fixed group name, and then users can join that group. Initially I thought of having an extra textbox on the login screen to enter the group the user belongs to. But I would like instead to have something like virtual folders, in this case:

        root/group-name/index.php

    I heard it can be done with Apache mod_rewrite but I'm not sure how to do this - any help here? Basically, instead of having something like &group-name=yourGroupName appended to every page, I would just like something of the nature above.


  • global scope of variable

    - by shantanuo
    The following shell script will check the disk space and change the variable "diskfull" to 1 if the usage is more than 10%. The last echo always shows 0. I tried the global diskfull=1 in the if clause but it did not work. How do I change the variable to 1 if the disk space consumed is more than 10%?

        #!/bin/sh
        diskfull=0
        ALERT=10
        df -HP | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' |
        while read output;
        do
          #echo $output
          usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
          partition=$(echo $output | awk '{ print $2 }' )
          if [ $usep -ge $ALERT ]; then
            diskfull=1
            exit
          fi
        done
        echo $diskfull


  • Code golf: the Mandelbrot set

    - by Stefano Borini
    Usual rules for the code golf. Here is an implementation in Python as an example:

        from PIL import Image
        im = Image.new("RGB", (300,300))
        for i in xrange(300):
            print "i = ", i
            for j in xrange(300):
                x0 = float( 4.0*float(i-150)/300.0 -1.0)
                y0 = float( 4.0*float(j-150)/300.0 +0.0)
                x = 0.0
                y = 0.0
                iteration = 0
                max_iteration = 1000
                while (x*x + y*y <= 4.0 and iteration < max_iteration):
                    xtemp = x*x - y*y + x0
                    y = 2.0*x*y + y0
                    x = xtemp
                    iteration += 1
                if iteration == max_iteration:
                    value = 255
                else:
                    value = iteration*10 % 255
                print value
                im.putpixel( (i,j), (value, value, value))
        im.save("image.png", "PNG")

    The result should look like this. Use of an image library is allowed. Alternatively, you can use ASCII art. This code does the same:

        for i in xrange(40):
            line = []
            for j in xrange(80):
                x0 = float( 4.0*float(i-20)/40.0 -1.0)
                y0 = float( 4.0*float(j-40)/80.0 +0.0)
                x = 0.0
                y = 0.0
                iteration = 0
                max_iteration = 1000
                while (x*x + y*y <= 4.0 and iteration < max_iteration):
                    xtemp = x*x - y*y + x0
                    y = 2.0*x*y + y0
                    x = xtemp
                    iteration += 1
                if iteration == max_iteration:
                    line.append(" ")
                else:
                    line.append("*")
            print "".join(line)

    The result:

        [40-line by 80-column ASCII-art rendering of the Mandelbrot set]


  • Mysql out of disk space

    - by Paddy
    I have just finished developing a Rails app which has a MySQL db as a backend. The app is meant for high traffic and will store lots of information. I am planning to set up my own web server and host the site from it. If in the future my disk space runs out, I would want to expand by adding more space. But say my MySQL database is housed in /disk0s1 and by adding a new drive I have more partitions (and hence more disk space): how then would I extend my database to store information on those partitions too, and at the same time prevent any information from being written to the original partition? Should I go for multiple databases? If so, how? If I went for a hosting solution I wouldn't be bothering about this, as I would just have to worry about making payments for the extra space :) I always wondered how space is added on-the-go by these webhosts. Is there any specific MySQL configuration that I have to make?
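
    One MySQL-level option (an editor's sketch; the paths and table are hypothetical, and this form applies to MyISAM tables): individual large tables can be placed on the new partition with DATA DIRECTORY / INDEX DIRECTORY, so growth lands on the new disk while the old partition stops filling up:

        -- Put a new, fast-growing table on the second disk.
        CREATE TABLE page_views (
            id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            viewed_at DATETIME NOT NULL
        ) ENGINE=MyISAM
          DATA DIRECTORY  = '/mnt/disk2/mysql-data'
          INDEX DIRECTORY = '/mnt/disk2/mysql-index';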


  • rails: include statement with two ON conditions

    - by Markus
    Hi, I have three tables:

        books
        bookmarks
        users

    where there is an n-to-m relation from books to users through bookmarks. I'm looking for a query where I get all the books of a certain user, including the bookmarks. If no bookmarks are there, there should be a null included... My SQL statement looks like:

        SELECT *
        FROM books
        LEFT OUTER JOIN bookmarks
            ON bookmarks.book_id = books.id
            AND bookmarks.user_id = ?

    In Rails I only know the :include statement, but how can I add the second bookmarks.user_id = ? condition in the ON section of this query? If I put it in the :conditions part, no null results get returned! Thanks! Markus


  • Get children count via HQL

    - by Thomas Lötzer
    Hi, I have a one-to-many mapping between a parent entity and child entities. Now I need to find the number of children associated with each parent, for a list of parents. I am trying to do this with HQL, but I am not sure how I can get the list of parents in there. Also, I don't know how I can return the entity itself and not just its ID. My current HQL query is:

        select new map(parent.id as parentId, count(*) as childCount)
        from Parent parent
        left join parent.children children
        group by parent.id

    but this only returns the ID and does not filter on specific parents.

    EDIT: Based on Pascal's answer I have modified the query to:

        select new map(parent as parent, count(elements(parent.children)) as childCount)
        from Parent parent
        group by parent

    That does work, but is prohibitively slow: 30 seconds instead of 400 ms on the same database.


  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I'm not sure if the code is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly

        open System
        open System.IO
        open System.IO.Pipes
        open System.Text
        open System.Collections.Generic
        open System.Runtime.Serialization

        [<DataContract>]
        type Quote = {
            [<field: DataMember(Name="securityIdentifier") >]
            RicCode:string
            [<field: DataMember(Name="madeOn") >]
            MadeOn:DateTime
            [<field: DataMember(Name="closePrice") >]
            Price:float
        }

        let m_cache = new Dictionary<string, Quote>()

        let ParseQuoteString (quoteString:string) =
            let data = Encoding.Unicode.GetBytes(quoteString)
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length)
            stream.Position <- 0L
            let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
            let results:Quote array = ser.ReadObject(stream) :?> Quote array
            results

        let RefreshCache quoteList =
            m_cache.Clear()
            quoteList |> Array.iter (fun result -> m_cache.Add(result.RicCode, result))

        let EstablishConnection() =
            let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
            let mutable sr = null
            printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
            pipeServer.WaitForConnection()
            printfn "[F#] Client connected."
            try
                // Stream for the request.
                sr <- new StreamReader(pipeServer)
            with
            | _ as e -> printfn "[F#]ERROR: %s" e.Message
            sr

        while true do
            let sr = EstablishConnection()
            // Read request from the stream.
            printfn "[F#] Ready to Receive data"
            sr.ReadLine()
            |> ParseQuoteString
            |> RefreshCache
            printfn "[F#]Quot Size, %d" m_cache.Count
            let quot = m_cache.["MSFT.OQ"]
            printfn "[F#]RIC: %s" quot.RicCode
            printfn "[F#]MadeOn: %s" (String.Format("{0:T}", quot.MadeOn))
            printfn "[F#]Price: %f" quot.Price


  • Implementing a Mac OS X loop device driver

    - by Inso Reiges
    Hello, I am aware that there is a native implementation of a loop device system on OS X, the hdiutil/hdix driver. But due to a number of reasons I need to roll out my own custom loop driver. Since hdix is closed-source, can anyone give some starting pointers, links, advice, etc. on the subject? I have experience implementing loop drivers on Linux and Windows, but I don't really have a clue where to start from on OS X. The basic functionality I need to implement is the same: given any file on disk, simulate a virtual block device interface for it. I would also like my loop driver to use all the native partition filter stacking available for real and hdix virtual block devices on OS X.


  • EXTEND_MODEL_CASES SQL 2005 workaround

    - by user282382
    Hi, I have a time-series-based mining model in SQL 2005 Analysis Services. I understand that in 2008 you can do what-if analysis by using EXTEND_MODEL_CASES with a natural prediction join. I'm looking for a workaround or some method of doing the same thing with 2005. My time series has 3 inputs and one predict_only. I'd like to use the prediction function to see what types of predictions it makes for 6-12 time intervals in the future, with inputs in a separate table. Is there any way to do this or something similar? Thanks


  • C# JSON serialization

    - by Bridget the Midget
    I'm trying out the HighStock library for creating stock charts. To fill the chart with data, their example specifies this source. The first parameter is Unix time in milliseconds and the second parameter is the stock closing price. I don't know if this is valid JSON, but I would argue that the following would be a more appropriate way of writing JSON:

        [{"Closing":63.15000,"Date":1262559600000},{"Closing":64.75000,"Date":1262646000000}, ...

    I guess that I have no other option than to adapt to HighStock's syntax. I could solve this by looping and adding the correct syntax to a string, but that seems rudimentary. Would it be wiser to serialize C# objects to create my JSON, and if that's the case - how can I reach the syntax specified in the example? Let's just say this is my C# object:

        public class Quote
        {
            public double Date { get; set; }
            public decimal Closing { get; set; }
        }

    Am I making it unnecessarily complex? Should I just format a JSON string?


  • Mysql SELECT nested query, very complicated?

    - by smartbear
    Okay, first, following are my tables:

        Table house:
        id | items_id
        1  | 1,5,10,20

        Table items:
        id | room_name | refer
        1  | kitchen   | 3
        5  | room1     | 10

        Table kitchen:
        id | detail_name | refer
        3  | spoon       | 4
        5  | fork        | 10

        Table spoon:
        id | name    | color | price | quantity_available
        4  | spoon_a | white | 50    | 100
        5  | spoon_b | black | 30    | 200

    How do I do a nested select statement, where I want to select the id, name, color, price and quantity_available columns from each value inside the 'items_id' column in the 'house' table? This is very challenging!!

    EDIT: after reading robin's answer:

        Table house:
        id     | items_id
        house1 | 1
        house1 | 5
        house1 | 10
        house2 | 20

    If this is the house table, how do I do the nested, join, or whatever select statement??
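
    With the original comma-separated items_id column, one workable (if unindexable) MySQL idiom is FIND_IN_SET; with the normalized layout from the edit, it becomes a plain chain of joins. A sketch (an editor's addition following the sample data, not from the original thread):

        -- Comma-list design: match each item id against the list (slow, no index use).
        SELECT s.id, s.name, s.color, s.price, s.quantity_available
        FROM house h
        JOIN items i   ON FIND_IN_SET(i.id, h.items_id)
        JOIN kitchen k ON k.id = i.refer
        JOIN spoon s   ON s.id = k.refer
        WHERE h.id = 1;

        -- Normalized design from the edit: one row per (house, item).
        SELECT s.id, s.name, s.color, s.price, s.quantity_available
        FROM house h
        JOIN items i   ON i.id = h.items_id
        JOIN kitchen k ON k.id = i.refer
        JOIN spoon s   ON s.id = k.refer
        WHERE h.id = 'house1';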


  • One table, need multiple values from different rows/tuples

    - by WmasterJ
    I have tables like:

        'profile_values'
        userID | fid | value
        -------+-----+---------------
        1      | 3   | [email protected]
        1      | 45  | 203-234-2345
        3      | 3   | [email protected]
        1      | 45  | 123-456-7890

    And:

        'users'
        userID | name
        -------+------
        1      | joe
        2      | jane
        3      | jake

    I want to join them and have one row with two of the values, like:

        userID | name | email          | phone
        -------+------+----------------+--------------
        1      | joe  | [email protected] | 203-234-2345
        2      | jane | [email protected] | 123-456-7890

    I have solved it, but it feels clumsy and I want to know if there is a better way to do it - meaning solutions that are either more readable, faster (optimized), or simply best practice. Current solution: multiple tables selected, many conditional statements:

        SELECT u.userID AS memberid,
               u.name AS first_name,
               pv1.value AS fname,
               pv2.value AS lname
        FROM users AS u,
             profile_values AS pv1,
             profile_values AS pv2
        WHERE u.userID = pv1.userID AND pv1.fid = 3
          AND u.userID = pv2.userID AND pv2.fid = 45;

    Thanks for the help!
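
    Two common alternatives (an editor's sketch, not from the original post): explicit LEFT JOINs, which read better than the comma-join and keep users who lack one of the values, or a single pass with conditional aggregation:

        -- Explicit joins, one join per field id.
        SELECT u.userID, u.name, pv1.value AS email, pv2.value AS phone
        FROM users u
        LEFT JOIN profile_values pv1 ON pv1.userID = u.userID AND pv1.fid = 3
        LEFT JOIN profile_values pv2 ON pv2.userID = u.userID AND pv2.fid = 45;

        -- One scan with conditional aggregation (pivot-style).
        SELECT u.userID, u.name,
               MAX(CASE WHEN pv.fid = 3  THEN pv.value END) AS email,
               MAX(CASE WHEN pv.fid = 45 THEN pv.value END) AS phone
        FROM users u
        LEFT JOIN profile_values pv ON pv.userID = u.userID
        GROUP BY u.userID, u.name;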


  • MS SQL and .NET Typed Dataset AllowDBNull Metadata

    - by Christian Pena
    Good afternoon, I am generating a typed dataset from a stored procedure. The stored procedure may contain something like:

        select t1.colA, t2.colA AS t2colA
        from t1
        inner join t2 on t1.key = t2.key

    When I generate the typed dataset, the dataset knows whether t1.colA allows NULLs, but it always puts FALSE in AllowDBNull for t2.colA even if t2.colA allows NULL. Is this because the column is aliased? Is there any way, from SQL, to hint to VS that the column allows NULL? We currently have to go in and update the column's AllowDBNull if we regenerate the table. Thanks in advance. Christian


  • Rendering to a single Bitmap object from multiple threads

    - by Lee Treveil
    What I'm doing is rendering a number of bitmaps to a single bitmap. There could be hundreds of images, and the bitmap being rendered to could be over 1000x1000 pixels. I'm hoping to speed up this process by using multiple threads, but since the Bitmap object is not thread-safe it can't be rendered to directly concurrently. What I'm thinking is to split the large bitmap into sections per CPU, render them separately, then join them back together at the end. I haven't done this yet in case you guys/girls have any better suggestions. Any ideas? Thanks


  • How to find N Consecutive records in a table using SQL

    - by user320587
    Hi, I have the following table definition with sample data. In the following table, Customer, Product & Date are key fields.

        Table One
        Customer | Product | Date       | SALE
        X        | A       | 01/01/2010 | YES
        X        | A       | 02/01/2010 | YES
        X        | A       | 03/01/2010 | NO
        X        | A       | 04/01/2010 | NO
        X        | A       | 05/01/2010 | YES
        X        | A       | 06/01/2010 | NO
        X        | A       | 07/01/2010 | NO
        X        | A       | 08/01/2010 | NO
        X        | A       | 09/01/2010 | YES
        X        | A       | 10/01/2010 | YES
        X        | A       | 11/01/2010 | NO
        X        | A       | 12/01/2010 | YES

    In the above table, I need to find the rows in runs of N or more consecutive records where there was no sale (SALE value was 'NO'). For example, if N is 2, the result set would return the following:

        Customer | Product | Date       | SALE
        X        | A       | 03/01/2010 | NO
        X        | A       | 04/01/2010 | NO
        X        | A       | 06/01/2010 | NO
        X        | A       | 07/01/2010 | NO
        X        | A       | 08/01/2010 | NO

    Can someone help me with a SQL query to get the desired results? I am using SQL Server 2005. I started playing with ROW_NUMBER() and PARTITION BY clauses but had no luck. Thanks for any help.
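
    This is a classic gaps-and-islands problem, and ROW_NUMBER() was the right instinct. One sketch for SQL Server 2005 (an editor's addition; the table is assumed to be named TableOne with Date as a sortable datetime column): the difference between a row number over all rows and a row number over rows with the same SALE value is constant within each consecutive run, so it can serve as a group key:

        DECLARE @N int;
        SET @N = 2;

        WITH runs AS (
            SELECT Customer, Product, [Date], SALE,
                   ROW_NUMBER() OVER (PARTITION BY Customer, Product
                                      ORDER BY [Date])
                 - ROW_NUMBER() OVER (PARTITION BY Customer, Product, SALE
                                      ORDER BY [Date]) AS grp
            FROM TableOne
        ),
        counted AS (
            SELECT Customer, Product, [Date], SALE,
                   -- Length of the consecutive run this row belongs to.
                   COUNT(*) OVER (PARTITION BY Customer, Product, SALE, grp) AS run_len
            FROM runs
        )
        SELECT Customer, Product, [Date], SALE
        FROM counted
        WHERE SALE = 'NO' AND run_len >= @N
        ORDER BY Customer, Product, [Date];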


  • PHP/MySQL User system w/ groups.

    - by Ben C
    I have a user system set up in a 'users' table, and I have groups set up in a 'groups' table. Essentially, I want users to be able to join any and all of the groups that they want, in the same way as one would on Facebook. How would one go about structuring this in a MySQL/PHP database system? Just a quick summary would be helpful! I've looked around, but I can only find info on creating 'groups' where the users just have a sort of 1, 2 or 3 permission rank. Thanks!
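
    The usual structure for this (an editor's sketch; the table and column names are illustrative) is a many-to-many junction table, so a membership is a row rather than a permission rank:

        -- Junction table: one row per (user, group) membership.
        CREATE TABLE group_memberships (
            user_id   INT NOT NULL,
            group_id  INT NOT NULL,
            joined_at DATETIME NOT NULL,
            PRIMARY KEY (user_id, group_id),
            FOREIGN KEY (user_id)  REFERENCES users(id),
            FOREIGN KEY (group_id) REFERENCES groups(id)
        );

        -- All groups a given user belongs to:
        SELECT g.*
        FROM groups g
        JOIN group_memberships m ON m.group_id = g.id
        WHERE m.user_id = 42;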


  • Cleaning up temp folder after long-running subprocess exits

    - by dbr
    I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these. When the image-viewing process exits, I want to remove the temporary images. I can't do this from Python, as the Python process may have exited before the subprocess completes. I.e., I cannot do the following:

        p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image1.jpg"])
        p.communicate()
        os.unlink("/example/image1.jpg")
        os.unlink("/example/image2.jpg")

    ..as this blocks the main thread, nor could I check for the pid exiting in a thread, etc. The only solution I can think of means I have to use shell=True, which I would rather avoid:

        cmd = ['imgviewer']
        cmd.append("/example/image2.jpg")
        for x in cleanup:
            cmd.extend(["&&", "rm", x])
        cmdstr = " ".join(cmd)
        subprocess.Popen(cmdstr, shell = True)

    This works, but is hardly elegant, and will fail with filenames containing spaces, etc. Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.


  • Simple MySQL Query taking 45 seconds (Gets a record and its "latest" child record)

    - by Brian Lacy
    I have a query which gets a customer and the latest transaction for that customer. Currently this query takes over 45 seconds for 1000 records. This is especially problematic because the script itself may need to be executed as frequently as once per minute! I believe using subqueries may be the answer, but I've had trouble constructing it to actually give me the results I need.

        SELECT customer.CustID, customer.leadid, customer.Email,
               customer.FirstName, customer.LastName,
               transaction.*,
               MAX(transaction.TransDate) AS LastTransDate
        FROM customer
        INNER JOIN transaction ON transaction.CustID = customer.CustID
        WHERE customer.Email = '".$email."'
        GROUP BY customer.CustID
        ORDER BY LastTransDate
        LIMIT 1000

    I really need to get this figured out ASAP. Any help would be greatly appreciated!
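
    The usual fix (an editor's sketch; the column names follow the question) is a greatest-n-per-group rewrite: a derived table finds each customer's latest TransDate once, then joins back to pick up the full transaction row. This also removes the non-deterministic transaction.* under GROUP BY, and an index on transaction (CustID, TransDate) makes the derived table cheap:

        SELECT c.CustID, c.leadid, c.Email, c.FirstName, c.LastName, t.*
        FROM customer c
        INNER JOIN (
            -- Latest transaction date per customer, computed once.
            SELECT CustID, MAX(TransDate) AS LastTransDate
            FROM transaction
            GROUP BY CustID
        ) latest ON latest.CustID = c.CustID
        INNER JOIN transaction t
                ON t.CustID = latest.CustID
               AND t.TransDate = latest.LastTransDate
        WHERE c.Email = '".$email."'
        LIMIT 1000;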

