Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Character encoding issues in MySQL

    - by Eric
    In my database we have fields where the data is not readable. I now know why it happened, but I don't know how to fix it. I found a way to pull the affected rows back out of the database:

        SELECT id, name
        FROM projects
        WHERE LENGTH(name) != CHAR_LENGTH(name);

    One of the rows returned shows:

        id   | name
        -------------------------
        1008 | CajÃ³n el Diablo

    This should be:

        id   | name
        -------------------------
        1008 | Cajón el Diablo

    Can somebody help me figure out how to fix this problem? Can I convert the data using SQL, or is SQL not suited to this? If not, how about Python?
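
    A minimal sketch of the usual repair for this kind of mojibake, assuming the column is declared as utf8/utf8mb4 and the damage is the classic "UTF-8 bytes re-encoded as latin1" double encoding. Back up the table and preview with a SELECT before touching anything, since LENGTH != CHAR_LENGTH also matches rows that are already stored correctly:

        -- Preview: show the stored value next to the repaired value.
        SELECT id,
               name AS stored_value,
               CONVERT(CAST(CONVERT(name USING latin1) AS BINARY) USING utf8mb4) AS repaired_value
        FROM projects
        WHERE LENGTH(name) != CHAR_LENGTH(name);

        -- If the preview looks right, apply the same expression in an UPDATE.
        UPDATE projects
        SET name = CONVERT(CAST(CONVERT(name USING latin1) AS BINARY) USING utf8mb4)
        WHERE LENGTH(name) != CHAR_LENGTH(name);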

    Read the article

  • Shortest command to calculate the sum of a column of output on Unix?

    - by Andrew
    I'm sure there is a quick and easy way to calculate the sum of a column of values on Unix systems (using something like awk or xargs, perhaps), but writing a shell script to parse the rows line by line is the only thing that comes to mind at the moment. For example, what's the simplest way to modify the command below to compute and display the total for the SEGSZ column (70300)?

        ipcs -mb | head -6
        IPC status from /dev/kmem as of Mon Nov 17 08:58:17 2008
        T      ID     KEY        MODE         OWNER  GROUP  SEGSZ
        Shared Memory:
        m       0     0x411c322e --rw-rw-rw-  root   root     348
        m       1     0x4e0c0002 --rw-rw-rw-  root   root   61760
        m       2     0x412013f5 --rw-rw-rw-  root   root    8192

    Read the article

  • Query to find duplicate item in 2 table

    - by Rico
    I have this table:

        Antecedent  Consequent
        I1          I2
        I1          I1,I2,I3
        I1          I4,I1,I3,I4
        I1,I2       I1
        I1,I2       I1,I4
        I1,I2       I1,I3
        I1,I4       I3,I2
        I1,I2,I3    I1,I4
        I1,I3,I4    I4

    As you can see, it's pretty messed up. Is there any way I can remove a row if an item in its consequent also exists in its antecedent? For example:

    INPUT:

        Antecedent  Consequent
        I1          I2
        I1          I1,I2,I3    <---- DELETE, since I1 exists in antecedent
        I1          I4,I1,I3,I4 <---- DELETE, since I1 exists in antecedent
        I1,I2       I1          <---- DELETE, since I1 exists in antecedent
        I1,I2       I1,I4       <---- DELETE, since I1 exists in antecedent
        I1,I2       I1,I3       <---- DELETE, since I1 exists in antecedent
        I1,I4       I3,I2
        I1,I2,I3    I1,I4       <---- DELETE, since I1 exists in antecedent
        I1,I3,I4    I4          <---- DELETE, since I4 exists in antecedent

    OUTPUT:

        Antecedent  Consequent
        I1          I2
        I1,I4       I3,I2

    Is there any way I can do that with a query?
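
    A sketch of one way to do it in MySQL, assuming the table is called rules (a placeholder name), the lists stay comma separated with no spaces, and a consequent never holds more than a handful of items (the inline numbers table below covers up to 5; extend it if needed). It splits the consequent with SUBSTRING_INDEX and checks each piece against the antecedent with FIND_IN_SET:

        SELECT r.Antecedent, r.Consequent
        FROM rules AS r
        WHERE NOT EXISTS (
            SELECT 1
            FROM (SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3
                  UNION ALL SELECT 4 UNION ALL SELECT 5) AS nums
            -- only look at as many positions as the consequent actually has items
            WHERE nums.n <= 1 + LENGTH(r.Consequent) - LENGTH(REPLACE(r.Consequent, ',', ''))
              -- extract the nth item of the consequent and test membership in the antecedent
              AND FIND_IN_SET(
                    SUBSTRING_INDEX(SUBSTRING_INDEX(r.Consequent, ',', nums.n), ',', -1),
                    r.Antecedent
                  ) > 0
        );

    Flipping NOT EXISTS to EXISTS lists the rows to delete instead of the rows to keep.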

    Read the article

  • Can't save form content to database

    - by dana
    I'm trying to save a 100-character post from the user in a minimal 'microblog' application. My code looks right to me, but it doesn't work. The mistake is in views.py: I can't save the foreign key to the user table. models.py looks like this:

        class NewManager(models.Manager):
            def create_post(self, post, username):
                new = self.model(post=post, created_by=username)
                new.save()
                return new

        class New(models.Model):
            post = models.CharField(max_length=120)
            date = models.DateTimeField(auto_now_add=True)
            created_by = models.ForeignKey(User, blank=True)
            objects = NewManager()

        class NewForm(ModelForm):
            class Meta:
                model = New
                fields = ['post']
                # widgets = {'post': Textarea(attrs={'cols': 80, 'rows': 20})}

    And the view:

        def save_new(request):
            if request.method == 'POST':
                created_by = User.objects.get(created_by=user)
                date = request.POST.get('date', '')
                post = request.POST.get('post', '')
                new_obj = New(post=post, date=date, created_by=created_by)
                new_obj.save()
                return HttpResponseRedirect('/')
            else:
                form = NewForm()
            return render_to_response('news/new_form.html', {'form': form},
                                      context_instance=RequestContext(request))

    I didn't show the imports here; they're done right, anyway. The mistake is in views.py: when I try to save, it says "local variable 'created_by' referenced before assignment", and if I pass created_by as a parameter, the save needs more parameters. It is really weird. Help please!

    Read the article

  • what changes when your input is giga/terabyte sized?

    - by Wang
    I took my first baby step into real scientific computing today when I was shown a data set where the smallest file is 48000 fields by 1600 rows (haplotypes for several people, for chromosome 22). And this is considered tiny. I write Python, so I've spent the last few hours reading about HDF5, NumPy, and PyTables, but I still feel like I'm not really grokking what a terabyte-sized data set actually means for me as a programmer. For example, someone pointed out that with larger data sets, it becomes impossible to read the whole thing into memory, not because the machine has insufficient RAM, but because the architecture has insufficient address space! It blew my mind. What other assumptions have I been relying on in the classroom that just don't work with input this big? What kinds of things do I need to start doing or thinking about differently? (This doesn't have to be Python-specific.)

    Read the article

  • How can I sqldump a huge database?

    - by meder
    SELECT COUNT(*) on the table gives me 3296869 rows. The table only contains 4 columns, storing dropped domains. I tried to dump the SQL through:

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";

    However, this just dumps an empty 20 KB gzipped file. My client is using shared hosting, so the server specs and resource usage aren't top of the line. I'm not even given SSH access or direct access to the database, so I have to make queries through PHP scripts I upload via FTP (SFTP isn't an option, again). Is there some way I can perhaps sequentially download portions of it, or pass an argument to mysqldump that will optimize it? I came across http://jeremy.zawodny.com/blog/archives/000690.html which mentions the -q flag and tried that, but it didn't seem to do anything differently.
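
    If mysqldump stays unusable on that host, one fallback is to pull the table down in fixed-size chunks from the PHP script and append each chunk to the dump file. A rough sketch of the query side, assuming the table is called dropped_domains and has an integer primary key id (both names are placeholders); keying each chunk on the last id seen keeps every request cheap, unlike a growing OFFSET:

        -- First chunk: start from the beginning.
        SELECT id, domain, col3, col4      -- column names are placeholders
        FROM dropped_domains
        ORDER BY id
        LIMIT 10000;

        -- Every following chunk: resume after the highest id of the previous one.
        SELECT id, domain, col3, col4
        FROM dropped_domains
        WHERE id > 123456                  -- substitute the last id fetched
        ORDER BY id
        LIMIT 10000;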

    Read the article

  • javascript pop-up menu help

    - by baiano
    I am working on a project similar to a table where the user will be able to add rows. Right now there is just one row type available, but I would like to give the user the ability to select from a list without changing the layout of the page. So I put together a menu that appears on mouseover of the 'add row' link and disappears on mouseout (with a slight delay and fade in/out) using MooTools event listeners. (Screenshot omitted.) I am now trying to figure out an easy way to make the list stay available when the user's mouse leaves the 'add a row' link to go select an item from the list. I looked through various MooTools add-ons and tutorials but didn't find anything all that helpful. Does anyone know of a good tutorial to guide me through this, or can otherwise point me in the right direction here?

    Read the article

  • PHP find array keys

    - by Jens Törnell
    In PHP I have an array that looks like this:

        $array[0]['width']  = '100';
        $array[0]['height'] = '200';
        $array[2]['width']  = '150';
        $array[2]['height'] = '250';

    I don't know how many items there are in the array. Some items can be deleted, which explains the missing [1] key. I want to add a new item after this, like this:

        $array[]['width']  = '300';
        $array[]['height'] = '500';

    However, the code above doesn't work, because it adds a new key for each row; the key should be the same for the two rows above. Is there a clever way to solve it? An alternative solution would be to find the last key. I failed trying the 'end' function.

    Read the article

  • [MySQL, InnoDb] Rating place

    - by Pavel
    I'm trying to generate a rating-place table using the recipe from http://stackoverflow.com/questions/1776821/assign-places-in-the-rating-mysql-php, but my database is heavily loaded. I tried not to create a regular table, but to use a MEMORY table and refresh it with the following SQL query:

        INSERT INTO tops (uid) SELECT uid FROM users ORDER BY exp DESC;

    but I got the following MySQL error:

        Deadlock found when trying to get lock; try restarting transaction

    because there are too many other queries running while the SELECT is being executed. How can I solve this problem? P.S. CREATE TABLE tops AS SELECT works almost fine, except for the high server load (up to load average: 50) if tops is a non-MEMORY table. My users table has nearly 4.5 million rows. Thanks for any advice.
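
    One way to sidestep the long insert entirely, assuming the end goal is just to show each user's place: compute the position on demand from the exp column (ideally indexed) instead of materializing a tops table, and only rebuild a full leaderboard snapshot occasionally if one is really needed.

        -- Place of a single user: 1 + the number of users with more experience.
        -- :uid is a bind-parameter placeholder for the user being looked up.
        SELECT COUNT(*) + 1 AS place
        FROM users
        WHERE exp > (SELECT exp FROM users WHERE uid = :uid);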

    Read the article

  • Generate MySQL data dump in SQL from PHP

    - by Álvaro G. Vicario
    I'm writing a PHP script to generate SQL dumps from my database for version control purposes. It already dumps the data structure by means of running the appropriate SHOW CREATE ... query. Now I want to dump the data itself, but I'm unsure about the best method. My requirements are:

        - I need a record per row
        - Rows must be sorted by primary key
        - SQL must be valid and exact no matter the data type (integers, strings, binary data...)
        - Dumps should be identical when data has not changed

    I can detect and run mysqldump as an external command, but that adds an extra system requirement, and I'd need to parse the output in order to remove headers and footers with dump information I don't need (such as server version or dump date). I'd love to keep my script as simple as I can so it can be held in a standalone file. What are my alternatives?
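
    A minimal sketch of the query side, using a hypothetical table mytable with columns id, name and payload: MySQL's QUOTE() escapes strings and binary data and renders NULL correctly, and ordering by the primary key keeps the output stable between runs, so the PHP script only has to write the returned lines into the dump file.

        SELECT CONCAT(
                 'INSERT INTO `mytable` (`id`, `name`, `payload`) VALUES (',
                 QUOTE(id), ', ', QUOTE(name), ', ', QUOTE(payload),
                 ');'
               ) AS insert_stmt
        FROM mytable
        ORDER BY id;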

    Read the article

  • How to create the following cube?

    - by Itsgkiran
    Hi! For example, the database table is:

        BatchID   BatchName   Chemical   Value
        ---------------------------------------
        BI-1      BN-1        CH-1       1
        BI-2      BN-2        CH-2       2

    I need to display the following cube:

                  BI-1      BI-2
                  BN-1      BN-2
        --------------------------
        CH-1      1         null
        CH-2      null      2

    Here BI-1 and BN-1 are two header rows stacked in a single column, and I need to display each chemical's value as a row under them. What is the MDX query for this? Could you please help me solve this problem? Thank you.

    Read the article

  • Scrolling through list element causes text elements to scroll as well

    - by DMK
    I'm using a List element with variableRowHeight and word-wrap set to true, like in this example: http://blog.flexexamples.com/2007/10/27/creating-multi-line-list-rows-with-variable-row-heights/ When I scroll through the list with a mouse scroll wheel, the text in the list items also scrolls. I know that this has to do with the height of the textField, but I am unsure how to best resolve this issue. Any thoughts are appreciated! The issue that I am experiencing also occurs in the example above.

    Read the article

  • How to get proper alignment when printing to file

    - by user1067334
    I have this structure, the elements of which I need to write to a text file:

        struct Stage3ADisplay {
            int nSlot;
            char *Item;
            char *Type;
            int nIndex;
            unsigned char attributesMD[17]; // the last character is \0
            unsigned char contentsMD[17];   // only for regular files -
                                            // the last character is \0
        };

        buffer = malloc(sizeof(Stage3ADisplayVar[nIterator]->nSlot) +
                        sizeof(Stage3ADisplayVar[nIterator]->Item) +
                        sizeof(Stage3ADisplayVar[nIterator]->Type) +
                        sizeof(Stage3ADisplayVar[nIterator]->nIndex) +
                        sizeof(Stage3ADisplayVar[nIterator]->attributesMD) +
                        sizeof(Stage3ADisplayVar[nIterator]->contentsMD) + 1);

        sprintf(buffer, "%d %s %s %d %x %x",
                Stage3ADisplayVar[nIterator]->nSlot,
                Stage3ADisplayVar[nIterator]->Item,
                Stage3ADisplayVar[nIterator]->Type,
                Stage3ADisplayVar[nIterator]->nIndex,
                Stage3ADisplayVar[nIterator]->attributesMD,
                Stage3ADisplayVar[nIterator]->contentsMD);

    How do I make sure the rows in the file are properly aligned? Thank you.

    Read the article

  • Does anyone use AODL in a real application?

    - by HyperQuantum
    We are currently using the Excel interop API in .NET to generate simple spreadsheet documents from a template. So we load the template first, insert some rows, fill in some data (dates, text, and numbers), and make Excel visible so that the user can print or save the document we just generated. But I'd like to get rid of the Excel dependency, and switch to the ODF format as well. Googling suggests AODL (C# libs for generating ODF docs) as the most obvious solution. But their last release is 1.3.0.0 BETA, and seems to be 3 years old. So I'm not sure if it's a good idea to depend on a potentially dead project... In that case, I'd need to find another solution. Any ideas? Or maybe someone could assure me that AODL is still alive?

    Read the article

  • Any way to get a value similar to @@ROWCOUNT when TOP is used?

    - by cfeduke
    If I have a SQL statement such as:

        SELECT TOP 5 * FROM Person WHERE Name LIKE 'Sm%' ORDER BY ID DESC
        PRINT @@ROWCOUNT -- shows '5'

    Is there any way to get a value like @@ROWCOUNT that is the actual count of all of the rows that match the query, without re-issuing the query again sans the TOP 5? The actual problem is a much more complex and intensive query that performs beautifully since we can use TOP n or SET ROWCOUNT n, but then we cannot get a total count, which is required to display paging information in the UI correctly. Presently we have to re-issue the query with a @Count = COUNT(ID) instead of *.
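
    A sketch of one common workaround, assuming SQL Server 2005 or later: a COUNT(*) OVER () window aggregate is evaluated against all rows that satisfy the WHERE clause before TOP trims the result, so every returned row carries the full match count and the query only runs once.

        SELECT TOP 5
               p.*,
               COUNT(*) OVER () AS TotalMatches   -- count of all matching rows, not just the 5 returned
        FROM Person AS p
        WHERE p.Name LIKE 'Sm%'
        ORDER BY p.ID DESC;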

    Read the article

  • SQL: Is it quicker to insert sorted data into a table?

    - by AngryWhenHungry
    A table in Sybase has a unique varchar(32) column, and a few other columns. It is indexed on this column too. At regular intervals, I need to truncate it and repopulate it with fresh data from other tables:

        INSERT INTO MyTable
        SELECT list_of_columns
        FROM OtherTable
        WHERE some_simple_conditions
        ORDER BY MyUniqueId

    If we are dealing with a few thousand rows, would it help speed up the insert if we have the ORDER BY clause on the SELECT? If so, would this gain in time compensate for the extra time needed to order the SELECT query? I could try this out, but currently my data set is small and the results don't say much.

    Read the article

  • What's the simplest way to let the user download a file in ASP.NET MVC ?

    - by drasto
    In ASP.NET MVC I have a database table. I want to have a button on some view page; if a user clicks that button, my application will generate an XML file containing all rows in the database. The file containing the XML should then be sent to the client so that the user sees a download pop-up window. Similarly, I want to allow the user to upload an XML file whose content will be added to the database. What's the simplest way to let the user upload and download the file? Thanks for all the answers.

    Read the article

  • How to join 2 tables & display them correctly?

    - by steven
    http://img293.imageshack.us/img293/857/tablez.jpg

    Here is a picture of the 2 tables. The mybb_users table holds the users that signed up for the forum. The mybb_userfields table contains the custom profile field data that they are able to customize and change in their profile. Now, all I want to do is display all users in rows along with the custom profile field data they provided in their profile (which is in the mybb_userfields table). How can I display these fields correctly together? For instance, p0gz is a male, lives in AZ, owns a 360, does not know his bandwidth, and Flip Side Phoenix is his team. How can it just be like "p0gz - male - az - 360 - dont know - flipside phoenix" in a row?
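
    A sketch of the join, assuming the default MyBB layout where mybb_userfields has one row per user keyed by ufid (equal to the user's uid) and stores each custom field in a column named fid1, fid2, and so on; which fidN holds gender, location, etc. depends on how the custom fields were defined, so the aliases below are guesses:

        SELECT u.uid,
               u.username,
               f.fid1 AS gender,
               f.fid2 AS location,
               f.fid3 AS console,
               f.fid4 AS bandwidth,
               f.fid5 AS team
        FROM mybb_users AS u
        LEFT JOIN mybb_userfields AS f ON f.ufid = u.uid
        ORDER BY u.username;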

    Read the article

  • Basic SQL query question

    - by user318573
    Hello, I have the following query:

        SELECT Base.ReportDate, Base.PropertyCode,
               Base.FirstName, Base.MiddleName, Base.LastName,
               Person.FirstName, Person.MiddleName, Person.LastName
        FROM LeaseT
        INNER JOIN Base ON LeaseT.LeaseID = Base.LeaseID
        INNER JOIN Person ON LeaseT.TenantID = Person.ID

    It works fine, except there could be 0 to N people in the Person table for each record in the Base table, and I only want to return exactly one for each Base record. It doesn't matter which, but the one with the lowest Person.ID would be a reasonable choice. If there are 0 rows in the Person table, I still need to return the row, but with null values for the Person fields. How would I structure the SQL to do that? Thanks.

    Edit: Yes, the tables are probably not structured properly, but restructuring at this time is not possible; I've got to work with what is there.
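
    A sketch of one way to get this, assuming a database that supports OUTER APPLY (SQL Server 2005 and later): pick the lowest-ID person per Base row inside the applied subquery, and let OUTER APPLY supply NULLs when no person exists.

        SELECT b.ReportDate, b.PropertyCode,
               b.FirstName, b.MiddleName, b.LastName,
               p.FirstName, p.MiddleName, p.LastName
        FROM Base AS b
        OUTER APPLY (
            SELECT TOP 1 per.FirstName, per.MiddleName, per.LastName
            FROM LeaseT AS l
            INNER JOIN Person AS per ON per.ID = l.TenantID
            WHERE l.LeaseID = b.LeaseID
            ORDER BY per.ID           -- "doesn't matter which": take the lowest ID
        ) AS p;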

    Read the article

  • Get dimensions for a dataset in MATLAB (size function)

    - by Tim
    Hello, I have been trying to display the dimensions of a dataset. Below is the MATLAB code that I used, but it gives me the wrong output, i.e. the number of rows and columns is wrong. Any help regarding this issue would be highly appreciated.

        [file_input, pathname] = uigetfile( ...
            {'*.txt', 'Text (*.txt)'; ...
             '*.xls', 'Excel (*.xls)'; ...
             '*.*', 'All Files (*.*)'}, ...
            'Select files');
        [pathstr, name, ext, versn] = fileparts(file_input)
        r = size(name, 1);
        c = size(name, 2);
        disp(r)
        disp(c)

    Read the article

  • ASP.NET MVC DataAnnotations different tables. Can it be done?

    - by mazhar kaunain baig
    I have a language table which is referenced as a foreign key in the link table. A link can be in 3 languages, meaning there will be 3 rows in the link table every time I enter a record. I am using jQuery tabs to enter the records in the 3 languages. So the thing is that there will be three text boxes for every field in the table: the link name field will have 3 text boxes, the description will have 3 text boxes, and so on. I am using LINQ to SQL with VS2010. I will be creating the link class with MetadataType, so how will I handle, for example, the link name attribute 3 times?

    Read the article

  • Does Oracle 11g automatically index fields frequently used for full table scans?

    - by gustafc
    I have an app using an Oracle 11g database. I have a fairly large table (~50k rows) which I query thus:

        SELECT omg, ponies FROM table WHERE x = 4

    Field x was not indexed, I discovered. This query happens a lot, but the thing is that the performance wasn't too bad. Adding an index on x did make the queries approximately twice as fast, which is far less than I expected. On, say, MySQL, it would've made the query ten times faster, at the very least. I'm suspecting Oracle adds some kind of automatic index when it detects that I query a non-indexed field often. Am I correct? I can find nothing even implying this in the docs.

    Read the article

  • Clustered index on frequently changing reference table of one or more foreign keys

    - by Ian
    My specific concern is the performance of a clustered index on a reference table that has many rapid inserts and deletes.

        Table 1 "Collection": collection_pk int (among other fields)
        Table 2 "Item": item_pk int (among other fields)
        Reference table "Collection_Items": collection_pk int, item_pk int (combined primary key)

    Because the primary key is composed of both PKs, a clustered index is created and the data is physically ordered in the table according to the combined keys. I have many users creating and deleting collections and adding and removing items from those collections very frequently, affecting the "Collection_Items" table and its clustered index. QUESTION PART: Since the "Collection_Items" table is so dynamic, wouldn't there be a big performance hit from constantly re-sorting the table rows because of the clustered index? If yes, what should I do to minimize this?

    Read the article

  • Stored procedure and trigger

    - by noober
    Hello all, I had a task: to create an UPDATE trigger that fires only on a real change of table data (not just an update with the same values). For that purpose I created a copy table and began to compare updated rows with the old copied ones. When the trigger completes, it's necessary to refresh the copy:

        UPDATE CopyTable SET
            id = s.id,
            -- many, many fields
        FROM MainTable s
        WHERE s.id IN (SELECT [id] FROM INSERTED) AND CopyTable.id = s.id;

    I don't like having this ugly code in the trigger anymore, so I extracted it into a stored procedure:

        CREATE PROCEDURE UpdateCopy AS
        BEGIN
            UPDATE CopyTable SET
                id = s.id,
                -- many, many fields
            FROM MainTable s
            WHERE s.id IN (SELECT [id] FROM INSERTED) AND CopyTable.id = s.id;
        END

    The result is: Invalid object name 'INSERTED'. How can I work around this? Regards,
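
    One way around it, sketched below on the assumption that this is SQL Server 2008 or later: INSERTED only exists inside the trigger body, so capture the affected ids there and hand them to the procedure through a table-valued parameter.

        CREATE TYPE IdList AS TABLE (id INT PRIMARY KEY);
        GO
        CREATE PROCEDURE UpdateCopy
            @ids IdList READONLY
        AS
        BEGIN
            UPDATE CopyTable SET
                id = s.id
                -- many, many fields
            FROM MainTable s
            WHERE s.id IN (SELECT id FROM @ids) AND CopyTable.id = s.id;
        END
        GO
        -- Inside the trigger:
        -- DECLARE @ids IdList;
        -- INSERT INTO @ids (id) SELECT [id] FROM INSERTED;
        -- EXEC UpdateCopy @ids;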

    Read the article

  • How can I join these 2 queries? (A SELECT query with a join and an UNPIVOT query)

    - by MANG KANOR
    Here are my two queries:

        SELECT EWND.Position,
               NKey = CASE WHEN ISNULL(Translation.Name, '') = '' THEN EWND.Name ELSE Translation.Name END,
               Unit = EW_N_DEF.Units
        FROM EWND
        INNER JOIN EW_N_DEF ON EW_N_DEF.Nutr_No = EWND.Nutr_No
        LEFT JOIN Translation ON Translation.CodeMain = EWND.Nutr_no
        WHERE Translation.CodeTrans = 1
        ORDER BY EWND.Position

    And this is the UNPIVOT one:

        SELECT *
        FROM (SELECT N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,
                     N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34
              FROM EWNVal WHERE Code = 6035) Test
        UNPIVOT (Value FOR NUTCODE IN
                 (N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,
                  N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34)
                ) AS test

    Both queries put out the same number of rows, but not the same columns. Is it possible to join these two? I tried a UNION, but it has problems that I can't solve. Thanks in advance!
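
    A sketch of one way to combine them, under the assumption (purely a guess from the naming) that column N1 in EWNVal holds the value for Nutr_No 1, N2 for Nutr_No 2, and so on: expose Nutr_No in the first query, build the matching 'N' + number string, and join the two as derived tables.

        SELECT n.Position, n.NKey, n.Unit, v.Value
        FROM (
            SELECT EWND.Position, EWND.Nutr_No,
                   NKey = CASE WHEN ISNULL(Translation.Name, '') = '' THEN EWND.Name ELSE Translation.Name END,
                   Unit = EW_N_DEF.Units
            FROM EWND
            INNER JOIN EW_N_DEF ON EW_N_DEF.Nutr_No = EWND.Nutr_No
            LEFT JOIN Translation ON Translation.CodeMain = EWND.Nutr_no
            WHERE Translation.CodeTrans = 1
        ) AS n
        INNER JOIN (
            SELECT NUTCODE, Value
            FROM (SELECT N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,
                         N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34
                  FROM EWNVal WHERE Code = 6035) src
            UNPIVOT (Value FOR NUTCODE IN
                     (N1,N2,N3,N4,N5,N6,N7,N8,N9,N10,N11,N12,N13,N14,N15,N16,N17,
                      N18,N19,N20,N21,N22,N23,N24,N25,N26,N27,N28,N29,N30,N31,N32,N33,N34)) AS unp
        ) AS v
            ON v.NUTCODE = 'N' + CAST(n.Nutr_No AS VARCHAR(10))
        ORDER BY n.Position;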

    Read the article
