Search Results

Search found 51778 results on 2072 pages for 'super columns'.


  • How do I insert rows from one table into another table (which has fewer columns)?

    - by chobo2
    Hi, I am trying to find rows that are not in one table and insert them into another. The table I am inserting into has fewer columns than the other one. The missing columns can stay null, but it would be nice if I could hard-code a value for them during the insert. For now, though, I am having trouble just getting the insert to work. I have something like this: SELECT p.ProductId, p.ProductName INTO SomeTable FROM Product AS p WHERE p.ProductName != 'iPad' but I get an error like this: Msg 4104, Level 16, State 1, Line 1 The multi-part identifier "p.ProductId" could not be bound. I am not sure what I am doing wrong; I copied and pasted the names in, so I don't think it is a spelling mistake. I am using MS SQL Server 2005 Express.
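
    Since the target table already exists, the usual pattern is INSERT INTO ... SELECT with an explicit column list rather than SELECT ... INTO (which tries to create a new table). A minimal sketch using the names from the question; the extra column, its hard-coded value and the NOT EXISTS check for "rows not already in the target" are assumptions, not the asker's actual schema:

        -- Sketch only: SomeOtherColumn and the 'N/A' placeholder are hypothetical
        INSERT INTO SomeTable (ProductId, ProductName, SomeOtherColumn)
        SELECT p.ProductId, p.ProductName, 'N/A'
        FROM Product AS p
        WHERE p.ProductName <> 'iPad'
          AND NOT EXISTS (SELECT 1 FROM SomeTable AS s WHERE s.ProductId = p.ProductId);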

    Read the article

  • Why is selecting specified columns together with * invalid in Oracle SQL?

    - by TomatoSandwich
    Say I have a select statement like this: select * from animals That gives a query result of all the columns in the table. Now, say the 42nd column of the table animals is is_parent, and I want to return it in my results just after gender, so I can see it more easily - but I also want all the other columns. select is_parent, * from animals This returns ORA-00936: missing expression. The same statement works fine in Sybase, and I know that you need to add a table alias to the animals table to get it to work (select is_parent, a.* from animals a), but why does Oracle need a table alias to be able to work out the select?
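
    For reference, a minimal sketch of the aliased form that Oracle accepts (the alias name is arbitrary):

        -- Oracle rejects "select is_parent, * from animals" but accepts this:
        SELECT a.is_parent, a.*
        FROM animals a;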

    Read the article

  • How to set DataGridView column text format to uppercase by adding a new property?

    - by Jooj
    Hello, I have a custom DataGridView control and want to set the text format for custom columns in the designer (CellStyle builder). Let's say I want to make the text format uppercase. After searching about this I've found some solutions that add new events and then change the text format there, but this is not what I want. I want to add a new property to all designed columns and set or change the text format through it. How can I do this? Thanks and best regards.

    Read the article

  • Generating a .CSV with Several Columns - Use a Dictionary?

    - by Qanthelas
    I am writing a script that looks through my inventory, compares it with a master list of all possible inventory items, and tells me what items I am missing. My goal is a .csv file where the first column contains a unique key integer and then the remaining several columns would have data related to that key. For example, a three row snippet of my end-goal .csv file might look like this: 100001,apple,fruit,medium,12,red 100002,carrot,vegetable,medium,10,orange 100005,radish,vegetable,small,10,red The data for this is being drawn from a couple sources. 1st, a query to an API server gives me a list of keys for items that are in inventory. 2nd, I read in a .csv file into a dict that matches keys with item name for all possible keys. A snippet of the first 5 rows of this .csv file might look like this: 100001,apple 100002,carrot 100003,pear 100004,banana 100005,radish Note how any key in my list of inventory will be found in this two column .csv file that gives all keys and their corresponding item name and this list minus my inventory on hand yields what I'm looking for (which is the inventory I need to get). So far I can get a .csv file that contains just the keys and item names for the items that I don't have in inventory. Give a list of inventory on hand like this: 100003,100004 A snippet of my resulting .csv file looks like this: 100001,apple 100002,carrot 100005,radish This means that I have pear and banana in inventory (so they are not in this .csv file.) To get this I have a function to get an item name when given an item id that looks like this: def getNames(id_to_name, ids): return [id_to_name[id] for id in ids] Then a function which gives a list of keys as integers from my inventory server API call that returns a list and I've run this function like this: invlist = ServerApiCallFunction(AppropriateInfo) A third function takes this invlist as its input and returns a dict of keys (the item id) and names for the items I don't have. It also writes the information of this dict to a .csv file. I am using the set1 - set2 method to do this. It looks like this: def InventoryNumbers(inventory): with open(csvfile,'w') as c: c.write('InvName' + ',InvID' + '\n') missinginvnames = [] with open("KeyAndItemNameTwoColumns.csv","rb") as fp: reader = csv.reader(fp, skipinitialspace=True) fp.readline() # skip header invidsandnames = {int(id): str.upper(name) for id, name in reader} invids = set(invidsandnames.keys()) invnames = set(invidsandnames.values()) invonhandset = set(inventory) missinginvidsset = invids - invonhandset missinginvids = list(missinginvidsset) missinginvnames = getNames(invidsandnames, missinginvids) missinginvnameswithids = dict(zip(missinginvnames, missinginvids)) print missinginvnameswithids with open(csvfile,'a') as c: for invname, invid in missinginvnameswithids.iteritems(): c.write(invname + ',' + str(invid) + '\n') return missinginvnameswithids Which I then call like this: InventoryNumbers(invlist) With that explanation, now on to my question here. I want to expand the data in this output .csv file by adding in additional columns. The data for this would be drawn from another .csv file, a snippet of which would look like this: 100001,fruit,medium,12,red 100002,vegetable,medium,10,orange 100003,fruit,medium,14,green 100004,fruit,medium,12,yellow 100005,vegetable,small,10,red Note how this does not contain the item name (so I have to pull that from a different .csv file that just has the two columns of key and item name) but it does use the same keys. 
    I am looking for a way to bring in this extra information so that my final .csv file will not just tell me the keys (which are item ids) and item names for the items I don't have in stock, but will also have columns for type, size, number, and color. One option I've looked at is defaultdict from collections, but I'm not sure it is the best way to go about what I want to do, and if I did use it I'm not sure exactly how I'd call it to achieve my desired result. If some other method would be easier I'm certainly willing to try that, too. How can I take my dict of keys and corresponding item names for items that I don't have in inventory and add this extra information to it in such a way that I can output it all to a .csv file? EDIT: As I typed this up it occurred to me that I might make things easier on myself by creating a new single .csv file with data in the form key,item name,type,size,number,color (basically just copying the item name column into the .csv that already has the other information for each key). This way I would only need to draw from one .csv file rather than two. Even if I did this, though, how would I go about producing my desired .csv file based on only those keys for items not in inventory?

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE clause were placed at the beginning of the SELECT list? Example: SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...] FROM customer, transaction WHERE customer.id = transaction.id; I do know that limiting the list of columns to only the needed ones in a SELECT statement improves performance compared with SELECT *, because the resulting column list is smaller.
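
    For comparison, here is a sketch of the same query written with an explicit JOIN, keeping the question's table and column names; the textual order of the columns in the SELECT list is generally not expected to change the execution plan, since the optimizer works from the parsed query rather than from where each output column happens to appear:

        SELECT c.id, t.id, t.efective_date, t.a
        FROM customer AS c
        JOIN transaction AS t ON c.id = t.id;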

    Read the article

  • Is an index required for columns in the ON clause?

    - by newbie
    Do I have to create an index on columns referenced in joins? E.g. SELECT * FROM left_table INNER JOIN right_table ON left_table.foo = right_table.bar WHERE ... Should I create indexes on left_table(foo), right_table(bar), or both? I noticed different results when I used EXPLAIN (PostgreSQL) with and without indexes, and when switching around the order of the comparison (right_table.bar = left_table.foo). I know for sure that indexes are used for columns on the left side of the WHERE clause, but I am wondering whether I need indexes for columns listed in ON clauses.
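
    A minimal sketch of the indexes under discussion (the index names are made up); in practice the planner decides per query whether it will actually use either of them:

        CREATE INDEX idx_left_table_foo ON left_table (foo);
        CREATE INDEX idx_right_table_bar ON right_table (bar);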

    Read the article

  • Java extended classes - share the extended class's fields with the super class

    - by Bastan
    Straight to the point... I have a class public class P_Gen{ protected String s; protected Object oP_Gen; public P_Gen(String str){ s = str; oP_Gen = new Myclass(this); } } Extended class: public class P extends P_Gen{ protected Object oP; public P(String str){ oP = new aClass(str); super(str); } } MyClass: public class MyClass{ protected Object oMC; public MyClass(P extendedObject){ oMc = oP.getSomething(); } } I came to realize that MyClass can only be instantiated with (P_Gen thisObject) as opposed to (P extendedObject). The situation is that I have code-generated a bunch of classes like P_Gen. For each of them I have generated a class P which contains my P-specific custom methods and fields. When I regenerate my code in the future, P will not be overwritten, whereas P_Gen will. So what happened in my case? I realized that MyClass would benefit from the info stored in P in addition to what is in P_Gen. Is that possible? I know it's not realistic Java, since another class that extends P_Gen might not have the same fields... but BY DESIGN, P_Gen will not be extended by anything but P, and that's where it kind of makes sense - at least in other programming languages. In other languages, it seems like P_Gen.this === P.this; in other words, "this" becomes a combination of P and P_Gen. Is there a way to achieve this, knowing that P_Gen won't be extended by anything other than P?

    Read the article

  • Efficient Search function with Linq to SQL

    - by Bayonian
    Hi, I'm using VB.NET and Linq to SQL. I have a table with thousands of rows and growing. Right now I'm using .Contains() in the Where clause to perform the query. Below is my search function : Public Shared Function DemoSearchFunction(ByVal keyword As String) As DataTable Dim db As New BibleDataClassesDataContext() Dim query = From b In db.khmer_books _ From ch In db.khmer_chapters _ From v In db.testing_khmers _ Where v.t_v.Contains(keyword) And ch.kh_book_id = b.kh_b_id And v.t_chid = ch.kh_ch_id _ Select b.kh_b_id, b.kh_b_title, ch.kh_ch_id, ch.kh_ch_number, v.t_id, v.t_vn, v.t_v Dim dtDataTableOne = New DataTable("dtOne") dtDataTableOne.Columns.Add("bid", GetType(Integer)) dtDataTableOne.Columns.Add("btitle", GetType(String)) dtDataTableOne.Columns.Add("chid", GetType(Integer)) dtDataTableOne.Columns.Add("chn", GetType(Integer)) dtDataTableOne.Columns.Add("vid", GetType(Integer)) dtDataTableOne.Columns.Add("vn", GetType(Integer)) dtDataTableOne.Columns.Add("verse", GetType(String)) For Each r In query dtDataTableOne.Rows.Add(New Object() {r.kh_b_id, r.kh_b_title, r.kh_ch_id, r.kh_ch_number, r.t_id, r.t_vn, r.t_v}) Next Return dtDataTableOne End Function I would like to know other methods for doing efficient search using Linq to SQL. Thanks.
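
    For context, v.t_v.Contains(keyword) is translated by LINQ to SQL into a leading-wildcard LIKE predicate (roughly WHERE t_v LIKE '%' + @keyword + '%'), which cannot seek on an ordinary index. A server-side alternative often suggested for this kind of text search is SQL Server full-text indexing; a rough sketch, assuming full-text search is installed and using hypothetical catalog and key-index names:

        CREATE FULLTEXT CATALOG ftVerses;
        CREATE FULLTEXT INDEX ON testing_khmers (t_v)
            KEY INDEX PK_testing_khmers ON ftVerses;

        -- The search predicate then becomes:
        SELECT t_id, t_vn, t_v
        FROM testing_khmers
        WHERE CONTAINS(t_v, N'"some keyword"');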

    Read the article

  • Database Design Question

    - by deniz
    Hi, I am designing a database for a project. I have a table with 10 columns, most of which are used whenever the table is accessed, and I need to add 3 more columns - View Count, Thumbs Up (count) and Thumbs Down (count) - which will be used in 90% of the queries against the table. So, my question is whether it is better to break the table up and create a new table holding these 3 columns plus a foreign ID, or just make it 13 columns and use no joins. Since these columns will be used frequently, I guess adding 3 more columns is better, but if I need to create 10 more columns that will be used 90% of the time, should I add them as well, or create a new table and use joins? I am not sure when to break the table up if the columns are used very frequently. Do you have any suggestions? Thanks in advance.
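
    A sketch of the two layouts being weighed (the table and column names are made up for illustration; the existing table is called item here):

        -- Option 1: widen the existing table to 13 columns, no join needed
        ALTER TABLE item ADD view_count INT NOT NULL DEFAULT 0;
        ALTER TABLE item ADD thumbs_up INT NOT NULL DEFAULT 0;
        ALTER TABLE item ADD thumbs_down INT NOT NULL DEFAULT 0;

        -- Option 2: move the counters into their own table keyed by the foreign ID
        CREATE TABLE item_stats (
            item_id INT PRIMARY KEY REFERENCES item (id),
            view_count INT NOT NULL DEFAULT 0,
            thumbs_up INT NOT NULL DEFAULT 0,
            thumbs_down INT NOT NULL DEFAULT 0
        );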

    Read the article

  • How can we copy a DataColumn with its data from one DataTable to another?

    - by Harikrishna
    I have a DataTable like DataTable addressAndPhones; It has four columns - name, address, phoneno and other details - and I only want the two columns name and address from it, so I do this: DataTable addressAndPhones2; addressAndPhones2.Columns.Add(new DataColumn(addressAndPhones.Columns["name"].ColumnName)); addressAndPhones2.Columns.Add(new DataColumn(addressAndPhones.Columns["address"].ColumnName)); But it gives me an error, so how can I copy a fixed number of columns with their data from one table to another? ERROR: Object reference not set to an instance of an object. EDIT: Only the column is copied to the other table; the data of that column is not copied.

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
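
    As a point of comparison only (this is not part of the expressor product), the same "S, M or D" business rule could also be enforced at the database layer with a plain CHECK constraint on the AdventureWorks-style column:

        ALTER TABLE HumanResources.Employee
            ADD CONSTRAINT CK_Employee_MaritalStatus_SMD
            CHECK (MaritalStatus IN ('S', 'M', 'D'));
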
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Finding trends in multi-category data in Excel

    - by Miral
    I have an Excel spreadsheet that contains hundreds of rows of data that each represent a single sample in a larger population. Each row is divided into three columns that contain frequency counts of a specific type of thing. Together the three columns summed on a single row represent 100%, though each row will sum to a different value. What I'm most interested in are the proportions of each of these types (ie. percentages of each column relative to the sum of the three columns). I can easily calculate this on a per-row basis, but what I'm really interested in is trying to find an overall trend from the entire population. I don't really spend much time doing data analysis so the only thing I can think of trying is to create those percentage columns and then average them, but I'm sure there must be a better way to visualise this.

    Read the article

  • Spreadsheet functions to query route planner for travel time/distance

    - by Rich
    I would like to achieve something whereby I have a spreadsheet such that the columns are: Column A - place name Column B - place name Column C - distance by road between places in columns A and B Column D - travel time by road between places in columns A and B I thought it might be possible using Google Docs' spreadsheet and its 'Google' functions, but I've not found any that might do the trick. In the end I could knock up an app to do it using the Google Maps API but would rather avoid it if I can.

    Read the article

  • Sending changes from multiple tables in a disconnected DataSet to SQL Server...

    - by Stecy
    We have a third party application that accept calls using an XML RPC mechanism for calling stored procs. We send a ZIP-compressed dataset containing multiple tables with a bunch of update/delete/insert using this mechanism. On the other end, a CLR sproc decompress the data and gets the dataset. Then, the following code gets executed: using (var conn = new SqlConnection("context connection=true")) { if (conn.State == ConnectionState.Closed) conn.Open(); try { foreach (DataTable table in ds.Tables) { string columnList = ""; for (int i = 0; i < table.Columns.Count; i++) { if (i == 0) columnList = table.Columns[0].ColumnName; else columnList += "," + table.Columns[i].ColumnName; } var da = new SqlDataAdapter("SELECT " + columnList + " FROM " + table.TableName, conn); var builder = new SqlCommandBuilder(da); builder.ConflictOption = ConflictOption.OverwriteChanges; da.RowUpdating += onUpdatingRow; da.Update(ds, table.TableName); } } catch (....) { ..... } } Here's the event handler for the RowUpdating event: public static void onUpdatingRow(object sender, SqlRowUpdatingEventArgs e) { if ((e.StatementType == StatementType.Update) && (e.Command == null)) { e.Command = CreateUpdateCommand(e.Row, sender as SqlDataAdapter); e.Status = UpdateStatus.Continue; } } and the CreateUpdateCommand method: private static SqlCommand CreateUpdateCommand(DataRow row, SqlDataAdapter da) { string whereClause = ""; string setClause = ""; SqlConnection conn = da.SelectCommand.Connection; for (int i = 0; i < row.Table.Columns.Count; i++) { char quoted; if ((row.Table.Columns[i].DataType == Type.GetType("System.String")) || (row.Table.Columns[i].DataType == Type.GetType("System.DateTime"))) quoted = '\''; else quoted = ' '; string val = row[i].ToString(); if (row.Table.Columns[i].DataType == Type.GetType("System.Boolean")) val = (bool)row[i] ? "1" : "0"; bool isPrimaryKey = false; for (int j = 0; j < row.Table.PrimaryKey.Length; j++) { if (row.Table.PrimaryKey[j].ColumnName == row.Table.Columns[i].ColumnName) { if (whereClause != "") whereClause += " AND "; if (row[i] == DBNull.Value) whereClause += row.Table.Columns[i].ColumnName + "=NULL"; else whereClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted; isPrimaryKey = true; break; } } /* Only values for column that is not a primary key can be modified */ if (!isPrimaryKey) { if (setClause != "") setClause += ", "; if (row[i] == DBNull.Value) setClause += row.Table.Columns[i].ColumnName + "=NULL"; else setClause += row.Table.Columns[i].ColumnName + "=" + quoted + val + quoted; } } return new SqlCommand("UPDATE " + row.Table.TableName + " SET " + setClause + " WHERE " + whereClause, conn); } However, this is really slow when we have a lot of records. Is there a way to optimize this or an entirely different way to send lots of udpate/delete on several tables? I would really much like to use TSQL for this but can't figure a way to send a dataset to a regular sproc. Additional notes: We cannot directly access the SQLServer database. We tried without compression and it was slower.
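
    One T-SQL-side alternative sometimes used for exactly this "send a set of rows to a regular sproc" requirement is a table-valued parameter, available from SQL Server 2008 onwards; the type, procedure, table and column names in this sketch are hypothetical:

        CREATE TYPE dbo.ItemChange AS TABLE
        (
            ItemId INT PRIMARY KEY,
            Price DECIMAL(10, 2),
            Qty INT
        );
        GO
        CREATE PROCEDURE dbo.UpdateItems
            @changes dbo.ItemChange READONLY
        AS
        BEGIN
            UPDATE t
            SET t.Price = c.Price,
                t.Qty = c.Qty
            FROM dbo.Items AS t
            JOIN @changes AS c ON c.ItemId = t.ItemId;
        END;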

    Read the article

  • Android launches system settings instead of my app

    - by jsundin
    Hi, For some reason whenever I (try to) start my app the phone decides to launch system settings instead of my "main activity". And yes, I am referring to the "Android system settings", and not something from my app. This only happens on my phone, and I suppose it probably could be related to the fact that my app had just opened system settings when I decided to re-launch with a new version from Eclipse. It is possible to start the app from within Eclipse, but when I navigate back from the app it returns to the system settings rather than the home screen, as if the settings activity was started first and then my activity. If I then start the app from the phone all I get is system settings yet again. The app is listening to the VIEW-action for a specific URL substring, and when I start the app using a matching URL I get the same result as when I start it from Eclipse, app starts, but when I return I return to settings. I have tried googling for this problem, and all I could find was something about Android saving state when an app gets killed, but without any information on how to reset this state. I have tried uninstalling the app, killing system settings, rebooting the phone, reinstalling, clearing application data.. no luck.. For what it's worth, here's the definition of my main activity from the manifest, <activity android:name=".HomeActivity" android:label="@string/app_name" android:screenOrientation="portrait" android:clearTaskOnLaunch="true" android:launchMode="singleTop"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> <intent-filter> <action android:name="android.intent.action.VIEW"></action> <category android:name="android.intent.category.DEFAULT"></category> <category android:name="android.intent.category.BROWSABLE"></category> <data android:pathPrefix="/isak-web-mobile/smart/" android:scheme="http" android:host="*"></data> </intent-filter> </activity> And here is the logcat-line from when I try to start my app, nothing about any settings anywhere. I/ActivityManager( 1301): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10200000 cmp=se.opencare.isak/.HomeActivity } When I launch from Eclipse I also get this line (as one would expect), I/ActivityManager( 1301): Start proc se.opencare.isak for activity se.opencare.isak/.HomeActivity: pid=23068 uid=10163 gids={3003, 1007, 1015} If it matters the phone is a HTC Desire Z running 2.2.1. 
Currently, this is my HomeActivity, public class HomeActivity extends Activity { public static final String TAG = "HomeActivity"; @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { Log.d(TAG, "onActivityResult(" + requestCode + ", " + resultCode + ", " + data + ")"); super.onActivityResult(requestCode, resultCode, data); } @Override protected void onCreate(Bundle savedInstanceState) { Log.d(TAG, "onCreate(" + savedInstanceState + ")"); super.onCreate(savedInstanceState); } @Override protected void onDestroy() { Log.d(TAG, "onDestroy()"); super.onDestroy(); } @Override protected void onPause() { Log.d(TAG, "onPause()"); super.onPause(); } @Override protected void onPostCreate(Bundle savedInstanceState) { Log.d(TAG, "onPostCreate(" + savedInstanceState + ")"); super.onPostCreate(savedInstanceState); } @Override protected void onPostResume() { Log.d(TAG, "onPostResume()"); super.onPostResume(); } @Override protected void onRestart() { Log.d(TAG, "onRestart()"); super.onRestart(); } @Override protected void onRestoreInstanceState(Bundle savedInstanceState) { Log.d(TAG, "onRestoreInstanceState(" + savedInstanceState + ")"); super.onRestoreInstanceState(savedInstanceState); } @Override protected void onResume() { Log.d(TAG, "onResume()"); super.onResume(); } @Override protected void onStart() { Log.d(TAG, "onStart()"); super.onStart(); } @Override protected void onStop() { Log.d(TAG, "onStop()"); super.onStop(); } @Override protected void onUserLeaveHint() { Log.d(TAG, "onUserLeaveHint()"); super.onUserLeaveHint(); } } Nothing (of the above) is written to the log.

    Read the article

  • Clicking sound on MacBook Pro 13" 2010 model when turning the laptop on its horizontal axis

    - by roosteronacid
    When I leave my MacBook Pro 13" level for a minute or two (sometimes I only need to keep it level for a few seconds), and then pick it up and turn it on its horizontal axis, I hear a single click coming from either the hard drive or the Super Drive. Is the click I am hearing some sort of locking mechanism in the hard drive or the Super Drive, and therefore nothing to worry about? Or is either my Super Drive or hard drive faulty?

    Read the article

  • Populating cells with data from another spreadsheet after just keying in a few letters

    - by Wendy Griffin
    I have 1 workbook with 2 spreadsheets. On Spreadsheet 2, column A contains a long list of company names and columns B - H contain critical information about each company. Spreadsheet 1 contains all of the same columns as Spreadsheet 2 plus some others. What I'm trying to achieve is that when you start to type the first 3 characters of a company name on Spreadsheet 1, it shows a drop-down of the companies (as listed on Spreadsheet 2) that share those first 3-5 letters and you select one. Upon selecting a company name, all of the corresponding company information would populate the other columns on Spreadsheet 1 automatically. This is to avoid copying a row from Spreadsheet 2 and pasting it into Spreadsheet 1. Any help with this would be greatly appreciated. Cheers!

    Read the article

  • Emacs and windows manager keyboard shortcuts without Win key

    - by Little Bobby Tables
    I found a classic M-Series keyboard and I want to use it. However, it does not have the "Windows" key (a.k.a. "Super"), only the Shift, Control and Alt modifiers. My keyboard shortcuts are cluttered as it is, since I try to control both Emacs and the window manager (GNOME) only from the keyboard, and I rely on the "Super" key to distinguish the window manager shortcuts. What is the best practice for keyboard-centric work without the "Super" key?

    Read the article

  • MS Excel - splitting a formula into individual cells?

    - by Nick
    I'm not sure if this is possible, or if I'll have to do it manually, but I have lots of cells in the following format: =87.12+56.52-16.50+98.21-9.51 If possible, I'd like to break each one up into columns, like so: 87.12 | 56.52 | -16.50 | 98.21 | -9.51 I've tried Text to Columns based on the '+' symbol, but it falls short when I then try to break it down by the '-' symbol: it splits into columns as appropriate, but it removes the minus from the start of the figure. Any suggestions would be very welcome! Thank you

    Read the article

  • How can I merge a Gmail account and an OpenID account on Super User?

    - by user61116
    I recently created an account on Super User with a Gmail account. Now when I try to log in I only find the OpenID login, and when I log in with my OpenID it seems Super User doesn't associate it with my Gmail account. The result is that I now have 2 accounts: an old one (Gmail) with reputation and a new one (OpenID) without. When I first signed up I was using Windows 7; now I'm working with Ubuntu 10.10. So my question is: how can I merge my previous Super User Gmail account and my OpenID account?

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
These are pretty explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]     These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a  function in here to make it easier to work with a specific table. eg. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID(‘AdventureWorks2008.Person.Address’) , 1, NULL, NULL) AS ddips   Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N’AdventureWorks_2008′); SET @object_id = OBJECT_ID(N’AdventureWorks_2008.Person.Address’); IF @db_id IS NULL BEGINPRINT N’Invalid database’; ENDELSE IF @object_id IS NULL BEGINPRINT N’Invalid object’; ENDELSE BEGINSELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , ‘LIMITED’); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area)  then it will be noticed when the results are not consistent with the expected results and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database lets see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level  as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size-in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having it’s data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics if index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep it in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply should be used. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look a we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently then potentially the DBA should consider; the fill_factor of the index in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly. the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index but rather always be added to the end. * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.  Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
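
    A minimal sketch of the kind of 'intelligent' check described earlier in the post, using the commonly quoted 5% / 30% reorganize / rebuild guideline as a starting point rather than a hard rule:

        SELECT OBJECT_NAME(ddips.object_id) AS TableName,
               i.name AS IndexName,
               ddips.avg_fragmentation_in_percent,
               ddips.page_count,
               CASE
                   WHEN ddips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                   WHEN ddips.avg_fragmentation_in_percent > 5 THEN 'REORGANIZE'
                   ELSE 'leave alone'
               END AS SuggestedAction
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i
            ON [ddips].[object_id] = [i].[object_id] AND [ddips].[index_id] = [i].[index_id]
        WHERE ddips.page_count > 100;   -- very small indexes are rarely worth defragmenting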

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c; A System for Building Queries When you're building queries, you could use a system like the following:  Decide which fields contain the values you want to use in our output, and how you wish to alias those fields Values you want to see in your output Values you want to use in calculations . For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin. Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information. Values you want to order by. For example you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables Look for relationships between tables and identify the columns that implement those relationships. For example, The Orders table could have a CustomerID field referencing the same column in the Customers table. Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of columns at either end of such a connection. Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause. Filter using some of the fields in point 1 above. This becomes your WHERE clause. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause. A Worked Example   Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:  The Country.Name and City.Name columns contain the name of the country and city respectively.  The change in GNP comes from the calculation GNP - GNPOld. Both those columns are in the Country table. This calculation is also used to order the output, in descending order To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. 
There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read. Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: Each city is hosted within a country, and the city's CountryCode column identifies that country. Also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them is represented by the following FROM clause:  FROM Country JOIN City ON Country.Capital = City.ID The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:  WHERE Country.Population > 100e6  To sort the result set in reverse order of difference in GNP, you could use either the calculation, or the position in the output (it's the third column): ORDER BY GNP - GNPOld or ORDER BY 3 Finally, project the columns you wish to see by constructing the SELECT clause: SELECT Country.Name AS Country, City.Name AS Capital,        GNP - GNPOld AS `Difference in GNP`  The whole statement ends up looking like this:  mysql> SELECT Country.Name AS Country, City.Name AS Capital, -> GNP - GNPOld AS `Difference in GNP` -> FROM Country JOIN City ON Country.Capital = City.ID -> WHERE Country.Population > 100e6 -> ORDER BY 3 DESC; +--------------------+------------+-------------------+ | Country            | Capital    | Difference in GNP | +--------------------+------------+-------------------+ | United States | Washington | 399800.00 | | China | Peking | 64549.00 | | India | New Delhi | 16542.00 | | Nigeria | Abuja | 7084.00 | | Pakistan | Islamabad | 2740.00 | | Bangladesh | Dhaka | 886.00 | | Brazil | Brasília | -27369.00 | | Indonesia | Jakarta | -130020.00 | | Russian Federation | Moscow | -166381.00 | | Japan | Tokyo | -405596.00 | +--------------------+------------+-------------------+ 10 rows in set (0.00 sec) Queries with Aggregates and GROUP BY While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions.You could also perform more complex aggregations, such as the average of GNP per head of population calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.  
As an example, consider the following, which shows a count of  countries and the average population per continent:  mysql> SELECT Continent, COUNT(Name), AVG(Population)     -> FROM Country     -> GROUP BY Continent; +---------------+-------------+-----------------+ | Continent     | COUNT(Name) | AVG(Population) | +---------------+-------------+-----------------+ | Asia          |          51 |   72647562.7451 | | Europe        |          46 |   15871186.9565 | | North America |          37 |   13053864.8649 | | Africa        |          58 |   13525431.0345 | | Oceania       |          28 |    1085755.3571 | | Antarctica    |           5 |          0.0000 | | South America |          14 |   24698571.4286 | +---------------+-------------+-----------------+ 7 rows in set (0.00 sec) In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and  more fields in the GROUP BY clause as you require. You would also normally alias columns to make the output more suited to your requirements. More Complex Queries  Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...
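
    As a small illustration of that closing point, one of the aggregate queries above can itself become a subquery of a larger statement; here is a sketch against the same world database, listing countries whose population is above the average for their continent:

        SELECT c.Name, c.Continent, c.Population
        FROM Country AS c
        JOIN (SELECT Continent, AVG(Population) AS AvgPop
              FROM Country
              GROUP BY Continent) AS stats
          ON stats.Continent = c.Continent
        WHERE c.Population > stats.AvgPop
        ORDER BY c.Continent, c.Population DESC;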

    Read the article
