Search Results

Search found 12287 results on 492 pages for 'column oriented'.


  • Generating a .CSV with Several Columns - Use a Dictionary?

    - by Qanthelas
    I am writing a script that looks through my inventory, compares it with a master list of all possible inventory items, and tells me which items I am missing. My goal is a .csv file where the first column contains a unique integer key and the remaining columns hold data related to that key. For example, a three-row snippet of my end-goal .csv file might look like this:

        100001,apple,fruit,medium,12,red
        100002,carrot,vegetable,medium,10,orange
        100005,radish,vegetable,small,10,red

    The data for this is drawn from a couple of sources. First, a query to an API server gives me a list of keys for the items that are in inventory. Second, I read a .csv file into a dict that matches keys with item names for all possible keys. The first five rows of that .csv file might look like this:

        100001,apple
        100002,carrot
        100003,pear
        100004,banana
        100005,radish

    Note how any key in my inventory will be found in this two-column .csv file of all keys and their corresponding item names, so this full list minus my inventory on hand yields what I'm looking for (the inventory I need to get). So far I can produce a .csv file that contains just the keys and item names for the items I don't have in inventory. Given a list of inventory on hand like this:

        100003,100004

    a snippet of my resulting .csv file looks like this:

        100001,apple
        100002,carrot
        100005,radish

    This means that I have pear and banana in inventory (so they are not in this .csv file). To get this I have a function that returns item names for a given list of item ids:

        def getNames(id_to_name, ids):
            return [id_to_name[id] for id in ids]

    Then a function wrapping my inventory server API call returns a list of keys as integers, which I run like this:

        invlist = ServerApiCallFunction(AppropriateInfo)

    A third function takes this invlist as its input and returns a dict of keys (the item ids) and names for the items I don't have. It also writes that dict to a .csv file. I am using the set1 - set2 method to do this (the code is Python 2):

        def InventoryNumbers(inventory):
            with open(csvfile, 'w') as c:
                c.write('InvName' + ',InvID' + '\n')
            with open("KeyAndItemNameTwoColumns.csv", "rb") as fp:
                reader = csv.reader(fp, skipinitialspace=True)
                fp.readline()  # skip header
                invidsandnames = {int(id): str.upper(name) for id, name in reader}
            invids = set(invidsandnames.keys())
            invonhandset = set(inventory)
            missinginvids = list(invids - invonhandset)
            missinginvnames = getNames(invidsandnames, missinginvids)
            missinginvnameswithids = dict(zip(missinginvnames, missinginvids))
            print missinginvnameswithids
            with open(csvfile, 'a') as c:
                for invname, invid in missinginvnameswithids.iteritems():
                    c.write(invname + ',' + str(invid) + '\n')
            return missinginvnameswithids

    which I then call like this:

        InventoryNumbers(invlist)

    With that explanation, now on to my question. I want to expand the data in this output .csv file by adding further columns, drawn from yet another .csv file, a snippet of which would look like this:

        100001,fruit,medium,12,red
        100002,vegetable,medium,10,orange
        100003,fruit,medium,14,green
        100004,fruit,medium,12,yellow
        100005,vegetable,small,10,red

    Note how this does not contain the item name (so I have to pull that from the separate two-column .csv file of keys and item names), but it does use the same keys. I am looking for a way to bring in this extra information so that my final .csv file will not just tell me the keys (item ids) and item names for the items I don't have in stock but will also have columns for type, size, number, and color. One option I've looked at is defaultdict from collections, but I'm not sure it is the best way to go about what I want to do, and if I did use it I'm not sure exactly how I'd call it to achieve my desired result. If some other method would be easier I'm certainly willing to try that, too. How can I take my dict of keys and corresponding item names for items that I don't have in inventory, and add this extra information to it in such a way that I can output it all to a .csv file?

    EDIT: As I typed this up it occurred to me that I might make things easier on myself by creating a new, single .csv file with data in the form key,item name,type,size,number,color (basically copying the item-name column into the .csv file that already has the other information for each key). That way I would only need to draw from one .csv file rather than two. Even if I did this, though, how would I go about making my desired .csv file based on only those keys for items not in inventory?
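    One way to stitch this together (without reaching for defaultdict) is to load both .csv files into plain dicts keyed by the integer id, subtract the on-hand set, and write one combined row per missing key. Below is a Python 3 sketch; the attributes file name is hypothetical, and for brevity it assumes neither input file has a header row (skip headers as in the code above if present):

        import csv

        def write_missing(on_hand, names_path, attrs_path, out_path):
            # key -> item name, e.g. {100001: 'apple', ...}
            with open(names_path, newline="") as f:
                id_to_name = {int(k): name for k, name in csv.reader(f)}
            # key -> [type, size, number, color], e.g. {100001: ['fruit', 'medium', '12', 'red'], ...}
            with open(attrs_path, newline="") as f:
                id_to_attrs = {int(row[0]): row[1:] for row in csv.reader(f)}
            missing = sorted(set(id_to_name) - set(on_hand))
            with open(out_path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["InvID", "InvName", "Type", "Size", "Number", "Color"])
                for key in missing:
                    writer.writerow([key, id_to_name[key]] + id_to_attrs[key])

        write_missing([100003, 100004], "KeyAndItemNameTwoColumns.csv",
                      "KeyAndAttributes.csv", "MissingInventory.csv")

    The single-file layout from the EDIT works the same way: read each row into one id-to-row dict, and the set subtraction over the keys still decides which rows get written.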

    Read the article

  • Is there any Application Server Frameworks for other languages/platforms than JavaEE and .NET?

    - by Jonas
    I'm a CS student and have little experience of the enterprise software industry. When I read about enterprise software platforms, I mostly read about these two: Java Enterprise Edition (JavaEE), and .NET with Windows Communication Foundation. By "enterprise software platforms" I mean frameworks and application servers that support the same characteristics as JavaEE and WCF: [JavaEE] "provides functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server", while "WCF is designed in accordance with service oriented architecture principles to support distributed computing where services are consumed by consumers. Clients can consume multiple services and services can be consumed by multiple clients. Services are loosely coupled to each other." Are there any alternatives to these two "enterprise software platforms"? Aren't any other programming languages used to a greater extent for this problem area? For example, why aren't there any popular application servers for C++/Qt?

    Read the article

  • Will a PHP script running on top of Apache be faster than a C# standalone program doing the same thing?

    - by Ole Jak
    I mean that PHP scripts on Apache are designed for many users to use them at the same time. So will 1000 requests arriving at (relatively) the same time be fully responded to faster than a C# .NET program performing the same algorithm 1000 times in a while loop? We input the same data and perform the same algorithm, written in the very same way (respecting language differences, of course), and output the same data (let us say saving it to a file so the two runs are relatively equal). Which will be faster over some 1000 executions of an O(N*N) algorithm, and in which case (if it is possible at all) will one overrun the other?

    Read the article

  • How to set size for divs with different parents

    - by user340524
    I want to create a div layout similar to the following table-based result:

        <html>
        <head>
        <title>Basic</title>
        <style> table { border: 1px solid; } </style>
        </head>
        <body>
        <table style="border: 1px solid;">
          <tr>
            <td>Asia</td>
            <td>
              <table>
                <tr>
                  <td>South Asia</td>
                  <td><table>
                    <tr>
                      <td>Republic</td>
                      <td><table>
                        <tr><td>Singapore</td></tr>
                        <tr><td>India</td></tr>
                      </table></td>
                    </tr>
                    <tr>
                      <td>Monarchy</td>
                      <td><table>
                        <tr><td>Bhutan</td></tr>
                        <tr><td>Nepal</td></tr>
                      </table></td>
                    </tr>
                  </table></td>
                </tr>
                <tr>
                  <td>East Asia</td>
                  <td><table>
                    <tr>
                      <td>Republic</td>
                      <td><table>
                        <tr><td>China</td></tr>
                        <tr><td>South Korea</td></tr>
                      </table></td>
                    </tr>
                    <tr>
                      <td>Constitutional Monarchy</td>
                      <td><table>
                        <tr><td>something</td></tr>
                        <tr><td>Japan</td></tr>
                      </table></td>
                    </tr>
                  </table></td>
                </tr>
              </table>
            </td>
          </tr>
        </table>
        </body>
        </html>

    I managed to replicate this with some effort. The problem is that I want the names of the countries to line up in a column, or, if you will, the containers for the government types to be the same width so the other containers will align. If I don't do it with nested containers (in the example, nested tables) the rows get displaced. Currently the rows are shown exactly how I want them: the text sits in the vertical middle of what it refers to. The only thing that comes to mind is to give the text in the same columns classes like class=column1, class=column2, etc., and then somehow define the width for the column classes. The problem is the data is defined dynamically, so I can't say how many pixels or what percentage of the page I can give to a column; I just need it to stretch with the text. This is the first time I have asked for help here, so if I am doing it wrong, tell me how to improve my inquiry.

    Read the article

  • Indexing an image in-between already made content

    - by Christophersson
    I have a pre-coded page structured as follows:

        <div id="linearBg">
          <div id="wrapper">
            <div class="logo"></div>
            <div class="navigation"></div>
            <div class="video"></div>
            <div class="content"></div>
          </div>
        </div>

    linearBg is the gradient background, the backboard of the website; wrapper is the container for the inner divs, and the rest are content oriented. I've already implemented this with styles and all sorts, but the thing is that I want to add:

        <div class="watermark"></div>

    underneath/behind both the content and video divs, sort of like a reverse watermark. I've tried z-indexing, but I'm not an expert. Could you guide me on how to make this possible? Thanks in advance.

    Read the article

  • RESTful web service, PUTting an unnamed resource?

    - by James L
    I have a back-end service that creates unique identifiers for resources. The general idea is that resources are saved and versioned, so you can perform:

        GET http://service/sales/targets/7818181919/latest

    or, for version 4 and so on:

        GET http://service/sales/targets/7818181919/4

    My question is about the most correct way to upload these resources in the first place. How about:

        PUT http://service/sales/targets/
        returning 303 See Other /service/sales/targets/

    It seems a little wrong, as you should PUT to and GET from exactly the same place in a resource-oriented interface, but I can't think of a better option. Any ideas?
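    For server-assigned identifiers, the conventional answer is to POST to the collection and let the server reply 201 Created with the new resource's URI in the Location header, keeping PUT for replacing a resource whose URI the client already knows. A minimal client-side sketch using Python's requests library (the service URL and payload are hypothetical):

        import requests

        # POST to the collection; the server mints the identifier and version.
        resp = requests.post(
            "http://service/sales/targets/",
            json={"region": "EMEA", "amount": 50000},  # hypothetical payload
        )
        assert resp.status_code == 201                 # 201 Created
        new_uri = resp.headers["Location"]             # e.g. .../sales/targets/7818181919/1

        # Later full updates can PUT to the URI the server handed back.
        requests.put(new_uri, json={"region": "EMEA", "amount": 60000})

    This keeps PUT idempotent and aimed at a known URI, which is exactly the GET/PUT symmetry the question is worried about.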

    Read the article

  • Choosing an alternative (Visual Basic) high-level programming language

    - by user370244
    I used to be a Visual Basic 6 programmer, and I was pleased with Visual Basic: it is a high-level language that gets stuff done fast, is easy to learn, and is easy to do stuff in; you can drag and drop controls onto the form and write your code. It is simply amazing. However, Microsoft buried VB6 and pointed us to VB.NET, which is so different that it is not the old VB anymore. I didn't like what Microsoft did and would like to look somewhere away from Microsoft and from any other proprietary language. I would like to look into a similar language that is easy, cross-platform (Windows/Linux), object oriented, has visual design, is non-proprietary, and compiles (as some guard against reverse engineering). I am a freelancer, so the choice is entirely mine; I don't care about the performance of the programs, and the time taken to develop a given program is much more important. Desktop/database/GUI/networking programming is what I am looking for. So, is any such language offered by our open source community? Thank you so much.

    Read the article

  • Is the FoldLeft function available in R?

    - by JSmaga
    Hi, I would like to know if there is an implementation of the foldLeft function (and foldRight?) in R. The language is supposed to be rather functionally oriented, and hence I think there should be something like this, but I could not find it in the documentation. To me, the foldLeft function applies to a list and has the following signature:

        foldLeft[B](z : B)(f : (B, A) => B) : B

    It is supposed to return the following result if the list is [a0, a1, ..., an]:

        f(... (f(f(z, a0), a1) ...), an)

    (I use the definition from the Scala List API.) Does anybody know if such a function exists in R?
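    Base R does in fact ship a left fold: Reduce(f, xs, init), with right = TRUE giving the right fold and accumulate = TRUE returning the intermediate results. For comparison, here is the same fold-left evaluation order sketched with Python's functools.reduce:

        from functools import reduce

        # Left fold: reduce(f, [a0, a1, ..., an], z) computes
        # f(... f(f(z, a0), a1) ..., an)
        digits = [1, 2, 3, 4]
        number = reduce(lambda acc, d: acc * 10 + d, digits, 0)
        print(number)  # 1234 -- the result depends on applying f left to right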

    Read the article

  • is "Object();" a predefined function in javascript?

    - by Qlidnaque
    I come across code such as personObj = new Object(); where a new object called personObj is being defined. What I'm trying to find out is whether Object() is a predefined function in JavaScript. I understand that the quoted code creates an instance of a class, but in the example code I'm studying from, the class Object() is not defined anywhere, so I was wondering whether Object() is a predefined function in JavaScript. I'd also like to be directed to some online resources, as all that shows up in Google when I search for Object() are articles on general JavaScript object-oriented programming.

    Read the article

  • In PHP: How to call a $variable inside one function that was defined previously inside another function?

    - by Sam
    I'm just starting with object-oriented PHP and I have the following issue: I have a class that contains a function with a certain script, and I need to use a variable defined in that function within another function further down the same class. For example:

        class helloWorld {
            function sayHello() {
                echo "Hello";
                $var = "World";
            }
            function sayWorld() {
                echo $var;
            }
        }

    In the above example I want to use $var, a variable that was defined inside a previous function. This doesn't work, though, so how can I do this?
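    The usual fix is to promote the value from a function local (which disappears when the function returns) to object state; in PHP that means a property accessed through $this->var in both methods. The idea is the same across object-oriented languages, so here is a sketch of it in Python, with self playing the role of PHP's $this:

        class HelloWorld:
            def say_hello(self):
                print("Hello")
                self.var = "World"   # stored on the instance, not in a local variable

            def say_world(self):
                print(self.var)      # visible here because it is instance state;
                                     # say_hello must run first or this raises AttributeError

        hw = HelloWorld()
        hw.say_hello()
        hw.say_world()               # prints "World"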

    Read the article

  • Fastest way to convert a binary file to SQLite database

    - by chown
    I have some binary files and I'm looking for a way to convert each of them to a SQLite database. I've already tried C#, but the performance was too slow. I'm seeking advice on how, and in which programming language, to best perform this kind of conversion. Though I prefer object-oriented languages (like C#, Java, etc.), I'm open to any programming language that speeds up the conversion. I don't need a GUI front end for the conversion; running the script/program from the console is fine. Thanks in advance.
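    Whatever the language, SQLite bulk-load performance usually hinges less on the language than on batching every insert into a single transaction instead of committing row by row. A sketch in Python with the standard sqlite3 and struct modules, assuming a hypothetical fixed-size record layout of an int32 id followed by a float64 value (adjust the struct format string and table schema to the real file layout):

        import sqlite3
        import struct

        RECORD = struct.Struct("<id")   # little-endian: int32 id, float64 value

        def convert(binary_path, db_path):
            con = sqlite3.connect(db_path)
            con.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER, value REAL)")
            with open(binary_path, "rb") as f:
                data = f.read()
            rows = (RECORD.unpack_from(data, offset)
                    for offset in range(0, len(data), RECORD.size))
            with con:   # one transaction for the whole batch -- the key speed win
                con.executemany("INSERT INTO records VALUES (?, ?)", rows)
            con.close()

        convert("inventory.bin", "inventory.db")   # hypothetical file names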

    Read the article

  • Should a student be diversifying or mastering programming languages?

    - by Max Link
    As the question states, is it better for a student to diversify and explore when learning programming languages, or to focus on only 2-3 languages and really get to know them well? An example of what I mean by diversifying:

        Functional -> Scheme
        Procedural -> C
        Object Oriented -> Java
        Dynamic or scripting -> Python
        Other -> C++

    I sometimes have breaks in between semesters (up to 3 months) and I'm thinking of either learning a new language or "mastering" those that I know right now. Which would benefit me more in the future? I already know some Java, C, and C++ (about 3 months of self-study each). If I'm not mistaken, where I live the industry is heavy on Java, C++, and C#.

    Read the article

  • Is it possible to Serialize and Deserialize objects in C++?

    - by GK
    As we know, C++ is also an object-oriented programming language in which, as in Java, most things are objects. So I wanted to know: are serialization and deserialization features available in C++ as they are in Java? If yes, how can this be achieved? In Java we use the Serializable interface to say that this type of object can be serialized and deserialized. So how is it done in C++? And, out of curiosity, is it the same in C# as in Java?

    Read the article

  • FastObjects.NET (an OODB from Versant) performance in real scenarios?

    - by Lalit
    FastObjects.NET saves the whole class object (if marked with the attribute Persistent) at once in the file system (using serialization or a similar technology). They promise that it is even faster than the normal SQL DB approach. My team also thought it better and faster to save the whole object at once instead of each field one by one. The definition from their website: "FastObjects .NET 10.0 fully conforms to the Microsoft .NET 2.0 framework. Tightly integrated with Visual Studio 2005, it offers a developer-friendly, object-oriented alternative to a relational database for .NET persistence." I would like to hear your experiences of using FastObjects in production scenarios. They make promises about indexing/transactions/clustering/replication.

    Read the article

  • Private vs. Public members in practice (how important is encapsulation?)

    - by Asmor
    One of the biggest advantages of object-oriented programming is encapsulation, and one of the "truths" we've (or, at least, I've) been taught is that members should always be made private and made available via accessor and mutator methods, thus ensuring the ability to verify and validate the changes. I'm curious, though, how important this really is in practice. In particular, if you've got a more complicated member (such as a collection), it can be very tempting to just make it public rather than make a bunch of methods to get the collection's keys, add/remove items from the collection, etc. Do you follow the rule in general? Does your answer change depending on whether it's code written for yourself vs. to be used by others? Are there more subtle reasons I'm missing for this obfuscation?
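    One pragmatic footnote: some languages let you defer the decision. In Python, for example, a plain public attribute can later be replaced by a property, so validation can be bolted on without changing a single caller, which weakens the case for writing accessor boilerplate up front. A minimal sketch:

        class Account:
            def __init__(self, balance=0):
                self._balance = balance   # leading underscore: private by convention

            @property
            def balance(self):            # callers still just read account.balance
                return self._balance

            @balance.setter
            def balance(self, value):     # validation added without breaking callers
                if value < 0:
                    raise ValueError("balance cannot be negative")
                self._balance = value

        acct = Account(100)
        acct.balance += 50                # routed through the getter and setter

    Languages without that escape hatch (Java, for instance, has no properties) push you toward accessors from day one, which is part of why the advice is repeated so universally.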

    Read the article

  • tables wrapping to next line when width 100%

    - by jmo
    I'm encountering some weirdness with tables in CSS. The layout is fairly simple: a fixed-width nav bar on the left and the content on the right. When the content includes a table with a width of 100%, the table ends up getting pushed down until it has room to take up the full width of the screen (instead of just the area to the right of the nav bar). If I remove the width: 100% from the table's CSS then it looks fine, but obviously the table doesn't grow to fill the space of the div. The problem is that I want the table to grow and shrink with the window but still stay within the bounds of its div. Thanks. Here's a simple example:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <title>Test</title>
        <style type="text/css">
        #content { padding-right: 20px; background: white; overflow: hidden; margin: 20px; }
        #content .column { position: relative; padding-bottom: 20010px; margin-bottom: -20000px; }
        #center { width: 100%; padding-top: 15px; }
        body { min-width: 700px; }
        #left { width: 330px; padding: 0 10px; padding-top: 10px; float: left; }
        .tableData { width: 100%; }
        </style>
        </head>
        <body>
        <div id="content">
          <div class="column" id="left">
            <div>
              Some text goes in here<br/>
              some more text<br/>
              some more text<br/>
              some more text<br/>
              some more text<br/>
              some more text<br/>
            </div>
          </div>
          <div class="column" id="center">
            Some text at the top;
            <hr/>
            <table class="tableData">
              <thead>
                <tr><th>A</th><th>B</th><th>C</th></tr>
              </thead>
              <tbody>
                <tr><td>A1 A1 A1 A1</td><td>B1 B1 B1 B1</td><td>C1 C1 C1 C1 C</td></tr>
                <tr><td>A2 A2 A2 A2</td><td>B2 B2 B2 B2</td><td>C2 C2 C2 C2</td></tr>
                <tr><td>A3 A3 A3 A3 A3</td><td>B3 B3 B3 B3 B3</td><td>C3 C3 C3 C3 C3</td></tr>
                <tr><td>A4 A4 A4 A4 A4</td><td>B4 B4 B4 B4 B4</td><td>C4 C4 C4 C4 C4</td></tr>
              </tbody>
            </table>
          </div>
        </div>
        </body>
        </html>

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER – IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here, and comments are welcome.

    Enumerations in Relational Database

    This is a subject which is a very basic thing in relational databases, but it is often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I will concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way.

    The concept

    Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status, and let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations.

    The wrong way

    This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you'll see the enumeration's description instantly when you do a simple SELECT query, and you don't have to deal with mysterious values. There are plenty of downsides too, and one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose):

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] NVARCHAR(10)
        )

    You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 10 characters? You'll have to come back and alter the table. There are other problems also, but I'll leave those for the reader to think about.

    The correct way

    Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I create a namespace table which tells the name of the enumeration, and I add one row to it too. I'll write all the indexes and constraints here too.

        CREATE TABLE [CodeNamespace]
        (
            [Id] INT IDENTITY(1, 1),
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
            CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
        )
        GO
        INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
        GO

    Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.

        CREATE TABLE [CodeValue]
        (
            [CodeNamespaceId] INT NOT NULL,
            [Value] INT NOT NULL,
            [Description] NVARCHAR(100) NOT NULL,
            [OrderBy] INT,
            CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
            CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
        )
        GO
        -- 1 is the 'MaritalStatus' namespace
        INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
        INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
        INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
        INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
        GO

    Now there are four columns in the CodeValue table. CodeNamespaceId tells which namespace the values belong to. Value holds the enumeration value which is used in the Person table (I'll show how below). Description tells what the value means; you can use this column, for example, in a UI combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI.

    And here's the Person table again, now with the correct columns. I'll add one row to show how the enumeration is used.

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] INT
        )
        GO
        INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
        GO

    Now, I said earlier that there is one problem with this. The MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer also.

    I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.

        CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
        AS
            SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
            FROM [dbo].[Person] p
            JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
            JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
        GO
        -- Select from the view
        SELECT * FROM [dbo].[Person_v]
        GO

    This is an excellent write-up by Marko Parkkola. Do you have this kind of design set up at your organization? Let us know your opinion.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is, for obvious reasons, very well used, so I thought that if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of the more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block.

    This first sample just shows adding the task and setting the basic properties, for a connection and of course an SQL statement.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Set required properties
        taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

    For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

    Returning a single value with a Result Set

    The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier, as we have compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value: one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Add variable to hold result value
        package.Variables.Add("Variable", false, "User", 0);
        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
        // Set single row result set
        task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
        // Add result set binding, map the id column to the variable
        task.ResultSetBindings.Add();
        IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
        resultBinding.ResultName = "id";
        resultBinding.DtsVariableName = "User::Variable";

    For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just variations on this theme: set the property and map the result bindings as required.

    Parameter Mapping for SQL Statements

    This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
        // Add variable to hold the parameter value
        package.Variables.Add("Variable", false, "User", "sysrowsets");
        // Add input parameter binding
        task.ParameterBindings.Add();
        IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
        parameterBinding.DtsVariableName = "User::Variable";
        parameterBinding.ParameterDirection = ParameterDirections.Input;
        parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
        parameterBinding.ParameterName = "0";
        parameterBinding.ParameterSize = 255;

    For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter via the IDTSParameterBinding.DataType property, and these type codes are connection specific too. The enumeration I wrote several years ago, shown below, was probably done by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connections as well for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

        /// <summary>
        /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
        /// </summary>
        private enum OleDBDataTypes
        {
            BYTE = 0x11, CURRENCY = 6, DATE = 7, DB_VARNUMERIC = 0x8b,
            DBDATE = 0x85, DBTIME = 0x86, DBTIMESTAMP = 0x87, DECIMAL = 14,
            DOUBLE = 5, FILETIME = 0x40, FLOAT = 4, GUID = 0x48,
            LARGE_INTEGER = 20, LONG = 3, NULL = 1, NUMERIC = 0x83,
            NVARCHAR = 130, SHORT = 2, SIGNEDCHAR = 0x10, ULARGE_INTEGER = 0x15,
            ULONG = 0x13, USHORT = 0x12, VARCHAR = 0x81, VARIANT_BOOL = 11
        }

    Download the sample code: ExecSqlPackage.cs (10KB)

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Video

    - by pinaldave
    Today I remembered one of my older cartoons, created years ago for Indexing and Performance. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity have grown continuously. Organizations' demands for faster query response, performance, and scalability are increasing, and developers and DBAs now need to write efficient code to achieve this.

    DBA and Developers

    A DBA's role is critical, because a production environment has to run 24×7; hence maintenance, troubleshooting, and quick resolutions are the need of the hour. The first baby step in any performance tuning exercise in SQL Server involves creating, analysing, and maintaining indexes. Though we have learnt indexing concepts from our college days, indexing implementation inside SQL Server can vary. Understanding this behaviour and designing our applications appropriately will make sure the application performs to its highest potential.

    Video Learning

    Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes, but there is some minimum expertise one should gain if performance is a concern. We decided to build a course that addresses just the practical aspects of performance. In this course, we explored some of these indexing fundamentals and elaborated on how SQL Server goes about using indexes. At the end of this course you will know the basic structure of indexes, have practical insights into implementation, and know maintenance tips and tricks revolving around indexes. Finally, we introduce SQL Server 2012 columnstore indexes. We have refrained from discussing the internal storage structure of indexes and have instead taken a more practical, demo-oriented approach to explain these core concepts.

    Course Outline

    Here are the salient topics of the course. We have explained every single concept along with a practical demonstration, and additionally shared our personal scripts.

        Introduction
        Fundamentals of Indexing
            Index Fundamentals
            Index Fundamentals – Visual Representation
        Practical Indexing Implementation Techniques
            Primary Key
            Over Indexing
            Duplicate Index
            Clustered Index
            Unique Index
            Included Columns
            Filtered Index
            Disabled Index
            Index Maintenance and Defragmentation
        Introduction to Columnstore Index
        Indexing Practical Performance Tips and Tricks
            Index and Page Types
            Index and Non Deterministic Columns
            Index and SET Values
            Importance of Clustered Index
            Effect of Compression and Fillfactor
            Index and Functions
            Dynamic Management Views (DMV) – Fillfactor
            Table Scan, Index Scan and Index Seek
            Index and Order of Columns
        Final Checklist: Index and Performance

    Well, we believe we have done our part; now we are waiting for your comments and feedback.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • SQL SERVER – Get Latest SQL Query for Sessions – DMV

    - by pinaldave
    In a recent SQL training session I was asked how one can figure out the last SQL statement executed by each session. The query for this is very simple; it uses two DMVs, and I created the following quick script for the same:

        SELECT session_id, TEXT
        FROM sys.dm_exec_connections
        CROSS APPLY sys.dm_exec_sql_text(most_recent_sql_handle) AS ST

    While working with DMVs, whenever you find that a DMV has a column named sql_handle, you can right away join that DMV with sys.dm_exec_sql_text and get the text of the SQL statement.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: DMV, SQL DMV

    Read the article


  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren't useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use – especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I'll walk through enabling Automatic Code First Migrations on Azure. I'll use the Simple Membership provider for my example.

    First, we'll create a new Azure Web site called "migrationstest", including creating a new SQL Database along with it. Next we'll go to the web site and download the publish profile. In the meantime, we've created a new MVC 4 website in Visual Studio 2012 using the "Internet Application" template. This template is automatically configured to use the Simple Membership provider. We'll do our initial publish to Azure by right-clicking our project and selecting "Publish…". From the "Publish Web" dialog, we'll import the publish profile that we downloaded in the previous step. Once the site is published, we'll just click the "Register" link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure).

    One interesting note is that these tables got created with the default Entity Framework initializer – which is to create the database if it doesn't already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn't contain any of the tables from the model.

    At this point, it's time to enable Migrations. We'll open the Package Manager Console and execute the command:

        PM> Enable-Migrations -EnableAutomaticMigrations

    This will enable automatic migrations for our project. Because we used the "-EnableAutomaticMigrations" switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

        public Configuration()
        {
            AutomaticMigrationsEnabled = true;
        }

    We'll now add our initial migration:

        PM> Add-Migration Initial

    This will create a migration class called "Initial" that contains the entire model. But we need to remove all of this code, because our database already exists, so we are just left with empty Up() and Down() methods.

        public partial class Initial : DbMigration
        {
            public override void Up()
            {
            }

            public override void Down()
            {
            }
        }

    If we don't remove this code, we'll get an exception the first time we attempt to run migrations that tells us: "There is already an object named 'UserProfile' in the database". This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these two lines of code to the Application_Start of the Global.asax:

        Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
        new UsersContext().Database.Initialize(false);

    Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we expect. This time we're going to specify in our publish profile that Code First Migrations should be executed. Once we have re-published, we can once again navigate to the Register page. At this point the database has not been changed, but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let's add two new properties to the UserProfile class – Email and DateOfBirth:

        [Table("UserProfile")]
        public class UserProfile
        {
            [Key]
            [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
            public int UserId { get; set; }
            public string UserName { get; set; }
            public string Email { get; set; }
            public DateTime DateOfBirth { get; set; }
        }

    At this point all we need to do is simply re-publish. We'll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* re-created) to add our two new columns. We can verify this by once again looking in SQL Management Studio. Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.

    Read the article

  • Serious about Embedded: Java Embedded @ JavaOne 2012

    - by terrencebarr
    It bears repeating: more than ever, the Java platform is the best technology for many embedded use cases. Java's platform independence, high level of functionality, security, and developer productivity address the key pain points in building embedded solutions. Transitioning from 16 to 32 bit or even 64 bit? Need to support multiple architectures and operating systems with a single code base? Want to scale on multi-core systems? Require a proven security model? Dynamically deploy and manage software on your devices? Cut time to market by leveraging code, expertise, and tools from a large developer ecosystem? Looking for back-end services, integration, and management? The Java platform has got you covered.

    Java already powers around 10 billion devices worldwide, with traditional desktops and servers being only a small portion of that. And the 'Internet of Things' is just really starting to explode … it is estimated that within five years, intelligent and connected embedded devices will outnumber desktops and mobile phones combined, and will generate the majority of the traffic on the Internet. Is your platform and services strategy ready for the coming disruptions and opportunities?

    It should come as no surprise that Oracle is keenly focused on Java for Embedded. At JavaOne 2012 San Francisco the dedicated track for Java ME, Java Card, and Embedded keeps growing, with 52 sessions, tutorials, Hands-on-Labs, and BOFs scheduled for this track alone, plus keynotes, demos, booths, and a variety of other embedded content. To further prove Oracle's commitment, in 2012 for the first time there will be a dedicated sub-conference focused on the business aspects of embedded Java: Java Embedded @ JavaOne. This conference will run for two days in parallel to JavaOne in San Francisco, will have its own business-oriented track and content, and targets C-level executives, architects, business leaders, and decision makers.

    Registration and the Call For Papers for Java Embedded @ JavaOne are now live. We expect a lot of interest in this new event and space is limited, so be sure to submit your paper and register soon. Hope to see you there! Cheers, – Terrence

    Filed under: Mobile & Embedded Tagged: ARM, Call for Papers, Embedded Java, Java Embedded, Java Embedded @ JavaOne, Java ME, Java SE Embedded, Java SE for Embedded, JavaOne San Francisco, PowerPC

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated, knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third-party providers hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services and subscribe to a third-party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base by the acknowledged USPS-certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature for adding reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services, including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend DQS's native data quality features by quite a bit.

    For my address correction review, I first needed to sign up with a reference data provider on the Azure Data Market site. For this example, I used Melissa Data's Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired reference data provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area.

    Step by Step Guide

    That was all it took to hook the subscribed provider (Melissa Data) directly into my DQS Client. The next step was to attach and map the reference data from the newly acquired provider to a domain in my knowledge base. On the DQS Client home screen, I selected "New Knowledge Base" under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen. Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, Zip.

    I added a Suite column in my domain because Melissa Data has the ability to return missing suites based on last name or company. And that's a great benefit of using these third-party providers: they have data that the data steward would not normally have access to. The bottom line is, with these third-party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains to it. This essentially groups the address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider (Melissa Data's Address Check) and mapped each of my domains to the provider's schema.

    Now that my composite domain had been married to the reference data service, I could take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with some progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it's easy to see how SQL Server 2012 and DQS together take what used to require a highly skilled developer and empower an average business or database person to consume external services and clean data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • Social Network Updates: While You Were Busy Marketing 2

    - by Mike Stiles
    Since social moves at the speed of data, it's already time for another update, as we did back in April, on the changes the various social networks have made or gone through while you were busy marketing.

    Facebook

    There's a lot of talk that Facebook is developing a mobile product to act like Flipboard and surface news from both users and media outlets. The biggest news was Facebook/Instagram's introduction of 15-second videos, enhanced with filters, to take some of Vine's candy. You can also delete parts of videos and rerecord them, and there's image stabilization. Facebook's ad revenue is coming along just fine, thank you very much: 35% quarter-to-quarter growth in Q2. And it looks like new formats like Mobile App Install Ads and Unpublished Page Posts are adding to the mix. If you don't already, you'll soon see a little camera in comment boxes letting you insert photos right into the comments you make. The drive toward "more visual" continues. The other big news is Facebook's adoption of our Twitter friend, the hashtag. Adding # sets apart the post topic so it can be easily found or discovered. It's also being added to Google Plus, Tumblr, and Pinterest.

    Twitter

    Want to send someone a promoted tweet when they're in range of your store? That could be happening by the end of this year. Some users have been seeing automatic in-stream previews of images on Twitter.com. Right now it's images in your own tweets, but we can assume all tweets are next. Get your followers organized! Twitter raised the limit on the number of lists you can create from 20 to 1,000, and raised the number of accounts you can have in a list from 500 to 5,000. Twitter started notifying you when someone favorites a tweet you're mentioned in or re-tweets a tweet you re-tweeted; it's the first time Twitter has notified you about indirect interactions like that. Who's afraid of Instagram? A study shows 6-second Vine videos are being posted to Twitter at the rate of 9 per second, up from 5 per second two months ago. Vine has over 13 million users, and branded Vines are 4x more likely to be shared than video ads.

    Google Plus

    Now featuring a redesigned 3-column stream, and images that take up a whole column. The photo filters Auto Highlight and Auto Awesome work to turn your photos into a real show. Google Hangouts is the workhorse for all Google messaging now; it's not just an online chat with 9 people anymore. The Google Plus Dashboard improves the connection between your company's Google Plus business page and your Google Plus Local: updates go out across all Google properties and you can do your managing from the dashboard. With Google Plus' authorship system, you can build "Author Rank" based on what you write and put on the web. If your stuff is +1'ed and shared a lot, you're the real deal and there are search result benefits.

    LinkedIn

    "Who's Viewed Your Updates" shows you what you've shared recently, who saw it, and what they did about it in real time. "Influencers" is, well, influential: traffic to all LinkedIn news products has gone up 8x since it was introduced. LinkedIn is quickly figuring out how to get users to stick around awhile. You and your brand can post images and documents in status updates now. In fact, that whole "document posting" thing is making some analysts wonder if LinkedIn will drift on over to the Dropboxes and YouSendIts of the world. C'mon, admit it: your favorite part of LinkedIn is being able to see who's viewed your profile. Now you've got even more info and can see what and whom you have in common. Premium users get even deeper insights about how people are finding them. If you're a big fan of security, you'll love that LinkedIn started offering two-factor authentication (2FA). It's optional, but step 2 is a one-time code texted to your registered mobile.

    Pinterest

    A study showed pins have a looong shelf life compared to other social net posts: "clicks kept coming for 30 days and beyond." Most pins are timeless, and the infinite scroll causes people to see older pins. Is it a keeper? Pinterest jumped 82% to 54 million users in the past year. It's valued at $2.5 billion and is one of the biggest sources of referral traffic there is. That said, CEO Ben Silbermann adds, "Right now, we don't make money." A new search feature stops you from having to endlessly scroll through your own pins looking for that waterfall picture you posted: simply select "just my pins" in the search bar. New "Rich Pins" let brands add info like price and availability to pins, updated daily via a data feed from your merchant site. Not so fast: you have to apply to Pinterest for it first. Like other social nets, Pinterest does not allow sexual content, nudity, or even partial nudity. However… some art contains nudity, and Pinterest wants to allow art. What constitutes "art" will be judged by… what we have to assume are Pinterest employees who love their job.

    @mikestiles
    Photo: stock.xchng, Tim Marmon

    Read the article
